Article information

Document type
Academic journal
Author information
Journal information
한국멀티미디어언어교육학회 (Korea Association of Multimedia-Assisted Language Learning), Multimedia-Assisted Language Learning, Vol. 15, No. 4
Publication year
2012.1
Pages
39-60 (22 pages)


Abstract · Keywords

The ever-growing demand for teaching English writing skills has been impeded by a logistical problem: the inherently time-consuming nature of the sophisticated process of writing assessment, whose validity and inter-rater reliability can be degraded by idiosyncratic human ratings. A wide variety of computer-based automatic essay scoring (AES) schemes have therefore been developed and employed to help language educators cope with the issues of practicality and subjectivity in the scoring of performance tests. The first part of the present research (a pilot study) probes the validity of AES-based ratings in comparison with human ratings; a comparative analysis of the two appears to substantiate the robustness of the AES scheme. Building on these positive results, the second part (the main study) combines a quantitative analysis of corpus-linguistic indices with a systematic qualitative error analysis to explore the validity of human assessment of essays collected from a writing test. The results of the data analyses are used to discuss how a corpus-based analysis might enhance the validity of the essay-assessment process, leading to more effective and plausible EFL writing teaching and testing.
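The pilot study's comparison of AES-based and human ratings can be illustrated with a minimal sketch. The score values below and the choice of a Pearson correlation as the agreement measure are assumptions made for illustration; they are not the paper's actual data or statistic.

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equal-length lists of essay scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical ratings for six essays on a 6-point scale (not the study's data)
aes_scores   = [3.5, 4.0, 2.5, 5.0, 3.0, 4.5]
human_scores = [3.0, 4.0, 2.0, 5.5, 3.5, 4.0]

# A high coefficient would suggest the AES scheme tracks the human rater
r = pearson(aes_scores, human_scores)
```

Agreement studies of this kind often report a rank or weighted-kappa statistic as well, since a correlation alone can mask systematic leniency or severity in one rater.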

Table of contents

No information available.

References (26)

