Paper Information

Material Type
Academic Journal
Authors
Prashantkumar M. Gavali (DKTE Society’s Textile and Engineering Institute & Shivaji University), Suresh K. Shirgave (DKTE Society’s Textile and Engineering Institute)
Journal
Institute for Cognitive Science, Seoul National University, Journal of Cognitive Science Vol.25 No.2
Publication Date
June 2024
Pages
199-236 (38 pages)

Abstract · Keywords

Sentiment analysis employs classification models to discern people's opinions automatically. Recent strides in Large Language Models (LLMs) have significantly improved the accuracy of binary-level sentiment classification, particularly through zero-shot and few-shot learning. However, LLMs face challenges in fine-level sentiment classification because they are not specifically trained for this downstream task. In contrast, other classification models take word embeddings, vector representations of words, as input. Contemporary word embedding algorithms create these embeddings by considering the surrounding context of each word. Nonetheless, the resulting embeddings often fail to capture intensity differences between words. For example, words like 'more' and 'less' have embeddings positioned close together in the semantic space despite representing distinct intensity levels. Intensity words such as 'much', 'more', and 'less' are frequently used to convey the strength of an opinion, and their intensity distinctions are crucial for fine-level sentiment classification. This paper introduces an intensity-aware feed-forward neural network equipped with a novel referential loss function designed to capture these intensity differences. The proposed model separates words of differing intensities while bringing together words of the same intensity in the semantic space. To assess the effectiveness of the refined word embeddings in sentiment analysis tasks, diverse fine-level sentiment datasets are employed. The results demonstrate that the refined embeddings surpass the original embeddings and popular LLMs on fine-level sentiment analysis.
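
The abstract sketches how the referential loss is meant to act on the embedding space: pull words of the same intensity together and push words of differing intensities apart. The paper's exact formulation is not reproduced here, so the PyTorch sketch below assumes a contrastive-style pairwise loss with a margin; the names IntensityRefiner and referential_loss, the network shape, and all hyperparameters are illustrative assumptions, not taken from the paper.

```python
# A minimal sketch of intensity-aware embedding refinement, assuming a
# contrastive-style pairwise loss. The actual referential loss in the paper
# may differ; this only illustrates the behavior the abstract describes.
import torch
import torch.nn as nn
import torch.nn.functional as F

class IntensityRefiner(nn.Module):
    """Feed-forward network mapping pretrained word embeddings to a refined
    space where intensity levels are better separated (hypothetical shape)."""
    def __init__(self, dim: int = 300, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def referential_loss(z1, z2, same_intensity, margin: float = 1.0):
    """Pull same-intensity pairs together; push different-intensity pairs
    at least `margin` apart (assumed contrastive formulation)."""
    dist = F.pairwise_distance(z1, z2)
    pull = same_intensity * dist.pow(2)                           # same level: minimize distance
    push = (1.0 - same_intensity) * F.relu(margin - dist).pow(2)  # different level: enforce margin
    return (pull + push).mean()

# Toy usage with random stand-ins for pretrained embeddings of intensity
# words such as 'much', 'more', and 'less'.
if __name__ == "__main__":
    model = IntensityRefiner()
    e1, e2 = torch.randn(8, 300), torch.randn(8, 300)  # embedding pairs
    same = torch.randint(0, 2, (8,)).float()           # 1 = same intensity level
    loss = referential_loss(model(e1), model(e2), same)
    loss.backward()
    print(f"referential loss: {loss.item():.4f}")
```

Under this assumed formulation, the margin term is what keeps 'more' and 'less' from collapsing onto nearly identical vectors: once their refined embeddings sit farther apart than the margin, the push term contributes nothing, so different-intensity pairs are separated without being driven arbitrarily far apart.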

Table of Contents

No information registered.
