Basic Information

Material Type
Thesis
Author
진종현 (Ajou University)
Advisor
신현정
Publication Year
2023
Copyright
Theses of Ajou University are protected by copyright.
Abstract · Keywords
Graph convolutional networks (GCNs) and their derivatives are known to be effective in semi-supervised learning, improving model performance by exploiting both labeled and unlabeled data through the graph structure. GCNs also achieve strong results on problems such as node classification and link prediction. However, because GCNs and their derivatives reflect the graph structure through the adjacency matrix, they must be stacked deeply to use information from distant nodes. Moreover, when models are built deeply, node features come to be represented similarly and classification performance deteriorates, a phenomenon known as oversmoothing. In this paper, we propose a Latent Label Propagation (LLP) model that combines label propagation with GCNs to address these problems (e.g., undersmoothing and oversmoothing). Unlike existing GNN models, which use only the adjacency matrix and node features as inputs, we additionally use labels as inputs and improve node-classification performance by adjusting, through learned parameters, how strongly labeled nodes propagate during training. In experiments on various datasets, comparison with previously proposed models confirms that updating node representations using features across the global structure of the graph yields better classification performance than using only the information of predefined neighbor nodes. Finally, we evaluate the effectiveness of the label input and of global aggregation.
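The core idea the abstract describes, diffusing known labels over the graph so that distant nodes receive label information without stacking many GCN layers, can be sketched with classical label propagation. This is a minimal illustrative version, not the thesis's exact LLP formulation; the function names, the `alpha` mixing parameter (a stand-in for the propagation-degree parameter mentioned above), and the toy graph are all assumptions for illustration.

```python
import numpy as np

def normalize_adj(A):
    """Symmetrically normalize an adjacency matrix with self-loops."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def propagate_labels(A, Y, mask, alpha=0.9, n_iters=50):
    """Iteratively spread the one-hot labels Y of the masked (labeled) nodes.

    alpha controls how much neighbor information mixes in at each step,
    loosely analogous to the propagation-degree parameter in the abstract.
    """
    S = normalize_adj(A)
    F = Y.astype(float).copy()
    for _ in range(n_iters):
        F = alpha * (S @ F) + (1 - alpha) * Y
        F[mask] = Y[mask]  # clamp the known labels
    return F.argmax(axis=1)

# Toy graph: two triangles (0-1-2 and 3-4-5) joined by the edge 2-3,
# with one labeled node per triangle.
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
])
Y = np.zeros((6, 2))
Y[0, 0] = 1  # node 0 labeled class 0
Y[5, 1] = 1  # node 5 labeled class 1
mask = np.array([True, False, False, False, False, True])

print(propagate_labels(A, Y, mask))  # each triangle adopts its labeled node's class
```

With only two labeled nodes, every node in each triangle is assigned its nearby label, which is the kind of global spread of label information that a shallow GCN, limited to its fixed neighborhood, cannot achieve on its own.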

Table of Contents

1 Introduction
2 Fundamentals
3 Proposed Methods
3.1 Latent Label Propagation
3.1.1 Smoothed Aggregation
3.1.2 Labeled Features
4 Experiments
4.1 Semi-supervised Classification
4.2 Classification Performance by Aggregation Range
4.3 Feature Smoothness Comparison
4.4 Ablation Study
5 Conclusion
References
