Thesis Information

Document type
Thesis (degree)
Author information

이소하 (Kyungpook National University, Kyungpook National University Graduate School)

Advisor
박혜영
Year of publication
2021
Copyright
Kyungpook National University theses are protected by copyright.


Abstract · Keywords

The backpropagation algorithm, developed in the 1980s, is still the core learning algorithm of most deep learning models. However, it suffers from the weight transport problem: error signals are propagated backward using the weight values of the upper layers to update the weights of the lower layers, which makes information transmission inefficient and lacks biological plausibility. As an alternative, the feedback alignment method was proposed, which transmits the backward error signal through random weights that are independent of the forward weights. The direct feedback alignment method and the sign symmetry algorithm were subsequently developed to improve the performance of feedback alignment. Although these methods have shown their potential in basic experiments with deep learning models, their performance is not yet comparable to that of the original backpropagation learning. In addition, the previous works are still at an early stage, and there is room for improvement.
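The core idea above — replacing the transposed forward weights with a fixed random feedback matrix in the backward pass — can be illustrated with a minimal NumPy sketch. This is not the thesis' implementation; the network sizes, the toy regression task, and the learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 2-layer network: 4 inputs -> 16 tanh units -> 1 linear output.
n_in, n_hid, n_out = 4, 16, 1
W1 = rng.normal(0.0, 0.5, (n_hid, n_in))
W2 = rng.normal(0.0, 0.5, (n_out, n_hid))
# Feedback alignment: a fixed random matrix B is used in place of W2.T
# when the output error is propagated back to the hidden layer.
B = rng.normal(0.0, 0.5, (n_hid, n_out))

# Toy regression task (an assumption for illustration): y = sum of inputs.
X = rng.normal(size=(256, n_in))
y = X.sum(axis=1, keepdims=True)

def forward(X):
    h = np.tanh(X @ W1.T)
    return h, h @ W2.T

_, y_hat = forward(X)
mse_before = float(np.mean((y_hat - y) ** 2))

lr = 0.02
for _ in range(2000):
    h, y_hat = forward(X)
    e = y_hat - y                       # output-layer error
    dh = (e @ B.T) * (1.0 - h ** 2)     # hidden error via fixed B, not W2.T
    W2 -= lr * (e.T @ h) / len(X)
    W1 -= lr * (dh.T @ X) / len(X)

_, y_hat = forward(X)
mse_after = float(np.mean((y_hat - y) ** 2))
```

Even though B never changes, the loss still decreases: during training the forward weights tend to align with the fixed feedback weights, which is the "alignment" phenomenon the method is named after.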
This thesis studies the possibility of improving the performance of the feedback alignment method by combining it with various optimization algorithms. Computational experiments confirm that learning performance improves when adaptive learning-rate optimization algorithms such as Adam are combined with the feedback alignment method as well as with the sign symmetry method. Moreover, based on previous studies on the effect of weight initialization on learning performance, this thesis proposes a strategic selection of the random weights used for feedback alignment. Experimental comparison confirms that orthogonal initialization can improve learning performance. Additionally, applying Kaiming uniform initialization or sparse initialization to the last layer has a positive effect on learning performance. The experimental results also indicate that the effect of the proposed learning strategies becomes clearer as the model deepens. As further work, it would be necessary to conduct comprehensive experiments on large data sets and more complex deep network structures. An in-depth analysis of the relationships among the randomized weight vectors could give more insight into the effect of the proposed randomization strategies.
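One standard way to draw the orthogonal random matrices mentioned above is a QR decomposition of a Gaussian matrix (deep learning libraries offer routines such as PyTorch's `torch.nn.init.orthogonal_` for the same purpose). The helper below is an illustrative sketch of that construction, not the thesis' exact procedure; the function name and the `gain` parameter are my own.

```python
import numpy as np

def orthogonal_init(shape, gain=1.0, rng=None):
    """Sample a (semi-)orthogonal matrix via QR of a Gaussian matrix."""
    rng = rng or np.random.default_rng()
    rows, cols = shape
    a = rng.normal(size=(max(rows, cols), min(rows, cols)))
    q, r = np.linalg.qr(a)        # q has orthonormal columns
    q *= np.sign(np.diag(r))      # sign fix so the draw is uniformly distributed
    if rows < cols:
        q = q.T
    return gain * q

# Example: a feedback matrix for a layer with 16 hidden and 10 output units.
B = orthogonal_init((16, 10), rng=np.random.default_rng(0))
# For a tall matrix the columns are orthonormal, i.e. B.T @ B is the identity.
```

Because the rows (or columns) are orthonormal, such a matrix preserves the norm of the error signal it propagates, which is one plausible reason orthogonal feedback weights help as networks get deeper.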

Table of Contents

Ⅰ. Introduction
Ⅱ. Related Work
2.1 Learning Methods Using Random Weights
2.2 Optimization Algorithms
2.3 Weight Initialization
Ⅲ. Proposed Strategies for Improving Learning Performance
3.1 Error Backpropagation Learning with Random Weights
3.2 Combining Optimization Algorithms
3.3 Methods for Setting the Random Weights
Ⅳ. Experiments and Analysis
4.1 Experimental Data
4.2 Experimental Design
4.3 Analysis of Results
4.3.1 Results of Applying Optimization Algorithms
4.3.1.1 Results on the MNIST Data
4.3.1.2 Results on the CIFAR-10 Data
4.3.2 Performance Analysis of Random Weight Setting Methods
4.3.2.1 Results on the MNIST Data
4.3.2.2 Results on the CIFAR-10 Data
Ⅴ. Conclusion
References
English Abstract
