Thesis Information

Document type: Thesis (학위논문)
Author: Aziz Siyaev (Inha University, Inha University Graduate School)
Advisor: 조근식
Year of publication: 2020
Copyright: Inha University theses are protected by copyright.


Abstract · Keywords

Generative Adversarial Networks (GANs) have made a major contribution to the development of content-creation technologies. Video generation holds an important place in this advancement, driven by applications such as human animation and automatic trailer or movie generation. Taking advantage of several GANs, we therefore propose a method for human action video generation called GOHAG: GANs Orchestration for Human Actions Generation. GOHAG combines three GANs: a Poses Generator GAN (PGAN) creates a sequence of poses, a Poses Optimization GAN (POGAN) refines them, and a Frames Generator GAN (FGAN) attaches texture to the sequence, producing a video.
Extensive experiments were conducted on a public dataset and compared with existing work in the field of video generation. The evaluation showed that GOHAG outperforms a recent technique on several metrics. In addition, a private dataset of various human actions was collected to widen the experiments, on which GOHAG performed even better than on the public dataset. The proposed technique proved able to produce smooth, plausible, high-quality videos in which the generated human poses control the texturing phase.
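The three-stage orchestration described in the abstract (PGAN → POGAN → FGAN) can be sketched as a simple pipeline. Everything below is illustrative, not taken from the thesis: the function names, tensor shapes, the 17-joint skeleton, and the stand-in bodies (random poses, a moving-average "optimizer", and pixel rasterization in place of learned texturing) are assumptions; the real stages are trained adversarial networks.

```python
import numpy as np

def pgan_generate(action_label: int, num_frames: int,
                  num_joints: int = 17, seed: int = 0) -> np.ndarray:
    """Stand-in for PGAN: action label -> rough pose sequence (frames, joints, 2)."""
    rng = np.random.default_rng(seed + action_label)
    return rng.normal(size=(num_frames, num_joints, 2))

def pogan_refine(poses: np.ndarray) -> np.ndarray:
    """Stand-in for POGAN: temporally smooth the pose sequence."""
    refined = poses.copy()
    refined[1:-1] = (poses[:-2] + poses[1:-1] + poses[2:]) / 3.0
    return refined

def fgan_render(poses: np.ndarray, height: int = 64, width: int = 64) -> np.ndarray:
    """Stand-in for FGAN: turn each pose into an RGB frame (frames, H, W, 3)."""
    frames = np.zeros((poses.shape[0], height, width, 3), dtype=np.uint8)
    for t, pose in enumerate(poses):
        # Rasterize joints as bright pixels -- a placeholder for learned texturing.
        xs = np.clip(((pose[:, 0] + 3) / 6 * (width - 1)).astype(int), 0, width - 1)
        ys = np.clip(((pose[:, 1] + 3) / 6 * (height - 1)).astype(int), 0, height - 1)
        frames[t, ys, xs] = 255
    return frames

def gohag(action_label: int, num_frames: int = 16) -> np.ndarray:
    """Orchestrate the three stages: generate poses, refine them, then render."""
    poses = pgan_generate(action_label, num_frames)
    poses = pogan_refine(poses)
    return fgan_render(poses)

video = gohag(action_label=3)
print(video.shape)  # (16, 64, 64, 3): a short video tensor
```

The key design point the abstract emphasizes is the decoupling: pose generation and refinement happen in a low-dimensional skeleton space before any pixels are produced, so the texturing stage is conditioned on an already-smoothed motion.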

Table of Contents

Chapter 1 Introduction 6
Chapter 2 Related Works 9
Chapter 3 Human Action Video Generation using GANs 11
3.1 Action Representation 12
3.2 Pose Sequence Generation 13
Poses Generator GAN 13
Poses Optimization GAN 16
3.3 Frames Sequence Generation 18
Frames Generator GAN 18
3.4 Implementation Details 20
Chapter 4 Experimental Results 21
4.1 Datasets 22
4.2 Quantitative Evaluation 23
L2 Distance 24
Video Quality Evaluation 26
Inception Score and Frechet Inception Distance 28
4.3 Qualitative Evaluation 29
User Evaluation 29
Chapter 5 Conclusions and Future Work 32
References 33
Acknowledgement 35
