Basic Thesis Information

Material type: Thesis (학위논문)
Author: 최태준 (부산외국어대학교, 부산외국어대학교 대학원)
Advisor: 김응수
Year of publication: 2016
Copyright: Theses of 부산외국어대학교 are protected by copyright.

Abstract · Keywords

In modern society, the spread of smart devices driven by advances in IT has increased the demand for 3D content in areas such as games, advertising, and exhibitions, which have traditionally relied on 2D image-based content.
3D content is not merely a way of representing information; it raises the value of products and improves productivity in industries and services such as health care, education, and architecture.
This is called the 3D fusion industry, and because it has been selected as a next-generation strategic industry, many countries are developing 3D content technologies in order to preempt the global market.
However, the 3D fusion industry has high entry barriers: it requires professional and technical experts and expensive equipment. There is therefore a demand for a new method that lets the general public create 3D models quickly and easily. Designing and implementing 3D models is a specialized task, and most people find it difficult.
This dissertation presents a 3D model generation algorithm that can easily convert a 2D image into a 3D model. Traditional methods and technologies for 2D-to-3D conversion are surveyed, covering the conventional ways of obtaining 3D model data: 3D scanners, stereo images, Shape From Shading (SFS), and 3D modeling tools.
Converting technology turns conventional content into 3D content using specially designed software or manual labor. There are three kinds of converting technology: real-time automatic conversion, partially manual work, and fully manual work. All of them still need improvement in terms of quality, working hours, cost, and errors.
In this dissertation, a 2D image drawn by a user is processed to separate the foreground from the background, and only the foreground is used as the 3D modeling data.
The first step is segmentation, in which the foreground area is extracted from the background.
Edge detection is employed for this segmentation together with the Waterfront algorithm.
Many edge detection operators exist in image processing, but the Canny edge detector is used in this study because it detects edges quickly and accurately.
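The edge-detection step can be illustrated with a simplified gradient-magnitude detector. A full Canny implementation also performs Gaussian smoothing, non-maximum suppression, and hysteresis thresholding; the sketch below (plain Python, with an illustrative threshold value not taken from the dissertation) captures only the gradient stage that Canny builds on.

```python
def sobel_edges(gray, threshold=128):
    """Mark edge pixels using Sobel gradient magnitude -- a simplified
    stand-in for the Canny detector used in the dissertation.
    gray is a 2D list of intensities 0..255; returns a 0/1 edge map."""
    h, w = len(gray), len(gray[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Horizontal and vertical Sobel responses.
            gx = (gray[y-1][x+1] + 2*gray[y][x+1] + gray[y+1][x+1]
                  - gray[y-1][x-1] - 2*gray[y][x-1] - gray[y+1][x-1])
            gy = (gray[y+1][x-1] + 2*gray[y+1][x] + gray[y+1][x+1]
                  - gray[y-1][x-1] - 2*gray[y-1][x] - gray[y-1][x+1])
            if (gx * gx + gy * gy) ** 0.5 >= threshold:
                edges[y][x] = 1
    return edges
```

A vertical black-to-white boundary in the input produces a column of 1s in the output, which is the edge map the Waterfront step below consumes.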
The Waterfront algorithm can be pictured as follows: if the background is an ocean and the foreground is an island, the algorithm detects the border lines between the ocean and the island.
The algorithm assumes that water runs only horizontally and vertically. In the first pass, water starts from the upper-left corner and runs horizontally to the right until it stops at an edge; the same process is applied vertically downward. The paths the water covers are labeled as background.
In the second pass, water starts from the lower-right corner and runs horizontally to the left and vertically upward, again labeling its paths as background.
These two passes are called one cycle, and the algorithm cycles until there is no more space for the water to run. Most backgrounds can easily be separated within one cycle, but foreground and background are separated more accurately with more cycles.
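The two-pass cycle described above can be sketched as follows. The dissertation does not give implementation details, so the scan order and the stopping rule (repeat cycles until no pixel changes) are assumptions read off the description, not the author's code.

```python
def waterfront(edges):
    """Label background pixels by flooding 'water' that runs only
    horizontally and vertically: pass 1 flows right/down from the
    upper-left corner, pass 2 flows left/up from the lower-right
    corner; cycles repeat until stable.
    edges is a 0/1 edge map; returns 1 = background, 0 = foreground/edge."""
    h, w = len(edges), len(edges[0])
    bg = [[0] * w for _ in range(h)]
    changed = True
    while changed:                      # one iteration = one cycle
        changed = False
        # Pass 1: water runs right and down from the upper-left corner.
        for y in range(h):
            for x in range(w):
                if edges[y][x] or bg[y][x]:
                    continue
                if y == 0 or x == 0 or bg[y-1][x] or bg[y][x-1]:
                    bg[y][x] = 1
                    changed = True
        # Pass 2: water runs left and up from the lower-right corner.
        for y in range(h - 1, -1, -1):
            for x in range(w - 1, -1, -1):
                if edges[y][x] or bg[y][x]:
                    continue
                if y == h - 1 or x == w - 1 or bg[y+1][x] or bg[y][x+1]:
                    bg[y][x] = 1
                    changed = True
    return bg
```

For a closed edge contour (the "island"), the water labels everything outside the contour as background while the enclosed interior stays foreground, exactly the separation the segmentation step needs.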
The foreground area may consist of several labels, and each labeled area is used to build a 3D model independently; the overall model is the combination of all the separate ones. The 3D models are composed of vertices and faces, where each face is a triangle formed by three vertices. The same number of vertices is distributed along each horizontal line of the foreground area, and as that number increases, the model's resolution becomes finer.
A right-handed coordinate system is employed, with the horizontal lines of the image plane serving as the paths along which vertices are placed. Two closing faces at the top and bottom are obtained by adding an extra vertex at the center of the distributed vertices on the top and bottom lines.
The x and y values in the 3D model's local coordinates are determined by the position on the image plane, while the z values are computed from the relative position along each horizontal image line. The z mapping depends on the application.
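The per-line vertex distribution can be sketched as below. Since the dissertation leaves the z mapping application-dependent, a half-ellipse depth profile (bulging toward the middle of the line) is assumed here purely for illustration; the function name and parameters are likewise illustrative.

```python
import math

def row_vertices(y, x_left, x_right, n, depth=1.0):
    """Distribute n vertices (n >= 2) evenly along one horizontal image
    line of the foreground area. x and y come straight from the image
    plane; z is derived from the relative position on the line using an
    assumed half-ellipse profile (the dissertation notes the z mapping
    depends on the application)."""
    verts = []
    for i in range(n):
        t = i / (n - 1)                     # relative position, 0..1
        x = x_left + t * (x_right - x_left)  # image-plane x
        z = depth * math.sin(math.pi * t)    # depth peaks mid-line
        verts.append((x, float(y), z))
    return verts
```

Stacking these rows for every horizontal line of a label, then triangulating between adjacent rows, yields the vertex/face mesh described above.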
This 3D conversion algorithm is tested on fish images and human face images.
Fish images drawn on paper or a screen and photographs of human faces are used as the input source images.
The color images are converted to grayscale before applying edge detection and the Waterfront algorithm.
For the initial computation, the vertices distributed on the image plane are positioned symmetrically.
The initial vertices are then relocated differently for each application to improve appearance, and the original texture image is applied to the corresponding faces of the 3D model.
A human head model is not symmetric between the front of the face and the back of the head, whereas a fish model is symmetric between its front and reverse sides; this is why the initial vertices need to be relocated.
For the fish animation application, the vertices can also be relocated by a time-based sine function, so that the model swims with an S-shaped motion.
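The time-based sine relocation can be sketched as a traveling wave along the fish's body: the phase varies with position along the body axis and advances with time, bending the model into an S shape. Parameter names and values below are illustrative, not from the dissertation.

```python
import math

def swim_offset(vertices, t, amplitude=0.2, wavelength=4.0, speed=2.0):
    """Displace vertices sideways with a time-based sine wave so the
    fish model bends into an S shape as it swims. x is assumed to run
    along the body; the wave travels as t increases."""
    out = []
    for (x, y, z) in vertices:
        # Phase depends on body position x and time t -> traveling wave.
        dz = amplitude * math.sin(2 * math.pi * x / wavelength - speed * t)
        out.append((x, y, z + dz))
    return out
```

Calling this every frame with the current time and re-uploading the vertices produces the swimming animation; at any instant, vertices a half-wavelength apart bend in opposite directions, which is what gives the S-shaped pose.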
As an experiment on the usefulness of this algorithm, the 3D fish models produced by the proposed algorithm are animated on a large-scale display where multiple users can watch their 3D models swimming together on the screen.
The algorithm can be implemented in JavaScript for web browsers, in dedicated native code for each platform, or in a server-side programming language; the animation program for the large-scale display retrieves the 3D model data from a database server.
The algorithm was tested and evaluated on a Windows 7 (64-bit) system with an Intel(R) Core(TM) i7-2600 CPU and 6.0 GB of main memory.
The 3D animation program was implemented using the Unity 5.0.1f1 (64-bit) Editor.

Table of Contents

Ⅰ. Introduction
1.1 Need for the Study
1.2 Purpose of the Study
1.3 Research Method and Overview
1.4 Organization of the Dissertation
Ⅱ. Theoretical Background
2.1 3D Scanner
2.1.1 Contact 3D Scanners
2.1.2 Non-contact 3D Scanners
2.2 Stereo Image
2.2.1 Structure of the Stereo Camera
2.2.2 Stereo Image Matching Process and Computation
2.3 Shape From Shading (SFS)
2.4 3D Modeling
2.4.1 3D Modeling Tools
2.4.2 Polygon
2.4.3 Spline
2.5 2D-to-3D Converting
2.5.1 What Is 3D Converting Technology
2.5.2 2D-to-3D Converting Technology
(a) Manual Conversion Process
(b) Non-real-time Software Conversion
(c) Real-time Conversion
Ⅲ. 3D Modeling
3.1 Segmentation
3.1.1 Separating Image Foreground and Background
3.1.2 WATERFRONT Algorithm
3.2 Vertex Distribution
3.3 Face Construction
3.4 Model Completion
3.5 OBJ Model File
Ⅳ. Fish Modeling and 3D Animation
4.1 3D Model Conversion of Fish Images
4.2 3D Animation
4.3 Exhibition System for a Multi-user Environment
4.4 Experimental Results
Ⅴ. 3D Modeling of Face Images
5.1 Face Modeling
5.2 Front Vertex Relocation
5.3 Face Region Detection and Correction
5.4 Experimental Results and Analysis
Ⅵ. Conclusion
References
English Abstract
