Thesis Information

Material type: Thesis (degree dissertation)
Author: 강동훈 (Korea University, Korea University Graduate School)
Advisor: 박신석
Year of publication: 2017
Copyright: Theses from Korea University are protected by copyright.

Abstract · Keywords

In the past, automation technology was developed mainly to replace humans in simple repetitive work; today, however, automation extends into many aspects of daily life. One prominent field is the development of autonomous vehicles. As autonomous driving becomes realistic, many automobile manufacturers and component suppliers are pursuing research in this area, and the National Highway Traffic Safety Administration (NHTSA) of the U.S. has published a guideline that classifies autonomous driving systems into five categories. Most companies aim to develop the final stage described in the guideline, 'completely autonomous driving'. ADAS (Advanced Driver Assistance Systems), which has recently been applied in production vehicles, supports the driver in lane keeping and in controlling speed and direction within a single lane under limited road conditions. Obstacle-avoidance technologies have also been developed, but they concentrate on simple avoidance maneuvers and do not consider how the actual vehicle should be controlled in real situations, so sudden changes in steering and speed can make drivers feel unsafe. To develop a 'completely autonomous' vehicle that perceives its surroundings and operates by itself, the vehicle's capability should be improved so that it behaves the way a human driver does. Accordingly, this thesis establishes an algorithm, grounded in vehicle dynamics, that allows autonomous vehicles to behave in a human-friendly manner through reinforcement learning based on Q-learning, a type of machine learning. The experiments assume that the vehicle's position can be located accurately on a 2D map using an on-board GPS device and that the vehicle knows the types and locations of obstacles in advance. The obstacle-avoidance reinforcement learning was carried out in five simulations: four arbitrarily chosen obstacle situations and a slalom test environment based on ISO 3888-2. The reward rule was set so that the vehicle learns by itself through repeated trials, giving the experiment an environment similar to the one in which humans drive. A driving simulator was used to verify the reinforcement-learning results: it was programmed with the same obstacle environments to record how a human steers and operates the pedals, and these data were then compared with the reinforcement-learning results. The ultimate goal of this study is to enable autonomous vehicles to avoid obstacles in a human-friendly way when obstacles appear in their path, using control methods learned beforehand in various conditions through reinforcement learning.
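To make the Q-learning formulation mentioned in the abstract concrete, the following is a minimal sketch of tabular Q-learning for obstacle avoidance on a discretized 2D map. It is not the thesis's actual implementation: the grid size, the three-action steering set, the obstacle layout, and the reward values are illustrative assumptions only, and the vehicle-dynamics state and the CarSim/Simulink training loop used in the thesis are not reproduced here.

```python
import random
from collections import defaultdict

# Minimal tabular Q-learning sketch for obstacle avoidance on a discretized 2D map.
# State = (x, y) cell of the vehicle; actions = steer left / keep lane / steer right.
# Grid size, obstacle layout, and reward values are illustrative assumptions only.

ACTIONS = [-1, 0, +1]                     # lateral cell shift per step
GRID_W, GRID_H = 7, 30                    # assumed road width and length in cells
OBSTACLES = {(3, 10), (3, 11), (2, 20)}   # assumed obstacle cells
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1    # learning rate, discount, exploration

Q = defaultdict(float)                    # Q[(state, action)] -> value

def step(state, action):
    """Advance one cell forward, shift laterally, and return (next_state, reward, done)."""
    x, y = state
    nx = min(max(x + action, 0), GRID_W - 1)
    ny = y + 1
    if (nx, ny) in OBSTACLES:
        return (nx, ny), -100.0, True     # collision: large penalty, episode ends
    if ny >= GRID_H - 1:
        return (nx, ny), +100.0, True     # reached the end of the course
    # small penalties discourage abrupt steering and drifting from the lane center
    reward = -1.0 - 0.5 * abs(action) - 0.2 * abs(nx - GRID_W // 2)
    return (nx, ny), reward, False

def choose_action(state):
    """Epsilon-greedy action selection over the tabular Q-values."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

for episode in range(5000):
    state, done = (GRID_W // 2, 0), False
    while not done:
        action = choose_action(state)
        next_state, reward, done = step(state, action)
        # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        best_next = 0.0 if done else max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state
```

The penalty on large steering actions loosely mirrors the abstract's concern about sudden changes in steering and speed feeling unsafe; the reward rule actually used in the thesis, and its comparison against human driving-simulator data, are described in Chapters 4 and 5.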

Table of Contents

Abstract
Table of Contents
List of Figures
List of Tables
Chapter 1. Introduction
1.1 Background of the Study
1.2 Purpose of the Study
1.3 Organization of the Thesis
Chapter 2. Autonomous Driving Control Algorithms
2.1 Existing Path Generation Algorithms
2.2 Machine Learning
2.3 Reinforcement Learning
2.4 Q-Learning
Chapter 3. Vehicle Model
3.1 Bicycle Model (2-DOF)
3.2 Bicycle Model (3-DOF)
Chapter 4. Reinforcement Learning in Obstacle Environments
4.1 Obstacle Environments and Assumptions
4.2 CARSIM-SIMULINK Integration
4.3 Reinforcement Learning Results
Chapter 5. Validation of Reinforcement Learning Results
5.1 Validation Method and Equipment
5.2 Comparison with Driving Simulator Results
Chapter 6. Conclusion
References
