Thesis Information

Material type: Degree thesis (학위논문)
Author: 공진혁 (경희대학교, 경희대학교 대학원)
Advisor: 이승룡
Year of publication: 2017
Copyright: Theses from 경희대학교 are protected by copyright.

Abstract · Keywords

As the healthcare industry has evolved, research on providing personalized services has become more active. Activity recognition has long been studied as a representative method of recognizing a user's daily life patterns. In addition, with the development of wearable devices such as smartwatches and smartphones, activity recognition is now studied using a variety of sensors.
Conventional multi-modal sensor-based activity recognition systems pre-define the sensor locations and activities and recognize them using all sensors. Because such a system uses all of the sensor data, if even one sensor is not worn, the system either fails to operate or its accuracy drops. It is also difficult to build an optimized system that uses different models according to the number of attached sensors, because all activities to be recognized are defined in advance and no optimized modeling is performed for the sensor attachment positions.
In this paper, we analyze which activities are most suitable for each sensor attachment location and propose a method to recognize activities through individual activity modeling. We compute feature importance with a guided random forest algorithm and use it to calculate how much each sensor contributes to each activity. We then generate an individual activity model using only the appropriate sensor attachment locations and features for each activity. By using only the optimized features for each sensor and activity, optimized results can be derived for each combination of activity and sensors.
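Below is a minimal sketch of the sensor-contribution analysis and per-activity modeling described above. It uses a standard random forest's feature importances (scikit-learn) as a stand-in for the guided random forest used in the thesis, and assumes a hypothetical feature_sensors mapping from each feature to the sensor location it was extracted from; the contribution threshold is illustrative only.

from sklearn.ensemble import RandomForestClassifier

def sensor_contribution(X, y, feature_sensors, activities):
    """Estimate, for each activity, how much each sensor location contributes.

    X: (n_samples, n_features) NumPy feature matrix
    y: NumPy array of activity labels, one per sample
    feature_sensors: feature_sensors[i] is the sensor location that produced feature i
    """
    contributions = {}
    for activity in activities:
        # One-vs-rest model for this activity; a plain random forest stands in
        # for the guided random forest used in the thesis.
        rf = RandomForestClassifier(n_estimators=100, random_state=0)
        rf.fit(X, (y == activity).astype(int))
        # Sum feature importances per sensor attachment location.
        per_sensor = {}
        for idx, sensor in enumerate(feature_sensors):
            per_sensor[sensor] = per_sensor.get(sensor, 0.0) + rf.feature_importances_[idx]
        contributions[activity] = per_sensor
    return contributions

def individual_activity_models(X, y, feature_sensors, activities, threshold=0.2):
    """Train one model per activity using only features from its high-contribution sensors."""
    contributions = sensor_contribution(X, y, feature_sensors, activities)
    models = {}
    for activity in activities:
        keep = {s for s, c in contributions[activity].items() if c >= threshold}
        cols = [i for i, s in enumerate(feature_sensors) if s in keep]
        model = RandomForestClassifier(n_estimators=100, random_state=0)
        model.fit(X[:, cols], (y == activity).astype(int))
        models[activity] = (cols, model)
    return models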
Experimental results show that the recognition accuracy is about 7% higher than that of existing activity recognition techniques.

Table of Contents

Abstract 1
1. Introduction 1
2. Related Work 3
2.1 Wearable acceleration sensor based user activity recognition 3
2.2 A comparison of single sensor-based activity recognition and multimodal sensor-based activity recognition 5
2.3 Co-recognition of sensor location and activity 7
2.4 Analysis of activity recognition accuracy by sensor position and sensor combination 9
3. Individual Activity Recognition Modeling Methodology Considering Attachment Location of Sensor Based on Multimodal Sensors 12
3.1 Significant activity analysis based on sensor attachment location 13
3.1.1 Analysis of sensor contribution by each activity 15
3.2 Methodology of individual activity recognition modeling 18
3.2.1 Modeling for each activity 19
3.2.2 Selection of final result 21
4. Experiment and result 23
4.1 Experiment environment 23
4.2 Feature extraction 24
4.3 Experiment Scenario 25
4.4 Result of discrimination of suitable sensor combination according to activity 26
4.5 Experiment result 27
5. Conclusion and Future Work 29
Reference 30
Appendix 31
