Thesis Information

Material type: Thesis (degree thesis)
Author: 이예나 (Seoul Women's University, Graduate School)
Advisor: 김명주
Year of publication: 2019
Copyright: Theses from Seoul Women's University are protected by copyright.


Abstract & Keywords

With the development of artificial intelligence (AI), AI-based decision-making systems are used in many areas of society. Although AI technology is applied to a wide range of services, most AI models are black boxes, so we cannot know how or why a given result was reached. Because AI learns from historical data, biased data can produce biased results. Reliable AI therefore requires fairness, accountability, and transparency. In this paper, we propose a methodology to mitigate bias when using AI. The proposed method builds on previous studies that define fairness, distinguishing statistical approaches from approaches structured around the machine-learning pipeline. First, at the pre-processing stage, the AI learns separately from the biased data and from the statistically treated data. Then, at the in-processing stage, the decision threshold obtained from the biased data is compared with the threshold obtained from the mitigated data. The steps of treating the data and comparing the results are repeated until the difference between the biased data and the previously treated data is minimized. Finally, at the post-processing stage, characteristics that are not relevant to the decision result are removed by counterfactual analysis. The methodology proposed in this study mitigates bias by repeatedly comparing the biased dataset with the de-biased dataset. On this basis, it can contribute to the development of AI technology by mitigating the bias that can arise when AI is used.
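The iterative compare-and-adjust loop the abstract describes can be illustrated with a minimal sketch. This is not the thesis's actual implementation: it assumes a simple score-threshold classifier, synthetic data, and demographic parity as the fairness metric (the thesis does not specify these here). The loop repeatedly adjusts the decision threshold for the advantaged group and keeps the setting that minimizes the gap to the reference group's outcome rate, mirroring the "treat, compare, repeat until the difference is minimized" step.

```python
import random

random.seed(0)

# Synthetic "historical" data: group 1 receives systematically higher
# scores, so a single shared decision threshold yields a biased outcome.
groups = [random.randint(0, 1) for _ in range(2000)]
data = [(g, random.gauss(1.0 if g == 1 else 0.0, 1.0)) for g in groups]

def positive_rate(records, group, threshold):
    """Fraction of the given group whose score clears the threshold."""
    scores = [s for g, s in records if g == group]
    return sum(s >= threshold for s in scores) / len(scores)

def parity_gap(records, t0, t1):
    """Demographic-parity gap: |P(positive | group 0) - P(positive | group 1)|."""
    return abs(positive_rate(records, 0, t0) - positive_rate(records, 1, t1))

# Baseline: one shared threshold learned from the biased data as-is.
baseline_gap = parity_gap(data, 0.5, 0.5)

# Mitigation loop: repeatedly raise the advantaged group's threshold,
# compare the resulting gap with the best seen so far, and keep the
# setting that minimizes the difference between the two groups.
best_t1, best_gap = 0.5, baseline_gap
t1 = 0.5
for _ in range(200):
    t1 += 0.01
    gap = parity_gap(data, 0.5, t1)
    if gap < best_gap:
        best_t1, best_gap = t1, gap

print(f"baseline gap={baseline_gap:.3f}, mitigated gap={best_gap:.3f}")
```

Toolkits the thesis surveys, such as IBM AIF360 and Microsoft FairLearn, implement more principled versions of this idea (reweighing at pre-processing, threshold optimization at post-processing) with real fairness metrics.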

Table of Contents

Chapter 1. Introduction
1.1. Research background and necessity
1.2. Research content
1.3. Organization of the thesis
Chapter 2. Related Work
2.1. Applications of artificial intelligence
2.2. Definitions of bias
2.3. Bias analysis methodologies
Chapter 3. In-Depth Case Study
3.1. ProPublica's reanalysis of COMPAS
3.2. Survey of current tools
3.2.1. IBM AIF360
3.2.2. Google What-If Tool
3.2.3. Microsoft FairLearn
3.2.4. Themis-ML
3.2.5. Results of the tool survey
Chapter 4. Main Discussion
4.1. Research directions for using AI with mitigated bias
4.2. Bias mitigation methodology
Chapter 5. Experimental Results
Chapter 6. Conclusion
References
Appendix
ABSTRACT
