Material Type | E-Book |
---|---|
Personal Author | Papernot, Nicolas. |
Corporate Author | The Pennsylvania State University. Computer Science and Engineering. |
Title/Author | Characterizing the Limits and Defenses of Machine Learning in Adversarial Settings. |
Publication | [S.l.] : The Pennsylvania State University, 2018 |
Publication | Ann Arbor : ProQuest Dissertations & Theses, 2018 |
Physical Description | 178 p. |
Holdings Note | School code: 0176. |
ISBN | 9780438135536 |
General Note | Source: Dissertation Abstracts International, Volume: 79-12(E), Section: B. |
Abstract | Advances in machine learning (ML) in recent years have enabled a dizzying array of applications such as object recognition, autonomous systems, security diagnostics, and playing the game of Go. Machine learning is not only a new paradigm for building… |
Abstract | In this thesis, I focus my study on the integrity of ML models. Integrity refers here to the faithfulness of model predictions with respect to an expected outcome. This property is at the core of traditional machine learning evaluation, as demonstrated… |
Abstract | A large fraction of ML techniques were designed for benign execution environments. Yet, the presence of adversaries may invalidate some of these underlying assumptions by forcing a mismatch between the distributions on which the model is trained… |
Abstract | I explore the space of attacks against ML integrity at test time. Given full or limited access to a trained model, I devise strategies that modify the test data to create a worst-case drift between the training and test distributions (a toy illustration of such a test-time perturbation follows this record). The implications… |
Abstract | Hence, my efforts to increase the robustness of models to these adversarial manipulations strive to decrease the confidence of predictions made far from the training distribution. Informed by my progress on attacks operating in the black-box threat model… |
Abstract | I then describe recent defensive efforts addressing these shortcomings. To this end, I introduce the Deep k-Nearest Neighbors classifier, which augments deep neural networks with an integrity check at test time (a toy illustration appears at the end of this record). The approach compares internal representations… |
Abstract | This research calls for future efforts to investigate the robustness of individual layers of deep neural networks rather than treating the model as a black-box. This aligns well with the modular nature of deep neural networks, which orchestrate… |
Subject | Computer science. |
Language | English |
Source Record | Dissertation Abstracts International, 79-12B(E). |
Loan Link | http://www.riss.kr/pdu/ddodLink.do?id=T15000703 |
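
The abstract above mentions test-time attacks that modify inputs so as to create a worst-case drift between the training and test distributions. The following is a minimal sketch of that general idea in the white-box setting, using a fast-gradient-sign style perturbation against a toy logistic-regression model; the model weights, the data, and the `fgsm` helper are illustrative assumptions, not the attack implemented in the thesis.

```python
# Minimal sketch (not the thesis' exact method) of a white-box, test-time
# evasion attack: a fast-gradient-sign style perturbation applied to a toy
# logistic-regression classifier. All parameters and data are made up.
import numpy as np

rng = np.random.default_rng(0)

# Toy linear model: p(y=1|x) = sigmoid(w.x + b), with arbitrary weights.
w = rng.normal(size=10)
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_loss_wrt_x(x, y):
    """Gradient of the cross-entropy loss with respect to the input x."""
    p = sigmoid(w @ x + b)
    return (p - y) * w  # d/dx of -[y log p + (1 - y) log(1 - p)]

def fgsm(x, y, eps=0.25):
    """Shift x in the direction that most increases the loss,
    subject to an L-infinity budget eps."""
    return x + eps * np.sign(grad_loss_wrt_x(x, y))

x = rng.normal(size=10)       # a clean test input
y = 1                         # its true label
x_adv = fgsm(x, y)            # adversarially shifted test input

print("clean score:", sigmoid(w @ x + b))
print("adv   score:", sigmoid(w @ x_adv + b))
```

The sign of the input gradient gives, per feature, the direction that most increases the loss under an L-infinity budget, which is why even a small `eps` can change the model's score on an otherwise correctly classified point.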
No. | Registration No. | Call No. | Location | Status | Due Date | Reservation | Service | Media Info |
---|---|---|---|---|---|---|---|---|
1 | WE00028633 | 004 | Kaya University / E-book server (computer server) | Available | | | | |
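
The abstract also introduces the Deep k-Nearest Neighbors (DkNN) classifier, which checks a prediction at test time by comparing the test point's layer-wise internal representations with those of training points. Below is a minimal sketch of that idea; the two-layer toy network, the synthetic training data, and the `dknn_nonconformity` helper are assumptions made for illustration and do not reproduce the thesis' implementation.

```python
# Minimal sketch of the DkNN idea: for each layer, find the k training points
# whose representations are nearest to the test point's, and count how many of
# them disagree with a candidate label. High disagreement across layers means
# the prediction has little support from the training distribution.
import numpy as np

rng = np.random.default_rng(1)

# Toy feed-forward network: two hidden layers with fixed random weights.
W1 = rng.normal(size=(10, 16))
W2 = rng.normal(size=(16, 8))

def layer_reps(x):
    """Return the internal representation produced by each layer."""
    h1 = np.tanh(x @ W1)
    h2 = np.tanh(h1 @ W2)
    return [h1, h2]

# Toy training set with binary labels.
X_train = rng.normal(size=(200, 10))
y_train = (X_train.sum(axis=1) > 0).astype(int)
train_reps = [np.stack(r) for r in zip(*(layer_reps(x) for x in X_train))]

def dknn_nonconformity(x, candidate_label, k=10):
    """Sum, over all layers, the number of the k nearest training neighbors
    whose label differs from the candidate label."""
    disagreements = 0
    for reps, layer in zip(train_reps, layer_reps(x)):
        dists = np.linalg.norm(reps - layer, axis=1)
        neighbors = np.argsort(dists)[:k]
        disagreements += np.sum(y_train[neighbors] != candidate_label)
    return disagreements

x_test = rng.normal(size=10)
scores = {label: dknn_nonconformity(x_test, label) for label in (0, 1)}
print("nonconformity per candidate label:", scores)
print("DkNN prediction:", min(scores, key=scores.get))
```

In this sketch the label with the lowest layer-wise disagreement is returned; a test point whose neighbors disagree at many layers for every candidate label would receive low credibility, which is the integrity check the abstract refers to.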