Kaya University Bunsung Library

Avoiding Communication in First Order Methods for Optimization

Detailed Information

Material type: E-Book
Personal author: Devarakonda, Aditya.
Corporate author: University of California, Berkeley. Electrical Engineering & Computer Sciences.
Title/Author statement: Avoiding Communication in First Order Methods for Optimization.
Publication: [S.l.] : University of California, Berkeley, 2018
Publication: Ann Arbor : ProQuest Dissertations & Theses, 2018
Physical description: 125 p.
Holdings note: School code: 0028.
ISBN: 9780438325531
General note: Source: Dissertation Abstracts International, Volume: 80-01(E), Section: B.
Adviser: James W. Demmel.
Abstract: Machine learning has gained renewed interest in recent years due to advances in computer hardware (processing power and high-capacity storage) and the availability of large amounts of data which can be used to develop accurate, robust models. ...
In addition to hardware improvements, algorithm redesign is also an important direction to further reduce running times. On modern computer architectures, the cost of moving data (communication) from main memory to caches in a single machine is ...
Many problems in machine learning solve mathematical optimization problems which, in most non-linear and non-convex cases, require iterative methods. This thesis is focused on deriving communication-avoiding variants of the block coordinate descent ...
This thesis adapts well-known techniques from existing work on communication-avoiding (CA) Krylov and s-step Krylov methods. CA-Krylov methods unroll vector recurrences and rearrange the sequence of computation in a way that defers communication ...
We apply a similar recurrence unrolling technique to block coordinate descent in order to obtain communication-avoiding variants which solve the L2-regularized least-squares, L1-regularized least-squares, Support Vector Machines, and Kernel prob...
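The recurrence unrolling described above can be made concrete with a small sketch. The following single-node Python sketch is illustrative only: the function name, the random block selection, and all parameters are assumptions here, and the thesis's actual variants are distributed MPI implementations. It applies the idea to the L2-regularized least-squares case: s coordinates are chosen up front, one pass over the data forms the s x s Gram matrix and the residual inner products, and the s exact coordinate updates then run on those small local quantities alone.

    import numpy as np

    def ca_coordinate_descent(A, b, lam, s=8, n_batches=100, seed=0):
        """Toy communication-avoiding coordinate descent for ridge regression,
        min_x ||Ax - b||^2 + lam * ||x||^2 (a sketch, not the thesis code)."""
        rng = np.random.default_rng(seed)
        n, d = A.shape
        x = np.zeros(d)
        r = b.astype(float)           # residual r = b - A @ x
        for _ in range(n_batches):
            S = rng.choice(d, size=s, replace=False)
            AS = A[:, S]              # the one bulk read: the "communication" step
            G = AS.T @ AS             # s x s Gram matrix
            w = AS.T @ r              # w[k] = A_{j_k}^T r
            delta = np.zeros(s)
            for k in range(s):        # s exact coordinate updates, no access to A
                j = S[k]
                step = (w[k] - lam * x[j]) / (G[k, k] + lam)
                x[j] += step
                delta[k] = step
                w -= step * G[:, k]   # keep w consistent with the updated residual
            r -= AS @ delta           # one bulk residual update
        return x

For a batch of s coordinates this trades s separate passes over A for a single pass plus O(s^2) extra local arithmetic, which is the communication/computation trade-off the abstract refers to.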
Our experimental results illustrate that our new, communication-avoiding methods can obtain speedups of up to 6.1x on a Cray XC30 supercomputer using MPI for parallel processing. For CA-kernel methods we show modeled speedups of 26x, 120x, and 1...
Finally, we also present an adaptive batch size technique which reduces the latency cost of training convolutional neural networks (CNN). With this technique we have achieved speedups of up to 6.25x when training CNNs on up to 4 NVIDIA P100 GPUs ...
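The adaptive batch size idea in the final paragraph also admits a brief sketch. The schedule below is a hypothetical recipe: the milestones, growth factor, and base batch size are placeholders, not the thesis's settings. It grows the batch at fixed epoch milestones, so later epochs run fewer iterations, and in multi-GPU training each avoided iteration is one less latency-bound gradient synchronization.

    def batch_size_schedule(epoch, base_bs=128, milestones=(30, 60, 80), factor=2):
        """Grow the batch size at the epochs where a conventional recipe
        would decay the learning rate (illustrative values only)."""
        growth = sum(epoch >= m for m in milestones)
        return base_bs * factor ** growth

    # How the schedule shrinks the per-epoch iteration count on a
    # hypothetical 50,000-sample training set:
    n_samples = 50_000
    for epoch in (0, 30, 60, 80):
        bs = batch_size_schedule(epoch)
        print(f"epoch {epoch:>2}: batch size {bs:>4}, {n_samples // bs} iterations/epoch")

Since the number of gradient aggregations per epoch falls in proportion to the batch size, the latency term of the communication cost shrinks accordingly.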
Subject: Computer science.
Language: English
Source record: Dissertation Abstracts International, 80-01B(E).
Access link: http://www.riss.kr/pdu/ddodLink.do?id=T14999755

Holdings

No. | Registration No. | Call No. | Location | Status | Due Date | Reservation | Service | Media
1 | WE00025602 | DP 004 | Kaya University / E-book server (computer server) | Not for loan (separate collection) | - | - | Print | Image
