LDR | | 02894nmm a2200481 4500 |
001 | | 000000332398 |
005 | | 20240805170723 |
008 | | 181129s2018 |||||||||||||||||c||eng d |
020 | |
▼a 9780438144613 |
035 | |
▼a (MiAaPQ)AAI10785357 |
035 | |
▼a (MiAaPQ)umd:18835 |
040 | |
▼a MiAaPQ
▼c MiAaPQ
▼d 248032 |
082 | 0 |
▼a 004 |
100 | 1 |
▼a Kabkab, Maya. |
245 | 10 |
▼a Learning Along the Edge of Deep Neural Networks. |
260 | |
▼a [S.l.] :
▼b University of Maryland, College Park,
▼c 2018 |
260 | 1 |
▼a Ann Arbor :
▼b ProQuest Dissertations & Theses,
▼c 2018 |
300 | |
▼a 157 p. |
500 | |
▼a Source: Dissertation Abstracts International, Volume: 79-12(E), Section: B. |
500 | |
▼a Adviser: Rama Chellappa. |
502 | 1 |
▼a Thesis (Ph.D.)--University of Maryland, College Park, 2018. |
520 | |
▼a While Deep Neural Networks (DNNs) have recently achieved impressive results on many classification tasks, it is still unclear why they perform so well and how to properly design them. It has been observed that while training and testing deep net |
520 | |
▼a In this dissertation, we analyze each of these individual conditions to understand their effects on the performance of deep networks. Furthermore, we devise mitigation strategies when the ideal conditions may not be met. |
520 | |
▼a We first investigate the relationship between the performance of a convolutional neural network (CNN), its depth, and the size of its training set. Designing a CNN is a challenging task, and the most common approach to picking the right archite |
520 | |
▼a Next, we study the structure of the CNN layers by examining the convolutional, activation, and pooling layers, and drawing a parallel between this structure and another well-studied problem: Convolutional Sparse Coding (CSC). The sparse repr |
520 | |
▼a Then, we investigate three of the ideal conditions previously mentioned: the availability of vast amounts of noiseless and balanced training data. We overcome the difficulties resulting from deviating from this ideal scenario by modifying the tr |
520 | |
▼a Finally, we consider the case where testing (and potentially training) samples are lossy, leading to the well-known compressed sensing framework. We use Generative Adversarial Networks (GANs) to impose structure in compressed sensing problems, r |
590 | |
▼a School code: 0117. |
650 | 4 |
▼a Computer science. |
650 | 4 |
▼a Electrical engineering. |
650 | 4 |
▼a Artificial intelligence. |
690 | |
▼a 0984 |
690 | |
▼a 0544 |
690 | |
▼a 0800 |
710 | 20 |
▼a University of Maryland, College Park.
▼b Electrical Engineering. |
773 | 0 |
▼t Dissertation Abstracts International
▼g 79-12B(E). |
773 | |
▼t Dissertation Abstracts International |
790 | |
▼a 0117 |
791 | |
▼a Ph.D. |
792 | |
▼a 2018 |
793 | |
▼a English |
856 | 40 |
▼u http://www.riss.kr/pdu/ddodLink.do?id=T14997306
▼n KERIS |
980 | |
▼a 201812
▼f 2019 |
990 | |
▼a Administrator |