

Poster

Insights from the Use of Previously Unseen Neural Architecture Search Datasets

Rob Geada · David Towers · Matthew Forshaw · Amir Atapour-Abarghouei · Stephen McGough

Arch 4A-E Poster #287
Fri 21 Jun 10:30 a.m. PDT — noon PDT

Abstract:

The boundless number of neural networks that could be used to solve a problem -- each with different performance -- leads to a situation where a Deep Learning (DL) expert is required to identify the best network. This runs counter to the hope that DL will remove the need for experts. Neural Architecture Search (NAS) offers a solution by automatically identifying the best architecture. However, to date, NAS work has focused on a small set of datasets which we argue are not representative of real-world problems. We introduce eight new datasets created for a series of NAS Challenges (more details will be provided post-review to maintain anonymity): AddNIST, Language, MultNIST, CIFARTile, Gutenberg, Isabella, GeoClassing, and Chesseract. The datasets and the challenges they were part of were developed to direct attention to issues in NAS development and to encourage authors to consider how their models will perform on datasets unknown to them at development time. We present experiments using standard Deep Learning methods, as well as the best results from the challenge participants.
