Neural Architecture Search (NAS) is a method for automatically designing deep learning models that achieve top performance on tasks such as data classification and data retrieval by using defined search spaces and search strategies. These strategies have demonstrated improvements over ad hoc deep neural architectures on a variety of tasks, but they present unique challenges related to bias in search spaces, the intensive training requirements of various search strategies, and inefficient model performance evaluation. Until recently, these challenges were the primary focus of NAS research, which concentrated on diversifying search spaces, improving search strategies, and evaluating candidate models faster. However, artificial intelligence (AI) on the edge has emerged as a significant area of research, and producing models that achieve top performance on small devices with limited resources has become a priority. A limitation of applying NAS to edge devices is that it has historically produced superior deep neural networks that are increasingly difficult to port to embedded hardware due to memory limitations, computational bottlenecks, latency requirements, and power restrictions. In recent years, researchers have begun to address these limitations and to develop methods for porting deep neural networks to embedded devices, but few methods efficiently incorporate the target device itself in the training process. In this paper, we survey the methods actively being explored and discuss their limitations. We also present evidence in support of genetic algorithms as a method for hardware-aware NAS that efficiently considers hardware, power, and latency requirements during training.
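The hardware-aware genetic search advocated above can be illustrated with a minimal sketch. This is not the paper's implementation: the toy search space (per-layer widths), the proxy accuracy and latency scores, the latency budget, and the penalty weight are all assumptions chosen for demonstration; a real system would evaluate trained candidates and measure latency and power on the target device.

```python
import random

# Hypothetical search space: each gene selects one layer width.
WIDTHS = [16, 32, 64, 128]
NUM_LAYERS = 4

def accuracy_proxy(genome):
    # Stand-in for validation accuracy: wider layers score higher.
    return sum(WIDTHS.index(w) for w in genome) / (NUM_LAYERS * (len(WIDTHS) - 1))

def latency_proxy(genome):
    # Stand-in for on-device latency: proportional to total width.
    return sum(genome) / (NUM_LAYERS * max(WIDTHS))

def fitness(genome, latency_budget=0.6, penalty=2.0):
    # Hardware-aware fitness: reward accuracy, penalize budget violations.
    score = accuracy_proxy(genome)
    lat = latency_proxy(genome)
    if lat > latency_budget:
        score -= penalty * (lat - latency_budget)
    return score

def mutate(genome, rate=0.3):
    return [random.choice(WIDTHS) if random.random() < rate else w for w in genome]

def crossover(a, b):
    cut = random.randrange(1, NUM_LAYERS)
    return a[:cut] + b[cut:]

def evolve(pop_size=20, generations=30, seed=0):
    random.seed(seed)
    pop = [[random.choice(WIDTHS) for _ in range(NUM_LAYERS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitist selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Because the latency penalty is part of the fitness function, architectures that exceed the device budget are driven out of the population during the search rather than filtered after training, which is the efficiency argument made above.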