Scalable automated machine learning

Date

2020

Abstract

Automated machine learning holds great promise to revolutionize and democratize the field of artificial intelligence. Neural architecture search is one of the main components of AutoML and is usually very computationally expensive. AutoKeras is a framework that proposes a Bayesian optimization approach to neural architecture search in order to make it more efficient [8]. AutoKeras suffers from two major limitations: (i) the lack of support for parallel Bayesian optimization, which limits its applicability in distributed settings, and (ii) a slow-start issue, which limits its performance when time is limited. Solving these two problems would make AutoKeras more flexible and allow it to scale to the resources available to the user. We address both problems. First, we design and implement two algorithms for parallel Bayesian optimization. Then, we incorporate a greedy algorithm to tackle the slow-start problem. To evaluate these algorithms, we first run the AutoKeras Bayesian searcher as a baseline and compare its results to those of the algorithms we implemented. Running on a single Tesla T4 GPU for 12 hours, the Bayesian searcher reached an accuracy of 80.9%. Our first parallel algorithm, GP-UCB-PE, reached 81.85% on 4 GPUs in 12 hours. Our second parallel algorithm, GP-BUCB, reached 81.89% on GPUs in 12 hours. By incorporating the greedy approach, we achieved 86.78% after running for only 3 hours.
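The parallel searchers named above are based on batch Bayesian optimization with a Gaussian-process surrogate. The following sketch is illustrative only and is not the thesis implementation: it shows, using scikit-learn, how a GP-UCB-PE-style batch could be chosen, where the first point maximizes an upper confidence bound and the remaining points maximize the posterior variance after hallucinating the pending evaluations. The encoding of architectures as feature vectors (X_obs, X_cand), the beta value, and the batch size of 4 are assumptions made for illustration.

# Illustrative GP-UCB-PE batch selection (sketch, not the thesis code).
# Assumes candidate architectures are encoded as rows of X_cand and that
# y_obs holds validation accuracies of architectures evaluated so far.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def gp_ucb_pe_batch(X_obs, y_obs, X_cand, batch_size=4, beta=2.0):
    # Fit the GP surrogate on the architectures evaluated so far.
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X_obs, y_obs)

    # First batch point: maximize the upper confidence bound (UCB).
    mu, sigma = gp.predict(X_cand, return_std=True)
    batch = [int(np.argmax(mu + np.sqrt(beta) * sigma))]

    # Remaining points: pure exploration. Hallucinate each pending point
    # with its posterior mean (the mean value does not affect the variance)
    # and pick the candidate with the largest remaining posterior variance.
    X_fant, y_fant = list(X_obs), list(y_obs)
    for _ in range(batch_size - 1):
        x_pending = X_cand[batch[-1]]
        X_fant.append(x_pending)
        y_fant.append(gp.predict(x_pending.reshape(1, -1))[0])
        gp_fant = GaussianProcessRegressor(kernel=Matern(nu=2.5),
                                           normalize_y=True)
        gp_fant.fit(np.array(X_fant), np.array(y_fant))
        _, sigma_fant = gp_fant.predict(X_cand, return_std=True)
        sigma_fant[batch] = -np.inf  # do not select the same point twice
        batch.append(int(np.argmax(sigma_fant)))
    return batch

Each index in the returned batch could then be dispatched to a separate GPU for training, with the true validation accuracies replacing the hallucinated values once the evaluations finish.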

Description

Undergraduate thesis submitted to the Department of Computer Science, Ashesi University, in partial fulfillment of the requirements for the Bachelor of Science degree in Computer Science, May 2020

Keywords

Bayesian optimization, AutoKeras, automated machine learning
