LPS enables rapid discovery of expertise and serves as a conduit between researchers, subject matter experts, investors, and innovators by providing multi-faceted search capability across numerous technology areas and across the National Laboratories.


Method for Designing Optimal Convolutional Neural Networks using Parallel Computers

Stage: Development

Deep learning (DL) is a subfield of machine learning inspired by the brain, in which systems learn from unstructured, unlabeled data drawn from a variety of sources. Training DL algorithms is computationally expensive, typically involves trial and error, and demands significant expert effort. ORNL's inventors therefore created a method that optimizes convolutional neural networks on parallel computers. Their method uses an evolutionary algorithm that generated and evaluated nearly 700,000 networks on 18,000 nodes of the Titan supercomputer in 24 hours on a neutron scattering data set. The best network the algorithm found achieved 76.07% accuracy, versus 68.71% for the seed network.
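The core idea above — evolving a population of candidate networks, keeping only those whose fitness beats the generation average, and refilling the population by crossover and mutation — can be sketched in miniature. This is an illustrative toy, not ORNL's implementation: the genome encoding (a list of layer widths), the placeholder fitness function, and the mutation rate are all assumptions; in the real system, fitness would be validation accuracy after training each encoded CNN.

```python
import random

# Hypothetical network "genome": a list of CNN layer widths (an assumption
# made for illustration; the real encoding covers many more hyperparameters).
def random_genome(rng):
    return [rng.choice([16, 32, 64, 128]) for _ in range(rng.randint(2, 5))]

def fitness(genome):
    # Placeholder fitness in [0, 1]. In practice this would be the validation
    # accuracy of the trained network on a labeled data set.
    return sum(genome) / (len(genome) * 128)

def evolve(pop, rng):
    scored = [(fitness(g), g) for g in pop]
    avg = sum(f for f, _ in scored) / len(scored)
    # Selection: remove any individual whose fitness is below the
    # average fitness for its generation.
    survivors = [g for f, g in scored if f >= avg]
    children = []
    while len(children) + len(survivors) < len(pop):
        if len(survivors) > 1:
            a, b = rng.sample(survivors, 2)
        else:
            a, b = survivors[0], survivors[0]
        cut = rng.randint(1, min(len(a), len(b)))
        child = a[:cut] + b[cut:]                 # crossover
        if rng.random() < 0.2:                    # mutation
            child[rng.randrange(len(child))] = rng.choice([16, 32, 64, 128])
        children.append(child)
    return survivors + children

rng = random.Random(0)
pop = [random_genome(rng) for _ in range(10)]
for _ in range(5):
    pop = evolve(pop, rng)
```

Because below-average individuals are culled each generation, the population drifts toward genomes that score well on whatever fitness function is supplied, which is how the same loop can target accuracy, energy use, or memory footprint.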



ORNL’s inventors created a system that optimizes convolutional neural networks using evolutionary algorithms, and their approach can be highly tuned to a specific labeled data set. The method uses a master/worker design: the master hosts the selection, crossover, and mutation processes, while the workers evaluate the fitness of individual networks. ORNL’s team has successfully applied the system to correctly estimating GENIE neutrino vectors and to optimizing performance on ORNL’s Titan supercomputer. In their Multi-node Evolutionary Neural Networks for Deep Learning (MENNDL) study, a run on 18,000 nodes demonstrated that automating neural network design with evolutionary algorithms can save months of effort and petaflops of computing expense.

In their algorithm, selection removes from the population any individual whose fitness falls below the average fitness for its generation. The remaining individuals then create the next generation through crossover and mutation. These mutations can be directed toward a specific goal (e.g., a targeted data set): the network can be optimized for the highest classification accuracy, the lowest energy consumption, minimal memory use, and so on.
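The master/worker split described above can be sketched as follows. This is a single-machine analogy, not the actual MENNDL code: on the real system the workers are separate supercomputer nodes, whereas here a thread pool stands in for them, and `evaluate` is a hypothetical placeholder for training a candidate network and reporting its validation accuracy.

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate(genome):
    # Worker role: stand-in for training the CNN encoded by `genome`
    # and returning its validation accuracy (placeholder formula).
    return sum(genome) / (len(genome) * 128)

# A tiny illustrative population of layer-width genomes.
population = [[16, 32], [64, 64, 128], [32, 128], [128, 128]]

# Master role: farm out fitness evaluations to the worker pool, then
# collect the scores to drive selection, crossover, and mutation.
with ThreadPoolExecutor(max_workers=2) as pool:
    scores = list(pool.map(evaluate, population))

best_score, best_genome = max(zip(scores, population))
```

Keeping evolution logic on the master and fitness evaluation on the workers matters because training a single network is by far the dominant cost, so the expensive step is the one that parallelizes across thousands of nodes.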

Applications and Industries

  • Meteorology/climate modeling
  • Banking/financial modeling
  • Drug development/manufacturing
  • Autonomous vehicle control/navigation


Benefits

  • Can reduce the financial costs of deep learning
  • Can accelerate neural network development
  • Can reduce the expertise required for parameter tuning
  • Can be applied in multiple areas, including sub-cellular structures, satellite images, and high-energy physics data sets