
Weight Agnostic Neural Networks

Abstract: Not all neural network architectures are created equal, some perform much better than others for certain tasks. But how important are the weight parameters of a neural network compared to its architecture? In this work, we question to what extent neural network architectures alone, without learning any weight parameters, can encode solutions for a given task. We propose a search method for neural network architectures that can already perform a task without any explicit weight training. To evaluate these networks, we populate the connections with a single shared weight parameter sampled from a uniform random distribution, and measure the expected performance. We demonstrate that our method can find minimal neural network architectures that can perform several reinforcement learning tasks without weight training. On a supervised learning domain, we find network architectures that achieve much higher than chance accuracy on MNIST using random weights. Interactive version of this paper at https://weightagnostic.github.io/
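The evaluation idea in the abstract — fill every connection of a fixed architecture with one shared weight drawn from a uniform distribution, then average performance over several draws — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the tiny topology (`EDGES`), the tanh activation, the weight range [-2, 2], and the helper names are all assumptions made for the example.

```python
import math
import random

# Hypothetical fixed topology: edges (src, dst), topologically ordered.
# Nodes 0 and 1 are inputs, node 2 is hidden, node 3 is the output.
# Every edge uses the same shared weight, as described in the abstract.
EDGES = [(0, 2), (1, 2), (2, 3), (0, 3)]
N_NODES = 4

def forward(x1, x2, shared_w):
    """Propagate two inputs through the fixed net; all edges use shared_w."""
    acts = [x1, x2, 0.0, 0.0]       # node activations
    summed = [0.0] * N_NODES        # weighted input sums per node
    for src, dst in EDGES:
        summed[dst] += shared_w * acts[src]
        acts[dst] = math.tanh(summed[dst])  # assumed activation function
    return acts[3]

def expected_score(task_fn, n_samples=20, seed=0):
    """Average task performance over shared weights drawn uniformly
    from [-2, 2] (range is an assumption for this sketch)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        w = rng.uniform(-2.0, 2.0)
        total += task_fn(lambda a, b: forward(a, b, w))
    return total / n_samples
```

An architecture search would then rank candidate topologies by `expected_score`, favoring those that perform well regardless of the particular weight sampled.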

Document Type: Preprint
Authors: Adam Gaier, David Ha
ArXiv Id: http://arxiv.org/abs/1906.04358
Date of first publication: 2019/06/11
Submission status: To appear at NeurIPS 2019, selected for a spotlight presentation.
Departments, institutes and facilities: Fachbereich Informatik
Dewey Decimal Classification (DDC): 0 Computer science, information & general works / 00 Computer science, knowledge & systems / 004 Data processing; computer science
Entry in this database: 2019/06/18
Licence: Creative Commons - CC BY - Attribution 4.0 International