New Framework Improves Performance of Deep Neural Networks

North Carolina State University researchers have developed a new framework for building deep neural networks via grammar-guided network generators. In experimental testing, the new networks, called AOGNets, outperformed existing state-of-the-art frameworks, including the widely used ResNet and DenseNet systems, in visual recognition tasks.

“AOGNets have better prediction accuracy than any of the networks we’ve compared them to,” says Tianfu Wu, an assistant professor of electrical and computer engineering at NC State and corresponding author of a paper on the work. “AOGNets are also more interpretable, meaning users can see how the system reaches its conclusions.”

The new framework uses a compositional grammar approach to system architecture that draws on best practices from previous network systems to more effectively extract useful information from raw data.

“We found that hierarchical and compositional grammar gave us a simple, elegant way to unify the approaches taken by previous system architectures, and to the best of our knowledge, it is the first work that makes use of grammar for network generation,” Wu says.
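To illustrate the general idea of grammar-guided network generation, the sketch below expands a toy AND-OR grammar into a sequence of layer operations. This is a simplified illustration, not the researchers' actual AOGNet code: the grammar symbols, node kinds, and layer names are all hypothetical. AND-nodes compose their children in sequence, OR-nodes select one alternative, and terminals stand for concrete layer operations; a depth limit keeps the recursive grammar from expanding forever.

```python
# Hypothetical sketch of grammar-guided architecture generation
# (illustrative only; not the AOGNet implementation described above).

def expand(symbol, grammar, depth=2):
    """Recursively expand a grammar symbol into a flat list of layer ops.

    AND-nodes concatenate the expansions of all children; OR-nodes pick
    the recursive alternative while depth remains, then fall back to the
    last (base-case) alternative so expansion terminates.
    """
    kind, children = grammar[symbol]
    if kind == "terminal":
        return [children]                      # children is a layer-op name
    if kind == "and":                          # compose all children in order
        ops = []
        for child in children:
            ops.extend(expand(child, grammar, depth))
        return ops
    # "or": take the first (recursive) branch while depth remains,
    # otherwise the last (terminating) branch
    branch = children[0] if depth > 0 else children[-1]
    return expand(branch, grammar, depth - 1)

# A toy grammar: a Block is either a Pair (conv followed by another Block)
# or a single conv terminal. Names are made up for illustration.
toy_grammar = {
    "Block": ("or", ["Pair", "Conv"]),
    "Pair":  ("and", ["Conv", "Block"]),
    "Conv":  ("terminal", "conv3x3"),
}

print(expand("Block", toy_grammar))  # → ['conv3x3', 'conv3x3', 'conv3x3']
```

Different depth limits (or different OR-node choices) yield different architectures from the same grammar, which is the sense in which a grammar can act as a network generator.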

Read more at North Carolina State University