Insights into Top Paper Nominee, “Integral Neural Networks”
A Q&A with the Authors
Kirill Solodskikh, Azim Kurbanov, Ruslan Aydarkhanov, Irina Zhelavskaya, Yury Parfenov, Dehua Song, Stamatios Lefkimmiatis
Paper Presentation: Thursday, 22 June, 3:50 p.m. PDT, East Exhibit Halls A-B
The CVPR 2023 paper, “Integral Neural Networks,” demonstrates that integral neural networks (INNs) perform comparably to their analogous deep neural networks, even with a high rate (up to 30%) of structural pruning. The team is distilling this work into a full-stack AI platform. The following Q&A takes a closer look at this research and its applications.
CVPR: Will you please share a little more about your work and results? How is it different from the standard approaches to date?
Deep Neural Networks (DNNs) have demonstrated superior performance across various tasks, encompassing computer vision, natural language processing, and audio processing. It is a well-established mathematical principle that appropriately constructed DNNs can model any dependencies within data. DNNs are constructed using fundamental units known as layers. These layers define basic discrete transformations of input data using a set of parameters represented as tensors (multi-dimensional numerical tables).
In our work, we propose a novel continuous or analog formulation of deep neural networks, which we refer to as Integral Neural Networks (INNs). INNs utilize integral operators (integrals) to implement transformations of analog input data, such as sounds, images, and videos, using a set of multivariable functions (continuous or analog parameters). Integral operators are prevalent in various fields, from solving differential equations to measuring particle systems in quantum mechanics.
Once trained, an INN can be evaluated using fewer integration points. This feature allows for the pruning of neural networks to a size suitable for edge devices, without the need for fine-tuning. In essence, this means that INNs can be resized just like dynamically zooming in or out of an image on a smartphone screen. Interestingly, the technology we employ for this task is similar to that used for image resizing. One notable advantage of this method is that these operations are already highly optimized in Graphics Processing Units (GPUs).
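The resizing analogy can be made concrete with a small sketch. Here a trained layer's weight matrix is resampled along its output-channel axis by linear interpolation, exactly the operation used to rescale an image. This is a minimal NumPy illustration of the idea, not the authors' implementation; `resize_channels` is a hypothetical helper.

```python
import numpy as np

def resize_channels(weight, new_size):
    """Resample a weight matrix along its output-channel axis by linear
    interpolation -- the same operation used to resize an image.
    `weight` has shape (out_channels, in_features). Hypothetical helper."""
    old_size = weight.shape[0]
    # Treat channel indices as samples of a continuous axis on [0, 1].
    old_grid = np.linspace(0.0, 1.0, old_size)
    new_grid = np.linspace(0.0, 1.0, new_size)
    # Interpolate each input-feature column independently onto the new grid.
    return np.stack(
        [np.interp(new_grid, old_grid, weight[:, j])
         for j in range(weight.shape[1])],
        axis=1,
    )

w = np.random.randn(64, 32)       # a trained layer's weights
w_small = resize_channels(w, 48)  # "prune" to 48 channels, no fine-tuning
print(w_small.shape)              # (48, 32)
```

Because the interpolation is just a gather plus a weighted sum, it maps directly onto the highly optimized resize kernels already present on GPUs, as noted above.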
According to multiple experiments conducted by our team, INNs have consistently demonstrated performance comparable to the analogous DNNs, even with a high rate (up to 30%) of structural pruning, all without fine-tuning. In stark contrast, conventional pruning methods suffer a staggering 65% drop in accuracy under the same conditions. The difference is indeed substantial!
CVPR: How did your model outperform other options? What was the key factor in these results?
INNs vs. DNNs
An INN can be regarded as a continuous, or what we might call an analog, generalization of a DNN. Given the same number of parameters, an INN delivers performance on par with a DNN. However, training with varying discretization promotes smooth representation along continuous axes, thus providing more structure and regularization to the weights. This smooth structure plays a critical role in addressing a host of problems, such as the efficient pruning of neural networks.
Additionally, any pre-trained DNN can be converted to an INN without any quality degradation. This is made possible through our channel permutation algorithm, which leverages solvers for the well-known combinatorial Traveling Salesman Problem (TSP).
This means that we can transform a DNN into an INN and vice versa. This opens up the possibility of running INNs on existing inference frameworks and of converting all existing pre-trained DNNs into pre-trained INNs.
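The permutation idea can be illustrated with a toy sketch. A greedy nearest-neighbour pass, used here as a crude stand-in for the actual TSP formulation described above, reorders a layer's output channels so that neighbouring channels are similar, making the discrete weights look like samples of a smooth function. `smooth_permutation` is a hypothetical helper, not the authors' algorithm.

```python
import numpy as np

def smooth_permutation(weight):
    """Greedy nearest-neighbour heuristic (a toy stand-in for a TSP solver):
    reorder output channels so that adjacent channels are close in L2 norm,
    so the weights resemble samples of a smooth function of the channel axis.
    `weight` has shape (out_channels, in_features). Hypothetical helper."""
    n = weight.shape[0]
    remaining = set(range(1, n))
    order = [0]                      # start the "tour" at channel 0
    while remaining:
        last = weight[order[-1]]
        # Visit next the unvisited channel closest to the current one.
        nxt = min(remaining,
                  key=lambda i: float(np.linalg.norm(weight[i] - last)))
        order.append(nxt)
        remaining.remove(nxt)
    return order

w = np.random.randn(8, 4)
perm = smooth_permutation(w)
w_smooth = w[perm]  # applying the matching inverse permutation to the next
                    # layer's input channels leaves the network's output unchanged
```

Because channel permutations (applied consistently across consecutive layers) do not change the function a network computes, this reordering costs no accuracy, which is what makes lossless DNN-to-INN conversion possible.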
CVPR: So, what’s next? What do you see as the future of your research?
INNs open up significant potential for further research and better understanding of fundamental questions regarding DNNs, such as model capacity. Our future plans include utilizing differential analysis to explore the smallest configurations of DNNs suitable for specific tasks.
Additionally, we aim to investigate the application of INNs to other architectures such as GANs, and to examine adversarial attacks for INNs. Preliminary analysis suggests that INNs could provide more stability during GAN training and demonstrate greater robustness against adversarial attacks.
Another exciting direction is the search for physical (analog) systems that could compute INNs entirely in an analog manner, which would allow for speed-of-light inference in neural networks.
We also plan to make our TorchIntegral framework open source, which allows building INNs in the same manner as DNNs. In addition, the framework makes it possible to compute arbitrary integrals using PyTorch-optimized operations. We will provide a 'model zoo' of INNs, ready for further expansion by the open-source community. We believe that INN technology and the innovations it drives will deliver highly optimized networks for edge computing.
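The core idea of evaluating integrals with ordinary tensor operations can be sketched in plain NumPy. The snippet below is not TorchIntegral's API; `integral_linear` is a hypothetical helper showing how a "continuous" linear layer, y(s) = ∫ W(s, t) x(t) dt, reduces to a weighted matrix-vector product under trapezoidal quadrature, with the number of integration points free to vary at inference time.

```python
import numpy as np

def integral_linear(weight_fn, x_fn, n_points):
    """Evaluate y = ∫ W(t) x(t) dt over [0, 1] with the trapezoidal rule.
    `weight_fn` maps the integration grid t to a sampled weight matrix of
    shape (out_dim, n_points); `x_fn` maps t to the sampled input signal.
    Both are hypothetical helpers. Fewer points means a smaller layer."""
    t = np.linspace(0.0, 1.0, n_points)
    # Trapezoidal quadrature weights on a uniform grid.
    q = np.full(n_points, 1.0 / (n_points - 1))
    q[0] *= 0.5
    q[-1] *= 0.5
    W = weight_fn(t)                 # (out_dim, n_points)
    return W @ (q * x_fn(t))        # the integral becomes an ordinary matvec

# Example: W(t) = cos(pi*t) broadcast to 3 outputs, x(t) = sin(pi*t);
# the exact integral of cos(pi*t)*sin(pi*t) over [0, 1] is 0.
W_fn = lambda t: np.tile(np.cos(np.pi * t), (3, 1))
x_fn = lambda t: np.sin(np.pi * t)
y = integral_linear(W_fn, x_fn, 101)
```

Because the quadrature collapses into a standard matrix product, such layers run on existing deep-learning kernels with no special hardware support.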
CVPR: What more would you like to add?
Our team has dedicated substantial effort to research and practical applications, reaching the farthest corners of AI space. We are now in the process of distilling this expertise and technology into a full-stack AI platform, TheStage.ai. This platform will serve as a toolkit for other researchers and developers, simplifying their processes and enhancing efficiency. Users will be able to find AI models, train them using their data with hardware provided directly through the platform, and utilize our technology to optimize and speed up their models. Ultimately, they will be able to deploy their models to almost any platform of their choice. All of this is offered in a cost-effective manner, a concept we are deeply committed to.
Follow our research, harness our cutting-edge AI tools, and get to the forefront of the AI transition.
Annually, CVPR recognizes top research in the field through its prestigious “Best Paper Awards.” This year, from more than 9,000 paper submissions, the CVPR 2023 Paper Awards Committee selected 12 candidates for the coveted honor of Best Paper. Join us for the Award Session on Wednesday, 21 June at 8:30 a.m. to find out which nominees take home the distinction of “Best Paper” at CVPR 2023.