Recent studies on classification add extra layers or sub-networks to increase the accuracy of existing networks. More recent methods employ multiple networks coupled with varying learning strategies. However, these approaches demand more memory and computation due to the additional layers, which prohibits their use on devices with limited computing power.
In this work, we propose an efficient convolutional block that minimizes the computational requirements of a network while maintaining information flow through concatenation and element-wise addition. We design a classification architecture, called Half-Append Half-Add Network (HAHANet), built from our efficient convolutional block. Our approach achieves state-of-the-art accuracy on several challenging fine-grained classification tasks. More importantly, HAHANet outperforms top networks while reducing parameter count by up to 54 times. Our code and trained models are publicly available here.
Our Method
The Half-Append Half-Add (HAHA) block implements efficient connections by appending half of each layer's output as input to succeeding layers and propagating the other half through the network via element-wise addition, as shown below. The HAHA block maintains strong information flow within the network while reducing its parameter count.
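A minimal PyTorch sketch of this idea is shown below. It is only an illustration of the append/add split, not our released implementation: the class name HAHABlockSketch, the growth parameter, the 3x3 conv-BN-ReLU stack, and the exact channel ordering of the merge are assumptions made for clarity.

```python
# Sketch of the Half-Append Half-Add idea (assumed layer sizes and merge order).
import torch
import torch.nn as nn


class HAHABlockSketch(nn.Module):
    def __init__(self, in_channels: int, growth: int):
        super().__init__()
        # growth must be even so the output splits into two equal halves,
        # and the input must have at least `half` channels to add onto.
        assert growth % 2 == 0 and in_channels >= growth // 2
        self.half = growth // 2
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, growth, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(growth),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.conv(x)
        append_half, add_half = torch.chunk(out, 2, dim=1)
        # Half-Add: fold one half back in via element-wise addition
        # onto the last `half` channels of the incoming feature map.
        added = x[:, -self.half:] + add_half
        # Half-Append: concatenate the other half so succeeding layers
        # receive it directly as input.
        return torch.cat([x[:, :-self.half], added, append_half], dim=1)


# Usage: each block grows the feature map by only half the growth rate.
block = HAHABlockSketch(in_channels=32, growth=16)
y = block(torch.randn(2, 32, 56, 56))  # shape: (2, 40, 56, 56)
```

Compared with appending the full output (as in dense connections), appending only half keeps the channel count, and hence the parameter count of later convolutions, from growing as quickly, while the added half still carries its information forward.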
Our Results
We evaluated our proposed model on various fine-grained classification datasets: Foodx-251, IP102, Logo2k+, Web-Aircraft, and Web-Car. Results for two configurations of our proposed network are shown below. Our network surpasses the performance of larger state-of-the-art classification architectures, both single models and ensembles, while using significantly fewer parameters.