DilateFormer: Multi-Scale Dilated Transformer for Visual Recognition

Jiayu Jiao1,3*, Yu-Ming Tang1,3*, Kun-Yu Lin1,3, Yipeng Gao1,3, Jinhua Ma1,3, Yaowei Wang2, Wei-Shi Zheng1,2,3,*

1School of Computer Science and Engineering, Sun Yat-sen University, China

2Peng Cheng Laboratory, Shenzhen, China

3Key Laboratory of Machine Intelligence and Advanced Computing, Ministry of Education, China

*Corresponding author

[Code & paper]

Introduction

As a de facto solution, vanilla Vision Transformers (ViTs) model long-range dependencies between arbitrary image patches, but the globally attended receptive field leads to quadratic computational cost. Another branch of Vision Transformers exploits local attention inspired by CNNs, modeling only the interactions between patches within small neighborhoods. Although such a solution reduces the computational cost, it naturally suffers from small attended receptive fields, which may limit performance.

In this work, we explore effective Vision Transformers to pursue a preferable trade-off between computational complexity and the size of the attended receptive field. By analyzing the patch interaction of global attention in ViTs, we observe two key properties in the shallow layers, namely locality and sparsity, indicating the redundancy of global dependency modeling in the shallow layers of ViTs. Accordingly, we propose Multi-Scale Dilated Attention (MSDA) to model local and sparse patch interaction within a sliding window. With a pyramid architecture, we construct a Multi-Scale Dilated Transformer (DilateFormer) by stacking MSDA blocks at low-level stages and global multi-head self-attention blocks at high-level stages.

Our experiments show that DilateFormer achieves state-of-the-art performance on various vision tasks. On the ImageNet-1K classification task, DilateFormer achieves performance comparable to existing state-of-the-art models with 70% fewer FLOPs. Our DilateFormer-Base achieves 85.6% top-1 accuracy on ImageNet-1K classification, 53.5% box mAP / 46.1% mask mAP on COCO object detection/instance segmentation, and 51.1% MS mIoU on ADE20K semantic segmentation.
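To make the idea of MSDA concrete, below is a minimal PyTorch sketch written from the description above, not the authors' released implementation. The class names (`DilatedAttention`, `MultiScaleDilatedAttention`), the channel-group split, and the default kernel/dilation settings are illustrative assumptions: each query attends only to a k x k neighborhood of keys/values sampled with a given dilation rate, and different channel groups use different dilation rates so that the attended receptive field spans multiple scales at once.

```python
# Hedged sketch of Multi-Scale Dilated Attention (MSDA); details are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DilatedAttention(nn.Module):
    """Sliding-window attention: each query attends to a k x k neighborhood
    of keys/values gathered with a given dilation rate (via F.unfold)."""

    def __init__(self, head_dim, kernel_size=3, dilation=1):
        super().__init__()
        self.scale = head_dim ** -0.5
        self.kernel_size = kernel_size
        self.dilation = dilation
        # padding keeps the output resolution equal to the input resolution
        self.pad = dilation * (kernel_size - 1) // 2

    def forward(self, q, k, v):
        # q, k, v: (B, C, H, W) for one channel group
        B, C, H, W = q.shape
        n = self.kernel_size ** 2
        # gather the dilated k x k neighborhood around every spatial position
        k = F.unfold(k, self.kernel_size, dilation=self.dilation, padding=self.pad)
        v = F.unfold(v, self.kernel_size, dilation=self.dilation, padding=self.pad)
        k = k.view(B, C, n, H * W)                 # (B, C, n, HW)
        v = v.view(B, C, n, H * W)
        q = q.view(B, C, 1, H * W)                 # one query per position
        attn = (q * k).sum(dim=1, keepdim=True) * self.scale   # (B, 1, n, HW)
        attn = attn.softmax(dim=2)
        out = (attn * v).sum(dim=2)                # (B, C, HW)
        return out.view(B, C, H, W)


class MultiScaleDilatedAttention(nn.Module):
    """Split channels into groups, run each group with a different dilation
    rate, then concatenate and mix the outputs with a 1x1 projection."""

    def __init__(self, dim, kernel_size=3, dilations=(1, 2, 3)):
        super().__init__()
        assert dim % len(dilations) == 0
        self.qkv = nn.Conv2d(dim, dim * 3, 1)
        self.groups = nn.ModuleList(
            DilatedAttention(dim // len(dilations), kernel_size, d) for d in dilations
        )
        self.proj = nn.Conv2d(dim, dim, 1)

    def forward(self, x):                          # x: (B, C, H, W)
        q, k, v = self.qkv(x).chunk(3, dim=1)
        qs = q.chunk(len(self.groups), dim=1)
        ks = k.chunk(len(self.groups), dim=1)
        vs = v.chunk(len(self.groups), dim=1)
        out = torch.cat(
            [g(qi, ki, vi) for g, qi, ki, vi in zip(self.groups, qs, ks, vs)], dim=1
        )
        return self.proj(out)
```

In the full DilateFormer, blocks built around attention of this kind are stacked in the low-level stages of the pyramid, while the high-level stages use ordinary global multi-head self-attention; the exact stage depths and widths follow the paper, not this sketch.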



Figure 1. Visualization of attention maps of shallow layers of ViT-Small (left) and illustration of Multi-Scale Dilated Attention (MSDA) (right).

Figure 2. The overall architecture of our DilateFormer.

Citation

If you use this paper/code in your research, please consider citing us: