
$V_kD$: Improving Knowledge Distillation using Orthogonal Projections

Authors:
Roy Miles, Ismail Elezi, Jiankang Deng
Keywords:
Computer Science, Computer Vision and Pattern Recognition (cs.CV), Artificial Intelligence (cs.AI)
Journal:
--
Date:
2024-03-10
Abstract
Knowledge distillation is an effective method for training small and efficient deep learning models. However, the efficacy of a single method can degenerate when transferring to other tasks, modalities, or even other architectures. To address this limitation, we propose a novel constrained feature distillation method. This method is derived from a small set of core principles, which results in two emerging components: an orthogonal projection and a task-specific normalisation. Equipped with both of these components, our transformer models can outperform all previous methods on ImageNet and reach up to a 4.4% relative improvement over the previous state-of-the-art methods. To further demonstrate the generality of our method, we apply it to object detection and image generation, where we obtain consistent and substantial performance improvements over the state of the art. Code and models are publicly available: https://github.com/roymiles/vkd
PDF: $V_kD:$ Improving Knowledge Distillation using Orthogonal Projections.pdf