MAI: Mobile AI Workshop and Challenges 2022
Contact: radu.timofte

Over the past years, mobile AI-based applications have become more and more ubiquitous. Various deep learning models can now be found on almost any mobile device: smartphones running portrait segmentation, image enhancement, face recognition, and natural language processing models; IoT platforms performing real-time image classification; and smart-TV boards shipping with sophisticated image super-resolution algorithms. The performance of mobile NPUs and DSPs is also increasing dramatically, making it possible to run complex deep learning models and to achieve fast runtimes for the majority of tasks.

While many research papers targeting efficient deep learning models have been proposed recently, the resulting solutions are usually evaluated on desktop CPUs and GPUs, making it nearly impossible to estimate the actual inference time and memory consumption on real mobile hardware. To address this problem, we introduce the first Mobile AI Workshop, where all solutions and deep learning models will be evaluated on actual mobile AI accelerators.

The workshop will consist of three main parts: 1) a detailed overview of deep learning inference on mobile platforms, 2) workshop challenges where participants can gain hands-on experience by solving several computer vision tasks and evaluating their solutions on mobile devices, and 3) presentations from mobile SoC vendors covering several important aspects of mobile AI inference.