This article summarizes the research paper MobileViT: Light-Weight, General-Purpose, and Mobile-Friendly Vision Transformers. MobileViT is a lightweight, general-purpose vision transformer for mobile vision tasks. It combines the strengths of standard CNNs (Convolutional Neural Networks) and Vision Transformers. It has outperformed several CNNs and…
Vision Transformer – An Image is Worth 16×16 Words: Transformers for Image Recognition at Scale
In this blog post, we are going to learn about the Vision Transformer (ViT). It is a pure Transformer-based architecture used for image classification tasks. The Vision Transformer (ViT) can replace standard CNNs while achieving excellent results, and it attains its best results when pre-trained…
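As a quick illustration of the "an image is worth 16×16 words" idea, the sketch below (plain NumPy; the function name is my own) splits an image into non-overlapping 16×16 patches and flattens each into a token vector, which is the first step of the ViT pipeline:

```python
import numpy as np

def image_to_patches(image, patch_size=16):
    """Split an (H, W, C) image into flattened non-overlapping patches."""
    h, w, c = image.shape
    assert h % patch_size == 0 and w % patch_size == 0
    # Carve the image into a grid of (patch_size x patch_size) tiles
    patches = image.reshape(h // patch_size, patch_size,
                            w // patch_size, patch_size, c)
    patches = patches.transpose(0, 2, 1, 3, 4)
    # Flatten each tile into one token vector of length patch_size^2 * C
    return patches.reshape(-1, patch_size * patch_size * c)

img = np.zeros((224, 224, 3))
tokens = image_to_patches(img)
print(tokens.shape)  # (196, 768): 14x14 patches, each 16*16*3 values
```

A 224×224 RGB image thus becomes a sequence of 196 "words", each a 768-dimensional vector, before linear projection and the Transformer encoder.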
MODNet: Real-Time Trimap-Free Portrait Matting via Objective Decomposition
In this work, we present a lightweight matting objective decomposition network (MODNet) for real-time portrait matting from a single input image. MODNet takes a single RGB image as input and applies explicit constraints to solve the matting sub-objectives simultaneously in one stage. The research paper was accepted at the AAAI 2022 conference. Research…
VGG19 UNET Implementation in TensorFlow
In this tutorial, we are going to implement the U-Net architecture in TensorFlow, replacing its encoder with a pre-trained VGG19 architecture. VGG19 is already trained on the ImageNet classification dataset, so it has already learned the required features, which helps to boost the overall…
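As a rough sketch of the idea (not the tutorial's exact code), the encoder side can be pulled out of Keras's VGG19 by reading off intermediate conv outputs as U-Net skip connections. The layer names come from tf.keras.applications.VGG19; the helper name and the choice of skip layers are illustrative:

```python
import tensorflow as tf
from tensorflow.keras.applications import VGG19

def build_vgg19_encoder(input_shape=(256, 256, 3)):
    # weights="imagenet" would load the pre-trained features described above;
    # set to None here only to avoid the download in this sketch.
    vgg = VGG19(include_top=False, weights=None, input_shape=input_shape)
    # One skip connection per resolution level, plus the bridge at the bottom
    skip_names = ["block1_conv2", "block2_conv2", "block3_conv4", "block4_conv4"]
    skips = [vgg.get_layer(n).output for n in skip_names]
    bridge = vgg.get_layer("block5_conv4").output
    return tf.keras.Model(inputs=vgg.input, outputs=skips + [bridge])

encoder = build_vgg19_encoder()
```

The U-Net decoder then upsamples from the bridge output and concatenates the matching skip tensor at each level.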
Why Deep Learning is not Artificial General Intelligence (AGI)
With developments in the field, deep learning has become a frontier in solving multiple challenging problems in computer vision, games, self-driving cars, and more. Deep learning has even achieved superhuman performance in some tasks, but it still lacks some fundamental features which are required for…
PP-LiteSeg: A Superior Real-Time Semantic Segmentation Model
PP-LiteSeg is a lightweight encoder-decoder architecture designed for real-time semantic segmentation. It consists of three modules:
- Encoder: lightweight network
- Aggregation: Simple Pyramid Pooling Module (SPPM)
- Decoder: Flexible and Lightweight Decoder (FLD) and Unified Attention Fusion Module (UAFM)
Encoder: The STDCNet is the encoder for the proposed PP-LiteSeg for its high…
Custom Layer in TensorFlow using Keras API
Most people interested in deep learning have used the TensorFlow library; it is the most popular and widely used deep learning framework. We have used the different layers provided by the tf.keras API to build different types of deep neural networks. But there are many times…
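A minimal example of what such a custom layer can look like. The layer itself (a Dense transform with an extra learnable scale) is hypothetical, but the `build`/`call` pattern is the standard tf.keras subclassing API:

```python
import tensorflow as tf

class ScaledDense(tf.keras.layers.Layer):
    """Hypothetical custom layer: a Dense transform with a learnable scale."""

    def __init__(self, units, **kwargs):
        super().__init__(**kwargs)
        self.units = units

    def build(self, input_shape):
        # Weights are created lazily, once the input shape is known
        self.w = self.add_weight(shape=(input_shape[-1], self.units),
                                 initializer="glorot_uniform", trainable=True)
        self.b = self.add_weight(shape=(self.units,),
                                 initializer="zeros", trainable=True)
        self.scale = self.add_weight(shape=(),
                                     initializer="ones", trainable=True)

    def call(self, inputs):
        # Forward pass: scaled affine transform
        return self.scale * (tf.matmul(inputs, self.w) + self.b)

x = tf.random.normal((4, 8))
y = ScaledDense(16)(x)
print(y.shape)  # (4, 16)
```

Because the layer subclasses tf.keras.layers.Layer, it composes with built-in layers in Sequential or functional models, and its weights are tracked and trained like any other.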
VGG16 UNET Implementation in TensorFlow
In this article, we are going to implement the most widely used image segmentation architecture, UNET, replacing its encoder with the VGG16 implementation from the TensorFlow library. The UNET encoder would otherwise learn the features from scratch, while VGG16 is already trained on the…
Squeeze and Excitation Implementation in TensorFlow and PyTorch
The Squeeze and Excitation network is a channel-wise attention mechanism used to improve the overall performance of a network. In today’s article, we are going to implement the Squeeze and Excitation module in both TensorFlow and PyTorch. What is a Squeeze and Excitation Network? The squeeze and excitation attention mechanism…
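A minimal sketch of the squeeze-and-excitation idea in TensorFlow (the function name and the reduction ratio of 8 are illustrative choices): global average pooling "squeezes" each channel to one descriptor, and a small bottleneck MLP "excites" the channels with learned weights in (0, 1):

```python
import tensorflow as tf

def squeeze_excite(inputs, ratio=8):
    channels = inputs.shape[-1]
    # Squeeze: one descriptor per channel via global average pooling
    se = tf.keras.layers.GlobalAveragePooling2D()(inputs)
    # Excite: bottleneck MLP producing per-channel weights in (0, 1)
    se = tf.keras.layers.Dense(channels // ratio, activation="relu")(se)
    se = tf.keras.layers.Dense(channels, activation="sigmoid")(se)
    se = tf.keras.layers.Reshape((1, 1, channels))(se)
    # Rescale the feature map channel-wise
    return inputs * se

x = tf.random.normal((2, 32, 32, 64))
y = squeeze_excite(x)
```

The output has the same shape as the input, so the block can be dropped into an existing CNN after any convolutional stage.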
Semi-supervised Learning – Fundamentals of Deep Learning
Semi-supervised learning is a type of machine learning where we use a combination of a large amount of unlabelled data and a small amount of labelled data to train the model. It is a hybrid of supervised and unsupervised learning. The basic difference between the two is that…