What is Residual Network or ResNet?

Deep neural networks have become popular due to their high performance in real-world applications such as image classification, speech recognition, and machine translation. Over time, deep neural networks have become deeper and deeper to solve more complex tasks. Adding more layers to a deep neural network can improve… Continue Reading
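
As a rough illustration of the idea behind ResNet, the sketch below shows a single residual block in TensorFlow/Keras: the input is added back to the output of a small stack of convolutions through a skip connection. The layer sizes and input shape are illustrative assumptions, not code from the article.

import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters):
    # Identity (skip) path: the input is carried forward unchanged.
    shortcut = x
    # Main path: two 3x3 convolutions with batch normalization.
    y = layers.Conv2D(filters, 3, padding="same")(x)
    y = layers.BatchNormalization()(y)
    y = layers.ReLU()(y)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    y = layers.BatchNormalization()(y)
    # Residual connection: add the input back, then apply the activation.
    y = layers.Add()([shortcut, y])
    return layers.ReLU()(y)

# Illustrative usage: the input channel count matches `filters`.
inputs = tf.keras.Input(shape=(32, 32, 64))
outputs = residual_block(inputs, 64)
model = tf.keras.Model(inputs, outputs)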

What is Transfer Learning? – A Simple Introduction.

Transfer Learning is a technique in machine learning where we reuse a pre-trained model to solve a different but related problem. It is a popular method for training deep neural networks, and it is generally used for image classification tasks where the dataset is small. Continue Reading
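
A minimal sketch of the reuse pattern described above, using TensorFlow/Keras: an ImageNet-pretrained backbone is frozen and a small new classification head is trained on the target task. The choice of MobileNetV2 and the number of classes are assumptions for illustration, not the article's exact setup.

import tensorflow as tf

# Load an ImageNet-pretrained backbone without its classification head.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze the pretrained weights

num_classes = 5  # illustrative; depends on the target task
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=5)  # trains only the new head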

What is UNET?

UNET is an architecture developed in 2015 by Olaf Ronneberger and his team at the University of Freiburg for biomedical image segmentation. It is a highly popular approach for semantic segmentation tasks. It is a fully convolutional neural network designed to learn from fewer training samples. This architecture… Continue Reading
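
For a flavour of the encoder-decoder structure with skip connections that UNET uses, here is a deliberately tiny sketch in TensorFlow/Keras with one downsampling and one upsampling stage. The filter counts and input shape are illustrative and much smaller than the original architecture.

import tensorflow as tf
from tensorflow.keras import layers

def conv_block(x, filters):
    # Two 3x3 convolutions, as in each stage of the original UNET.
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def tiny_unet(input_shape=(128, 128, 3), num_classes=1):
    inputs = tf.keras.Input(shape=input_shape)
    # Encoder: convolutions followed by downsampling.
    s1 = conv_block(inputs, 16)
    p1 = layers.MaxPooling2D()(s1)
    # Bottleneck.
    b = conv_block(p1, 32)
    # Decoder: upsample, then concatenate the matching encoder features
    # (the skip connection that gives UNET its characteristic shape).
    u1 = layers.Conv2DTranspose(16, 2, strides=2, padding="same")(b)
    u1 = layers.Concatenate()([u1, s1])
    d1 = conv_block(u1, 16)
    outputs = layers.Conv2D(num_classes, 1, activation="sigmoid")(d1)
    return tf.keras.Model(inputs, outputs)

model = tiny_unet()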

Data Augmentation for Semantic Segmentation – Deep Learning

Much of the technological advancement in the field of Artificial Intelligence (AI) is driven by the availability of large datasets and computational hardware such as GPUs and TPUs. In some fields, such as medical imaging, huge amounts of data are not available, as it takes a good amount… Continue Reading
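
A key point when augmenting data for segmentation is that the image and its mask must be transformed together so the pixel-wise labels stay aligned. The NumPy sketch below illustrates this with random flips and 90-degree rotations; it is a generic example, not the pipeline from the article.

import numpy as np

def augment_pair(image, mask):
    # Apply the *same* random transform to the image and its mask so the
    # pixel-wise labels stay aligned with the transformed image.
    if np.random.rand() < 0.5:
        image, mask = np.fliplr(image), np.fliplr(mask)
    if np.random.rand() < 0.5:
        image, mask = np.flipud(image), np.flipud(mask)
    k = np.random.randint(4)  # random 90-degree rotation
    image, mask = np.rot90(image, k), np.rot90(mask, k)
    return image, mask

# Illustrative usage with a dummy image/mask pair.
image = np.random.rand(128, 128, 3)
mask = np.random.randint(0, 2, size=(128, 128, 1))
aug_image, aug_mask = augment_pair(image, mask)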

DCGAN – Implementing Deep Convolutional Generative Adversarial Network in TensorFlow

In this tutorial, we are going to implement a Deep Convolutional Generative Adversarial Network (DCGAN) on the Anime Faces dataset. The code is written in TensorFlow 2.2 and Python 3.8. According to Yann LeCun, the director of Facebook AI, GAN is the “most interesting idea in the last 10 years of… Continue Reading
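
As a taste of what the tutorial builds, here is a hedged sketch of a DCGAN-style generator in TensorFlow 2: transposed convolutions with batch normalization and ReLU, ending in a tanh output. The latent size and output resolution are illustrative assumptions and may differ from the tutorial's code.

import tensorflow as tf
from tensorflow.keras import layers

def build_generator(latent_dim=128):
    # Maps a random latent vector to a 64x64 RGB image using transposed
    # convolutions with batch normalization and ReLU, ending in tanh.
    return tf.keras.Sequential([
        layers.Dense(8 * 8 * 256, input_shape=(latent_dim,)),
        layers.Reshape((8, 8, 256)),
        layers.Conv2DTranspose(128, 4, strides=2, padding="same"),
        layers.BatchNormalization(),
        layers.ReLU(),
        layers.Conv2DTranspose(64, 4, strides=2, padding="same"),
        layers.BatchNormalization(),
        layers.ReLU(),
        layers.Conv2DTranspose(3, 4, strides=2, padding="same", activation="tanh"),
    ])

generator = build_generator()
noise = tf.random.normal((1, 128))
fake_image = generator(noise)  # shape: (1, 64, 64, 3)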

GAN – What is Generative Adversarial Network?

Generative Adversarial Network, or GAN, is a machine learning approach for generative modelling designed by Ian Goodfellow and his colleagues in 2014. It consists of two neural networks: a generator network and a discriminator network. The generator network learns to generate new examples, while the discriminator network tries to… Continue Reading
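
The adversarial setup can be summarised by the two loss functions below, sketched in TensorFlow: the discriminator is rewarded for telling real and generated samples apart, while the generator is rewarded for fooling it. This is a generic formulation, not code from the article.

import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def discriminator_loss(real_logits, fake_logits):
    # The discriminator is trained to label real samples as 1
    # and generated samples as 0.
    real_loss = bce(tf.ones_like(real_logits), real_logits)
    fake_loss = bce(tf.zeros_like(fake_logits), fake_logits)
    return real_loss + fake_loss

def generator_loss(fake_logits):
    # The generator is trained to make the discriminator
    # label its generated samples as real (1).
    return bce(tf.ones_like(fake_logits), fake_logits)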