Posts

  • Digging into Drag GAN and its effectiveness

    I was recently intrigued by the DragGAN paper, which made headlines a few months ago. It achieves seamless repositioning of semantic features like faces, noses, legs, and hands from a source point to a target location, producing remarkably realistic results. In this blog I delve deep into its underlying mechanism.

  • Uncovering bias and uncertainty in models using Semi-Supervised VAEs

    Bias in a model can favor specific features, which is concerning in applications like face detection. Detecting such bias, especially for sensitive attributes, is challenging because it is learned implicitly from imbalanced datasets. In this blog we'll explore a primitive yet exciting method to automatically uncover model bias.

  • Writer Independent Verification Challenge

    Sharing my experience from taking part in the NCVPRIPG writer verification challenge. This blog covers topics such as Vision Transformers, the use of the collate function in PyTorch, masked self-attention, and general lessons learned.

  • Why we need KL divergence loss in VAEs

    The standard loss for Variational Autoencoders combines a reconstruction loss with a KL divergence term. The purpose and advantages of the KL divergence are a point of confusion for many beginners. In this blog I aim to succinctly share the intuition behind it.
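
    As a quick taste of the KL term the post discusses, here is a minimal sketch of its closed form for a diagonal-Gaussian posterior against a standard normal prior (the helper name `kl_divergence` is mine, not from the blog):

    ```python
    import math

    def kl_divergence(mu, logvar):
        """KL( N(mu, exp(logvar)) || N(0, I) ), summed over latent dims.

        Closed form per dimension: 0.5 * (mu^2 + sigma^2 - log(sigma^2) - 1).
        """
        return sum(
            0.5 * (m * m + math.exp(lv) - lv - 1.0)
            for m, lv in zip(mu, logvar)
        )

    # When the posterior matches the prior exactly, the KL term vanishes,
    # so it acts as a regularizer pulling the latent codes toward N(0, I).
    print(kl_divergence([0.0, 0.0], [0.0, 0.0]))  # 0.0
    ```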

  • Rubik's Cube Color Extractor

    Inspired by Andrej Karpathy's Cube Color Extractor concept, this blog captures the key lessons I gained and outlines my thought process during the code implementation journey.

  • Regular Expressions in Python

    The awesomeness of regular expressions becomes evident once we establish a solid grasp of the fundamentals. I've made sure to take detailed notes, for my future reference and for anyone out there.

subscribe via RSS