#BuildingInPublic Week 1: Getting Started with stockoMJ

Week 1 Learnings: Lots of busy work and not much of a plan. I got a bunch of work done, and if I’m not being harsh on myself, I can say that I did achieve some goals. But over the weekend I had a deep sense of dissatisfaction with the way things are: lots of busy work with a sense of progress, feeling good at the end of the day, but no real progress. ...

September 8, 2025 · 1 min · 205 words · Varun Tulsian
Building stockoMJ in public - complete series

#BuildingInPublic: stockoMJ Journey

About the #BuildingInPublic Series: Welcome to my #BuildingInPublic series, where I document the journey of building stockoMJ.ai and “WITY”, an AI trading assistant designed to help traders make disciplined, data-driven decisions. Each week, I’ll share progress updates, technical learnings, challenges, and reflections as I build this AI-powered fintech product. Follow along for insights on generative AI, market analysis, and building in the finance space.

The Series:

| Week | Title | Status |
|------|-------|--------|
| Week 1 | Getting Started | ✏️ Draft |
| Week 2 | Coming Soon | 📝 Planned |

Want to connect? Reach out on LinkedIn or follow my blog for weekly updates. ...

September 8, 2025 · 1 min · 96 words · Varun Tulsian
Colab tutorial for class conditioned diffusion models

Denoising Diffusion Models Part 2: Improving Diffusion Models

Code for this blog post:

| Notebook | GitHub Link | Colab |
|---|---|---|
| Predicting Error and Score Function | Error / Score Prediction | |
| Classifier-free Guidance and other improvements | Advanced concepts | |

Topics to cover: We did most of the heavy lifting in Part 1 of this series on diffusion models. To use them well in practice, we need to make a few more improvements, and that is what we will do here. ...
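The headline technique from this part, classifier-free guidance, fits in a few lines. Below is a minimal sketch, not the post’s actual code: the model interface `eps_model(x_t, t, y)` and the “null label” convention for the unconditional pass are assumptions.

```python
import torch

def guided_eps(eps_model, x_t, t, y, null_y, w=2.0):
    """Classifier-free guidance: blend conditional and unconditional predictions."""
    eps_cond = eps_model(x_t, t, y)         # noise prediction given label y
    eps_uncond = eps_model(x_t, t, null_y)  # noise prediction with the "null" label
    return eps_uncond + w * (eps_cond - eps_uncond)

# Toy usage with a stand-in model, just to show the call shape.
fake_model = lambda x, t, y: torch.zeros_like(x)
x_t = torch.randn(8, 2)
print(guided_eps(fake_model, x_t, t=10, y=3, null_y=-1).shape)  # torch.Size([8, 2])
```

Larger guidance weights `w` trade sample diversity for stronger adherence to the condition.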

December 9, 2022 · 10 min · 1993 words · Varun Tulsian
First part tutorial for density generation using diffusion models

Denoising Diffusion Models Part 1: Estimating True Distribution

Code for this blog post:

| Notebook | GitHub Link | Colab |
|---|---|---|
| Basic: Predicting Original Distribution | Vanilla Implementation | |

The best way to learn is by writing out the maths in your notebook alongside the tutorial, or by implementing the code alongside the notebooks.

What are Denoising Diffusion Models? Denoising Diffusion Models, commonly referred to as “diffusion models”, are a class of generative models based on the Variational Auto-Encoder (VAE) architecture. They are called likelihood-based models because they are trained to assign high likelihood $p(X)$ to the observed data samples. This is in contrast to other generative models, such as GANs, which learn the sampling process of a complex distribution and are trained adversarially. ...
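As a taste of what the tutorial builds up to, here is a minimal sketch of the closed-form forward (noising) process in standard DDPM notation, $x_t = \sqrt{\bar\alpha_t}\,x_0 + \sqrt{1-\bar\alpha_t}\,\epsilon$. The linear beta schedule and variable names are assumptions, not the post’s code.

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)           # assumed linear noise schedule
alpha_bars = torch.cumprod(1.0 - betas, dim=0)  # cumulative signal-retention terms

def q_sample(x0, t):
    """Sample x_t ~ q(x_t | x_0) in closed form for an integer step t."""
    eps = torch.randn_like(x0)
    a_bar = alpha_bars[t]
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * eps
    return x_t, eps  # eps is the regression target when predicting noise

x0 = torch.randn(16, 2)         # a toy batch of 2-D points
x_t, eps = q_sample(x0, t=500)  # heavily noised halfway through the schedule
```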

December 9, 2022 · 17 min · 3453 words · Varun Tulsian

Diffusion Model Jupyter and Colab Notebooks

The code accompanying the tutorials on denoising diffusion models.

| Notebook | Description | GitHub Link | Colab |
|---|---|---|---|
| Basic: Predicting Original Distribution | Introduces diffusion model concepts with PyTorch | Vanilla Implementation | |
| Predicting Error and Score Function | Diffusion models while predicting error with PyTorch | Error / Score Prediction | |
| Classifier-free Guidance and other improvements | Diffusion models with time-step embeddings, classifier-free guidance, and time-step striding to improve sampling from a diffusion model | Advanced concepts | |
| EMNIST Denoising and Conditional Generation | Working on EMNIST data | | Colab EMNIST |

If you have suggestions, please feel free to contribute to the GitHub Repo. ...

December 5, 2022 · Varun Tulsian

Generative AI

This article primarily focuses on computer vision and diffusion models.

Real-World Applications

- Video/Image Restoration: take an old video or photo that is low quality or blurred and improve it using deep learning.
- Image Editing and Synthesis using text commands: “Make my smile wider” (text-suggested edits); “segment the guy wearing a blue shirt and brown pants out of an image”.
- Text-to-Speech Synthesis: here is a good summary of TTS algorithms from aiSummer School.
- Speech-to-Text: OpenAI’s Whisper.
- Audio Generation: Riffusion.
- Code Synthesis.
- Generating Fakes (photos, videos, personas): this is bread and butter for generative algorithms.

ML Applications

- Text-guided image generation, also referred to as classifier guidance.
- In-Painting: filling in missing or corrupted parts of an image or video with plausible content. Generative models, such as Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs), can be trained to learn the underlying distribution of the data and then generate new content that is consistent with the surrounding area.
- Style Transfer: applying the style of one image to another while preserving the content of the original. This is typically done by training a generative model to separate the style and content representations of an image, then recombining the content of one image with the style of another.
- Upscaling Images (Super-Resolution): increasing the resolution of an image. Generative models, such as GANs, can be trained to learn the mapping from low-resolution to high-resolution images.
- Few-Shot Learning via Neural Network Pre-Training: training a generative model on a large dataset, then using the learned representations as a starting point for fine-tuning on a smaller dataset. This is useful when labeled data is limited: the pre-trained model provides a good initialization that lets the model converge quickly during fine-tuning.
- Reinforcement Learning Exploration: generative models can help improve exploration in RL. For example, a GAN can be trained to generate new samples that are similar to the training data but with slight variations; these samples can expand the state space the RL agent sees, letting it explore and learn from a wider range of scenarios.
Methods & Approach

- Diffusion models
- VAEs
- GANs
- Normalizing flows and autoregressive models
- VAEs with flows and autoregressive models
- Transformer-based language generators

Techniques

- CLIP for multi-modal tasks (a short sketch follows this excerpt)
- Prompt engineering, chain-of-thought prompting
- Reinforcing behavior based on human feedback (RLHF)
- Stable Diffusion: combines the superpowers of VAEs and diffusion models to make things faster
- Super-resolution: a guided diffusion model trained at large resolution, with guidance from the small-resolution image
- Cascaded Diffusion Models: a small-resolution text-/class-conditioned diffusion model chained with multiple super-resolution models
- Textual Inversion

Tools

- Codex by OpenAI
- Perplexity AI
- BirdSQL
- Copilot
- ChatGPT
- …

Blogs

- OpenAI Blog, 2016
- WeC Article on Generative AI

References

- Quidgest article on Generative AI: industry impact and predictions about generative AI applications in the industry
- Canary Mail

Companies

- Companies in Generative AI
- Topaz: image and video editing with AI
- Quidgest: Genio, coding with AI
- replit.com

Want to connect? Reach out @varuntul22. ...
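From the Techniques list above, CLIP’s multi-modal text-image scoring is the easiest to try at home. Here is a minimal sketch using the Hugging Face `transformers` wrappers; the model checkpoint and the local image path are assumptions, not references from the post.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("photo.jpg")  # hypothetical local image path
texts = ["a guy wearing a blue shirt and brown pants", "a dog on a beach"]

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image  # image-to-text similarity scores
print(logits.softmax(dim=-1))  # probabilities over the candidate captions
```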

January 10, 2023 · 3 min · 554 words · Varun Tulsian
Third part tutorial for density generation using diffusion models

Denoising Diffusion Models Part 3: Generating Characters and Numbers with Diffusion Models

| Notebook | GitHub Link | Colab |
|---|---|---|
| EMNIST Denoising and Conditional Generation | | Colab EMNIST |

Introduction: We introduced most of the concepts in the previous two blog posts. In this post, we will see how those concepts translate to code. If you want to check out the earlier posts, you can find them here: diffusion model intro 1 and diffusion model intro 2.

EMNIST dataset: The Extended MNIST dataset, as the name suggests, is an extension of the popular MNIST dataset. It contains labelled 28×28×1 images of handwritten English characters (upper and lower case) and digits. ...
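For readers who want to poke at the data before diving in, here is one way to load EMNIST with torchvision’s built-in dataset class. The tooling choice is an assumption; the post may load the data differently.

```python
from torchvision import datasets, transforms

train_set = datasets.EMNIST(
    root="./data",
    split="byclass",  # digits plus upper- and lower-case letters
    train=True,
    download=True,
    transform=transforms.ToTensor(),  # 1x28x28 float tensors in [0, 1]
)
image, label = train_set[0]
print(image.shape, label)  # torch.Size([1, 28, 28]) and an integer class id
```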

December 9, 2022 · 20 min · 4153 words · Varun Tulsian