#BuildingInPublic Week 1: Getting Started with stockoMJ

Week 1 Learnings: lots of busy work and not much of a plan. I got a bunch of work done, and if I'm not being harsh on myself, I can say that I did achieve some goals. But over the weekend I had a deep sense of dissatisfaction with the way things are: lots of busy work with a sense of progress (feel good at the end of the day), but no real progress. ...

September 8, 2025 · 1 min · 205 words · Varun Tulsian
Building stockoMJ in public - complete series

#BuildingInPublic: stockoMJ Journey

About the #BuildingInPublic Series: Welcome to my #BuildingInPublic series, where I document the journey of building stockoMJ.ai and "WITY", an AI trading assistant designed to help traders make disciplined, data-driven decisions. Each week, I'll share progress updates, technical learnings, challenges, and reflections as I build this AI-powered fintech product. Follow along for insights on generative AI, market analysis, and building in the finance space.

The Series

| Week | Title | Status |
|---|---|---|
| Week 1 | Getting Started | ✏️ Draft |
| Week 2 | Coming Soon | 📝 Planned |

Want to connect? Reach out on LinkedIn or follow my blog for weekly updates. ...

September 8, 2025 · 1 min · 96 words · Varun Tulsian
Colab tutorial for class conditioned diffusion models

Denoising Diffusion Models Part 2: Improving Diffusion Models

Code for this blog post:

| Notebook | GitHub Link | Colab |
|---|---|---|
| Predicting Error and Score Function | Error / Score Prediction | |
| Classifier-Free Guidance and other improvements | Advanced concepts | |

Topics to cover: We did most of the heavy lifting in Part 1 of this series on diffusion models. To use them well in practice, we need to make a few more improvements, and that is what we will do here. ...
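Classifier-free guidance, one of the improvements covered in this part, mixes a conditional and an unconditional noise prediction at sampling time. A minimal sketch of the mixing step (names and values are illustrative, not taken from the notebooks):

```python
import numpy as np

def cfg_mix(eps_uncond, eps_cond, guidance_scale):
    # Classifier-free guidance: extrapolate from the unconditional
    # noise prediction toward the conditional one. scale=0 ignores the
    # condition, scale=1 recovers the plain conditional prediction,
    # scale>1 strengthens the conditioning signal.
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

e_u = np.array([0.0, 0.0])   # unconditional prediction (toy values)
e_c = np.array([1.0, 2.0])   # conditional prediction (toy values)
mixed = cfg_mix(e_u, e_c, guidance_scale=2.0)
```

With a scale above 1, the sample is pushed further in the direction the condition suggests, which is what makes guided samples look more "on prompt".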

December 9, 2022 · 10 min · 1993 words · Varun Tulsian
First part tutorial for density generation using diffusion models

Denoising Diffusion Models Part 1: Estimating True Distribution

Code for this blog post:

| Notebook | GitHub Link | Colab |
|---|---|---|
| Basic: Predicting Original Distribution | Vanilla Implementation | |

The best way to learn is by writing the maths in your notebook alongside the tutorial, or by implementing the code alongside the notebooks. What are Denoising Diffusion Models? Denoising Diffusion Models, commonly referred to as "diffusion models", are a class of generative models based on the Variational Auto-Encoder (VAE) architecture. They are called likelihood-based models because they are trained to assign high likelihood $p(X)$ to the observed data samples. This contrasts with other generative models, such as GANs, which learn the sampling process of a complex distribution and are trained adversarially. ...
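The closed-form forward (noising) process that Part 1 derives can be sketched in a few lines of NumPy. This is a toy stand-in for the PyTorch notebooks; the schedule values here are an arbitrary assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

def q_sample(x0, t, alphas):
    # Closed-form forward process:
    #   x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps
    # where alpha_bar_t is the cumulative product of the schedule.
    alpha_bar = np.prod(alphas[: t + 1])
    eps = rng.standard_normal(np.shape(x0))
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps

alphas = np.full(100, 0.98)   # toy noise schedule (assumption)
x0 = np.ones(4)               # toy "data" sample
xt = q_sample(x0, t=50, alphas=alphas)
```

As t grows, alpha_bar shrinks toward zero and x_t approaches pure Gaussian noise, which is the property the reverse (denoising) model exploits.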

December 9, 2022 · 17 min · 3453 words · Varun Tulsian

Denoising Diffusion Models Resources

Here are some resources that I have found useful or interesting, highlighting the ones that I recommend going over.

Papers

| Paper Title | Paper Link | Have I Read It? |
|---|---|---|
| DDPM | DDPM | Yeah |
| Improved DDPM | IDDPM | Yeah |
| Stable Diffusion | Stable Diffusion | No |
| Variational Diffusion Models | VDM | Yeah |
| Understanding Diffusion Models: A Unified Perspective | Tutorial | Yeah |
| Cold Diffusion | Cold Diffusion | No |
| Glide | Glide | No |
| Diffusion Models Survey | A Survey on Generative Diffusion Models | No |
| Score Prediction Diffusion Models | Generative Modeling by Estimating Gradients of the Data Distribution | No |

Blogs

| Author | Description | Link |
|---|---|---|
| Lilian Weng | Comprehensive coverage of diffusion model theory (advanced) | lil'log diffusion models |
| Yang Song | Score-based generative models, specifically SDEs (advanced) | score based generative models |
| AI Summer School | Easy to follow but comprehensive coverage of diffusion models | ai summer school |
| Hugging Face | Annotated discussion of a diffusion model with code | annotated diffusion models |
| Alex Alemi | Blog on the variational diffusion loss | variational diffusion models |
| Google AI Blog | Cascaded diffusion models with super-resolution | High Fidelity Image Generation Using Diffusion Models |

YouTube Educators

| Channel | Description | Link |
|---|---|---|
| AI Coffee Break with Letitia | Byte-sized content on diffusion models | diffusion models explained |
| Yannic | DDPM paper explained | DDPM explained |
| Aleksa Gordić - The AI Epiphany | ML coding series on the Improved DDPM codebase | coding series |

GitHub Repos

| Repo Description | Repo Link | Colab |
|---|---|---|
| Diffusion Models Tutorial | Wity'AI tutorial | Wity'AI tutorial |
| Stable Diffusion | Stable Diffusion | |
| LucidRains Denoising Diffusion Models | LucidRains | |
| Variational Diffusion Models | VDM | |
| DDPM | DDPM | |
| YiYi Xu (Flax+JAX) | Flax Denoising Diffusion | |
| Glide | Glide | |

Notebooks to Play with Diffusion Models

| Description | Link |
|---|---|
| Play with Stable Diffusion v2 | SD II |
| Stable Boost: Personalized Photos | Stable Boost |
| Image variations with Stable Diffusion | SD variations |
| Gradio App for Stable Diffusion | GitHub Repo |
| Bing Create tool | Bing |
| Playground AI | PlaygroundAI |

Want to connect? Reach out @varuntul22. ...

December 9, 2022 · 2 min · 293 words · Varun Tulsian

Diffusion Model Jupyter and Colab Notebooks

The code accompanying the tutorials on denoising diffusion models.

| Notebook | Description | GitHub Link | Colab |
|---|---|---|---|
| Basic: Predicting Original Distribution | Introduces diffusion model concepts with PyTorch | Vanilla Implementation | |
| Predicting Error and Score Function | Diffusion models while predicting error with PyTorch | Error / Score Prediction | |
| Classifier-Free Guidance and other improvements | Diffusion models with time-step embeddings, classifier-free guidance, and time-step striding to improve sampling | Advanced concepts | |
| EMNIST Denoising and Conditional Generation | Working on EMNIST data | | Colab EMNIST |

If you have suggestions, please feel free to contribute to the GitHub Repo. ...

December 5, 2022 · Varun Tulsian

Generative AI

This article primarily focuses on computer vision and diffusion models.

Real-World Applications

- Video/image restoration: take an old video or photo that is low quality or blurred and improve it using deep learning.
- Image editing and synthesis using text commands: "Make my smile wider" (text-suggested edits); "segment the guy wearing a blue shirt and brown pants from an image".
- Text-to-speech synthesis: here is a good summary of TTS algorithms from AI Summer School.
- Speech-to-text: OpenAI's Whisper.
- Audio generation: Riffusion.
- Code synthesis.
- Generating fakes (photos, videos, personas): this is bread and butter for generative algorithms.

ML Applications

- Text-guided image generation, also referred to as classifier guidance.
- In-painting: filling in missing or corrupted parts of an image or video with plausible content. Generative models, such as Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs), can be trained to learn the underlying distribution of the data, and can then generate new content that is consistent with the surrounding area.
- Style transfer: applying the style of one image to another while preserving the content of the original. This is typically done by training a generative model to separate the style and content representations of an image, and then recombining the content of one image with the style of another.
- Upscaling images (super-resolution): increasing the resolution of an image. Generative models such as GANs can be trained to learn the mapping from low-resolution to high-resolution images.
- Few-shot learning via neural network pre-training: training a generative model on a large dataset and then using the learned representations as a starting point for fine-tuning on a smaller dataset. This is useful when labeled data is limited, since the pre-trained model provides a good initialization that lets the model converge quickly during fine-tuning.
- Reinforcement learning exploration: generative models can help improve exploration in RL. For example, a GAN can be trained to generate new samples that are similar to the training data but with slight variations. These generated samples expand the state space available to the RL agent, allowing it to explore and learn from a wider range of scenarios.

Methods & Approaches

- Diffusion models
- VAEs
- GANs
- Normalizing flows and autoregressive models
- VAEs with flows and autoregressive models
- Transformer-based language generators

Techniques

- CLIP for multi-modal learning
- Prompt engineering, chain-of-thought prompting
- Reinforcing behavior based on human feedback (RLHF)
- Stable Diffusion: combine the superpowers of VAEs and diffusion models to make things faster
- Super-resolution: a guided diffusion model trained on large resolution, with guidance from the small-resolution image
- Cascaded diffusion models: a small-resolution text-/class-conditioned diffusion model chained with multiple super-resolution models (Cascaded Diffusion Models)
- Textual inversion

Tools

- Codex by OpenAI
- Perplexity AI
- BirdSQL
- CoPilot
- ChatGPT
- …

Blogs

- OpenAI blog, 2016
- WeC article on Generative AI

References

- Quidgest article on generative AI: industry impact and predictions about generative AI
- Applications in the industry: Canary Mail

Companies in Generative AI

- Topaz: image and video editing with AI
- Quidgest: Genio, coding with AI
- replit.com

Want to connect? Reach out @varuntul22. ...
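As a point of reference for the super-resolution items above, here is the naive non-learned baseline (nearest-neighbour upsampling) that a trained generative model is meant to beat. Purely illustrative:

```python
import numpy as np

def upscale_nearest(img, factor):
    # Nearest-neighbour upsampling: every pixel becomes a factor x factor
    # block. A learned super-resolution model replaces these flat blocks
    # with plausible high-frequency detail instead.
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

low = np.arange(4).reshape(2, 2)
high = upscale_nearest(low, 2)     # (2, 2) -> (4, 4)
```

The baseline adds no information; the whole point of generative super-resolution is to hallucinate detail that is consistent with the low-resolution input.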

January 10, 2023 · 3 min · 554 words · Varun Tulsian

Tips and tricks for Hugo

Tips and tricks for Hugo/PaperMod that I have used.

Opening Links in a New Page

Override the default behaviour by adding a render-link.html file under layouts. Follow this page for more details.

Handle Katex

Enable maths on the markdown page:

```
math: true
markup: "mmark"
```

Create a shortcode for katex with {{ .Inner }}. This ensures that all text meant for Katex is not processed by the markdown renderer. Use the shortcode \{\{< katex >\}\} before any katex code:

```
\{\{< katex >\}\}  # remove \ here
$$
\begin{align}
q(x_t|x_0) &= N(\sqrt{\alpha_t}x_{t-1}, (1 - \alpha_t)I) \cr
&= \sqrt{\alpha_t}x_{t-1} + \sqrt{1-\alpha_t}\ast\epsilon_t \cr
&= \sqrt{\alpha_t}(\sqrt{\alpha_{t-1}}x_{t-2} + \sqrt{1-\alpha_{t-1}}\ast\epsilon_{t-1}) + \sqrt{1-\alpha_t}\ast\epsilon_t \cr
&= \sqrt{\alpha_t}\sqrt{\alpha_{t-1}}x_{t-2} + \sqrt{\alpha_t}\sqrt{1-\alpha_{t-1}}\ast\epsilon_{t-1} + \sqrt{1-\alpha_t}\ast\epsilon_t \cr
&= \sqrt{\alpha_t}\sqrt{\alpha_{t-1}}x_{t-2} + \sqrt{1-\alpha_t\alpha_{t-1}}\ast\epsilon_{t-1}^\ast \quad where \thinspace \epsilon_{t-1}^\ast \sim N(0, I) \cr
&= \dots \cr
&= \sqrt{\bar\alpha_t}x_0 + \sqrt{1 - \bar\alpha_t}\ast\epsilon_0^\ast \quad where \space \bar\alpha_t=\Pi_{i=1}^t{\alpha_i}, \space \epsilon_0^\ast \sim N(0, I) \cr
&= N(\sqrt{\bar\alpha_t}x_0, (1 - \bar\alpha_t)I) \cr
\end{align}
$$
\{\{< /katex >\}\}  # remove \ here
```

Adding Collapsible Sections in Hugo

Got this from here. ...
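The shortcode file itself can be tiny. A minimal sketch, assuming the standard Hugo lookup path (the exact filename layouts/shortcodes/katex.html is my assumption):

```
<!-- layouts/shortcodes/katex.html -->
{{ .Inner }}
```

Emitting `.Inner` verbatim keeps the markdown renderer from mangling the math before KaTeX sees it.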

December 17, 2022 · 2 min · 326 words · Varun Tulsian

Modelling Correlation between multiple interrelated time-series

Modelling Correlation between Multiple Interrelated Time-Series. Problem definition: model the inter-relations between stocks and predict each stock's next price, given minute-by-minute data on stock prices. Stock values are correlated: events in one stock give information about events in another stock, and so on. These 2nd- and 3rd-order relations can be seen in historical stock prices. Of course, the situation is further complicated because there are global events that affect stock prices as well, which may keep these 2nd- and 3rd-order effects from playing out. ...
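The kind of 2nd-order relation described above can be probed directly. A toy sketch on synthetic data, where one series lags another by one minute (all names and data here are illustrative, not the actual modelling approach):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic minute-level "prices": series b trails series a by one step.
a = rng.standard_normal(500).cumsum()
b = np.empty_like(a)
b[0] = a[0]
b[1:] = a[:-1]

def lagged_corr(x, y, lag):
    # Pearson correlation between x's returns and y's returns `lag`
    # steps later; a crude probe for lead/lag structure between series.
    rx, ry = np.diff(x), np.diff(y)
    if lag > 0:
        rx, ry = rx[:-lag], ry[lag:]
    return np.corrcoef(rx, ry)[0, 1]

c = lagged_corr(a, b, lag=1)   # close to 1.0 by construction
```

Working on returns rather than raw prices avoids the spurious correlation that trending price levels would otherwise produce.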

January 13, 2023 · 3 min · 486 words · Varun Tulsian
Third part tutorial for density generation using diffusion models

Denoising Diffusion Models Part 3: Generating Characters and numbers with Diffusion Models

Notebook: EMNIST Denoising and Conditional Generation (GitHub Link, Colab EMNIST). Introduction: We introduced most of the concepts in the previous two blog posts. In this post, we will see how those concepts translate to code. If you want to check out the earlier posts, you can find them here: diffusion model intro 1 and diffusion model intro 2. EMNIST dataset: the Extended-MNIST dataset, as the name suggests, is an extension of the popular MNIST dataset. It contains labelled 28×28×1 images of handwritten English characters (upper and lower case) and numbers. ...
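For class-conditional generation, the label has to reach the denoiser in some numeric form. A minimal one-hot sketch (the 47-class count assumes the EMNIST "balanced" split, which may differ from the notebook's choice):

```python
import numpy as np

NUM_CLASSES = 47   # EMNIST "balanced" split (assumption)

def one_hot(labels, num_classes=NUM_CLASSES):
    # Class-conditioning signal: one row per label, fed to the denoiser
    # alongside the timestep embedding.
    out = np.zeros((len(labels), num_classes))
    out[np.arange(len(labels)), labels] = 1.0
    return out

batch = one_hot(np.array([0, 10, 46]))   # shape (3, 47)
```

In practice a learned embedding layer often replaces the raw one-hot vector, but the conditioning idea is the same.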

December 9, 2022 · 20 min · 4153 words · Varun Tulsian

Cohere Research Scholar Notebook

November 7, 2022 · Varun Tulsian