Notion Journal | LinkedIn | Gmail | Instagram
⭐ If this repo helped you, please consider starring it!
This repository is my living Generative AI lab and learning journal. It is organized around five core building blocks of modern GenAI systems: Models, Prompts, Chains, Indexes (RAG), and Agents. Everything else in this repo exists to support, explore, or extend these core pieces.
Here I continuously add small, focused, and practical experiments:
- **Models** → local (Ollama) and hosted APIs, embeddings, and chat models
- **Prompts** → prompt engineering techniques, templates, and structured prompting
- **Chains** → sequential, parallel, conditional flows, routing, and orchestration logic
- **Indexes (RAG)** → document loaders, chunking strategies, vector stores, retrieval pipelines
- **Agents** → tool-using, multi-step reasoning, and decision-making workflows
The goal is simple: learn by building. Every folder is a hands-on experiment, every script is a learning checkpoint. This is not a framework — it’s a code-first playground where I explore how real GenAI systems are composed, connected, and scaled.
⚙️ Built with Python and the LangChain ecosystem, this journal documents my journey from basic LLM usage to composing full, production-style AI pipelines.
- Create a virtual environment (optional but recommended)

  ```bash
  python -m venv venv
  venv\Scripts\activate
  ```

- Install Python dependencies

  ```bash
  pip install -r requirements.txt
  ```

- Environment variables
  - Copy `Models/.env.example` to `Models/.env` and fill in any required API keys (e.g. OpenAI or other providers) when using API models.
  - Use the `Prompts/.env` file for any prompt-related secrets (chat history storage, API keys, etc.).
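For reference, a hedged sketch of what such a `.env` file might contain (the key names below are purely illustrative; add only the providers you actually use):

```bash
# Models/.env — illustrative keys only, not an exhaustive list
OPENAI_API_KEY=your-key-here
HUGGINGFACEHUB_API_TOKEN=your-token-here
```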
This project uses local models via Ollama with `ChatOllama` (e.g. `llama3.2:3b`).
- Install and run Ollama
  - Download and install from https://ollama.com
  - Start the Ollama service.
  - Pull a model (example):

    ```bash
    ollama pull llama3.2:3b
    ```

- Run a local LLM example

  ```bash
  python localLLm.py
  ```
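As a minimal sketch of what a local-LLM script like this can look like (assuming `langchain-ollama` is installed and the model above is pulled; the actual `localLLm.py` may differ):

```python
# Minimal ChatOllama sketch: assumes the Ollama service is running
# and llama3.2:3b has been pulled (see the steps above).
from langchain_ollama import ChatOllama

llm = ChatOllama(model="llama3.2:3b", temperature=0.2)
response = llm.invoke("Explain retrieval-augmented generation in one sentence.")
print(response.content)
```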
For scripts that use hosted APIs (e.g. OpenAI-style models) inside the Models and Prompts areas:

- Set your API keys inside `Models/.env` (and `Prompts/.env` if needed).
- Make sure the required client libraries are installed via `requirements.txt`.
- Run the example scripts directly with `python path/to/script.py`.
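A comparable hosted-API sketch (illustrative only: assumes `langchain-openai` and `python-dotenv` are installed, an `OPENAI_API_KEY` is set in `Models/.env`, and the model name is just an example):

```python
# Hosted-API sketch: loads keys from the .env file described above.
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI

load_dotenv("Models/.env")  # makes OPENAI_API_KEY visible to the client

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
print(llm.invoke("Say hello in five words.").content)
```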
Experiments around LangChain Chain primitives: simple, sequential, parallel, conditional flows, plus an AI study assistant.
- What this area is for
  - Exploring different chain patterns (simple, sequential, parallel, conditional).
  - Routing logic and basic decision flows inside chains.
  - The `AIStudyAssistant` subfolder holds a more applied study-helper example.
- Explore code: `./Chains/`
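For a taste of the pattern, a minimal sequential chain in LCEL style (illustrative only, not tied to a specific script in the folder):

```python
# Sequential chain sketch: prompt -> model -> string output.
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_ollama import ChatOllama

prompt = ChatPromptTemplate.from_template("Summarize this topic in two sentences: {topic}")
chain = prompt | ChatOllama(model="llama3.2:3b") | StrOutputParser()

print(chain.invoke({"topic": "vector databases"}))
```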
Working with document loaders and text splitters to build retrieval-augmented generation (RAG) style pipelines.
- What this area is for
  - Trying different `DocumentLoaders` to bring data into the system.
  - Experimenting with `TextSplitters` for chunking text before indexing.
  - Forming the basis for retrieval and semantic search workflows.
  - In Vector Stores: indexing embeddings for efficient similarity-based search and metadata management.
  - In Retrievers: using compression logic to extract specific facts and filter noise from retrieved documents.
- Explore code: `./IndexesRAG/`
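A tiny end-to-end sketch of the loader → splitter → vector store flow (illustrative: assumes `langchain-community`, `langchain-ollama`, and `faiss-cpu` are installed, an embedding model like `nomic-embed-text` is pulled via Ollama, and `notes.txt` is a placeholder file name):

```python
# RAG indexing sketch: load -> chunk -> embed -> retrieve.
from langchain_community.document_loaders import TextLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_community.vectorstores import FAISS
from langchain_ollama import OllamaEmbeddings

docs = TextLoader("notes.txt").load()
chunks = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50).split_documents(docs)

store = FAISS.from_documents(chunks, OllamaEmbeddings(model="nomic-embed-text"))
retriever = store.as_retriever(search_kwargs={"k": 3})
print(retriever.invoke("What are the key points?"))
```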
Central place for LLMs, chat models, embedding models, and semantic search utilities.
- What this area is for
  - `ChatModels/`: chat-style conversational models.
  - `EmbeddingModels/`: vector embeddings for similarity and search.
  - `LLMs/`: generic LLM examples and utilities.
  - Semantic search and retrieval experiments.
- Explore code: `./Models/`
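For example, a quick semantic-similarity sketch with embeddings (the model name is illustrative and assumed to be pulled via Ollama):

```python
# Embedding similarity sketch: embed two phrases, compare with cosine similarity.
from langchain_ollama import OllamaEmbeddings

emb = OllamaEmbeddings(model="nomic-embed-text")
v1, v2 = emb.embed_query("cats"), emb.embed_query("kittens")

dot = sum(a * b for a, b in zip(v1, v2))
norm = lambda v: sum(x * x for x in v) ** 0.5  # plain cosine similarity, no numpy
print(dot / (norm(v1) * norm(v2)))
```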
Prompt-focused experiments and chat-oriented utilities.
- What this area is for
  - `PromptTechniques/`: different prompt patterns and techniques.
  - `ChatBot/`: chatbot flows and interactions.
  - Working with static vs dynamic prompts and prompt generators.
  - Building structured chat prompts, message templates, and chat history handling.
- Explore code: `./Prompts/`
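As a flavor of the structured-prompt pattern (the system text and variable names here are illustrative):

```python
# Structured chat prompt sketch with message templates and history.
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.messages import AIMessage, HumanMessage

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a concise study assistant."),
    MessagesPlaceholder("history"),
    ("human", "{question}"),
])

messages = prompt.invoke({
    "history": [HumanMessage("Hi!"), AIMessage("Hello! What are we studying?")],
    "question": "Explain embeddings briefly.",
})
print(messages.to_messages())
```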
Using LangChain `Runnable*` primitives to compose more advanced and flexible workflows.
- What this area is for
  - `runnableSequence.py`: chaining steps in sequence.
  - `runnableParallel.py`: parallel branches that run at the same time.
  - `runnableBranch.py`: conditional branching logic.
  - `runnableLambda.py`, `runnablePassThrough.py`: functional / utility runnables.
  - `runnables_core.ipynb`: an interactive notebook exploring runnable concepts.
- Explore code: `./Runnables/`
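A small composition sketch in that spirit (pure-Python lambdas for clarity; the same wiring works with prompts and models in place of the lambdas):

```python
# Runnable composition sketch: parallel branches plus a passthrough.
from langchain_core.runnables import RunnableLambda, RunnableParallel, RunnablePassthrough

upper = RunnableLambda(lambda s: s.upper())
length = RunnableLambda(lambda s: len(s))

branches = RunnableParallel(original=RunnablePassthrough(), upper=upper, length=length)
print(branches.invoke("hello runnables"))
# -> {'original': 'hello runnables', 'upper': 'HELLO RUNNABLES', 'length': 15}
```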
Controlling and validating model outputs to match structured schemas.
- What this area is for
  - `BuiltinLLMs/`: structured-output features that are built into certain LLM providers.
  - `OutputParsers/`: parsing raw model text into JSON or typed Python objects.
- Explore code: `./StructuredOutputs/`
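A hedged sketch of the schema-driven approach (assumes a recent `langchain-ollama`; `with_structured_output` also works with most hosted chat models):

```python
# Structured output sketch: constrain the model to a Pydantic schema.
from pydantic import BaseModel, Field
from langchain_ollama import ChatOllama

class Movie(BaseModel):
    title: str = Field(description="Movie title")
    year: int = Field(description="Release year")

llm = ChatOllama(model="llama3.2:3b").with_structured_output(Movie)
print(llm.invoke("Name one classic sci-fi movie and its release year."))
```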
These are some genuinely great tools and references I often use while experimenting:
- 🔍 Chunk Visualizer
- 📄 LangChain Document Loaders
- 🏆 Open LLM Leaderboard
- 🧠 LangChain Python Docs
- 🤗 HuggingFace Models Hub
- 🧩 Pydantic Docs (for structured outputs)
- 🦙 Ollama
This repo is not a “perfect framework” — it’s a learning log, playground, and experiment tracker.
If you’re also learning GenAI, feel free to explore, fork, break things, improve them, and build your own versions.
If something here helps you, that’s already a win. If you have ideas or improvements, even better. Let’s keep learning and shipping 🚢✨🫱🏼🫲🏼