XynaxDev/learning-gen-ai

Gen AI Journal Logo

Gen AI Journal 📝

Notion Journal | LinkedIn | Gmail | Instagram

Python · LangChain · Ollama · OpenAI · Local + API Models · Mermaid · GitHub · Notion

⭐ If this repo helped you, please consider starring it!


About

This repository is my living Generative AI lab and learning journal. It is organized around five core building blocks of modern GenAI systems: Models, Prompts, Chains, Indexes (RAG), and Agents. Everything else in this repo exists to support, explore, or extend these core pieces.

Here I continuously add small, focused, and practical experiments:

  • Models → local (Ollama) and hosted APIs, embeddings, and chat models
  • Prompts → prompt engineering techniques, templates, and structured prompting
  • Chains → sequential, parallel, conditional flows, routing, and orchestration logic
  • Indexes (RAG) → document loaders, chunking strategies, vector stores, retrieval pipelines
  • Agents → tool-using, multi-step reasoning, and decision-making workflows

The goal is simple: learn by building. Every folder is a hands-on experiment, every script is a learning checkpoint. This is not a framework — it’s a code-first playground where I explore how real GenAI systems are composed, connected, and scaled.

⚙️ Built with Python and the LangChain ecosystem, this journal documents my journey from basic LLM usage to composing full, production-style AI pipelines.

Environment & setup

  • Create a virtual environment (optional but recommended):

    python -m venv venv
    venv\Scripts\activate      # Windows
    source venv/bin/activate   # macOS / Linux

  • Install Python dependencies:

    pip install -r requirements.txt

  • Environment variables:
    • Copy Models/.env.example to Models/.env and fill in any required API keys (e.g. OpenAI or other providers) when using API models.
    • Use the Prompts/.env file for any prompt-related secrets (chat history storage, API keys, etc.).
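In the scripts themselves, the usual way to pick these values up is python-dotenv's load_dotenv(). As a sketch of what that does under the hood, here is a minimal stand-in that parses simple KEY=VALUE lines (no quoting or export handling); the Models/.env path matches the layout above:

```python
import os


def parse_env_lines(lines):
    """Parse simple KEY=VALUE pairs, skipping blanks and '#' comments."""
    pairs = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        pairs[key.strip()] = value.strip()
    return pairs


def load_env_file(path):
    """Load a .env file into os.environ without overwriting existing vars."""
    with open(path) as fh:
        for key, value in parse_env_lines(fh).items():
            os.environ.setdefault(key, value)


if os.path.exists("Models/.env"):
    load_env_file("Models/.env")
```

In practice, prefer `from dotenv import load_dotenv; load_dotenv("Models/.env")` — the sketch just shows the shape of the file being read.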

Local models (Ollama)

This project uses local models via Ollama with ChatOllama (e.g. llama3.2:3b).

  • Install and run Ollama
    • Download and install from https://ollama.com
    • Start the Ollama service.
    • Pull a model (example):

      ollama pull llama3.2:3b

  • Run a local LLM example:

    python localLLm.py

API models (hosted providers)

For scripts that use hosted APIs (e.g. OpenAI-style models) inside the Models and Prompts areas:

  • Set your API keys inside Models/.env (and Prompts/.env if needed).
  • Make sure the required client libraries are installed via requirements.txt.
  • Run the example scripts directly with python path/to/script.py.
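Before running API-backed scripts, it helps to fail fast when a key is missing rather than getting an opaque auth error mid-run. A small pre-flight check along these lines (the OPENAI_API_KEY name is the usual convention for OpenAI-style providers; adjust for yours):

```python
import os
import sys

REQUIRED_KEYS = ["OPENAI_API_KEY"]  # adjust per provider


def missing_keys(required, environ=os.environ):
    """Return the required API-key names that are unset or empty."""
    return [name for name in required if not environ.get(name)]


if __name__ == "__main__":
    absent = missing_keys(REQUIRED_KEYS)
    if absent:
        sys.exit("Missing API keys: " + ", ".join(absent) + " - check Models/.env")
    print("All required API keys are set.")
```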

Chains

Experiments around LangChain Chain primitives: simple, sequential, parallel, conditional flows, plus an AI study assistant.

  • What this area is for

    • Exploring different chain patterns (simple, sequential, parallel, conditional).
    • Routing logic and basic decision flows inside chains.
    • The AIStudyAssistant subfolder holds a more applied study-helper example.
  • Explore code: ./Chains/

IndexesRAG

Working with document loaders and text splitters to build retrieval-augmented generation (RAG) style pipelines.

  • What this area is for

    • Trying different DocumentLoaders to bring data into the system.
    • Experimenting with TextSplitters for chunking text before indexing.
    • Forming the basis for retrieval and semantic search workflows.
    • In Vector Stores: indexing embeddings for efficient similarity-based search and metadata management.
    • In Retrievers: using compression logic to extract specific facts and filter noise from retrieved documents.
  • Explore code: ./IndexesRAG/

Models

Central place for LLMs, chat models, embedding models, and semantic search utilities.

  • What this area is for

    • ChatModels/: chat-style conversational models.
    • EmbeddingModels/: vector embeddings for similarity and search.
    • LLMs/: generic LLM examples and utilities.
    • Semantic search and retrieval experiments.
  • Explore code: ./Models/
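The semantic-search experiments all reduce to one idea: an embedding model maps text to a vector, and search ranks documents by cosine similarity to the query vector. A dependency-free sketch with hand-made toy vectors (real ones would come from an embedding model such as OllamaEmbeddings or OpenAIEmbeddings):

```python
import math


def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


# Toy 3-dimensional "embeddings" (illustrative only).
docs = {
    "cats are small pets": [0.9, 0.1, 0.0],
    "dogs are loyal pets": [0.8, 0.3, 0.1],
    "stock markets fell":  [0.0, 0.1, 0.9],
}
query = [0.85, 0.2, 0.05]  # pretend embedding of "tell me about pets"

ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
print(ranked)  # pet documents rank above the finance one
```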

Prompts

Prompt-focused experiments and chat-oriented utilities.

  • What this area is for

    • PromptTechniques/: different prompt patterns and techniques.
    • ChatBot/: chatbot flows and interactions.
    • Working with static vs dynamic prompts and prompt generators.
    • Building structured chat prompts, message templates, and chat history handling.
  • Explore code: ./Prompts/

Runnables

Using LangChain Runnable* primitives to compose more advanced and flexible workflows.

  • What this area is for

    • runnableSequence.py: chaining steps in sequence.
    • runnableParallel.py: parallel branches that run at the same time.
    • runnableBranch.py: conditional branching logic.
    • runnableLambda.py, runnablePassThrough.py: functional / utility runnables.
    • runnables_core.ipynb: an interactive notebook exploring runnable concepts.
  • Explore code: ./Runnables/

StructuredOutputs

Controlling and validating model outputs to match structured schemas.

  • What this area is for
    • BuiltinLLMs/: structured-output features that are built into certain LLM providers.
    • OutputParsers/: parsing raw model text into JSON or typed Python objects.
  • Explore code: ./StructuredOutputs/

Useful Resources & Explorers

These are some genuinely great tools and references I often use while experimenting:

Thank you for reading 💌

This repo is not a “perfect framework” — it’s a learning log, playground, and experiment tracker. If you’re also learning GenAI, feel free to explore, fork, break things, improve them, and build your own versions.

If something here helps you, that’s already a win. If you have ideas or improvements, even better. Let’s keep learning and shipping 🚢✨🫱🏼‍🫲🏼
