A Retrieval-Augmented Generation (RAG) chatbot for answering questions about Aroma College using locally hosted Ollama models.
## Prerequisites

- Ollama must be installed and running on your machine
- Python 3.8+ with `pip` and the `venv` module for creating virtual environments
## Setup

- Create and activate a virtual environment:

  ```shell
  python -m venv .venv
  .venv\Scripts\activate      # On Windows
  source .venv/bin/activate   # On macOS/Linux
  ```
- Install dependencies:

  ```shell
  pip install -r requirements.txt
  ```
- Download the Ollama model (in a separate terminal):

  ```shell
  ollama pull llama2   # For LLM completion
  ```

  Note: for embeddings, the app uses the `all-MiniLM-L6-v2` model from the sentence-transformers library.
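At query time, the question embedding is compared against the stored document embeddings, typically by cosine similarity. A minimal sketch of that comparison (toy 4-dimensional vectors stand in for the 384-dimensional vectors `all-MiniLM-L6-v2` actually produces; the function name is illustrative, not this project's code):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for real sentence embeddings.
query_vec = [0.1, 0.3, 0.5, 0.1]
doc_vec = [0.2, 0.3, 0.4, 0.1]
print(round(cosine_similarity(query_vec, doc_vec), 3))
```

The chunks whose embeddings score highest against the question are the ones passed to the LLM as context.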
## Ingest documents

Place your college documents (PDF, DOCX, TXT, CSV) in the `docs` directory.

Process and index the documents:

```shell
python main.py ingest --data-dir=./docs
```
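Ingestion pipelines like this one typically split each document into overlapping chunks before embedding and indexing them. A hypothetical sketch of that step (the `chunk_text` helper and its parameters are illustrative, not this project's actual code):

```python
def chunk_text(text, chunk_size=500, overlap=50):
    """Split text into overlapping character chunks (illustrative only)."""
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks

sample = "A" * 1200
print(len(chunk_text(sample)))  # → 3
```

The overlap keeps sentences that straddle a chunk boundary retrievable from at least one chunk.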
## Usage

You can use the chatbot in two ways:

- Web interface: start the Streamlit app

  ```shell
  python main.py web
  ```

  Then open your browser at http://localhost:8501

- Command line: ask questions directly from the terminal

  ```shell
  python main.py chat --question="What are the admission requirements for Aroma College?"
  ```
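Under the hood, the chat step of a RAG pipeline typically stitches the retrieved chunks into the prompt sent to the LLM. A hypothetical sketch of that prompt assembly (the template, function name, and sample chunks are illustrative, not this project's actual code):

```python
def build_rag_prompt(question, retrieved_chunks):
    """Assemble a prompt that grounds the LLM in retrieved context."""
    context = "\n\n".join(f"- {chunk}" for chunk in retrieved_chunks)
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_rag_prompt(
    "What are the admission requirements for Aroma College?",
    ["Placeholder chunk about admissions.", "Placeholder chunk about deadlines."],
)
print(prompt)
```

A prompt like this could then be sent to the locally running Ollama server, which exposes an HTTP API (default port 11434) with a `/api/generate` endpoint.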
## Project structure

- `main.py`: Entry point for the application
- `requirements.txt`: Project dependencies
- `data/`: Directory for storing the vector database
- `docs/`: Directory for college documents
- `src/`: Source code
  - `ingestion/`: Document loading and processing
  - `database/`: Vector database components
  - `rag/`: Retrieval and question answering
  - `ui/`: Streamlit web interface
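The three subcommands shown above (`ingest`, `chat`, `web`) suggest a simple CLI dispatcher in `main.py`. A hypothetical sketch using argparse (the structure and help strings are an assumption, not the project's actual code):

```python
import argparse

def build_parser():
    """Build a CLI parser mirroring the documented subcommands."""
    parser = argparse.ArgumentParser(description="Aroma College RAG chatbot")
    sub = parser.add_subparsers(dest="command", required=True)

    ingest = sub.add_parser("ingest", help="Process and index documents")
    ingest.add_argument("--data-dir", default="./docs")

    chat = sub.add_parser("chat", help="Ask a question from the terminal")
    chat.add_argument("--question", required=True)

    sub.add_parser("web", help="Start the Streamlit web interface")
    return parser

args = build_parser().parse_args(["chat", "--question", "What is tuition?"])
print(args.command, args.question)
```

Each subcommand would then route to the corresponding module under `src/`.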