# EcoLens

IBM Call for Code 2025 Submission
## Live Demo
The live demo has some limitations because serverless streaming is not supported on Vercel; locally, the application works well. To run it locally, go through the following steps.

### Prerequisites
- Node.js (v18.0.0 or higher)
- npm or yarn
Clone the repository and install dependencies:
```bash
git clone https://github.com/BHK4321/climate-change.git
cd climate-change
npm install
```

- Create a `.env` file in the root directory, following `.env.example`.
- Add the credentials provided in the submission: `FASTAPI_API_KEY` (secret key) and `FASTAPI_BASE_URL` (script backend deployment).
- `MONGO_URI` and `JWT_SECRET` are only needed to keep track of asked queries or to access the dashboard; if you just want to ask queries, they can be omitted from the `.env`.
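For reference, a minimal `.env` might look like this (placeholder values shown; the variable names follow `.env.example`, and the real credentials come with the submission):

```env
# Required to ask queries
FASTAPI_API_KEY=your-secret-key
FASTAPI_BASE_URL=https://your-script-backend.example.com

# Optional: only needed for query history and the dashboard
MONGO_URI=your-mongodb-connection-string
JWT_SECRET=your-jwt-secret
```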
To start the development server:

```bash
npm run dev
```

Open your browser and go to:

```
http://localhost:3000
```

## About EcoLens

EcoLens is an AI-powered web application that helps users understand the environmental impact of their consumption habits, starting with something as simple as a chocolate bar.

Whether it's carbon emissions, water usage, recyclability, or ethical sourcing, EcoLens zooms in on the lifecycle of products and services, giving users personalized and location-aware insights to make more sustainable choices.
## Features

- **AI-Driven Product Impact Analysis**: Understand the carbon, water, and ethical footprint of your queries in seconds.
- **Location-Aware Insights**: Use your latitude and longitude to tailor responses to your region or country.
- **Agentic RAG Pipeline**: Multi-step retrieval-augmented generation using LangGraph.
- **Trusted Sources**: Pulls from OpenLCA, Ecoinvent, IPCC, UNEP, and other climate databases.
- **Structured Output**: Easily digestible JSON-based summaries with citations and actionable recommendations.
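The exact output schema is defined by the pipeline's JSON-constrained generation step; purely as an illustration, a structured response could be shaped like this (field names other than the top-level `result` key, which the backend emits, are hypothetical):

```json
{
  "result": {
    "summary": "A short, sourced assessment of the product's footprint.",
    "citations": ["IPCC", "Ecoinvent"],
    "recommendations": ["An actionable, region-aware suggestion."]
  }
}
```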
## Tech Stack

| Layer | Tools Used |
|---|---|
| Frontend | Next.js, Tailwind CSS, React |
| Backend | LangGraph, IBM Watsonx.ai, Next.js API routes |
| AI Models | ibm/granite-13b-instruct-v2 |
| Retrieval | FAISS, Tavily Web Search |
| Data Sources | Ecoinvent, OpenLCA, IPCC, OpenFoodFacts, UNEP |
| Deployment | Vercel |
## How It Works

EcoLens uses a Hierarchical Agentic RAG System:
- **Query Decomposition**: Complex user prompts are broken down into sub-questions.
- **CRAG Loop**: Each sub-question invokes a retrieval-grade-generate pipeline.
- **Conditional Web Search**: Web search is only triggered if retrieved documents are insufficient.
- **Consolidation**: All sub-answers are merged using a JSON-constrained final generation step.
- **Location Integration**: Optional lat/long values personalize responses for regional relevance.
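As an illustration, a conditional CRAG loop like this can be wired up with LangGraph roughly as follows. This is a minimal, runnable sketch with stubbed retrieval, web-search, and generation steps; the node names, grading heuristic, and stub data are illustrative, not the actual pipeline in `script/base_rag.py`:

```python
from typing import List, TypedDict

from langgraph.graph import END, StateGraph

KNOWLEDGE_BASE = ["Cocoa farming is a major driver of deforestation in West Africa."]

class RagState(TypedDict):
    question: str
    documents: List[str]
    answer: str

def retrieve(state: RagState) -> dict:
    # Stand-in for the FAISS vector-store lookup.
    q = state["question"].lower()
    return {"documents": [d for d in KNOWLEDGE_BASE if q in d.lower()]}

def grade(state: RagState) -> str:
    # Conditional web search: only fall back when retrieval was insufficient.
    return "generate" if state["documents"] else "web_search"

def web_search(state: RagState) -> dict:
    # Stand-in for the Tavily web-search fallback.
    return {"documents": [f"(web) snippet about {state['question']}"]}

def generate(state: RagState) -> dict:
    # Stand-in for the Granite generation step on watsonx.ai.
    return {"answer": f"Answer grounded in {len(state['documents'])} source(s)."}

graph = StateGraph(RagState)
graph.add_node("retrieve", retrieve)
graph.add_node("web_search", web_search)
graph.add_node("generate", generate)
graph.set_entry_point("retrieve")
graph.add_conditional_edges("retrieve", grade, {"generate": "generate", "web_search": "web_search"})
graph.add_edge("web_search", "generate")
graph.add_edge("generate", END)

app = graph.compile()
print(app.invoke({"question": "cocoa", "documents": [], "answer": ""}))
```

In the real pipeline, one such loop runs per sub-question produced by the decomposition step, and the consolidation step then merges the sub-answers.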
## Streaming Architecture

EcoLens uses a multi-layered architecture for streaming logs and AI-generated responses to the frontend in real time. This pipeline involves:
- A Next.js frontend (React) that displays streamed logs and results to users.
- An API route (`pages/api/query.js`) acting as a proxy and streaming handler.
- A FastAPI backend (`script/base_rag_api.py`) that runs the agentic RAG pipeline and streams output.
- A Python RAG pipeline (`script/base_rag.py`) that executes the actual retrieval-augmented generation logic.
The system leverages Server-Sent Events (SSE) and custom chunked streaming to provide responsive, real-time feedback to users.
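On the wire, the SSE stream produced by `/api/query` looks roughly like this (the log text and heartbeat payload are illustrative; log lines and the result are JSON-encoded, as in the API route handler shown later):

```
event: logs
data: "Decomposing query into sub-questions..."

event: heartbeat
data: "ping"

event: logs
data: "Retrieving documents for sub-question 1..."

event: result
data: {"result": {...}}
```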
### 1. Frontend

- The user submits a question ("query") via the web UI (`app/home/page.jsx`).
- The frontend calls the `/api/query` endpoint using a POST request, including the prompt, query type, and (optionally) latitude/longitude for location-aware results.
- The frontend uses a function like `streamWithTimeout` to handle the streaming response, parsing SSE events.
Relevant code: `app/home/page.jsx`
```javascript
await streamWithTimeout(
  '/api/query',
  { method: "POST", ... },
  async (eventType, dataStr) => {
    if (eventType === "logs") {
      // Display log lines in the UI live
    }
    if (eventType === "result") {
      // Display the final result
    }
  }
);
```

### 2. Next.js API Route
- The Next.js API route (`pages/api/query.js`) receives the POST request.
- It forwards the request to the FastAPI backend (the `/ask` endpoint) with all relevant data.
- It sets the response headers for SSE:
  - `Content-Type: text/event-stream`
  - `Cache-Control: no-cache`
  - `Connection: keep-alive`
- It reads the backend's streamed response chunk by chunk, handling:
  - Logs: lines before a separator (`===RESULT===`)
  - Result: JSON output after the separator
- The handler emits SSE events:
  - `event: logs` for each log line
  - `event: result` for the final answer
  - `event: heartbeat` every 5 seconds to keep the connection alive
Relevant code: `pages/api/query.js`
```javascript
res.writeHead(200, { "Content-Type": "text/event-stream", ... });
...
while (true) {
  const { value, done } = await reader.read();
  if (value) {
    const chunk = decoder.decode(value, { stream: !done });
    // Parse for logs or final result using the separator
    res.write(`event: logs\ndata: ${JSON.stringify(line)}\n\n`);
  }
  if (done) {
    res.write(`event: result\ndata: ${JSON.stringify(result)}\n\n`);
    res.end();
    break;
  }
}
```
### 3. FastAPI Backend

- The FastAPI endpoint (`/ask` in `script/base_rag_api.py`) authenticates and parses the request.
- It launches a subprocess that runs the core RAG pipeline script (`base_rag.py`), streaming its stdout line by line.
- The FastAPI endpoint returns a `StreamingResponse` with each line as it is produced.
Relevant code: `script/base_rag_api.py`
```python
def generate():
    process = subprocess.Popen([...], stdout=subprocess.PIPE, ...)
    for line in iter(process.stdout.readline, ''):
        yield line

return StreamingResponse(generate(), media_type="text/plain")
```

### 4. RAG Pipeline Script

- `base_rag.py` implements the agentic RAG (retrieval-augmented generation) logic.
- As the pipeline runs, it prints log lines to stdout.
- After finishing, it prints a separator line (`===RESULT===`), then outputs the final result as a JSON string.
- This output is streamed back up the chain.
Relevant code: `script/base_rag.py`
print("...log line...", flush=True)
...
print("===RESULT===")
print(json.dumps({"result": response}), flush=True)| Layer | Technology / File | Purpose |
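The separator protocol is simple enough to show end to end. Below is a minimal, self-contained sketch of the split that `pages/api/query.js` performs on this output (the real handler additionally deals with chunk boundaries and partial lines; the sample log lines here are made up):

```python
import json

SEPARATOR = "===RESULT==="

# Illustrative stdout from a base_rag.py run: log lines, the separator,
# then the final result as a JSON string.
SAMPLE_STREAM = [
    "Decomposing query into sub-questions...",
    "Retrieving documents for sub-question 1...",
    SEPARATOR,
    '{"result": {"summary": "...", "citations": []}}',
]

def split_stream(lines):
    """Split a streamed run into (log lines, parsed result) at the separator."""
    logs, result, after_separator = [], None, False
    for line in lines:
        if line.strip() == SEPARATOR:
            after_separator = True
        elif after_separator:
            result = json.loads(line)  # everything after the separator is JSON
        else:
            logs.append(line)          # everything before it is a log line
    return logs, result

logs, result = split_stream(SAMPLE_STREAM)
print(logs)
print(result)
```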
### Pipeline Summary

| Layer | Technology / File | Purpose |
|---|---|---|
| Frontend | `app/home/page.jsx` | User input, receives SSE logs/results, live feedback |
| API Route | `pages/api/query.js` | Proxies request, parses/streams logs & result as SSE to client |
| Backend API | `script/base_rag_api.py` | Authenticates, starts RAG process, streams output |
| RAG Logic | `script/base_rag.py` | Runs pipeline, prints logs/results, defines separator |
Key streaming details:

- Logs and final results are separated by the string `===RESULT===` in the backend output.
- The frontend parses SSE events: `logs`, `result`, and `heartbeat`.
- Heartbeats prevent timeouts in serverless environments.
- Logs are streamed line by line; the final result arrives as a single JSON chunk.

