
Enhance pipeline #2

Merged
ravishan16 merged 4 commits into main from enhance-pipeline
Oct 1, 2025

Conversation

@ravishan16 (Owner)

No description provided.

@gemini-code-assist (Contributor)

Summary of Changes

Hello @ravishan16, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request represents a significant enhancement to the project's core infrastructure and development experience. It introduces a highly modular and extensible architecture for AI engine integration, allowing for seamless support of multiple large language models. Concurrently, it vastly improves the developer experience through comprehensive documentation, standardized tooling for code quality, and a more robust testing framework. The user interface has also received a refresh, making the application more intuitive and informative.

Highlights

  • Modular AI Engine Architecture: Introduced a new adapter pattern for AI engines, allowing easy integration and switching between multiple providers such as AWS Bedrock, Anthropic Claude, and Google Gemini. This significantly enhances the extensibility and flexibility of the AI-powered SQL generation (a minimal sketch of the pattern appears after this list).
  • Expanded AI Provider Support: Added support for Google Gemini as a new AI provider, alongside existing Bedrock and Claude integrations. The .env.example file and ENVIRONMENT_SETUP.md have been updated to reflect the new configuration options for Gemini.
  • Comprehensive Development Tooling: Overhauled the development workflow with new Makefile commands for testing (unit, integration, coverage), linting (flake8, mypy), and formatting (black, isort). New configuration files (pyproject.toml, pytest.ini, setup.cfg) ensure consistent code quality and testing practices.
  • Enhanced Documentation and Contribution Guides: Added extensive new documentation, including a detailed CONTRIBUTING.md guide, docs/AI_ENGINES.md for developing new AI adapters, docs/ARCHITECTURE.md for system overview, and docs/DATA_PIPELINE.md for data engineering. Existing setup guides (.env.example, docs/ENVIRONMENT_SETUP.md, docs/R2_SETUP.md, docs/D1_SETUP.md, docs/GOOGLE_OAUTH_SETUP.md) have been significantly updated and improved.
  • Project Rebranding and UI Improvements: The project has been rebranded from 'Single Family Loan Analytics Platform' to 'converSQL' across the README.md and app.py. The Streamlit UI (app.py) has been refined with enhanced styling, dynamic AI provider selection in the sidebar, and improved display of system status and data metrics.
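
For illustration, here is a minimal sketch of the adapter pattern described above. All names (AIEngineAdapter, BedrockAdapter, GeminiAdapter, generate_sql, and the AI_PROVIDER variable) are hypothetical stand-ins, not identifiers taken from the PR:

import os
from abc import ABC, abstractmethod


class AIEngineAdapter(ABC):
    """Common interface that every provider adapter implements."""

    @abstractmethod
    def generate_sql(self, question: str, schema: str) -> str:
        """Translate a natural-language question into SQL."""


class BedrockAdapter(AIEngineAdapter):
    def generate_sql(self, question: str, schema: str) -> str:
        # Would call AWS Bedrock here (e.g., via boto3's bedrock-runtime client).
        raise NotImplementedError


class GeminiAdapter(AIEngineAdapter):
    def generate_sql(self, question: str, schema: str) -> str:
        # Would call the Google Gemini API here.
        raise NotImplementedError


def get_adapter() -> AIEngineAdapter:
    """Select an adapter from configuration, defaulting to Bedrock."""
    provider = os.getenv("AI_PROVIDER", "bedrock").lower()
    adapters = {"bedrock": BedrockAdapter, "gemini": GeminiAdapter}
    return adapters[provider]()

Under this pattern, adding a provider means writing one new subclass and registering it, and switching providers reduces to changing a single configuration value.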
Ignored Files
  • Ignored by pattern: .github/workflows/** (1)
    • .github/workflows/ci.yml
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature               Command              Description
Code Review           /gemini review       Performs a code review for the current pull request in its current state.
Pull Request Summary  /gemini summary      Provides a summary of the current pull request in its current state.
Comment               @gemini-code-assist  Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help                  /gemini help         Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@gemini-code-assist bot left a comment

Code Review

This pull request is a major enhancement to the project, rebranding it to "converSQL" and introducing a more robust, extensible architecture. Key improvements include a new adapter pattern for AI services, a significantly improved development environment, and greatly enhanced documentation and UI. The changes are excellent and move the project in a great direction. I've identified a few issues, including a critical one with incorrect AI model identifiers that will cause failures, a high-severity issue in the Makefile that could mask linting errors in CI, and several medium-severity suggestions to improve maintainability and fix minor issues.

Comment on lines +37 to +39
# - anthropic.claude-3-5-sonnet-20241022-v2:0 (Most capable, higher cost)
# - anthropic.claude-3-5-haiku-20241022-v1:0 (Fast, cost-effective)
BEDROCK_MODEL_ID=anthropic.claude-3-5-haiku-20241022-v1:0

critical

The model identifiers for the Anthropic models appear to contain a typo in the date. The date 20241022 is in the future and is likely incorrect. For example, the correct identifier for Claude 3.5 Sonnet is claude-3-5-sonnet-20240620. Using incorrect model IDs will result in API errors. This issue is also present on lines 62-64. Please verify and correct all model identifiers.

#   - anthropic.claude-3-5-sonnet-20240620-v1:0  (Most capable, higher cost)
#   - anthropic.claude-3-haiku-20240307-v1:0     (Fast, cost-effective)
BEDROCK_MODEL_ID=anthropic.claude-3-5-sonnet-20240620-v1:0
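
As context for how this setting is consumed, here is a minimal sketch of reading BEDROCK_MODEL_ID and invoking the model through boto3's Converse API; the function name and prompt handling are illustrative, not the PR's actual code:

import os

import boto3


def ask_bedrock(prompt: str) -> str:
    """Send a single-turn prompt to the Bedrock model named in the environment."""
    model_id = os.environ["BEDROCK_MODEL_ID"]
    client = boto3.client("bedrock-runtime")
    response = client.converse(
        modelId=model_id,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    # The reply arrives as a list of content blocks; take the first text block.
    return response["output"]["message"]["content"][0]["text"]

An invalid model ID fails at the converse call with a client-side validation error, which is exactly the runtime failure this review comment warns about.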

Comment on lines +128 to +130
	@flake8 src/ tests/ app.py || echo "⚠️ Flake8 found issues"
	@echo "Running mypy..."
	@mypy src/ || echo "⚠️ MyPy found issues"

high

The lint target uses || echo ..., which suppresses the exit code from flake8 and mypy. This will cause the command to always report success, even if linting errors are found, potentially allowing issues to be missed in a CI environment. To ensure the CI pipeline fails correctly on linting errors, the command should exit with a non-zero status code, similar to the format-check target.

	@flake8 src/ tests/ app.py || (echo "❌ Flake8 found issues. Run 'make lint' to see them." && exit 1)
	@echo "Running mypy..."
	@mypy src/ || (echo "❌ MyPy found type errors. Run 'make lint' to see them." && exit 1)
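
One possible variant, sketched here rather than taken from the PR, runs both tools before failing, so a single invocation of make lint surfaces every finding at once:

lint:
	@status=0; \
	flake8 src/ tests/ app.py || status=1; \
	echo "Running mypy..."; \
	mypy src/ || status=1; \
	exit $$status

Because the recipe tracks a status flag instead of exiting at the first failure, CI still fails on any lint error while developers see flake8 and mypy output in one pass.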

# Backup files
*_old.py
*_backup.py
data/raw/2025Q1.csv

medium

Ignoring a specific file like data/raw/2025Q1.csv is brittle. If the intention is to ignore all raw data files, it would be more robust to use a wildcard pattern. This prevents accidentally committing other raw data files in the future.

data/raw/*.csv
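
A slightly broader variant, offered as a sketch rather than something from the PR, ignores everything under data/raw/ while keeping the directory itself tracked through a .gitkeep placeholder:

data/raw/*
!data/raw/.gitkeep

The negation pattern re-includes only the placeholder, so new raw files of any extension stay out of version control.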

Comment on lines +402 to +426
if "total_data_size" not in st.session_state:
st.session_state.total_data_size = sum(
os.path.getsize(f) for f in st.session_state.parquet_files if os.path.exists(f)
)
total_size = st.session_state.total_data_size

# Clean metrics display - one per row for readability
st.metric("📊 Total Records", f"{total:,}")
st.metric("💾 Data Size", format_file_size(total_size))
st.metric("📁 Data Files", len(st.session_state.parquet_files))
if total > 0 and total_size > 0:
records_per_mb = int(total / (total_size / (1024 * 1024)))
st.metric("⚡ Record Density", f"{records_per_mb:,} per MB")

except Exception:
# Fallback stats - clean single column layout
total_size = sum(os.path.getsize(f) for f in st.session_state.parquet_files if os.path.exists(f))
if "total_data_size" not in st.session_state:
st.session_state.total_data_size = sum(
os.path.getsize(f) for f in st.session_state.parquet_files if os.path.exists(f)
)
st.metric("📁 Data Files", len(st.session_state.parquet_files))
st.metric("💾 Data Size", format_file_size(total_size))
st.metric(
"💾 Data Size",
format_file_size(st.session_state.total_data_size),
)

medium

There is some code duplication in the except block for calculating total_size. This logic can be moved outside the try...except block to be executed once, improving maintainability.

                        # Get total file size (cached calculation)
                        if "total_data_size" not in st.session_state:
                            st.session_state.total_data_size = sum(
                                os.path.getsize(f) for f in st.session_state.parquet_files if os.path.exists(f)
                            )
                        total_size = st.session_state.total_data_size

                        # Get record count
                        total = conn.execute("SELECT COUNT(*) FROM 'data/processed/data.parquet'").fetchone()[0]

                        # Clean metrics display - one per row for readability
                        st.metric("📊 Total Records", f"{total:,}")
                        st.metric("💾 Data Size", format_file_size(total_size))
                        st.metric("📁 Data Files", len(st.session_state.parquet_files))
                        if total > 0 and total_size > 0:
                            records_per_mb = int(total / (total_size / (1024 * 1024)))
                            st.metric("⚡ Record Density", f"{records_per_mb:,} per MB")

                except Exception:
                    # Fallback stats - clean single column layout
                    if "total_data_size" not in st.session_state:
                        st.session_state.total_data_size = sum(
                            os.path.getsize(f) for f in st.session_state.parquet_files if os.path.exists(f)
                        )
                    st.metric("📁 Data Files", len(st.session_state.parquet_files))
                    st.metric(
                        "💾 Data Size",
                        format_file_size(st.session_state.total_data_size),
                    )
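
Another way to remove the duplication, shown as a sketch rather than the PR's approach, is to pull the cached size calculation into a small helper that both branches call (cached_total_size is a hypothetical name):

import os

import streamlit as st


def cached_total_size(paths: list[str]) -> int:
    """Sum the sizes of the given files once, memoizing the result in session state."""
    if "total_data_size" not in st.session_state:
        st.session_state.total_data_size = sum(
            os.path.getsize(p) for p in paths if os.path.exists(p)
        )
    return st.session_state.total_data_size

Both the success path and the except fallback would then call cached_total_size(st.session_state.parquet_files), keeping the caching rule in exactly one place.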

Comment on lines +35 to +42
[isort]
profile = black
line_length = 120
multi_line_output = 3
include_trailing_comma = True
force_grid_wrap = 0
use_parentheses = True
ensure_newline_before_comments = True

medium

The isort configuration is also present in pyproject.toml. To avoid duplication and potential conflicts, it's best to keep this configuration in a single location. I recommend removing this section from setup.cfg and relying on the pyproject.toml configuration.
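
For reference, the equivalent pyproject.toml section would look roughly like the following sketch; the PR's actual pyproject.toml remains the source of truth:

[tool.isort]
profile = "black"
line_length = 120

Since the black profile already implies settings such as multi_line_output = 3 and include_trailing_comma = true, keeping only profile and line_length avoids restating defaults.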

ravishan16 and others added 2 commits October 1, 2025 18:51
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
ravishan16 merged commit a6db68c into main on Oct 1, 2025
12 checks passed