Conversation
There are good improvements to error handling in general, and the JSON error responses for API status codes will especially help with frontend integration.
These integration tests are thorough and cover a variety of edge cases! Good work.
This end-to-end test worked nicely when run locally with the extension. It works as intended and provides good error handling and debugging information.
I also pulled and tested them; they are very thorough and catch a lot of different edge cases. I think these are really good and will help us continue the project after the semester ends. I also tried them with the Chrome extension, and the e2e tests work very well.
I ran the tests locally on my machine and they all work as intended. I can also see that Baljinder had no issues running them. I'm not sure why they don't run properly in the CI automation, but I will approve this PR because the tests are still especially helpful for debugging and work as intended.
Pull Request Overview
This pull request adds a suite of automated tests along with coverage reporting and a CI pipeline configuration to improve quality checks for both frontend and backend authentication and recommendation logic. The changes include:
- New unit tests, integration tests, and end-to-end tests for login, signup, and Chrome extension UI flows.
- Automated testing using Playwright, pytest, and requests, along with test-specific enhancements.
- A GitHub Actions workflow to run tests, generate coverage reports, and upload artifacts on every pull request.
Reviewed Changes
Copilot reviewed 25 out of 25 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| tests/test_popup_ui.py | Added Playwright E2E test for extension UI with defensive error handling |
| tests/test_login_e2e.py | Introduced end-to-end tests for login flow using Playwright with screenshot debugging |
| tests/test_login.py | Added API tests for login functionality, including missing field validations |
| tests/test_getRec_logic.py | Added unit tests for recommendation logic with mocks for Supabase and recommendation service |
| tests/test_getRec_integration_api.py | Added integration tests for the getRec API endpoint checking error handling for invalid methods and tokens |
| tests/test_getRec_e2e.py | Introduced E2E tests for recommendation UI with retry loops and conditional expectations |
| tests/test_content_extract.js | Added test for content extraction logic with JSDOM and Chrome extension API mocks |
| tests/login_integration_test.py | Included integration tests for signup and login flows with dynamic xfail/skips |
| fitcheck/pages/api/user_rec/getRec.js | Updated API handler for recommendations to include enhanced error logging and try-catch wrapping |
| .github/workflows/cipipeline.yml | Configured CI pipeline to install, build, test (Python & Playwright), and upload coverage reports |
Diff context (elided lines marked `...`):

```python
        return  # Skip instead of failing
        ...
        # Wait for extension to fully load before proceeding
        time.sleep(3)
```
[nitpick] Consider replacing the fixed sleep with a dynamic wait (e.g., using wait_for_selector on a specific UI element) to improve test reliability in different environments.
Suggested change, replacing the fixed `time.sleep(3)` (note: this relies on `PlaywrightTimeoutError`, e.g. `from playwright.sync_api import TimeoutError as PlaywrightTimeoutError`):

```python
try:
    context.pages[0].wait_for_selector("text=Fitcheck", timeout=5000)
    print("✅ Extension loaded successfully")
except PlaywrightTimeoutError:
    print("⚠️ Extension did not load within the expected time")
    context.close()
    return
```
```python
def test_successful_signup_and_login(cleanup_user):
    try:
        response = requests.post(f"{BASE_URL}/signup", json=TEST_USER)
        print("Signup response:", response.status_code, response.json())
        if response.status_code not in (200, 400):
            pytest.skip("Signup failed unexpectedly")
```
[nitpick] Consider splitting the combined test for successful signup and login into two separate tests to clearly delineate expected behaviors for each scenario.
Suggested change:

```python
def test_successful_signup(cleanup_user):
    try:
        response = requests.post(f"{BASE_URL}/signup", json=TEST_USER)
        print("Signup response:", response.status_code, response.json())
        if response.status_code not in (200, 400):
            pytest.skip("Signup failed unexpectedly")
        assert response.status_code == 200
        assert response.json()["message"] in ["Please verify your email"]
    except Exception as e:
        pytest.skip(f"Test skipped due to exception: {e}")

def test_successful_login(cleanup_user):
    try:
```
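The suggested `test_successful_login` is cut off in the review view. A minimal sketch of how the split-out login test might continue, assuming the same module-level `BASE_URL` and `TEST_USER` constants and a `/login` route mirroring `/signup` (the route and success assertion are assumptions, not part of the PR):

```python
def test_successful_login(cleanup_user):
    try:
        # Assumes the user created by the signup test exists, and that the
        # login route mirrors the signup route (an assumption).
        response = requests.post(f"{BASE_URL}/login", json=TEST_USER)
        print("Login response:", response.status_code, response.json())
        assert response.status_code == 200
    except Exception as e:
        # Same defensive style as the rest of the file: skip on infra errors.
        pytest.skip(f"Test skipped due to exception: {e}")
```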
```yaml
        NEXT_PUBLIC_SUPABASE_KEY: ${{ secrets.SUPABASE_KEY }}
        NEXT_PUBLIC_MOCK_GEMINI: true
      run: |
        nohup npm start > ../nextjs.log 2>&1 &
```
[nitpick] Using 'nohup' for server startup might complicate process management; consider utilizing a dedicated process manager or GitHub Actions background job control instead.
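One possible alternative, as a sketch (not from this PR): drop `nohup`, background the process with `&`, and gate later test steps on a readiness probe. On hosted runners, background processes started in a `run` step typically keep running for the rest of the job. The `wait-on` probe and port 3000 are assumptions about the app's setup:

```yaml
# Sketch of an assumed alternative step, not the PR's actual workflow.
- name: Start Next.js app in background
  env:
    NEXT_PUBLIC_SUPABASE_KEY: ${{ secrets.SUPABASE_KEY }}
    NEXT_PUBLIC_MOCK_GEMINI: true
  run: |
    npm start > ../nextjs.log 2>&1 &
    # Block until the server responds before later steps run the tests
    # (npx wait-on and port 3000 are assumptions; any health check works).
    npx wait-on http://localhost:3000 --timeout 60000
```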
NirathH left a comment:
I made some edits to the pipeline file. @Anas10202, please look them over.
This pull request adds automated testing and coverage reporting for the application. I implemented unit tests for core login logic, integration tests for the `/api/auth/signup` and `/api/auth/login` API endpoints, and end-to-end tests using Playwright that simulate a user logging in through the UI and interacting with the Chrome extension on Amazon.

I also set up a GitHub Actions workflow that installs all dependencies, builds the Next.js frontend, launches the app locally, runs all Python and Playwright tests, generates a coverage report with pytest-cov, and uploads it as an artifact. These checks run automatically on every pull request to the main branch. Code coverage has improved as a result of these additions.

To test locally, install the dependencies from `requirements.txt`, run `playwright install`, and execute `pytest --cov=./ --cov-report=html`; the commands are collected below. Please review the changes and let me know if anything needs to be updated. I've added you as a reviewer.
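For reference, those local-test steps as shell commands (the `pip install` invocation is the assumed way of installing `requirements.txt`; the pytest flags are taken verbatim from the description):

```sh
pip install -r requirements.txt     # Python test dependencies
playwright install                  # fetch Playwright browser binaries
pytest --cov=./ --cov-report=html   # run tests, write an HTML coverage report
```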