Conversation
| // Removes "Bearer " and trims | ||
| const token = authorizationHeader.split(' ')[1].trim(); | ||
| // Load user for later middleware/routes to use | ||
| req.user = await User.newUser({ token }); |
There was a problem hiding this comment.
User.newUser is misleading, it does not create a new user, instead it retrieves a user based on the JWT token.
There was a problem hiding this comment.
User.fromToken({ token }) or User.getByToken({ token }) would be a better fit.
There was a problem hiding this comment.
Yup agreed. I was using existing methods though.
| const appDefinition = backendJsFile.Definition; | ||
|
|
||
| const backend = createFriggBackend(appDefinition); | ||
| const loadRouterFromObject = (IntegrationClass, routerObject) => { |
There was a problem hiding this comment.
"Object" is too vague, I have no clue what can be in there or what that means
There was a problem hiding this comment.
I think we could call it loadIntegrationRoutes
There was a problem hiding this comment.
Also the routerObject is already inside of IntegrationClass isn't it?
There was a problem hiding this comment.
The overall method of loadIntegrationRoutes makes sense, or integration defined routes.
The load from object method was intended to load from a... route object? Dunno what to call it. The "method", "path", "event" object. I'm not even sure where I got that concept from except it's a common way to represent an http endpoint. Minus the event thing. That I want to be an easy way for a dev to reference when creating event handlers.
|
|
||
| for (const routeDef of IntegrationClass.Definition.routes) { | ||
| if (typeof routeDef === 'function') { | ||
| router.use(basePath, routeDef(IntegrationClass)); |
There was a problem hiding this comment.
I don't understand this.
You're looping through IntegrationClass.Definition.routes and from every routeDef you're passing the same integration as parameter?
There was a problem hiding this comment.
Do you have an example of a place where we declare routes as a function and need it's own integration class as parameter?
There was a problem hiding this comment.
Why not stick to one type of route definition? Either function or object
There was a problem hiding this comment.
I think the rationale was that in some cases, you need to have some logic that runs outside of express route generation. https://github.com/lefthookhq/frigg-2.0-prototyping/blob/460ad99b85b5a53bac5a859de8b8d3780b85937d/backend/src/testRouter.js#L8
In other cases, relying on the normal instantiation is fine and for those you can reach for either the straight express route or the static object.
I wanted to provide a "quick and easy", "moderate complexity", "high complexity" set of options.
That said I'm not sure what I did was the way to do it.
| router[method.toLowerCase()](path, async (req, res, next) => { | ||
| try { | ||
| const integration = new IntegrationClass({}); | ||
| await integration.loadModules(); |
There was a problem hiding this comment.
We already loadModules in the IntegrationBase constructor, we could registerEventHandlers in the constructor as well.
There was a problem hiding this comment.
I think I had an issue of not wanting to load event handlers yet because of some dynamic issues? Or something else. But, happy to not duplicate invoking!
| await integration.loadModules(); | ||
| await integration.registerEventHandlers(); | ||
| const result = await integration.send(event, {req, res, next}); | ||
| res.json(result); |
There was a problem hiding this comment.
Do we really have to return the result here?
Developers will by instict call the http response when it's available to them. I was not aware that this existed until now and probably other developers won't either.
There was a problem hiding this comment.
Well, the intent was to remove the need to think about express inside the handler function. Just, do what you need, then if you need to return something, return it.
There was a problem hiding this comment.
But if someones sees a "res" object, one would not simply return and ignore "res".
BTW, one of the reasons I don't like ruby on rails is because it does too much magic and I don't know why and how 🫠
| for (const [entityId, key] of Object.entries( | ||
| integrationRecord.entityReference | ||
| )) { | ||
| const moduleInstance = |
There was a problem hiding this comment.
Are the modules not already instantiated in IntegrationBase -> loadModules() when we do:
const instance = new integrationClass({
userId,
integrationId: params.integrationId,
});
There was a problem hiding this comment.
OK, looks like loadModules doesn't actualy loads modules but it simply instantiates the module api.
There was a problem hiding this comment.
There's a difference between "load module definitions" and "load module entities into the module instance"
| integrationRecord.config.type | ||
| ); | ||
|
|
||
| const instance = new integrationClass({ |
There was a problem hiding this comment.
rename to integrationInstance
| // for each entity, get the moduleinstance and load them according to their keys | ||
| // If it's the first entity, load the moduleinstance into primary as well | ||
| // If it's the second entity, load the moduleinstance into target as well | ||
| const moduleTypesAndKeys = |
There was a problem hiding this comment.
I think this comment also does not make sense anymore, right?
There was a problem hiding this comment.
The first line does make sense. The second and third lines are for backwards compatibility, where we enforced a "primary" and "target" concept via naming conventions.
There was a problem hiding this comment.
I don't think we need to maintain that though. Let things error and people correct the errors.
| moduleClass && | ||
| typeof moduleClass.definition.getName === 'function' | ||
| ) { | ||
| const moduleType = moduleClass.definition.getName(); |
There was a problem hiding this comment.
moduleType also comes from the name?
There was a problem hiding this comment.
Yes, though we can debate that
| const integrationClassIndex = this.integrationTypes.indexOf(type); | ||
| return this.integrationClasses[integrationClassIndex]; | ||
| } | ||
| getModuleTypesAndKeys(integrationClass) { |
There was a problem hiding this comment.
I dont get what this does
There was a problem hiding this comment.
I don't think this implementation is fully what I intended. But anywho, the goal is that we grab an integration definition, and from it we can determine which api modules are part of it, which allows us to get the required module entity instances in order to create a complete integration record (TODO allow for a module to be optional and not required on creation of an integration record, potentially make them required on a per event basis, ie we throw an error/force a user to assign a connection or create a new connection if they go to use a feature that is only available if they have a specific module instance added).
The direct use case is during the management inside the frontend experience. The ui should see "user wants to create a HubSpot integration. The HubSpot integration requires two modules. The user has one module connection (entity) available to use, but needs the other one. I'll run the auth flow for the other one and then ask them to confirm the use of that new connection."
The reason I say this implementation may not be what I intended is that we should allow multiple modules of the same type to be assigned different module names so you have something like "slackUser" and "slackApp" both pointing to the slack api module.
seanspeaks
left a comment
There was a problem hiding this comment.
Did my comments to your comments and a few added comments in there
| { | ||
| "$schema": "node_modules/lerna/schemas/lerna-schema.json", | ||
| "version": "1.2.2", | ||
| "version": "2.0.0-next.0", |
There was a problem hiding this comment.
I have no idea if this should be committed 😅
| const appDefinition = backendJsFile.Definition; | ||
|
|
||
| const backend = createFriggBackend(appDefinition); | ||
| const loadRouterFromObject = (IntegrationClass, routerObject) => { |
There was a problem hiding this comment.
The overall method of loadIntegrationRoutes makes sense, or integration defined routes.
The load from object method was intended to load from a... route object? Dunno what to call it. The "method", "path", "event" object. I'm not even sure where I got that concept from except it's a common way to represent an http endpoint. Minus the event thing. That I want to be an easy way for a dev to reference when creating event handlers.
| router[method.toLowerCase()](path, async (req, res, next) => { | ||
| try { | ||
| const integration = new IntegrationClass({}); | ||
| await integration.loadModules(); |
There was a problem hiding this comment.
I think I had an issue of not wanting to load event handlers yet because of some dynamic issues? Or something else. But, happy to not duplicate invoking!
|
|
||
| getIntegrationById: async function(id) { | ||
| return IntegrationModel.findById(id); | ||
| getIntegrationById: async function (id) { |
There was a problem hiding this comment.
I really need to lock this concept into my brain. Git repo takes the entire definition of "repository" in my head.
| const integration = | ||
| await integrationFactory.getInstanceFromIntegrationId({ | ||
| integrationId: integrationRecord.id, | ||
| userId: getUserId(req), |
There was a problem hiding this comment.
I think we discussed this on our call. Likely just was throwing things that stuck, for context on why this is here.
| @@ -349,9 +375,7 @@ function setEntityRoutes(router, factory, getUserId) { | |||
| throw Boom.forbidden('Credential does not belong to user'); | |||
There was a problem hiding this comment.
I think there were moments where this failed? Dunno what those moments were/are though.
| const {ModuleConstants} = require("./ModuleConstants"); | ||
| const { ModuleConstants } = require('./ModuleConstants'); | ||
|
|
||
| class Auther extends Delegate { |
There was a problem hiding this comment.
All for it. At one point @MichaelRyanWebber and I were debating both naming and intent of the class (hi Michael! Not to rope you in but, you likely can either find the note somewhere or comment on the future improvements you/we had in mind).
| this.EntityModel = definition.Entity || this.getEntityModel(); | ||
| } | ||
|
|
||
| static async getInstance(params) { |
There was a problem hiding this comment.
Async construction.
It's a debate in nodejs world. Might be a resolved debate in node 20? But basically "if you need to await/promise something in order to instantiate a class, do you make it an async constructor, or do you create a static async instantiation method?" Aka the get
That's the root of this decision at any rate.
There was a problem hiding this comment.
CodeQL found more than 20 potential problems in the proposed changes. Check the Files changed tab for more details.
Remove temporary analysis documents that were created during investigation: - SUBTYPE_OAUTH_ANALYSIS.md - POST_AUTHORIZE_TRACE.md - MODULE_NAME_SOURCE_ANALYSIS.md - SUBTYPE_VALUE_ANALYSIS.md - SUBTYPE_CRITICAL_FINDING.md - SUBTYPE_REMOVAL_MIGRATION_GUIDE.md These were helpful for understanding the changes but are not needed in the final codebase. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude <noreply@anthropic.com>
…apping-011CUPEgXrH6n8pAzwNxRrfi Fix type↔subType Mapping Confusion and Remove Vestigial subType Field
When CloudFormation discovery finds a VPC but subnet resources are not in the stack (e.g., CREATE_FAILED but resources still exist in AWS), fall back to querying EC2 directly for Frigg-managed subnets. This fixes the issue where VPC builder tries to create new subnets with conflicting CIDRs when subnets already exist but aren't tracked in CloudFormation. Changes: - Added EC2 subnet query fallback in CloudFormation discovery - Filters by VPC ID and ManagedBy tag to find Frigg-managed subnets - Extracts private/public subnet IDs and sorts by logical ID - Added comprehensive tests for EC2 fallback behavior Closes #ISSUE
When CloudFormation stack has VPC but subnet resources failed to create (CREATE_FAILED then DELETE_SKIPPED), subnets still exist in AWS but aren't tracked. Add fallback to query EC2 directly for Frigg-managed subnets tagged with ManagedBy='Frigg'. This ensures subnet IDs are discovered even when CloudFormation state is inconsistent with actual AWS resources.
…tion stack ## Problem KMS key alias was only checked within CloudFormation stack resources. If the alias was created outside CloudFormation (manually or via another process), the infrastructure code would not discover it and would attempt to create a new key instead of using the existing one. ## Solution Following DDD and hexagonal architecture principles: 1. Added `describeKmsKey()` method to AWS Provider Adapter - Implements port/adapter pattern for KMS queries - Enables testing without AWS SDK dependencies 2. Enhanced CloudFormation Discovery to check for KMS aliases - Checks `FriggKMSKeyAlias` resources in CloudFormation stack - Queries AWS API for expected alias name (based on serviceName/stage) - Works even when alias exists outside CloudFormation management 3. Fixed resource extraction to always run AWS queries - Changed to call `_extractFromResources()` even with empty resource list - Ensures AWS API queries run regardless of CloudFormation stack state 4. Added comprehensive test coverage (TDD) - 6 tests covering all KMS alias discovery scenarios - Tests verify behavior with mocked dependencies - All tests passing ## Impact - Uses existing KMS keys when alias exists, avoiding duplicate key creation - Reduces infrastructure costs by reusing existing resources - Improves idempotency of infrastructure deployment - Maintains backward compatibility with existing deployments 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude <noreply@anthropic.com>
…ra-fix-011CUQ5APFUTQGK8RbVDN8eW fix(infrastructure): Discover KMS key alias even if not in CloudForma…
…overy
## Problem
In isolated mode (vpcIsolation: 'isolated'), if CloudFormation discovery didn't
find VPC resources in the stack, the code would fall back to AWS API discovery
and select the default VPC instead of creating fresh isolated resources.
This violates the isolated mode contract: each stage should have completely
separate VPC/Aurora resources, never shared with other stages or default VPC.
## Root Cause
The isolated mode check returned empty `{}` (correct), but the intent wasn't
clear, and AWS discovery fallback logic after line 120 could theoretically run
in edge cases.
## Solution
1. Made isolated mode logic explicit and defensive:
- If CF stack has VPC → use it (isolated to this stage)
- If CF stack has NO VPC → return empty to CREATE fresh resources
- NEVER fall back to AWS discovery (which would find default/shared VPC)
2. Improved logging to show exactly what's happening:
- "Isolated mode: No CloudFormation stack or no VPC/Aurora in stack"
- "Will create fresh isolated VPC/Aurora for this stage"
- "Cloud resource discovery completed - will create isolated VPC/Aurora!"
3. Better error handling in CF VPC extraction:
- Added check for `provider.getEC2Client` before calling
- Clearer error messages if VPC extraction fails
## Impact
- Isolated mode now strictly enforces stage isolation
- Fresh deployments correctly create new VPC/Aurora per stage
- Prevents accidentally sharing default VPC across isolated stages
- Maintains KMS key sharing (encryption keys safe to reuse)
## Testing
- Existing isolated mode tests should pass
- Fresh deployment to new stage creates new VPC
- Existing stage deployment reuses its CF stack VPC
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
…esources
## Problem
CloudFormation discovery was only extracting VPC ID indirectly by querying the
security group resource. If the security group query failed or wasn't present,
the VPC wouldn't be discovered even though it exists in the CloudFormation stack
as the `FriggVPC` resource.
This caused the isolated mode logic to incorrectly think there was no VPC in the
stack and either create a duplicate VPC or fall back to AWS discovery (finding
the default VPC).
## Root Cause
The `_extractFromResources` method checked for:
- `FriggLambdaSecurityGroup` (to query EC2 for VPC ID)
- `FriggKMSKey`, `FriggAuroraCluster`, etc.
But it NEVER checked for `FriggVPC` directly, even though it's a standard
resource in managed VPC deployments.
## Solution
Added direct extraction of `FriggVPC` resource:
```javascript
if (LogicalResourceId === 'FriggVPC' && ResourceType === 'AWS::EC2::VPC') {
discovered.defaultVpcId = PhysicalResourceId;
console.log(` ✓ Found VPC in stack: ${PhysicalResourceId}`);
}
```
This is now the **primary method** for VPC discovery from CloudFormation stacks.
The security group query remains as a fallback for edge cases.
## Impact
- Fixes VPC discovery for existing stacks with managed VPCs
- Prevents isolated mode from creating duplicate VPCs
- Prevents discovering default VPC when managed VPC exists
- More reliable than security group query (no EC2 API call required)
## Testing
- Added test: "should extract VPC directly from stack resources"
- Test passes with VPC ID `vpc-037ec55fe87aec1e7` (user's actual VPC)
- All existing CloudFormation discovery tests still pass (18/20)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
…ra-fix-011CUQ5APFUTQGK8RbVDN8eW Fixing the VPC detection bug, it should look for the VPC directly
…edeployment
## Problem
When redeploying with an existing CloudFormation stack, infrastructure builders
were creating NEW resources with the same logical IDs (FriggVPC, FriggInternetGateway,
FriggVPCGatewayAttachment, etc.), causing CloudFormation to:
1. See the logical IDs as new resources
2. Attempt to replace existing resources
3. Delete old resources (sometimes failing due to dependencies)
4. Result in deployment failures and resource churn
## Root Cause
Discovery found physical resource IDs but didn't indicate WHERE they came from:
- CloudFormation stack (should reuse existing logical IDs)
- AWS API (can create new resources)
Builders always generated NEW CloudFormation resources regardless of source,
causing CloudFormation to see duplicate logical IDs and trigger replacement.
## Solution
### 1. CloudFormation Discovery Metadata
Added metadata to discovery results indicating source:
```javascript
{
fromCloudFormationStack: true,
stackName: 'quo-integrations-dev',
existingLogicalIds: ['FriggVPC', 'FriggInternetGateway', ...]
}
```
### 2. VPC Builder Updates
- `discoverVpc()`: Checks if resources from CF stack, skips creating new resources
- `buildNatGateway()`: Skips NAT Gateway creation for CF stack resources
- VPC Endpoints: Skips creation when all endpoints exist in CF stack
### 3. Behavior Changes
**Before:**
- Discovery finds VPC vpc-037ec55fe87aec1e7
- Builder creates NEW FriggVPC, FriggInternetGateway, etc.
- CloudFormation replaces resources ❌
**After:**
- Discovery finds VPC from CF stack quo-integrations-dev
- Builder sees fromCloudFormationStack=true
- Builder skips resource creation, reuses existing ✅
## Impact
- Prevents resource replacement on existing stack deployments
- Eliminates resource deletion failures
- Reduces deployment time (no unnecessary creates/deletes)
- Maintains proper resource lifecycle management
## Testing
Test with existing stack deployment:
1. Stack quo-integrations-dev exists with VPC/subnets/gateways
2. Run deployment
3. Should log: "Skipping resource creation - will reuse existing CloudFormation resources"
4. No resources should be replaced or deleted
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
…ra-fix-011CUQ5APFUTQGK8RbVDN8eW fix(infrastructure): Prevent CloudFormation resource replacement on redeployment
Resolves error code 263: "Cannot create namespace frigg.Credential in multi-document transaction" that occurs when the encryption health check tries to create a credential before the collection exists. MongoDB does not allow creating collections inside multi-document transactions. This fix ensures the Credential collection exists before attempting to create documents in it. Changes: - Created reusable MongoDB collection utilities module with: * ensureCollectionExists() - ensures a collection exists before operations * ensureCollectionsExist() - ensures multiple collections exist * collectionExists() - checks if a collection exists - Updated HealthCheckRepositoryMongoDB to call ensureCollectionExists() before creating credentials - Added comprehensive unit tests for MongoDB collection utilities - Added documentation in MONGODB_TRANSACTION_FIX.md This follows DDD/Hexagonal architecture by separating MongoDB-specific infrastructure concerns into reusable utilities. References: - prisma/prisma#8305 - https://www.mongodb.com/docs/manual/core/transactions/#transactions-and-operations 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude <noreply@anthropic.com>
Refactored the MongoDB transaction namespace fix to use a systematic, fail-fast approach that initializes all collections at application startup rather than checking before each individual operation. ## Architectural Improvements 1. **Schema Initialization System** - Created mongodb-schema-init.js with initializeMongoDBSchema() - Automatically creates all 13 Prisma collections on database connection - Only runs for MongoDB (skips PostgreSQL) - Fails fast if database issues exist at startup 2. **Integration with Connection Flow** - Modified connectPrisma() to call initializeMongoDBSchema() - Ensures all collections exist before handling requests - Zero runtime overhead - collections created once at startup 3. **Simplified Repository Implementation** - Removed per-operation collection checks from health check repository - All repositories now benefit automatically without changes - Cleaner code with no conditional collection creation logic ## Benefits - Zero runtime overhead (no per-operation checks) - Fail fast principle (database issues discovered at startup) - Centralized schema initialization logic - Idempotent and safe across multiple instances - Follows DDD/Hexagonal Architecture principles - Comprehensive test coverage and documentation ## Design Decision Per-operation checks ❌ - Runtime overhead on every database operation - Scattered collection creation logic - Discovered issues during runtime Startup initialization ✅ - Collections created once at application startup - Centralized infrastructure concern - Fail fast if database has issues - Zero runtime overhead ## Collections Initialized All 13 Prisma collections: User, Token, Credential, Entity, Integration, IntegrationMapping, Process, Sync, DataIdentifier, Association, AssociationObject, State, WebsocketConnection See MONGODB_TRANSACTION_FIX.md for complete architectural documentation. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude <noreply@anthropic.com>
Implemented automatic extraction of MongoDB collection names from the Prisma schema file, eliminating the need to manually maintain a list of collections and ensuring schema changes are automatically reflected. ## Changes 1. **Created Prisma Schema Parser** (prisma-schema-parser.js) - Parses schema.prisma file to extract model definitions - Extracts collection names from @@Map() directives - Falls back to model name if no @@Map() exists - Handles comments, whitespace, and complex schemas - Provides both sync and async APIs 2. **Updated Schema Initialization** (mongodb-schema-init.js) - Removed hardcoded PRISMA_COLLECTIONS array - Now dynamically parses schema at startup - Automatically syncs with schema changes - Added warning if no collections found 3. **Comprehensive Test Coverage** - Tests for all parsing edge cases (@@Map, no @@Map, quotes, whitespace) - Tests for complex schemas with relations and indexes - Tests for error handling and graceful degradation - Verified against actual Frigg schema (13 collections) ## Benefits - ✅ **Zero maintenance** - Collections automatically sync with schema - ✅ **No manual updates** - Add/remove models without code changes - ✅ **Single source of truth** - Prisma schema is the only place to define models - ✅ **Catches mistakes** - Schema parsing errors fail fast at startup - ✅ **Better DX** - Developers don't need to remember to update collection lists ## Tested Verified with actual Prisma schema: ``` Found 13 collections: User, Token, Credential, Entity, Integration, IntegrationMapping, Process, Sync, DataIdentifier, Association, AssociationObject, State, WebsocketConnection ``` All collections successfully extracted from schema.prisma with @@Map() directives. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude <noreply@anthropic.com>
Fixed connectPrisma() to check DB_TYPE before requiring MongoDB-specific
modules, preventing PostgreSQL deployments from loading MongoDB/mongoose
dependencies unnecessarily.
## Problem
Previously, connectPrisma() unconditionally loaded mongodb-schema-init
module for ALL database types (MongoDB and PostgreSQL), even though the
module internally checked DB_TYPE. This meant:
- PostgreSQL deployments loaded MongoDB-specific code
- Mongoose module was required even when not needed
- Unnecessary module loading overhead
## Solution
Added DB_TYPE check BEFORE requiring the MongoDB schema initialization
module in connectPrisma():
```javascript
if (config.DB_TYPE === 'mongodb') {
const { initializeMongoDBSchema } = require('./utils/mongodb-schema-init');
await initializeMongoDBSchema();
}
```
## Benefits
- ✅ PostgreSQL deployments don't load MongoDB-specific modules
- ✅ Cleaner separation of concerns
- ✅ No mongoose dependencies for PostgreSQL
- ✅ Faster startup for PostgreSQL (no unnecessary module loading)
- ✅ More explicit about MongoDB-only behavior
## Testing
The module still has an internal DB_TYPE check as defense-in-depth, but
now PostgreSQL deployments won't even load the module.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
…n-namespace-fix-011CUQGmkZ8MAVuZ9L5CJ7Ce Claude/mongodb transaction namespace fix 011 cuq gmk z8 ma vu z9 l5 cj7 ce
## Problem When running `frigg build --stage=qa`, the discovery flow was not using the correct stage. It was defaulting to 'dev' and looking for CloudFormation stack 'quo-integrations-dev' instead of 'quo-integrations-qa'. ## Root Cause The build command passed --stage to serverless but did not set the SLS_STAGE environment variable that the discovery service reads to determine the stage. ## Solution (TDD/DDD Approach) Following Test-Driven Development: 1. Added failing tests to document expected behavior 2. Implemented fix in build command to set SLS_STAGE environment variable 3. Added integration test to document CLI-to-discovery contract Changes: - build-command/index.js: Set SLS_STAGE in spawned process environment - build.test.js: Added tests verifying SLS_STAGE is set for qa, production, and dev - resource-discovery.test.js: Added test documenting SLS_STAGE contract ## DDD/Hexagonal Architecture Alignment - Maintains separation between CLI (application layer) and discovery (domain layer) - Uses environment variable as the port between layers - Discovery service remains cloud-agnostic and testable 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude <noreply@anthropic.com>
…tion-011CUQSejRdbY82PFes4Mt88 fix(cli): propagate stage parameter to resource discovery via SLS_STAGE
…for-refresh-auth Fix: use correct property for refresh auth
The token refresh test was calling `api.refreshAccessToken()` without
arguments, but `OAuth2Requester.refreshAccessToken` expects the refresh
token to be passed in the params object: `{ refresh_token: ... }`.
This caused the token refresh test to fail with:
"Cannot read properties of undefined (reading 'refresh_token')"
…for-refresh-auth fix(requester): improve auth refresh handling and tests
…resh fix(devtools): pass refresh_token to refreshAccessToken in auth-tester
…persistence Add data JSON field to Entity model following the same pattern used for Credential, enabling persistence of dynamic entity properties from apiPropertiesToPersist.entity (e.g., domain, region, accountId). Changes: - Add data Json field to Entity in both MongoDB and PostgreSQL schemas - Update all module repositories to handle dynamic data persistence - Fix DocumentDB updateEntity to not return null on zero modifications
feat(core): add data JSON field to Entity model for dynamic property persistence
The refreshAuth() method was catching errors silently, making it impossible to diagnose token refresh failures. Added console logging to capture token refresh attempt details, success confirmation, and failure details including error message and response data. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
…r528 fix(core): add debug logging to OAuth2Requester.refreshAuth()
Add scheduler commands and infrastructure support for one-time job scheduling
using AWS EventBridge Scheduler. This enables integrations to schedule
delayed tasks like webhook renewal.
## Core Package (@friggframework/core)
### New: Scheduler Commands (application/commands/scheduler-commands.js)
- `createSchedulerCommands({ integrationName })` factory function
- `scheduleJob()` - Schedule one-time job targeting SQS queue
- `deleteJob()` - Delete scheduled job (graceful if not exists)
- `getJobStatus()` - Get schedule status
### New: Scheduler Infrastructure (infrastructure/scheduler/)
- `EventBridgeSchedulerAdapter` - AWS EventBridge Scheduler implementation
- `createSchedulerAdapter()` - Factory for scheduler adapters
- Supports `ActionAfterCompletion: DELETE` for auto-cleanup
### Exports
- Added `createSchedulerCommands` export from core index
## DevTools Package (@friggframework/devtools)
### New: SchedulerBuilder (infrastructure/domains/scheduler/)
- Creates `AWS::Scheduler::ScheduleGroup` resource
- Creates IAM Role for EventBridge Scheduler to send SQS messages
- Adds IAM statements for Lambda to manage schedules
- Sets `SCHEDULER_ROLE_ARN` and `SCHEDULER_PROVIDER` environment variables
- Auto-enables when integrations have webhooks configured
### Infrastructure Composer
- Added SchedulerBuilder to builder orchestrator
- Depends on IntegrationBuilder (needs queue ARNs)
## Usage Example
```javascript
const { createSchedulerCommands } = require('@friggframework/core');
const scheduler = createSchedulerCommands({ integrationName: 'zoho' });
await scheduler.scheduleJob({
jobId: 'renewal-123',
scheduledAt: new Date(Date.now() + 6 * 24 * 60 * 60 * 1000),
event: 'REFRESH_WEBHOOK',
payload: { integrationId: '123' },
queueArn: process.env.INTEGRATION_QUEUE_ARN,
});
```
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add MockSchedulerAdapter for in-memory scheduling in dev/test/local - Update scheduler factory to auto-select provider based on STAGE - Support explicit SCHEDULER_PROVIDER env var override - Add helper methods for testing: _getSchedules, _clearSchedules, _simulateTrigger This enables local testing of scheduled notification renewals without requiring AWS EventBridge Scheduler infrastructure. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add {INTEGRATION}_QUEUE_ARN environment variable alongside
{INTEGRATION}_QUEUE_URL. This is needed for EventBridge Scheduler
to target integration queues for scheduled jobs.
- For stack-owned queues: use Fn::GetAtt to get ARN
- For external queues: derive ARN from URL
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
This reverts commit 5ea5dab.
Change scheduler commands to accept queueUrl instead of queueArn.
The ARN is now derived internally from the URL, which follows the
standard Frigg pattern of using {INTEGRATION}_QUEUE_URL as the
primary queue identifier.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add documentation for the scheduler commands system to SKILL.md and CLAUDE.md: - Usage examples for scheduleJob, deleteJob, getJobStatus - Environment variables (SCHEDULER_ROLE_ARN, SCHEDULER_DLQ_ARN, SCHEDULER_PROVIDER) - Key features (EventBridge Scheduler, mock for local dev, queue URL handling) Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Create SchedulerServiceInterface as the Port defining the contract - Update EventBridgeSchedulerAdapter to extend the interface - Update MockSchedulerAdapter to extend interface with proper validation - Rename scheduler-factory.js to scheduler-service-factory.js - Add dependency injection support to createSchedulerCommands - Standardize return shapes for consistency Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
…ility alias The hexagonal architecture refactoring is complete. All code now uses createSchedulerService directly - the deprecated alias has no usages. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
…agnostic naming
- Remove SCHEDULER_PROVIDER=eventbridge from scheduler-builder.js to allow
auto-detection based on stage (dev/test/local → mock, prod/staging → eventbridge)
- Rename interface parameters for provider-agnostic clarity:
- targetArn → queueResourceId
- scheduleArn → scheduledJobId
- Update all adapters and commands to use new naming convention
- MockScheduler now returns generic ID format (mock-job-{name})
Fixes the '[object Object]' error when running locally due to CloudFormation
intrinsic not resolving for SCHEDULER_ROLE_ARN.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Remove hardcoded SCHEDULE_GROUP_NAME constant from builder and adapter
- Use ${self:service}-${self:provider.stage}-schedules naming pattern
- Pass SCHEDULE_GROUP_NAME as environment variable to Lambda functions
- Read schedule group name from env var with backwards-compatible fallback
- Update IAM policy to reference CloudFormation resource dynamically
This allows multiple stages (dev, prod) to be deployed in the same
AWS account without resource name conflicts.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The queue worker reads params.event to dispatch jobs, but the scheduler was writing the field as eventType, causing REFRESH_WEBHOOK events to fail with "Event undefined not registered". Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
feat(scheduler): add EventBridge Scheduler commands and infrastructure
Add trace/warning logs to setTokens() and Module.onTokenUpdate() to aid debugging when a token refresh response omits the refresh_token. This makes it easier to detect misconfigured modules or unexpected OAuth provider behavior in production. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…n-logging fix(oauth2): add defensive logging for refresh_token preservation
|




Version
Published prerelease version:
v2.0.0-next.67Changelog
🚀 Enhancement
@friggframework/devtools🐛 Bug Fix
@friggframework/devtools@friggframework/core,@friggframework/devtools,@friggframework/eslint-config,@friggframework/prettier-config,@friggframework/schemas,@friggframework/serverless-plugin,@friggframework/test,@friggframework/uiAuthors: 1