
feat: allow kernel.plugin_error event #187

Open
0x4007 wants to merge 5 commits into development from chore/kernel-plugin-error-event
Conversation

0x4007 (Member) commented Feb 2, 2026

Allow kernel.plugin_error in plugin-sdk config and manifest schemas.

Copilot AI review requested due to automatic review settings February 2, 2026 08:13
gentlementlegen (Member):

You should not target main

Copilot AI (Contributor) left a comment

Pull request overview

This PR adds support for a custom kernel.plugin_error event in plugin configuration and manifest schemas, and implements automatic kernel token refresh before LLM API calls. The PR includes significant functionality beyond what the title suggests.

Changes:

  • Added kernel.plugin_error to allowed events in both manifest and configuration schemas
  • Implemented automatic kernel attestation token refresh when calling LLM API with GitHub tokens
  • Added comprehensive test coverage for the token refresh flow
  • Cleaned up duplicate documentation sections in README
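For illustration, a plugin opting into the new event might declare it alongside standard webhook events. The plugin URL and the surrounding configuration keys below are assumptions for the sketch, not taken from this PR:

```yaml
# Illustrative only: plugin URL and nesting are assumed, not from this PR.
plugins:
  - uses:
      - plugin: https://example-plugin.ubq.fi
    runsOn:
      - issue_comment.created
      - kernel.plugin_error
```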

Reviewed changes

Copilot reviewed 6 out of 6 changed files in this pull request and generated 8 comments.

Summary per file:

  • src/types/manifest.ts: Added kernel.plugin_error to custom events array for manifest schema validation
  • src/configuration/schema.ts: Added kernel.plugin_error to custom events array and modified plugin settings schema to make the "with" field non-optional with a default
  • src/llm/index.ts: Implemented kernel token refresh logic including URL resolution, token update, and refresh API call handling
  • tests/llm.test.ts: Added test case validating the kernel token refresh flow before LLM calls
  • README.md: Removed duplicate LLM utility documentation sections
  • CHANGELOG.md: Added entry for the kernel attestation refresh feature


Comment on lines +128 to +191
it("refreshes kernel attestation before calling the LLM for GitHub auth", async () => {
  const completion = {
    id: "completion-2",
    object: "chat.completion",
    created: 1,
    model: "gpt-5.1",
    choices: [],
  } as ChatCompletion;

  const input = {
    authToken: "ghs_initial_token",
    ubiquityKernelToken: "kernel-initial",
    eventPayload: {
      repository: { owner: { login: "octo" }, name: "repo" },
      installation: { id: 123 },
    },
    config: {
      kernelRefreshUrl: "https://kernel.test/internal/agent/refresh-token",
    },
  };

  const fetchMock = jest.spyOn(globalThis, "fetch").mockImplementation(async (url) => {
    if (String(url) === "https://kernel.test/internal/agent/refresh-token") {
      return new Response(JSON.stringify({ authToken: "ghs_refreshed", ubiquityKernelToken: "kernel-refreshed" }), {
        status: 200,
        headers: { "Content-Type": "application/json" },
      });
    }
    if (String(url) === "https://ai.ubq.fi/v1/chat/completions") {
      return new Response(JSON.stringify(completion), {
        status: 200,
        headers: { "Content-Type": "application/json" },
      });
    }
    return new Response("not found", { status: 404 });
  });

  const result = await callLlm({ messages: [{ role: "user", content: "Hi" }], baseUrl: "https://ai.ubq.fi" }, input as typeof baseInput);

  expect(result).toEqual(completion);
  expect(fetchMock).toHaveBeenCalledTimes(2);
  expect(fetchMock).toHaveBeenCalledWith(
    "https://kernel.test/internal/agent/refresh-token",
    expect.objectContaining({
      method: "POST",
      headers: expect.objectContaining({
        Authorization: "Bearer ghs_initial_token",
        "X-Ubiquity-Kernel-Token": "kernel-initial",
        "X-GitHub-Owner": "octo",
        "X-GitHub-Repo": "repo",
        "X-GitHub-Installation-Id": "123",
      }),
    })
  );
  expect(fetchMock).toHaveBeenCalledWith(
    "https://ai.ubq.fi/v1/chat/completions",
    expect.objectContaining({
      headers: expect.objectContaining({
        Authorization: "Bearer ghs_refreshed",
        "X-Ubiquity-Kernel-Token": "kernel-refreshed",
      }),
    })
  );
});
Copilot AI commented Feb 2, 2026

Test coverage is missing for error scenarios in the kernel token refresh flow, such as: refresh endpoint returns non-2xx status, refresh response missing required fields, refresh response with invalid JSON, or network failures during refresh. These scenarios should be tested to ensure proper error handling and messaging.
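As a hedged sketch of what one such error-path check could exercise, the snippet below reproduces the non-2xx branch of the refresh logic in self-contained form; refreshOrThrow and the stubbed fetch are illustrative stand-ins, not the SDK's actual API:

```typescript
// Illustrative stand-in for the non-2xx branch of refreshKernelTokens.
async function refreshOrThrow(url: string): Promise<unknown> {
  const response = await fetch(url, { method: "POST" });
  const text = await response.text().catch(() => "");
  if (!response.ok) {
    // Surface the HTTP status on the error so callers can inspect it.
    const error = new Error(`Kernel refresh error: ${response.status} - ${text}`);
    (error as Error & { status?: number }).status = response.status;
    throw error;
  }
  return JSON.parse(text);
}

// Stub fetch so the sketch runs without a network: the endpoint always fails.
globalThis.fetch = async () => new Response("upstream unavailable", { status: 503 });

async function demo(): Promise<number | undefined> {
  try {
    await refreshOrThrow("https://kernel.test/internal/agent/refresh-token");
    return undefined; // no error was thrown
  } catch (err) {
    return (err as Error & { status?: number }).status;
  }
}
```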

Comment on lines +4 to +5
const customKernelEvents = ["kernel.plugin_error"] as const;
export const runEvent = T.Union([...emitterEventNames, ...customKernelEvents].map((o) => T.Literal(o)));
Copilot AI commented Feb 2, 2026

There are no tests validating that the new kernel.plugin_error event is accepted in plugin configurations and manifests. Consider adding tests that verify this event can be used in both the runsOn configuration array and the ubiquity:listeners manifest array, similar to existing tests for standard GitHub webhook events like issues.opened.

Comment on lines +130 to +142
if (isGitHub && kernelRefreshUrl && kernelToken && !hasExplicitAiAuth) {
  const refreshed = await refreshKernelTokens({
    url: kernelRefreshUrl,
    authToken,
    kernelToken,
    owner,
    repo,
    installationId,
  });
  authToken = refreshed.authToken;
  kernelToken = refreshed.kernelToken;
  updateInputTokens(input, refreshed);
}
Copilot AI commented Feb 2, 2026

The callLlm function mutates the input object by updating authToken and ubiquityKernelToken when refreshing kernel tokens. This side effect is not documented and could lead to unexpected behavior if callers rely on the input object remaining unchanged. Consider documenting this behavior in a JSDoc comment for the function, or consider returning the updated tokens as part of the response instead of mutating the input.
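One hedged way to avoid the mutation, along the lines this comment suggests, is to return the refreshed tokens alongside the completion and let callers opt in; the types and names below are illustrative, not the SDK's actual return shape:

```typescript
// Illustrative shape for a non-mutating callLlm result: refreshed tokens are
// returned to the caller instead of being written back into `input`.
interface KernelTokens {
  authToken: string;
  kernelToken: string;
}

interface LlmCallResult<T> {
  completion: T;
  refreshedTokens: KernelTokens | null; // null when no refresh occurred
}

function withRefreshedTokens<T>(completion: T, refreshed: KernelTokens | null): LlmCallResult<T> {
  // Pure: neither argument is modified; callers decide whether to adopt the tokens.
  return { completion, refreshedTokens: refreshed };
}

const result = withRefreshedTokens({ id: "completion-2" }, { authToken: "ghs_refreshed", kernelToken: "kernel-refreshed" });
```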

Comment on lines +165 to +191

Copilot AI commented Feb 2, 2026

The test validates that the refreshed tokens are used in the LLM API call, but doesn't verify that the input object is updated with the new tokens. Consider adding assertions to check that input.authToken and input.ubiquityKernelToken have been updated to the refreshed values after the callLlm function completes, since the code explicitly mutates these values via updateInputTokens.

Comment on lines +128 to +191

Copilot AI commented Feb 2, 2026

Test coverage is missing for the case where kernel token refresh should be skipped. According to the implementation logic, refresh should not occur when: 1) the token is not a GitHub token, 2) kernelRefreshUrl is not configured, 3) kernelToken is missing, or 4) an explicit aiAuthToken is provided. Consider adding test cases for these scenarios to ensure the refresh logic is correctly bypassed.
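The gating condition can be sketched as a pure predicate, which makes each bypass case easy to assert directly; the parameter names mirror the variables quoted above, but the extraction itself is illustrative:

```typescript
// Illustrative extraction of the refresh guard from src/llm/index.ts.
function shouldRefreshKernelTokens(opts: {
  isGitHub: boolean;
  kernelRefreshUrl?: string;
  kernelToken?: string;
  hasExplicitAiAuth: boolean;
}): boolean {
  // Refresh only for GitHub tokens, with a configured URL and kernel token,
  // and no explicit AI auth token supplied by the caller.
  return opts.isGitHub && Boolean(opts.kernelRefreshUrl) && Boolean(opts.kernelToken) && !opts.hasExplicitAiAuth;
}
```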

T.Object(
  {
-   with: T.Optional(T.Record(T.String(), T.Unknown(), { default: {} })),
+   with: T.Record(T.String(), T.Unknown(), { default: {} }),
Copilot AI commented Feb 2, 2026

The removal of T.Optional from the "with" field changes the schema behavior. While the field now has a default empty object, this represents a semantic change in how plugin configurations are validated. Previously, the "with" field could be entirely omitted from a plugin configuration object; now it will always be present (either explicitly or as an empty object). This change doesn't appear to be mentioned in the PR description or related to the stated purpose of allowing the kernel.plugin_error event.

### Features

* add ref overrides for configuration fetches ([#174](https://github.com/ubiquity-os/plugin-sdk/issues/174)) ([a915b12](https://github.com/ubiquity-os/plugin-sdk/commit/a915b12ef4218830ad44ae5cb6482a5fac38217c))
* refresh kernel attestation before LLM calls (PR [#178](https://github.com/ubiquity-os/plugin-sdk/pull/178))
Copilot AI commented Feb 2, 2026

The CHANGELOG entry mentions "refresh kernel attestation before LLM calls" but does not mention the kernel.plugin_error event feature, which is the stated purpose of this PR according to the title and description. The entry should be updated to include both features or clarified to match the actual scope of changes.

Comment on lines +193 to +238
async function refreshKernelTokens(params: {
  url: string;
  authToken: string;
  kernelToken: string;
  owner: string;
  repo: string;
  installationId?: number;
}): Promise<KernelRefreshTokens> {
  const headers = buildHeaders(params.authToken, {
    owner: params.owner,
    repo: params.repo,
    installationId: params.installationId,
    ubiquityKernelToken: params.kernelToken,
  });
  const response = await fetch(params.url, { method: "POST", headers });
  const text = await response.text().catch(() => EMPTY_STRING);
  if (!response.ok) {
    const error = new Error(`Kernel refresh error: ${response.status} - ${text}`);
    (error as Error & { status?: number }).status = response.status;
    throw error;
  }

  let payload: Record<string, unknown> = {};
  if (text.trim()) {
    try {
      payload = JSON.parse(text) as Record<string, unknown>;
    } catch (err) {
      const details = err instanceof Error ? ` (${err.message})` : "";
      const error = new Error(`Kernel refresh error: failed to parse JSON response${details}`);
      (error as Error & { status?: number }).status = response.status;
      throw error;
    }
  }

  const authToken = normalizeToken(payload.authToken);
  const kernelToken = normalizeToken(payload.ubiquityKernelToken);
  if (!authToken || !kernelToken) {
    throw new Error("Kernel refresh error: response missing authToken or ubiquityKernelToken");
  }

  return {
    authToken,
    kernelToken,
    expiresAt: typeof payload.expiresAt === "string" ? payload.expiresAt : null,
  };
}
Copilot AI commented Feb 2, 2026

The refreshKernelTokens function does not implement retry logic for transient failures. If the kernel refresh endpoint experiences temporary issues, the LLM call will fail even though the refresh might succeed on retry. Consider adding retry logic similar to fetchWithRetry that's used for LLM API calls, particularly for 5xx errors.
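A minimal retry sketch along those lines, assuming retries only on 5xx with a fixed attempt cap; postWithRetry is illustrative, and the SDK's fetchWithRetry may differ in signature and backoff strategy:

```typescript
// Illustrative retry wrapper for the refresh POST: retries only on 5xx and
// returns the last response once attempts are exhausted.
async function postWithRetry(url: string, init: RequestInit, maxAttempts = 3): Promise<Response> {
  let lastResponse: Response | null = null;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    lastResponse = await fetch(url, init);
    if (lastResponse.status < 500) {
      return lastResponse; // success or non-retryable client error
    }
    // A real implementation would back off (e.g. exponentially) here.
  }
  return lastResponse as Response;
}
```

Retrying a POST this way is safe only if the refresh endpoint is idempotent, which is worth confirming before adopting such a wrapper.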

@0x4007 0x4007 changed the base branch from main to development February 4, 2026 17:51
0x4007 (Member, Author) commented Feb 4, 2026

@gentlementlegen just changed target to development

0x4007 (Member, Author) commented Feb 4, 2026

@codex review

chatgpt-codex-connector (bot) left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 4b0303c734


gentlementlegen (Member) left a comment

The code changes seem fine to me. I would like a real e2e test that refreshes a token; that shouldn't be hard to do using only the SDK and a small snippet of code. Please resolve the merge conflicts too.
