feat: enhanced GitHub Copilot integration — SDK provider, CLI backend, and thinking signature fix #3
tag-assistant wants to merge 693 commits into main from
Conversation
The formal models extracted constants (…). This check is informational (not blocking merges yet). If this change is intentional, follow up by updating the formal models repo or regenerating the extracted artifacts there.
Force-pushed from 4aaa458 to 9fbc0c8
Force-pushed from d16dea2 to cbe9567
Force-pushed from cbe9567 to ad16ae1
Force-pushed from 1d942f7 to 890432a
Force-pushed from fe39ae1 to 45b5082
Force-pushed from 45b5082 to 1fed689
* changelog: add security deepMerge prototype-pollution fix entry
* update: refresh gateway service env during update restart
* test(cli): fix daemon install mock assertion
* test(cli): guard update restart false path
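For context on the first entry, the usual prototype-pollution guard in a deep merge is to skip `__proto__`, `constructor`, and `prototype` keys coming from untrusted input. A minimal sketch of that pattern (illustrative only, not the project's actual `deepMerge`):

```ts
// Keys that enable prototype pollution when copied from untrusted input
// (e.g. objects produced by JSON.parse of attacker-controlled data).
const UNSAFE_KEYS = new Set(["__proto__", "constructor", "prototype"]);

function isPlainObject(value: unknown): value is Record<string, unknown> {
  return typeof value === "object" && value !== null && !Array.isArray(value);
}

// Recursively merge `source` into `target`, dropping unsafe keys.
export function deepMerge(
  target: Record<string, unknown>,
  source: Record<string, unknown>,
): Record<string, unknown> {
  for (const key of Object.keys(source)) {
    if (UNSAFE_KEYS.has(key)) continue; // skip polluting keys entirely
    const existing = target[key];
    const incoming = source[key];
    if (isPlainObject(existing) && isPlainObject(incoming)) {
      deepMerge(existing, incoming);
    } else {
      target[key] = incoming;
    }
  }
  return target;
}
```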
…s to user context (openclaw#20597)

Merged via /review-pr -> /prepare-pr -> /merge-pr. Prepared head SHA: 175919a

Co-authored-by: anisoptera <768771+anisoptera@users.noreply.github.com>
Co-authored-by: mbelinky <132747814+mbelinky@users.noreply.github.com>
Reviewed-by: @mbelinky
* fix(docker): pin base images to SHA256 digests for supply chain security

  Pin all 9 Dockerfiles to immutable SHA256 digests to prevent supply chain attacks where a compromised upstream image could be silently pulled into production builds. Also add Docker ecosystem to Dependabot configuration for automated digest updates.

  Images pinned:
  - node:22-bookworm@sha256:cd7bcd2e7a1e6f72052feb023c7f6b722205d3fcab7bbcbd2d1bfdab10b1e935
  - node:22-bookworm-slim@sha256:3cfe526ec8dd62013b8843e8e5d4877e297b886e5aace4a59fec25dc20736e45
  - debian:bookworm-slim@sha256:98f4b71de414932439ac6ac690d7060df1f27161073c5036a7553723881bffbe
  - ubuntu:24.04@sha256:cd1dba651b3080c3686ecf4e3c4220f026b521fb76978881737d24f200828b2b

  Fixes openclaw#7731

  Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* test(docker): add digest pinning regression coverage

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
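The regression coverage mentioned in the second commit can be approximated by a test that asserts every `FROM` line resolves to a digest-pinned image. A sketch under assumed tooling (vitest) and an illustrative file list; neither reflects the repository's actual layout:

```ts
import { readFileSync } from "node:fs";
import { describe, expect, it } from "vitest";

// Illustrative paths; the real change pins 9 Dockerfiles across the repo.
const DOCKERFILES = ["Dockerfile", "docker/sandbox/Dockerfile"];

// A pinned FROM references an immutable digest, e.g.
// FROM node:22-bookworm@sha256:<64 hex chars>.
// Stage-to-stage lines (FROM builder AS final) would need excluding in a real test.
const PINNED_FROM = /^FROM\s+(--platform=\S+\s+)?\S+@sha256:[0-9a-f]{64}\b/;

describe("docker base images are digest-pinned", () => {
  for (const path of DOCKERFILES) {
    it(`pins every FROM line in ${path}`, () => {
      const fromLines = readFileSync(path, "utf8")
        .split("\n")
        .map((line) => line.trim())
        .filter((line) => line.startsWith("FROM "));
      for (const line of fromLines) {
        expect(line).toMatch(PINNED_FROM);
      }
    });
  }
});
```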
…nclaw#21086)

* fix: treat HTTP 503 as failover-eligible for LLM provider errors

  When LLM SDKs wrap 503 responses, the leading "503" prefix is lost (e.g. Google Gemini returns "high demand" / "UNAVAILABLE" without a numeric prefix). The existing isTransientHttpError only matches messages starting with "503 ...", so these wrapped errors silently skip failover — no profile rotation, no model fallback. This patch closes that gap:

  - resolveFailoverReasonFromError: map HTTP status 503 → rate_limit (covers structured error objects with a status field)
  - ERROR_PATTERNS.overloaded: add /\b503\b/, "service unavailable", "high demand" (covers message-only classification when the leading status prefix is absent)

  Existing isTransientHttpError behavior is unchanged; these additions are complementary and only fire for errors that previously fell through unclassified.

* fix: address review feedback — drop /\b503\b/ pattern, add test coverage

  - Remove `/\b503\b/` from ERROR_PATTERNS.overloaded to resolve the semantic inconsistency noted by reviewers: `isTransientHttpError` already handles messages prefixed with "503" (→ "timeout"), so a redundant overloaded pattern would classify the same class of errors differently depending on message formatting.
  - Keep "service unavailable" and "high demand" patterns — these are the real gap-fillers for SDK-rewritten messages that lack a numeric prefix.
  - Add test case for JSON-wrapped 503 error body containing "overloaded" to strengthen coverage.

* fix: unify 503 classification — status 503 → timeout (consistent with isTransientHttpError)

  resolveFailoverReasonFromError previously mapped status 503 → "rate_limit", while the string-based isTransientHttpError mapped "503 ..." → "timeout". Align both paths: structured {status: 503} now also returns "timeout", matching the existing transient-error convention. Both reasons are failover-eligible, so runtime behavior is unchanged.

---------

Co-authored-by: Vincent Koc <vincentkoc@ieee.org>
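A condensed sketch of the final classification behavior described above. The function and pattern names follow the commit message, but the reason type, regular expressions, and surrounding structure are simplified assumptions rather than the project's actual code:

```ts
// Simplified reason type; the real module defines more reasons.
type FailoverReason = "timeout" | "overloaded" | "none";

// Message-only patterns for overloaded upstreams. These cover SDK-rewritten
// errors that drop the numeric prefix (e.g. Gemini's "high demand").
// Per the review feedback above, no bare /\b503\b/ pattern is included.
const ERROR_PATTERNS = {
  overloaded: [/service unavailable/i, /high demand/i, /overloaded/i],
};

// Messages that still carry the numeric prefix, e.g. "503 Service Unavailable".
function isTransientHttpError(message: string): boolean {
  return /^503\b/.test(message.trim());
}

function resolveFailoverReasonFromError(err: {
  status?: number;
  message?: string;
}): FailoverReason {
  // Structured errors: status 503 is aligned with the string path ("timeout").
  if (err.status === 503) return "timeout";
  const message = err.message ?? "";
  if (isTransientHttpError(message)) return "timeout";
  if (ERROR_PATTERNS.overloaded.some((p) => p.test(message))) return "overloaded";
  return "none";
}
```

Both "timeout" and "overloaded" are treated as failover-eligible in this sketch, mirroring the commit's note that aligning the two paths does not change runtime behavior.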
…verbose logs, and WebUI (openclaw#20704)
…0988)

* fix(slack): pass recipient_team_id and recipient_user_id to streaming API calls

  The Slack Agents & AI Apps streaming API (chat.startStream / chat.stopStream) requires recipient_team_id and recipient_user_id parameters. Without them, stopStream fails with 'missing_recipient_team_id' (all contexts) or 'missing_recipient_user_id' (DM contexts), causing streamed messages to disappear after generation completes.

  This passes:
  - team_id (from auth.test at provider startup, stored in monitor context)
  - user_id (from the incoming message sender, for DM recipient identification)
  through to the ChatStreamer via recipient_team_id and recipient_user_id options.

  Fixes openclaw#19839, openclaw#20847, openclaw#20299, openclaw#19791, openclaw#20337

  AI-assisted: Written with Claude (Opus 4.6) via OpenClaw. Lightly tested (unit tests pass, live workspace verification in progress).

* fix(slack): disable block streaming when native streaming is active

  When Slack native streaming (`chat.startStream`/`stopStream`) is enabled, `disableBlockStreaming` was set to `false`, which activated the app-level block streaming pipeline. This pipeline intercepted agent output, sent it via block replies, then dropped the final payloads that would have flowed through `deliverWithStreaming` to the Slack streaming API — resulting in zero replies delivered.

  Set `disableBlockStreaming: true` when native streaming is active so the final reply flows through the Slack streaming API path as intended.

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Vincent Koc <vincentkoc@ieee.org>
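A minimal sketch of threading the recipient identifiers through the streaming calls. The `recipient_team_id` and `recipient_user_id` option names come from the commit message; the use of `WebClient.apiCall`, the `channel`/`ts` fields, and the helper shape are assumptions for illustration, not the project's ChatStreamer:

```ts
import { WebClient } from "@slack/web-api";

// Recipient identifiers required by the Slack streaming calls. Without them,
// chat.stopStream fails with missing_recipient_team_id (all contexts) or
// missing_recipient_user_id (DM contexts) and the streamed reply disappears.
interface StreamRecipientOptions {
  recipient_team_id: string; // team id from auth.test at provider startup
  recipient_user_id: string; // sender of the incoming message (DM recipient)
}

// Hypothetical helper: start a native stream, let the caller write chunks,
// then stop it with the same recipient options attached.
async function withNativeStream(
  client: WebClient,
  channel: string,
  recipient: StreamRecipientOptions,
  writeChunks: (streamTs: string | undefined) => Promise<void>,
): Promise<void> {
  // apiCall is used so no particular SDK method typings are assumed.
  const started = (await client.apiCall("chat.startStream", {
    channel,
    ...recipient,
  })) as unknown as { ts?: string };

  await writeChunks(started.ts);

  await client.apiCall("chat.stopStream", {
    channel,
    ts: started.ts,
    ...recipient,
  });
}
```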
…#11046)

- Changed "cask" to "formula" in SKILL.md for consistency.
- Enhanced formula parsing in frontmatter.ts to trim whitespace and fall back to cask if formula is not provided.
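The parsing change described above amounts to trimming the `formula` field and falling back to `cask` when it is empty. A sketch with simplified types (not the actual `frontmatter.ts` code; the field and function names are illustrative):

```ts
// Shape of the relevant SKILL.md frontmatter fields (simplified).
interface SkillFrontmatter {
  formula?: string;
  cask?: string;
}

// Prefer an explicit Homebrew formula; fall back to the cask when the
// formula field is missing or whitespace-only.
export function resolveBrewPackage(fm: SkillFrontmatter): string | undefined {
  const formula = fm.formula?.trim();
  if (formula) return formula;
  const cask = fm.cask?.trim();
  return cask || undefined;
}
```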
Summary
… `openai-completions` API.

Change Type (select all)
Scope (select all touched areas)
Linked Issue/PR
Commits
User-visible / Behavior Changes
- `github-copilot` available for model configuration with SDK-based auth
- `copilot` CLI backend available for terminal usage
- `stripCompletionsReasoningFieldSignatures` transcript policy flag — enabled automatically for non-native providers using `openai-completions`

Security Impact (required)
Evidence
68 tests passing across 6 test files. Live-tested all models: Claude Opus 4.6 ✅, 4.6 Fast ✅, 4.6 1M ✅, GPT-5.3 Codex ✅
Human Verification (required)
Compatibility / Migration
- `github-copilot` provider config

Failure Recovery (if this breaks)
- Remove the `github-copilot` provider from config, switch to another provider

Risks and Mitigations