feat(pattern): privacy L2s comparison #71
base: master
Conversation
CI Fix Summary

The CI was failing on the Prose Quality (Vale) check due to two issues:

1. **CHANGELOG.md spelling errors.** Fix: changed CHANGELOG.md to be checked with the IPTF style, whose vocabulary covers the flagged domain terms.
2. **Reviewdog annotation limit.** The Vale action uses reviewdog to post annotations, but GitHub caps the number of annotations per workflow run. When Vale produces many warnings (even non-blocking ones), reviewdog fails with "Too many results (annotations) in diff", even with `fail_on_error: false`. Fix: changed the reporter from the default to `github-check`, which has a higher limit and handles large result sets better.

Why it passed locally but failed in CI: locally, Vale runs directly and never goes through reviewdog's annotation step, so the limit only bites in CI.

Commits:
Vale's base spelling checker doesn't use the IPTF vocabulary. Switch CHANGELOG.md to IPTF style to recognize domain terms.
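A minimal sketch of what that per-file override could look like in `.vale.ini` (the surrounding settings and style names are assumptions; only the IPTF style for CHANGELOG.md is described in this PR):

```ini
# Hypothetical .vale.ini excerpt -- surrounding settings are assumptions.
StylesPath = .vale/styles
MinAlertLevel = suggestion

# Default checks for most Markdown files.
[*.md]
BasedOnStyles = Vale, write-good

# CHANGELOG.md uses the IPTF style so domain terms pass the spell check.
[CHANGELOG.md]
BasedOnStyles = IPTF
```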
The default github-pr-annotations reporter hits GitHub's annotation limit when Vale produces many warnings, causing reviewdog to fail even with fail_on_error: false. The github-check reporter has a higher limit and handles large result sets better.
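A sketch of what that change could look like in the workflow, assuming the repo uses the errata-ai/vale-action wrapper (the action and step name are assumptions; only the `reporter` and `fail_on_error` settings are described above):

```yaml
# Hypothetical CI step -- the action wrapper is an assumption.
- name: Prose Quality (Vale)
  uses: errata-ai/vale-action@v2
  with:
    # github-check reports through the Checks API, which handles large
    # result sets better than per-line PR annotations.
    reporter: github-check
    # Warnings should annotate the run, not fail it.
    fail_on_error: false
```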
Force-pushed from 9b9c8e5 to e496e31.
Ping @oskarth for review
- think outside of the box
- third time's the charm
- this day and age
- this hurts me worse than it hurts you
cc @mojalil
- no skin off my back
- no stone unturned
- no time like the present
- no use crying over spilled milk
this list man lmao
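For context, entries like these live in a Vale rule file inside the committed `write-good` style: an "existence" rule lists every phrase to flag as a plain token, which is why the diff runs so long. A sketch of the layout (file name, message text, and severity are assumptions based on Vale's documented rule format):

```yaml
# Hypothetical .vale/styles/write-good/Cliches.yml
extends: existence
message: "Try to avoid the cliché '%s'."
ignorecase: true
level: warning
tokens:
  - no skin off my back
  - no stone unturned
  - no time like the present
  - think outside of the box
```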
## Protocol

1. **Define workload**: Use [Simple Value Transfer](#simple-value-transfer) as baseline
2. **Collect metrics**: Request L2 teams fill in the three evaluation tables
Why is collecting metrics part of the protocol here? Just making sure this actually matches the intention.
This describes the process of coming up with a privacy/throughput comparative table.
Not sure if it's clear.
Is the "process" what we want to capture? As opposed to the actual comparative table itself? I thought the latter would be more useful, and then just have it well-sourced and somewhat up to date?
Or do you expect many people to want the process itself? Almost seems more like a CLAUDE skill or so?
What we want to capture are both the framework and the results. Strictly speaking, neither fits as a pattern semantically, but that was the only place where it'd fit. The PRD mentioned domains, but I assume that was wrong.
The goal is still to have the analysis with L2 teams provided metrics, but as of today nobody answered, so the tables here are LLM placeholders.
So the question is: should we create a new category in the repo to capture this, or make it fit the pattern even though it's not a direct match?
| **Scroll Cloak** | Yes | Access control | Operator + regulator access |
| **EY Nightfall** | Yes | Yes | Enterprise audit trail |

_Sources: Protocol documentation, L2Beat, academic papers. Last updated: 2026-01-27_
Have all these been reviewed by the respective projects? Might be good to add sources, including whether it comes from asking Core Devs in Jan 2026. Ideally, post to docs or something as a source.
No, this is only publicly available information, mainly from each L2's documentation.
Yes, I can add sources.
Added sources and disclaimer to the doc.
I don't know, I didn't see this before. It does seem a bit noisy? Maybe we could add it to `.gitignore`, and if it seems valuable to have in the code base we can add it in a separate PR later?
Tbf I just ran the specified command, and Vale added it automatically as part of the setup.
Summary
- `.gitignore` (node_modules, package-lock.json, editor files, Vale cache)
- `write-good` style (required by `.vale.ini`)

Test plan

- `npm run validate` passes (0 errors)
- `npm run check-terms` runs (warnings in other files)
- `npm run lint:vale` passes (0 errors)

@oskarth
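For reference, the scripts behind that test plan might map to something like the following in `package.json`; the command bodies are hypothetical, since only the script names appear in this PR:

```jsonc
// Hypothetical package.json excerpt -- script bodies are guesses; only the
// script names appear in this PR.
{
  "scripts": {
    "validate": "node scripts/validate.js",       // hypothetical helper
    "check-terms": "node scripts/check-terms.js", // hypothetical helper
    "lint:vale": "vale ."                         // runs Vale with .vale.ini
  }
}
```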
Vale setup: Running `vale sync` downloaded the `write-good` package to `.vale/styles/`. Is this the intended setup, or should these styles be excluded from the repo (added to `.gitignore`) and installed separately by contributors? Currently they're committed (~1.2K lines).
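If the styles end up excluded instead, a minimal sketch of the alternative setup (assuming Vale's package mechanism is used, with the styles declared under `Packages` in `.vale.ini`):

```sh
# Ignore the downloaded styles instead of committing them.
echo ".vale/styles/" >> .gitignore

# Each contributor then fetches them locally: `vale sync` reads the
# Packages setting in .vale.ini (e.g. Packages = write-good) and
# downloads the listed styles into StylesPath.
vale sync
```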