Conversation

@Meyanis95
Collaborator

Summary

  • Added L2 Privacy Evaluation Framework pattern for comparing privacy-preserving L2 solutions
  • Updated RFP for Living Benchmark Dashboard
  • Added .gitignore (node_modules, package-lock.json, editor files, Vale cache)
  • Synced Vale write-good style (required by .vale.ini)

Test plan

  • npm run validate passes (0 errors)
  • npm run check-terms runs (warnings in other files)
  • npm run lint:vale passes (0 errors)

@oskarth
Vale setup: Running vale sync downloaded the write-good package to .vale/styles/. Is this the intended setup, or should these styles be excluded from the repo (added to .gitignore) and installed separately by contributors? Currently they're committed (~1.2K lines).

@Meyanis95 Meyanis95 requested a review from oskarth January 28, 2026 10:28
@Meyanis95 Meyanis95 changed the title feat: pr 011 -- l2 privacy comparison feat(pattern): l2 privacy comparison Jan 28, 2026
@Meyanis95 Meyanis95 changed the title feat(pattern): l2 privacy comparison feat(pattern): privacy l2s comparison Jan 28, 2026
@Meyanis95 Meyanis95 changed the title feat(pattern): privacy l2s comparison feat(pattern): privacy L2s comparison Jan 28, 2026
@oskarth
Collaborator

oskarth commented Jan 28, 2026

CI Fix Summary

The CI was failing on the Prose Quality (Vale) check due to two issues:

1. CHANGELOG.md Spelling Errors

The .vale.ini config applied only the base Vale style to CHANGELOG.md, which doesn't include the IPTF vocabulary. Domain terms like "Nullifiers", "Ethereum", "ZKsync", etc. triggered spelling errors.

Fix: Changed BasedOnStyles = Vale to BasedOnStyles = IPTF for CHANGELOG.md section.
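A sketch of the relevant `.vale.ini` change (the section glob and comment layout are assumptions; only the style names come from the fix described above):

```ini
# Before: only Vale's base style applied to the changelog,
# which has no project vocabulary.
# [CHANGELOG.md]
# BasedOnStyles = Vale

# After: the IPTF style carries the project vocabulary, so domain
# terms like "Nullifiers" and "ZKsync" pass the spell check.
[CHANGELOG.md]
BasedOnStyles = IPTF
```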

2. Reviewdog Annotation Limit

The Vale action uses reviewdog to post annotations, but GitHub limits the number of annotations per workflow run. When Vale produces many warnings (even non-blocking ones), reviewdog fails with "Too many results (annotations) in diff", even with fail_on_error: false.

Fix: Changed the reviewdog reporter from the default inline-annotation reporter to github-check, which has a higher limit and handles large result sets better. See errata-ai/vale-action#89 for context.
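The reporter change might look roughly like this in the workflow (the step name, file path, and action version are assumptions; only the reporter value comes from the fix):

```yaml
# .github/workflows/prose.yml (sketch)
- name: Vale prose lint
  uses: errata-ai/vale-action@v2
  with:
    # github-check reports through the Checks API rather than as inline
    # PR annotations, so it tolerates larger result sets.
    reporter: github-check
    fail_on_error: false
```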

Why It Passed Locally But Failed in CI

Locally, npm run lint:vale runs Vale directly without reviewdog, so it only shows warnings/errors without hitting annotation limits. In CI, the vale-action wraps Vale with reviewdog for PR annotations, which has stricter limits.


Commits:

  • e811197 - fix(ci): use IPTF style for CHANGELOG Vale checks
  • 9b9c8e5 - fix(ci): use github-check reporter for Vale action

Meyanis95 and others added 5 commits January 28, 2026 12:42
  • Vale's base spelling checker doesn't use the IPTF vocabulary. Switch CHANGELOG.md to IPTF style to recognize domain terms.
  • The default github-pr-annotations reporter hits GitHub's annotation limit when Vale produces many warnings, causing reviewdog to fail even with fail_on_error: false. The github-check reporter has a higher limit and handles large result sets better.
@Meyanis95 Meyanis95 force-pushed the pr-011-l2-privacy-comparison branch from 9b9c8e5 to e496e31 Compare January 28, 2026 11:43
@Meyanis95
Collaborator Author

Ping @oskarth for review

- think outside of the box
- third time's the charm
- this day and age
- this hurts me worse than it hurts you

- no skin off my back
- no stone unturned
- no time like the present
- no use crying over spilled milk
Collaborator

this list man lmao

## Protocol

1. **Define workload**: Use [Simple Value Transfer](#simple-value-transfer) as baseline
2. **Collect metrics**: Request L2 teams fill in the three evaluation tables
Collaborator

Why is collecting metrics part of the protocol here? Just making sure this actually matches intention

Collaborator Author

This describes the process of coming up with a privacy/throughput comparative table.
Not sure if it's clear.

Collaborator

Is the "process" what we want to capture? As opposed to the actual comparative table itself? I thought the latter would be more useful, and then just have it well-sourced and somewhat up to date?

Or do you expect many people to want the process itself? Almost seems more like a CLAUDE skill or so?

Collaborator Author

What we want to capture is both the framework and the results. Neither fits cleanly as a pattern semantically, but that was the only place it would fit. The PRD mentioned domains, but I assume that was wrong.
The goal is still to have the analysis use metrics provided by the L2 teams, but as of today nobody has answered, so the tables here are LLM placeholders.

So the question is: should we create a new category in the repo to capture this, or make it fit the pattern even though it's not a direct match?

| **Scroll Cloak** | Yes | Access control | Operator + regulator access |
| **EY Nightfall** | Yes | Yes | Enterprise audit trail |

_Sources: Protocol documentation, L2Beat, academic papers. Last updated: 2026-01-27_
Collaborator

Have all these been reviewed by respective projects? Might be good to add sources or so, including if it is asking Core Devs or so in Jan, 2026. Ideally post to docs or something as source.

Collaborator Author

No, this is only publicly available information, mainly from each L2's documentation.
Yes, I can add sources.

Collaborator Author

Added sources and disclaimer to the doc.

@Meyanis95 Meyanis95 requested a review from oskarth January 29, 2026 11:57
@oskarth
Collaborator

oskarth commented Jan 30, 2026

Vale setup: Running vale sync downloaded the write-good package to .vale/styles/. Is this the intended setup, or should these styles be excluded from the repo (added to .gitignore) and installed separately by contributors? Currently they're committed (~1.2K lines).

I don't know. I didn't see this before. It does seem a bit noisy? Maybe we could add to gitignore, and if it seems valuable to have in code base we can add in separate PR later?
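If the styles end up excluded, a minimal sketch of the ignore rule (the path is assumed from the `.vale/styles/` location mentioned above), with contributors restoring the package locally via `vale sync`:

```gitignore
# Synced Vale style packages; contributors regenerate with `vale sync`
.vale/styles/write-good/
```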

@Meyanis95
Collaborator Author

Vale setup: Running vale sync downloaded the write-good package to .vale/styles/. Is this the intended setup, or should these styles be excluded from the repo (added to .gitignore) and installed separately by contributors? Currently they're committed (~1.2K lines).

I don't know. I didn't see this before. It does seem a bit noisy? Maybe we could add to gitignore, and if it seems valuable to have in code base we can add in separate PR later?

Tbf I just ran the specified command, and Vale added the styles automatically as part of the setup.
Happy to prune them if the write-good checks weren't intentional.



Development

Successfully merging this pull request may close these issues:

  • feat(domains): L2 privacy comparison document (PR-011)
  • feat: map performance and trust assumptions of privacy L2 designs

3 participants