Graph BOAW #257
Conversation
Phase 1 of BOAW implementation: drill-sergeant tests that define the contract before the infrastructure exists.

Test files added:
- boaw_end_to_end.rs: "god test" for permutation/worker count invariance
- boaw_determinism.rs: serial vs parallel equivalence, insertion order
- boaw_cow.rs: COW overlay semantics (unlink, precedence, sharing)
- boaw_footprints.rs: independence checks, bucket enforcement
- boaw_merge.rs: collapse/merge determinism, conflict artifacts
- boaw_privacy.rs: mind mode enforcement, claim merging

All tests are #[ignore] until BOAW infrastructure is wired, except for t3_1 (footprint symmetry) and t4_1 (shard routing), which run against the existing Footprint implementation. Adds BoawTestHarness trait and XorShift64 RNG to tests/common/mod.rs.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
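For orientation, here is a minimal sketch of what such a test RNG could look like, assuming the xorshift64* variant and the seed.max(1) guard described in the review notes further down; the actual tests/common/mod.rs code is authoritative.

```rust
/// Sketch of a xorshift64*-style test RNG; the shift/multiplier constants
/// are the standard Marsaglia/Vigna parameters, not necessarily what
/// tests/common/mod.rs uses.
pub struct XorShift64 {
    state: u64,
}

impl XorShift64 {
    /// Zero seeds are mapped to 1 so the generator never sticks at zero.
    pub fn new(seed: u64) -> Self {
        Self { state: seed.max(1) }
    }

    /// Next value in the xorshift64* stream.
    pub fn next_u64(&mut self) -> u64 {
        let mut x = self.state;
        x ^= x >> 12;
        x ^= x << 25;
        x ^= x >> 27;
        self.state = x;
        x.wrapping_mul(0x2545_F491_4F6C_DD1D)
    }

    /// Value in [0, upper); plain modulo reduction is slightly biased for
    /// non-power-of-two `upper`, which is acceptable for test scenarios.
    pub fn gen_range_usize(&mut self, upper: usize) -> usize {
        (self.next_u64() % upper as u64) as usize
    }
}
```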
- Add parallelism study and benchmark scripts
- Update benchmark HTML reports
- Add ADR-0007 implementation notes
- Add claude's musings on determinism (time capsule from Phase 1)
- Ignore experimental bench binaries (not CI-ready)
- Update Cargo.lock and warp-benches dependencies

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Replace PanicHarness with real EngineHarness implementation:
- BoawSnapshot holds GraphStore, root, and scenario
- IngressItem is (&'static str, NodeId) for rule+scope
- boaw/touch rule sets marker attachment with proper footprint
- 5 scenario builders: Small, ManyIndependent, ManyConflicts, etc.
- execute_parallel delegates to serial (Phase 5 adds real parallel)
- 8 BOAW determinism tests now pass with real engine hashes

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Phase 3 BOAW: ExecuteFn now receives &mut TickDelta for op emission.

New module tick_delta.rs:
- TickDelta struct wraps Vec<(WarpOp, Option<OpOrigin>)>
- finalize() sorts ops canonically via WarpOp::sort_key()
- OpOrigin for future tie-breaking (intent_id, rule_id, match_ix)
- DeltaStats for debugging op counts
- assert_delta_matches_diff() validation helper
- delta_validate feature flag for gating validation

ExecuteFn signature change:
- Old: fn(&mut GraphStore, &NodeId)
- New: fn(&mut GraphStore, &NodeId, &mut TickDelta)

Updated 32 executor sites across:
- warp-core (engine_impl, inbox, tests)
- echo-dry-tests (demo_rules, rules)
- echo-dind-tests (rules)

Next: make executors emit ops, wire validation.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
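A rough sketch of the shape described above, using stand-in op types; the real WarpOp, OpOrigin, and sort keys in warp-core are richer.

```rust
// Stand-in op type; the real WarpOp has eight variants carrying content hashes.
#[derive(Clone, Debug)]
enum WarpOp {
    UpsertNode { id: u64 },
    DeleteNode { id: u64 },
}

impl WarpOp {
    // Stable key for canonical ordering (sketch only).
    fn sort_key(&self) -> (u8, u64) {
        match self {
            WarpOp::UpsertNode { id } => (0, *id),
            WarpOp::DeleteNode { id } => (1, *id),
        }
    }
}

/// Tie-breaking metadata; (intent_id, rule_id, match_ix) in the real type.
#[derive(Clone, Debug, Default)]
struct OpOrigin;

/// Collects ops emitted during a tick; finalize() sorts them canonically.
#[derive(Default)]
struct TickDelta {
    ops: Vec<(WarpOp, Option<OpOrigin>)>,
}

impl TickDelta {
    fn push(&mut self, op: WarpOp) {
        self.ops.push((op, None));
    }

    /// Canonical order: sort by each op's stable sort key.
    fn finalize(mut self) -> Vec<(WarpOp, Option<OpOrigin>)> {
        self.ops.sort_by_key(|(op, _)| op.sort_key());
        self.ops
    }
}

// Phase 3 signature change (sketch): executors still get &mut GraphStore,
// but now also record their ops into the delta.
// Old: fn(&mut GraphStore, &NodeId)
// New: fn(&mut GraphStore, &NodeId, &mut TickDelta)
```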
- Document TickDelta module addition
- Document ExecuteFn signature change
- Document validation infrastructure (assert_delta_matches_diff)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Phase 4 implements SnapshotAccumulator, a columnar accumulator that builds WSC directly from base state + delta ops without reconstructing GraphStore.

Key changes:
- New snapshot_accum.rs with SnapshotAccumulator struct
- from_warp_state(): captures immutable base state
- apply_ops(): processes all 8 WarpOp variants
- compute_state_root(): hashes directly from accumulator tables
- build(): produces WSC bytes and state_root
- apply_reserved_rewrites() now returns Vec<WarpOp>
- Validation under delta_validate: asserts accumulator state_root matches legacy
- TickDelta::finalize() uses sort_by_key optimization

This crosses the Rubicon: delta ops are now the source of truth for state changes. SnapshotAccumulator validates that base+ops→state_root matches the legacy computation path.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
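In sketch form, the base + ops → state_root flow looks roughly like this, with a toy two-variant op and u64 records; the real accumulator covers all eight WarpOp variants and also emits WSC bytes. The only external assumption is the blake3 crate, which the project already uses for hashing.

```rust
use std::collections::BTreeMap;

#[derive(Clone)]
enum WarpOp {
    UpsertNode { id: u64, record: u64 },
    DeleteNode { id: u64 },
}

/// Toy accumulator: capture base state, fold in delta ops, hash the tables.
struct SnapshotAccumulator {
    nodes: BTreeMap<u64, u64>,
}

impl SnapshotAccumulator {
    /// from_warp_state() analogue: copy the immutable base tables.
    fn from_base(base: &BTreeMap<u64, u64>) -> Self {
        Self { nodes: base.clone() }
    }

    /// apply_ops() analogue: delta ops are the source of truth.
    fn apply_ops(&mut self, ops: &[WarpOp]) {
        for op in ops {
            match op {
                WarpOp::UpsertNode { id, record } => {
                    self.nodes.insert(*id, *record);
                }
                WarpOp::DeleteNode { id } => {
                    self.nodes.remove(id);
                }
            }
        }
    }

    /// compute_state_root() analogue: BTreeMap iteration order is canonical,
    /// so hashing the table directly yields a deterministic root.
    fn compute_state_root(&self) -> [u8; 32] {
        let mut hasher = blake3::Hasher::new();
        for (id, record) in &self.nodes {
            hasher.update(&id.to_le_bytes());
            hasher.update(&record.to_le_bytes());
        }
        *hasher.finalize().as_bytes()
    }
}
```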
A note for future agents (and humans) about what it meant to make the delta the source of truth.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Phase 5 of BOAW: Executors are now read-only. They receive GraphView (an immutable view of GraphStore) and emit ops to TickDelta only. No GraphStore mutations occur during execution.

Key changes:
- Add GraphView<'a> in graph_view.rs - read-only wrapper (8 bytes, Copy)
- Change ExecuteFn: fn(GraphView<'_>, &NodeId, &mut TickDelta)
- Change MatchFn, FootprintFn to also use GraphView
- Convert all executors to emit-only pattern
- Engine uses store() not store_mut() during execution
- State updated via WarpTickPatchV1::apply_to_state() after execution

This enforces the BOAW contract at the type level: observation (GraphView) and mutation (TickDelta) are now completely separate. The footprint scheduler guarantees independence, so no overlay view is needed.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
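The "read-only wrapper (8 bytes, Copy)" suggests a shape like the following sketch; the accessor shown is hypothetical, and graph_view.rs defines the real API surface.

```rust
struct GraphStore {
    nodes: Vec<u64>, // stand-in for the real node/edge/attachment tables
}

/// Read-only view over a GraphStore: one shared reference, so it is
/// pointer-sized and trivially Copy. No mutating methods exist.
#[derive(Clone, Copy)]
struct GraphView<'a> {
    store: &'a GraphStore,
}

impl<'a> GraphView<'a> {
    fn new(store: &'a GraphStore) -> Self {
        Self { store }
    }

    // Policy: expose only read accessors. No Deref, no AsRef, no
    // into_inner, no interior mutability (see the follow-up commit).
    fn node_count(&self) -> usize {
        self.store.nodes.len()
    }
}
```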
Belt-and-suspenders for Phase 5 BOAW enforcement:
- Add policy comment documenting what NOT to add to GraphView (no Deref, no AsRef, no into_inner, no interior mutability)
- Add pointer-size invariant test (fails if extra fields added)
- Add Sync/Send invariant tests (required for Phase 6 parallelism)

These prevent accidental regression of the read-only boundary.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
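Continuing the sketch above, those invariants are cheap to pin down in tests (names hypothetical):

```rust
#[cfg(test)]
mod graph_view_invariants {
    use super::GraphView;

    /// Fails if someone adds a second field to GraphView.
    #[test]
    fn graph_view_stays_pointer_sized() {
        assert_eq!(
            std::mem::size_of::<GraphView<'static>>(),
            std::mem::size_of::<usize>()
        );
    }

    /// GraphView must be Sync + Send so Phase 6 workers can share it
    /// across threads; this fails to compile if that ever regresses.
    #[test]
    fn graph_view_is_sync_and_send() {
        fn assert_sync_send<T: Sync + Send>() {}
        assert_sync_send::<GraphView<'static>>();
    }
}
```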
Prevents future backdoors through unsafe code in executor implementations (demo_rules.rs). warp-core already has this guard.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Documents the policy that all crates containing executor implementations must include #![forbid(unsafe_code)] at crate root.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
… merge

Implements the BOAW Phase 6A "FREE MONEY" parallel execution system:
- Add `execute_parallel` with stride partitioning for lockless worker execution
- Add `merge_deltas` for canonical op ordering by (WarpOpKey, OpOrigin)
- Add `OpOrigin.op_ix` for stable tie-breaking across worker counts
- Add `ScopedDelta` auto-assignment of op_ix during emission
- Add `into_parts_unsorted()` for merge access to raw ops/origins

Key invariant: same output regardless of worker count (1, 2, 4, 8, 16, 32). Conflicts are bugs (footprint model violations) - Phase 6A explodes loudly.

Includes 7 determinism tests validating worker-count invariance.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
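A sketch of the two mechanisms, with integer stand-ins for the real (WarpOpKey, OpOrigin) pairs; the real code lives in boaw/exec.rs and boaw/merge.rs.

```rust
/// Stride partitioning: worker w takes items w, w + workers, w + 2*workers, ...
/// Each worker owns its slice disjointly, so no locks are needed.
fn stride_partition<T: Clone>(items: &[T], workers: usize) -> Vec<Vec<T>> {
    assert!(workers > 0, "workers must be > 0");
    let mut parts = vec![Vec::new(); workers];
    for (i, item) in items.iter().enumerate() {
        parts[i % workers].push(item.clone());
    }
    parts
}

/// Canonical merge: flatten all per-worker deltas, then sort by
/// (sort_key, origin). The op_ix inside the origin makes ties stable, so
/// the merged order is identical for any worker count.
fn merge_deltas(
    deltas: Vec<Vec<(u64 /* sort_key */, u64 /* origin/op_ix */)>>,
) -> Result<Vec<(u64, u64)>, String> {
    let mut flat: Vec<(u64, u64)> = deltas.into_iter().flatten().collect();
    flat.sort_by_key(|&(key, origin)| (key, origin));
    // Phase 6A contract: two writers on the same key is a footprint bug,
    // so the merge explodes loudly instead of picking a winner.
    for pair in flat.windows(2) {
        if pair[0].0 == pair[1].0 {
            return Err(format!("merge conflict on key {}", pair[0].0));
        }
    }
    Ok(flat)
}
```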
…odel

Phase 5 changed the executor model from direct GraphStore mutation to TickDelta emission. This changes the serialization path and produces different (but equally deterministic) commit hashes. The old golden hashes were from the pre-Phase 5 execution model. Regenerated all fixtures to match the current BOAW execution model.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Enable 16 previously-ignored doc tests across engine, materialization, graph_view, tick_delta, and WSC modules
- Move PRNG golden regression to standalone test file with required-features
- Fix doc examples to compile and run correctly
- Update CI and gitignore for improved test infrastructure

Doc tests now serve as both documentation AND regression tests, catching API drift early instead of rotting silently.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The date +%H format returns zero-padded hours (08, 09). Bash arithmetic interprets leading zeros as octal, but 8 and 9 aren't valid octal digits. Use 10#$HOUR to force decimal interpretation.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Note: Other AI code review bot(s) detected. CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Summary by CodeRabbit
Walkthrough

This PR completes BOAW Phase 6B by integrating sharded parallel execution into the engine, migrating the rule API from mutable GraphStore to read-only GraphView with TickDelta-based mutation emission, introducing canonical delta merging with conflict detection, implementing warp-scoped resource tracking in footprints, adding comprehensive BOAW testing infrastructure, and shipping extensive architectural documentation.
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes

Rationale: This PR spans 100+ files with substantial heterogeneous changes across three major domains: (1) core engine refactoring with read-only execution semantics and parallel integration (high logic density, multiple module interactions), (2) 2500+ lines of new test infrastructure with sophisticated harness and scenario generation (complex setup, wide coverage), and (3) architectural documentation with intricate sequencing and invariant descriptions. The changes involve non-trivial API surface shifts (GraphStore→GraphView, rule signatures, footprint warp-scoping), new subsystems (SnapshotAccumulator, BOAW execution/merge), and determinism-critical merge logic requiring careful verification. Cross-file dependencies, feature-gated paths, and parallelism semantics demand methodical review despite repetition in test patterns.
🚥 Pre-merge checks: 3 passed, 2 failed (1 warning, 1 inconclusive)
Signed-off-by: James Ross <james@flyingrobots.dev>
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: c2017ba4a0
Actionable comments posted: 38
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
crates/warp-benches/Cargo.toml (1)
13-25: Remove unseeded randomness from benchmarks—violates determinism mandate.
`scheduler_adversarial.rs` uses `rand::thread_rng()` (lines 46, 55) to generate unseeded random data, making benchmark results nondeterministic across runs. Either seed all RNG calls with a fixed seed or migrate to Echo PRNG per coding guidelines. Uncommenting the `rand` dependency without deterministic guarantees is unacceptable.

crates/echo-dry-tests/src/demo_rules.rs (1)
194-238: port_executor: Potential redundant delta emission when attachment already exists.

Lines 218-237: When an attachment already exists and `decode_motion_payload` succeeds, the code always emits a `SetAttachment` op, even if the new bytes are identical to the old (e.g., if `pos[0] += 1.0` results in the same value due to floating-point representation, or if called twice in the same tick).

Contrast with `motion_executor` (Line 101), which guards emission with `if payload.bytes != new_bytes`.

🔧 Suggested fix — guard with bytes comparison
```diff
 if let Some((mut pos, vel)) = decode_motion_payload(&payload.bytes) {
     pos[0] += 1.0;
     let new_bytes = encode_motion_payload(pos, vel);
+    // Only emit if bytes actually changed
+    if payload.bytes != new_bytes {
         // Phase 5 BOAW: only emit delta ops, no direct mutation
         let new_value = Some(AttachmentValue::Atom(AtomPayload {
             type_id: motion_payload_type_id(),
             bytes: new_bytes,
         }));
         delta.push(WarpOp::SetAttachment {
             key,
             value: new_value,
         });
+    }
 }
```

crates/warp-core/src/tick_patch.rs (1)
180-186: Add rustdoc for `WarpOpKey`—missing docs will fail CI lint.

This struct is `pub(crate)` and used throughout the module (returned by `sort_key()`, constructed in 7 match arms, stored in `BTreeMap`). Per coding guidelines, public APIs must carry rustdoc. The `missing_docs` lint check enforced by CI will reject this as-is.

Suggested doc
```diff
+/// Canonical ordering key for `WarpOp` used by patch construction and merge sorting.
 pub(crate) struct WarpOpKey {
     kind: u8,
     warp: ContentHash,
     a: ContentHash,
     b: ContentHash,
 }
```

docs/spec-warp-core.md (1)
426-447: Example won’t compile: wrong arg order + non‑existent TickDelta helper.

`b1_rule_exec` takes `(view, delta, scope)` and calls `delta.upsert_node`, but the engine expects `(view, scope, delta)` and `TickDelta` only exposes `push`/`emit`. Update the example to the actual API.

🛠️ Correct the example
```diff
-fn b1_rule_exec(_view: &GraphView, delta: &mut TickDelta, _scope: &NodeId) {
+fn b1_rule_exec(view: &GraphView, _scope: &NodeId, delta: &mut TickDelta) {
     let child_node = make_node_id("child-node");
-    delta.upsert_node(
-        child_node,
-        NodeRecord {
-            ty: make_type_id("ChildTy"),
-        },
-    );
+    delta.push(WarpOp::UpsertNode {
+        node: NodeKey {
+            warp_id: view.warp_id(),
+            local_id: child_node,
+        },
+        record: NodeRecord { ty: make_type_id("ChildTy") },
+    });
 }
```
🤖 Fix all issues with AI agents
In `@CHANGELOG.md`:
- Around line 5-86: There are two "Unreleased" sections (the header "##
[Unreleased] - 2026-01-18" and a trailing "## Unreleased") which is ambiguous
and breaks release tooling; consolidate them by removing the duplicate header
and merging content under a single Unreleased section (or convert the dated
header "## [Unreleased] - 2026-01-18" into a dated release entry and keep a
single plain "## Unreleased"), ensuring all listed entries (Phase 6A, Phase 5,
Phase 4, tests, architecture notes) remain present and the final file contains
only one Unreleased heading.
In `@crates/echo-dind-tests/src/rules.rs`:
- Around line 122-132: The view-op sequence is hardcoded to zero causing ID
collisions; update emit_view_op_delta usage to take a deterministic per-op
sequence (op_ix) from the ScopedDelta/OpOrigin instead of using 0. In the
executor closure that decodes toast args (decode_op_args and the closure passed
as executor), obtain the current operation index (OpOrigin::op_ix provided by
ScopedDelta) and pass it into emit_view_op_delta (or change emit_view_op_delta
to accept an OpOrigin/op_ix parameter and use that inside); ensure any callers
(e.g., other view-op emitters like route push) are updated to forward the same
op_ix so each op in a tick gets a unique sequence.
In `@crates/warp-core/src/boaw/exec.rs`:
- Around line 10-19: The ExecItem struct is missing a Debug derive; update its
derives to include Debug (in addition to Clone, Copy) so ExecItem, which
contains ExecuteFn, scope: NodeId, and origin: OpOrigin, can be formatted in
tests and logs; ensure the derive list on the ExecItem declaration includes
Debug to leverage the function pointer's Debug impl.
In `@crates/warp-core/src/boaw/merge.rs`:
- Around line 39-44: The loop over deltas currently uses
debug_assert_eq!(ops.len(), origins.len()) and then zips ops and origins, which
silently drops ops in release builds; change the check to a non-conditional
failure (e.g., assert_eq! or return/result error) so mismatched lengths are
handled deterministically in delta validation. Specifically, in the block where
you call d.into_parts_unsorted() inside the for d in deltas loop, replace the
debug_assert_eq! with an assert_eq! or propagate an error from that scope (so
the mismatch is not truncated by ops.into_iter().zip(origins)); ensure the
handling path prevents pushing incomplete pairs into flat via
flat.push((op.sort_key(), origin, op)).
- Around line 11-16: The MergeConflict struct is missing the conflicting
WarpOpKey which the ADR requires; add a new pub field (e.g. pub key: WarpOpKey)
to the MergeConflict struct and update all places that construct or
pattern-match on MergeConflict (search for MergeConflict:: or struct literals)
to provide/handle the key; ensure OpOrigin remains as writers: Vec<OpOrigin>,
update any Debug/Display or tests that print or assert MergeConflict to include
the new key field, and import/qualify WarpOpKey where needed.
In `@crates/warp-core/src/engine_impl.rs`:
- Around line 1089-1093: The current error path leaks memory by calling
Box::leak when mapping the patch.apply_to_state error into
EngineError::InternalCorruption; instead stop leaking by passing a static
&'static str (e.g. a const static message) to EngineError::InternalCorruption
and log or attach the detailed error elsewhere (e.g. via process/trace logger or
by wrapping the original error in a non-static variant if available). Locate the
mapping around patch.apply_to_state and replace the Box::leak usage with a
static constant message, referencing EngineError::InternalCorruption and
patch.apply_to_state in your change.
In `@crates/warp-core/src/snapshot_accum.rs`:
- Around line 246-254: When handling WarpOp::DeleteNode (in the block that
currently calls self.nodes.remove(&node) and removes the node AttachmentKey with
AttachmentOwner::Node), also locate and remove any edges that reference that
node and their attachments so the accumulator matches GraphStore cascade
semantics; specifically, iterate the edge collection (e.g., self.edges or
whatever stores EdgeId -> Edge) and remove any edge whose endpoints include the
deleted node, and for each removed edge remove its attachments (use
AttachmentOwner::Edge(edge_id) and the appropriate AttachmentPlane entries) as
well as deleting the edge entry itself so node removal, incident-edge removal,
and all associated attachments are purged together.
In `@crates/warp-core/tests/boaw_cow.rs`:
- Around line 66-79: Test t1_3_reachable_only_semantics is misfiled under the
COW test suite (it targets snapshot atomicity §4.1); either move the test
function t1_3_reachable_only_semantics into the boaw_snapshot.rs test file so it
lives with other §4 snapshot tests, or rename the function to follow the COW
naming (e.g. t2_3_reachable_only_semantics) and update any references so it
clearly belongs to §5 COW tests; ensure the test module/imports compile after
the move/rename and adjust any test ordering metadata if present.
In `@crates/warp-core/tests/boaw_determinism.rs`:
- Around line 158-186: The test t4_2_admission_does_not_depend_on_num_cpus
currently only compares r_baseline.state_root and r_baseline.patch_digest which
are indirect proxies for admission invariance; either (A) if the
boaw_harness/ExecuteResult exposes the admitted set (e.g., a field like admitted
or admitted_items on the result from h.execute_parallel), add an explicit assert
comparing r_baseline.admitted to r.admitted for each worker count to directly
test admission invariance, or (B) if no admitted set is exposed, update the test
name/doc comment to state clearly that state_root/patch_digest equality are
being used as a proxy for admission invariance (rename test or add a clarifying
comment inside t4_2_admission_does_not_depend_on_num_cpus) so the intent matches
the assertions.
- Around line 72-82: The test currently silently skips validation when
r.wsc_bytes is None; replace the empty else branch with an explicit failure so
the test fails until SnapshotBuilder is implemented—e.g., call panic! with a
clear message like "WSC bytes not produced: SnapshotBuilder not wired"
(reference r.wsc_bytes, h.wsc_roundtrip_state_root, and assert_hash_eq) so the
CI surfaces the missing feature; alternatively, if temporary, annotate the test
with #[should_panic] and include the panic so the intent is explicit until
SnapshotBuilder is wired.
In `@crates/warp-core/tests/boaw_end_to_end.rs`:
- Around line 19-72: The test
boaw_end_to_end_is_deterministic_across_permutations_and_workers uses magic
numbers 42 (tick) and 20 (shuffle iterations); extract them into named constants
(e.g., TEST_TICK and SHUFFLE_ITERATIONS) defined near the test or module scope
and replace the inline literals in
boaw_end_to_end_is_deterministic_across_permutations_and_workers (the variables
tick and loop bound 0..20) with those constants, or alternatively add a short
comment explaining the chosen values if you prefer not to extract constants.
- Around line 97-133: The test
boaw_conflicts_scenario_deterministic_across_permutations only uses a fixed
worker count (8) when calling h.execute_parallel, so it doesn't verify
invariance across different worker counts; change the test to loop over
WORKER_COUNTS (similar to the first test) and call h.execute_parallel(&base,
&ingress, tick, workers) for each workers value (or if the single-worker-count
behavior is intentional, add a clarifying comment above the test explaining why
only 8 is used); ensure references to SEEDS, XorShift64, shuffle, and the
ManyConflicts scenario remain unchanged.
In `@crates/warp-core/tests/boaw_footprints.rs`:
- Around line 173-211: The test t4_1_shard_routing_is_stable_across_machines
currently asserts a value against the same computation (tautology); instead
compute and assert against fixed expected shard IDs for the known inputs. For
the given test_hashes and SHARDS, precompute an expected_shards array (using the
intended routing rule: low byte hash[0] & (SHARDS - 1)) and in the loop derive
shard via the code under test (the same expression used in the test) and
assert_eq!(shard, expected_shards[i], "Shard routing must be stable for
{node_id:?}"). Keep references to SHARDS, test_hashes, NodeId and
t4_1_shard_routing_is_stable_across_machines so the change is localized to that
test.
In `@crates/warp-core/tests/boaw_privacy.rs`:
- Around line 1-11: The test file declares an unused module import `mod common;`
which triggers a warning; either remove the `mod common;` line from
crates/warp-core/tests/boaw_privacy.rs if the harness isn't needed yet, or keep
it as a placeholder by annotating it to suppress the warning (e.g., add an
`#[allow(unused)]` or `#[allow(dead_code)]` attribute immediately above `mod
common;`) so the compiler no longer reports it as unused.
In `@crates/warp-core/tests/common/mod.rs`:
- Around line 540-542: The wsc_roundtrip_state_root helper currently returns a
silent all-zero hash which masks failures; update the function
(wsc_roundtrip_state_root) to either panic loudly by replacing the body with
todo!() or unimplemented!() so tests fail until a proper WSC round‑trip is
implemented, or implement the actual round‑trip logic using the reader API once
available and return the real Hash32 instead of [0u8; 32].
- Around line 67-69: The public helper hex32 is missing rustdoc: add a brief ///
doc comment immediately above pub fn hex32 that explains what the function does
(e.g., converts a Hash32 to its lowercase hexadecimal String representation) and
mention the parameter type (Hash32) and the return value; ensure the comment is
concise and follows crate doc style so the public API rule passes.
- Around line 518-523: BoawExecResult is incorrectly setting state_root to
snapshot.hash; replace this by computing and assigning the real state root
(e.g., use SnapshotAccumulator to compute the state root from the snapshot or
add an accessor on Engine/Snapshot to return the true state root) instead of
using snapshot.hash; update the BoawExecResult construction to use that
computed/exposed state_root and add the required imports for
SnapshotAccumulator/related types and any accessor methods you add.
In `@docs/adr/ADR-0007-PART-6-FREE-MONEY.md`:
- Around line 246-285: The MergeConflict payload must include the conflicting
key; update the MergeConflict type and its construction in merge_deltas so the
struct carries key: WarpOpKey (in addition to writers) and ensure the error
returned from merge_deltas uses the local key variable (e.g., return
Err(MergeConflict { key, writers })) so callers receive both the WarpOpKey and
writers for debugging; locate symbols MergeConflict, merge_deltas, key, and
writers to make the change.
In `@docs/benchmarks/index.html`:
- Around line 72-76: Add ARIA pressed state to the two toggle buttons (ids
"btn-budget" and "btn-complexity") and ensure the setView function updates those
attributes when switching views: initialize each button's aria-pressed
appropriately in the markup or on DOM ready, then inside setView set
aria-pressed="true" on the active button and aria-pressed="false" on the
inactive one and toggle the "active" class consistently so assistive tech sees
the current state.
- Around line 162-168: The CI string construction can produce "NaN" when
scheduler1k.lb or scheduler1k.ub are missing because fmtMs(undefined) is called;
update the schedulerCI computation to fall back to scheduler1k.mean when lb or
ub are falsy (e.g. use fmtMs(scheduler1k.lb ?? scheduler1k.mean) and
fmtMs(scheduler1k.ub ?? scheduler1k.mean)) so the CI shows sensible values;
modify the code that builds schedulerCI (and any similar CI constructions) to
use the nullish coalescing fallback to mean.
In `@docs/benchmarks/parallelism-study.md`:
- Around line 74-80: The sample panics because shards is created with
Vec::with_capacity(num_shards) (capacity but length 0) and then indexed via
shards[i % num_shards]; populate the vector with actual Shard instances before
using indexing. Replace the empty-with-capacity pattern by initializing shards
to contain num_shards Shard objects (e.g., construct num_shards entries via a
loop, vec! macro, or resize/extend) so that shards[i % num_shards].insert(...)
is valid during the Partition Data loop; ensure you reference the same Shard
constructor or initializer used elsewhere in the codebase.
- Around line 102-104: The routing example using hash(id) % N is
nondeterministic; update the docs and example to use the ADR-0007 deterministic
routing pattern by masking existing NodeId bits (NodeId is already
BLAKE3-derived) instead of Rust's std::hash: compute shard = lowbits(NodeId) &
(SHARDS - 1), ensure SHARDS is a power-of-two, and reference the working pattern
in crates/warp-core/tests/boaw_footprints.rs; replace any mention of hash(id) %
N and add a brief note that this preserves Echo's determinism guarantee.
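A compilable sketch of the fix for the `Vec::with_capacity` item above, using a hypothetical `Shard` stand-in:

```rust
#[derive(Clone, Default)]
struct Shard {
    ids: Vec<u64>,
}

fn partition(ids: &[u64], num_shards: usize) -> Vec<Shard> {
    // Wrong: Vec::with_capacity(num_shards) allocates but leaves len == 0,
    // so `shards[i % num_shards]` panics with an out-of-bounds index.
    // Right: actually populate num_shards entries before indexing.
    let mut shards = vec![Shard::default(); num_shards];
    for (i, id) in ids.iter().enumerate() {
        shards[i % num_shards].ids.push(*id);
    }
    shards
}
```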
In `@docs/benchmarks/report-inline.html`:
- Around line 72-76: Add ARIA state to the toggle buttons by setting
aria-pressed on the elements (ids "btn-budget" and "btn-complexity") and update
that state inside the existing setView function; specifically, when switching
views in setView, set
document.getElementById('btn-budget').setAttribute('aria-pressed', 'true' or
'false') and likewise for 'btn-complexity' to reflect the active button, and
ensure this mirrors the existing active class toggling so the visual and
accessibility states stay in sync (apply same change for the other instance
referenced around lines 208-212).
- Around line 166-172: The CI formatting currently calls fmtMs(scheduler1k.lb)
and fmtMs(scheduler1k.ub) which can produce "NaN" when lb/ub are undefined;
update the schedulerCI construction to guard against undefined by falling back
to scheduler1k.mean (e.g., use fmtMs(scheduler1k.lb ?? scheduler1k.mean) and
fmtMs(scheduler1k.ub ?? scheduler1k.mean)), keeping the rest of the string
assembly (schedulerPct, html) the same and referencing the existing symbols
schedulerCI, fmtMs, and scheduler1k.mean/lb/ub.
In `@docs/book/echo/free-money.tex`:
- Around line 4-8: Update the build command at the top that currently references
"boaw_phase6_free_money.tex" to use this document's actual filename
("free-money.tex") so the recommended latexmk command matches the file in this
document; edit the latexmk line (the command string starting with "latexmk -pdf
-interaction=nonstopmode -halt-on-error") to replace the incorrect filename with
"free-money.tex".
In `@docs/memorials/2026-01-18-phase4-rubicon.md`:
- Line 49: Edit the sentence "The state_root didn't match. Off by
miles—completely different hashes." to remove the redundant word "completely" so
it reads "The state_root didn't match. Off by miles—different hashes." ensuring
only that word is removed and punctuation/spacing are preserved.
- Around line 18-19: Replace the informal verb "diffed" in the sentence "Before:
GraphStore was truth. Executors mutated it. We diffed afterward to learn what
changed." with a clearer phrase such as "computed a diff afterward" or "computed
the diff afterward" so the line reads e.g. "We computed a diff afterward to
learn what changed."; update the sentence in the memorials text (the line
containing "We diffed afterward to learn what changed") to use the clearer
phrasing.
In `@docs/spec-warp-core.md`:
- Around line 191-193: The executor signature in the RewriteRule docs is
incorrect; update the documented executor type to match the actual API by
changing it to executor: fn(GraphView, &NodeId, &mut TickDelta) (and ensure any
adjacent examples or references to executor, matcher, and compute_footprint
reflect this exact parameter order and borrowing semantics so copy‑pasted code
matches the real function signature).
In `@README.md`:
- Line 80: Replace the vague phrase "worker-count invariant" in the README entry
for parallel execution (the line referencing boaw::execute_parallel) with
plain-English wording that conveys user value; e.g., change it to "deterministic
results independent of CPU count" or add an inline parenthetical like "(output
hash stays identical regardless of thread count)" so readers immediately
understand the property without jargon.
- Line 85: Update the README entry for crates/warp-geom to remove the
unsupported claim that it's "shared by engine/tools" and instead accurately
describe its current status; replace the phrase with something like "geometry
primitives (currently isolated)" and ensure references to warp-core or any tool
crates are not implied to depend on crates/warp-geom unless those dependencies
actually exist.
- Around line 77-85: The README currently lists implementation-level APIs (e.g.,
Engine::{begin, apply, commit, snapshot}, boaw::execute_parallel, wsc::*,
math::{Vec3, Mat4, Quat, Prng}, MaterializationBus) which belong in
rustdoc/architecture docs; remove or replace lines 77–85 with a concise,
high-level project/value statement about the deterministic multiverse-aware ECS
and its benefits, and move the detailed API and architecture items (engine
lifecycle, scheduler design, snapshot format, parallel execution strategy) into
a new architecture doc such as docs/architecture-outline.md or into rustdoc
pages that reference the Engine struct and MaterializationBus; update README to
link to those detailed docs instead of enumerating method names and module
paths.
In `@scripts/bench_accumulate.py`:
- Around line 71-78: The exception handler around run_bench() is calling
e.stderr.decode() but subprocess.run in run_bench (invoked via
subprocess.run(cmd, check=True)) doesn't capture stderr, so e.stderr is None and
decode() raises; update run_bench's subprocess.run call to capture stderr (e.g.,
capture_output=True or stderr=subprocess.PIPE) so CalledProcessError.stderr is
populated, and/or change the except block in scripts/bench_accumulate.py to
safely extract the message (use getattr(e, "stderr", None) and decode only if
present, otherwise fall back to str(e)) when printing the failure; reference
run_bench, subprocess.run, CalledProcessError and the e.stderr.decode() call.
- Around line 38-53: The benchmark parsing and runner need three fixes: when
computing samples_ns replace zip(times, iters) with zip(times, iters,
strict=True) so length mismatches raise errors; narrow the except from a blanket
except Exception to specific exceptions like json.JSONDecodeError, KeyError, and
ValueError when parsing the JSON and extracting "iters"/"times"; and update the
subprocess invocation (the subprocess.run(cmd, check=True) call used by the
runner) to include capture_output=True (or stdout=PIPE/stderr=PIPE) so e.stderr
is populated and safe to decode — also ensure any use of e.stderr checks for
None before calling .decode().
In `@scripts/bench_report_local.py`:
- Around line 19-23: The function fmt_ns contains multiple single-line if-return
statements that violate E701; refactor fmt_ns to use a proper multi-line
conditional structure (if/elif/else) with each conditional and return on its own
line so no line contains multiple statements, e.g., replace the one-line "if ns
< 1000: return ..." forms inside fmt_ns with expanded multi-line blocks ensuring
each branch (if, elif, else) has its own return statement on its own line.
- Around line 42-59: The code currently uses zip(times, iters) and a blanket
except/pass which hides problems; update the block that builds samples_ns and
error handling so you first validate that times and iters have the same length
(or use zip(strict=True) if Python>=3.10) before computing samples_ns (the list
comprehension/variable name), and replace the broad "except Exception: pass"
with specific exception handlers (e.g., json.JSONDecodeError, KeyError,
ValueError, OSError) that log the file path (path), the error and a short
context (e.g., "failed to parse bench file" or "mismatched times/iters
lengths"), then continue to the next file; keep the existing
median/count/fmt_ns/print flow when data is valid.
In `@testdata/dind/000_smoke_theme.hashes.json`:
- Line 11: The file ends with a closing brace '}' but lacks the required POSIX
trailing newline; fix by adding a single newline character after the final '}'
in testdata/dind/000_smoke_theme.hashes.json so the file terminates with a
newline.
In `@testdata/dind/010_dense_rewrite_seed0001.hashes.json`:
- Around line 12-507: The hashes array in
testdata/dind/010_dense_rewrite_seed0001.hashes.json contains 501 entries (many
duplicates) but the test expects 496; re-generate the hashes against the exact
commit referenced in the CI logs, replace the hashes array with the freshly
produced ordered output from that run (the array of 64-char hex strings), and
ensure the resulting count matches the expected execution tick output (or, if
the dense-rewrite behavior legitimately yields 501 entries, update the test
expectation/config to 501). Verify the file name
(010_dense_rewrite_seed0001.hashes.json) and the top-level hashes array are the
only artifacts changed and run the CI tick to confirm the corrected count.
♻️ Duplicate comments (3)
testdata/dind/050_randomized_order_small_seed0002.hashes.json (1)
10-207: Duplicate: Identical to seed0001 - see previous comment.

This file's `hashes_hex` array is byte-for-byte identical to `seed0001`. The critical issue regarding identical seed files has been raised above.

testdata/dind/050_randomized_order_small_seed0003.hashes.json (1)
10-207: Duplicate: Identical to seed0001 and seed0002 - see previous comment.

This file's `hashes_hex` array is byte-for-byte identical to the other seed files. The critical issue has been raised on `seed0001`.

crates/warp-core/src/tick_delta.rs (1)
464-488: Validation still compares only `sort_key` (content mismatches slip through).

If two ops target the same key but carry different payloads/records, `sort_key` equality will incorrectly treat them as identical. Compare full `WarpOp` values after canonicalization.

✅ Safer comparison
```diff
-            if delta_op.sort_key() != diff_op.sort_key() {
+            if delta_op != diff_op {
                 return Err(Box::new(DeltaMismatch {
                     delta_len: delta_sorted.len(),
                     diff_len: diff_sorted.len(),
                     first_mismatch_index: Some(i),
                     delta_stats,
                     diff_stats,
                 }));
             }
```
Introduce Phase 6B virtual shard routing (NUM_SHARDS=256) and sharded parallel execution with canonical merge determinism. Also tightens delta validation under `delta_validate`: delta-to-diff checks now compare full `WarpOp` values (payload included), preventing sort-key-only false positives, with a regression test. Docs: update changelog, README, testing plan, and refresh BOAW/book artifacts.
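Given the review guidance elsewhere in this thread (NodeId bytes are already BLAKE3-derived, and the shard count must be a power of two), the routing reduces to a mask over the low byte. A sketch, with the real implementation in boaw/shard.rs:

```rust
/// Number of virtual shards; must stay a power of two for mask routing.
const NUM_SHARDS: usize = 256;
const SHARD_MASK: usize = NUM_SHARDS - 1;

// Compile-time guard: masking is only valid for power-of-two shard counts.
const _: () = assert!((NUM_SHARDS & (NUM_SHARDS - 1)) == 0);

/// Route a node to its shard using the low byte of its 32-byte hash id.
/// The id is already uniformly distributed, so no extra hashing is needed,
/// and the result is identical on every machine.
fn shard_of(node_id_bytes: &[u8; 32]) -> usize {
    node_id_bytes[0] as usize & SHARD_MASK
}
```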
- CHANGELOG.md: remove duplicate Unreleased section, fix duplicate headings
- echo-dind-tests/rules.rs: use op_ix for deterministic view-op sequencing
- boaw/exec.rs: add Debug derive for BoawExecResult
- boaw/merge.rs: add key field to MergeConflict (ADR-0007), Box error, add # Panics doc
- tick_patch.rs: make WarpOpKey public for MergeConflict field
- engine_impl.rs: remove Box::leak memory issue
- snapshot.rs: expose state_root field separately from commit hash
- snapshot_accum.rs: add cascade delete for incident edges, extract helper
- common/mod.rs: use snapshot.state_root for BoawExecResult
- Test fixes: boaw_cow, boaw_determinism, boaw_end_to_end, boaw_footprints, boaw_privacy
- scheduler_adversarial.rs: fix tautological assertion with pre-computed expected values
- Docs: ARIA accessibility for benchmark HTML, NaN handling in CI display
- Python scripts: fix type annotations and imports

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
# Conflicts:
#   README.md
Actionable comments posted: 64
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
crates/echo-dind-harness/tests/coverage.rs (1)
72-81: Missing golden files silently pass—tighten validation.

The test silently skips validation when a golden file is absent and `DIND_UPDATE_GOLDEN` is not set. Case in point: `050_randomized_order_small_seed0003.eintlog` is in `MANIFEST.json` but has no corresponding `.hashes.json` file. The scenario runs, passes the "not empty" assertion, then exits without any verification.

In CI, this masks incomplete or stale golden file coverage. Require that either the golden file exists or `DIND_UPDATE_GOLDEN` is set—do not silently skip.

docs/book/echo/sections/01-what-is-echo.tex (1)
28-34: Grammar issue: "which give it:" doesn't connect to the list.

The sentence "It uses WARP graphs, which give it:" followed by bullets describing use cases (RTS Games, Fighting Games, etc.) is grammatically broken. WARP graphs don't "give" use cases—they enable capabilities suitable for those use cases.
✏️ Suggested fix
```diff
-Echo is designed for general use. It uses WARP graphs, which give it:
+Echo is designed for general use. Its WARP graph foundation makes it particularly suited for:
```
🤖 Fix all issues with AI agents
In @.markdownlint.json:
- Around line 10-11: The MD060 rule is currently disabled in the linter config
("MD060": false) which allows unused link definitions to accumulate; re-enable
this rule by removing or changing the "MD060": false entry in .markdownlint.json
so the linter will flag orphaned reference links, or if there is a valid reason
(shared multi-file references or generated docs), add an explicit inline comment
or a documented exception explaining why MD060 remains disabled for the
repository and where generated/shared docs live (reference the "MD060" key to
locate the setting).
- Line 7: Re-enable the MD045 rule in .markdownlint.json (remove or set "MD045":
true) so alt-text enforcement is restored; instead of a global disable, audit
images and add explicit alt="" for decorative images or meaningful alt text for
informative images, updating markdown files referenced by the docs to satisfy
the MD045 requirement and avoid accessibility regressions.
In `@crates/echo-dind-harness/tests/coverage.rs`:
- Line 45: The env check using std::env::var("DIND_UPDATE_GOLDEN").is_ok()
treats any set value as true; change it to explicitly parse/recognize truthy
values so "false"/"0"/"" don't enable updates. Replace the is_ok() logic around
the update_golden variable to read the env var, trim it and match it
case-insensitively against accepted true values (e.g. "1", "true", "yes") or use
parse::<bool>() if you only want "true"/"false"; update the variable name
update_golden accordingly so callers get a proper boolean.
- Around line 69-71: Change the write+log to open the file as a mutable binding
(let mut f_out = File::create(&golden_path)?), call
serde_json::to_writer_pretty(&mut f_out, &golden)? so the writer isn't moved,
then call f_out.sync_all()? to force OS flush before drop, and replace
println!("Updated: {:?}", golden_path) with eprintln! or a structured status
message; reference the variables/functions f_out, serde_json::to_writer_pretty,
sync_all, and golden_path when making the edits.
- Around line 61-68: The test uses hardcoded schema/hash literals; update the
Golden construction to use shared constants: import echo_dind_tests::SCHEMA_HASH
and replace the literal schema_hash_hex with SCHEMA_HASH.to_string(), and add
module-level constants in dind.rs (e.g., ELOG_VERSION, HASH_DOMAIN, HASH_ALG)
and replace occurrences of the string literals for hash_domain and hash_alg with
those constants when constructing Golden (and update any other occurrences in
dind.rs to use the new constants).
In `@crates/warp-benches/Cargo.toml`:
- Around line 13-24: The Cargo.toml uses inconsistent version pinning: blake3 is
patch-pinned (~1.8.2) but rayon, criterion, num_cpus, and rustc-hash are loose,
which hurts reproducibility for benchmarks; update those dependency entries
(rayon, criterion, num_cpus, rustc-hash) to use tilde-style minor-level pins
(e.g., change rayon = "1.10" to rayon = "~1.10", criterion = "0.5" to criterion
= "~0.5", num_cpus = "1.0" to num_cpus = "~1.0", rustc-hash = "2.1.1" to
rustc-hash = "~2.1.1") so all pins consistently allow only patch updates while
keeping blake3 as-is.
- Around line 17-18: Remove the unused dependency by deleting the num_cpus =
"1.0" entry from Cargo.toml; the crate is not referenced by any benchmark files
(motion_throughput.rs, snapshot_hash.rs, scheduler_adversarial.rs,
scheduler_drain.rs) so simply drop that line and rely on
std::thread::available_parallelism() if CPU count is needed later.
In `@crates/warp-core/src/boaw/exec.rs`:
- Around line 101-109: The sharded/stride public APIs (e.g.,
execute_parallel_sharded and the corresponding execute_parallel_strided) must
hard-fail when workers == 0 to match execute_parallel(); add a precondition
check at the top of each function (use assert!(workers > 0, "workers must be >
0") or panic! with a clear message) instead of silently returning an empty
result, so callers get an immediate failure for invalid input.
- Around line 47-79: The docstring for execute_parallel claims the stride
fallback is enabled only when ECHO_PARALLEL_STRIDE=1, but the runtime check
(std::env::var("ECHO_PARALLEL_STRIDE").is_ok()) treats any set value as truthy;
update the check to only enable the fallback when the variable equals "1" (or
another explicit truthy value you choose) before calling
execute_parallel_stride(view, items, capped_workers), or alternatively change
the docstring to reflect that any non-empty value enables the fallback; modify
the condition in execute_parallel to compare the env var value (e.g.,
map/unwrap_or and == "1") instead of using is_ok(). A sketch follows this file's items.
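A minimal sketch of that explicit check (helper name hypothetical):

```rust
/// True only when ECHO_PARALLEL_STRIDE is set to exactly "1", so
/// ECHO_PARALLEL_STRIDE=0 or an empty value no longer enables the fallback.
fn stride_fallback_enabled() -> bool {
    std::env::var("ECHO_PARALLEL_STRIDE")
        .map(|v| v == "1")
        .unwrap_or(false)
}
```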
In `@crates/warp-core/src/boaw/shard.rs`:
- Around line 45-48: Add a compile-time assertion that NUM_SHARDS is a power of
two so the masking used by SHARD_MASK stays valid; place a const check next to
NUM_SHARDS/SHARD_MASK that fails compilation if NUM_SHARDS is not power-of-two
(e.g. a const assertion using the bitwise test (NUM_SHARDS & (NUM_SHARDS - 1))
== 0 or a const is_power_of_two check), so the invariant for NUM_SHARDS and
SHARD_MASK is enforced at compile time.
In `@crates/warp-core/src/lib.rs`:
- Around line 55-61: The public module declaration pub mod boaw is missing a
rustdoc comment which violates the crate-wide #![deny(missing_docs)]; add a
clear rustdoc comment above pub mod boaw describing the module's purpose, public
API surface, and any key invariants (following the style used for fixed, math,
wsc, inbox, materialization), and ensure it briefly documents intended usage and
concurrency/shard guarantees if any; additionally, confirm that the re-exported
symbols ExecItem, execute_parallel, execute_serial, shard_of, MergeConflict, and
NUM_SHARDS (exported from boaw) and GraphView (exported from graph_view) each
have rustdoc in their defining modules and add or improve those docs where
missing so the re-exports are properly documented.
In `@crates/warp-core/src/tick_delta.rs`:
- Around line 106-115: The finalize method (and other canonicalization paths
like canonicalize_ops) only sorts by WarpOp::sort_key, leaving ties unordered
and causing nondeterminism when ops share the same key; fix by adding a
deterministic tie‑breaker to the sort (for example include OpOrigin and a stable
payload hash) and use that same composite compare in finalize and
canonicalize_ops (and any other sorting sites) so ordering is consistent;
alternatively, if conflict ops are invalid, assert/return an error when two ops
share both sort_key and tie fields (e.g. add a check in delta_validate) and
document which strategy is used.
In `@crates/warp-core/tests/boaw_cow.rs`:
- Around line 16-109: Several tests (t2_1_delete_is_unlink_not_physical_delete,
t2_2_overlay_wins_over_base_reads,
t2_3_structural_sharing_reuses_unchanged_segments,
t2_4_reachable_only_semantics, delta_view_resolves_overlay_then_base,
delta_view_handles_tombstones) are left as #[ignore] + unimplemented! and must
be tracked and implemented once COW/DeltaView wiring exists; replace the
unimplemented! bodies with real test logic that builds a base Snapshot (or
commit C0), applies an Overlay/Delta or TickDelta, exercises reads via
DeltaView::get()/get_node() and commit paths, and asserts expected visibility,
state_root, and segment manifest reuse, then remove #[ignore] to enable the
tests. If the harness isn’t available yet, create tracked issue(s) referencing
these test names, add a short TODO comment above each test with the issue
number, and keep the #[ignore] but change the message to include the issue id so
CI knows there’s a planned follow-up.
In `@crates/warp-core/tests/boaw_end_to_end.rs`:
- Around line 25-27: Remove the #[ignore = "..."] attributes from the three test
functions in crates/warp-core/tests/boaw_end_to_end.rs (starting with
boaw_end_to_end_is_deterministic_across_permutations_and_workers and the other
two test functions later in the file) so they run normally; specifically delete
the #[ignore] annotation lines above each test function declaration, ensure the
tests compile and run locally (cargo test) and then push so CI executes them.
In `@crates/warp-core/tests/boaw_footprints.rs`:
- Around line 38-78: Extract the nested helper functions random_footprint and
random_hash out of the t3_1 test and into a shared test helper module (e.g.,
common/mod.rs) so other tests can reuse them; keep their signatures and uses of
types like Footprint, XorShift64, WarpId, NodeId, EdgeId, NodeKey, and
AttachmentKey unchanged, make them pub(crate) so t3_1 can call them, and update
t3_1 to call common::random_footprint and common::random_hash instead of the
inlined versions.
- Around line 90-133: These ignored tests (t3_2_no_write_read_overlap_admitted,
t3_3_deletes_that_share_adjacency_bucket_must_conflict,
t3_4_footprint_guard_catches_executor_drift) currently call unimplemented!()
which will panic if someone runs ignored tests; replace each unimplemented!()
with todo!("...") preserving the existing message and keep the #[ignore]
attribute, so the intent is clear while avoiding an unexpected panic when tests
are executed with --include-ignored; also verify CI configuration continues to
skip ignored tests rather than running with --include-ignored.
- Line 45: The test uses the nightly-only method u64::is_multiple_of (seen as
rng.next_u64().is_multiple_of(2)), which breaks stable builds; replace each
occurrence with the stable equivalent using remainder checks (e.g.,
rng.next_u64() % 2 == 0) and update the other is_multiple_of usages in the same
test to the same pattern so all checks compile on stable Rust.
In `@crates/warp-core/tests/boaw_parallel_exec.rs`:
- Around line 111-112: Remove or silence the noisy eprintln! debug prints used
in the tests (e.g., the eprintln! that prints WORKER_COUNTS and the other
eprintln! calls referenced) so CI logs stay clean: either delete the eprintln!
calls or gate them behind a runtime env var (e.g., check
std::env::var("TEST_DEBUG").is_ok() before printing) or replace with a test-only
tracing/log macro enabled via #[cfg(test)] and an env-controlled subscriber;
search for the eprintln! macro invocations in this test file (and the other
listed locations) and apply the chosen approach consistently.
- Line 362: The test uses a hardcoded array &[1, 4, 8, 16] instead of the shared
WORKER_COUNTS constant, causing inconsistency; update the loop to iterate over
WORKER_COUNTS (or explicitly justify and document the divergence) so the test
uses the same worker set as other tests—modify the for loop that binds `workers`
to reference `WORKER_COUNTS` (or add a brief comment if a different set is
required).
- Around line 63-78: The comment "Will be assigned by ScopedDelta" in
make_exec_items is misleading because execute_serial (boaw/exec.rs) unwraps a
ScopedDelta to its inner TickDelta and TickDelta::emit replaces the prepared
OpOrigin with OpOrigin::default(); update make_exec_items by removing that
comment and either set op_ix using OpOrigin::default() (or leave as 0) so the
code reflects that the prepared OpOrigin is not used; reference symbols:
make_exec_items, ExecItem, OpOrigin, ScopedDelta, execute_serial,
TickDelta::emit, scoped.inner_mut, delta.push.
In `@crates/warp-core/tests/common/mod.rs`:
- Around line 85-92: BoawExecResult's public fields are missing rustdoc; add
concise field-level documentation for pub commit_hash, pub state_root, pub
patch_digest, and pub wsc_bytes inside the BoawExecResult struct describing what
each represents (e.g., commit_hash = BOAW commit identifier, state_root =
resulting state root hash, patch_digest = digest of produced patch, wsc_bytes =
optional WebStateChange bytes) so the test-harness public API is properly
documented.
- Around line 113-118: The public struct BoawSnapshot exposes pub fields store,
root, and scenario without rustdoc; add field-level documentation comments (///)
above each public field in the BoawSnapshot definition to describe what each
field represents (e.g., store: GraphStore holding snapshot data, root: NodeId
for the scenario root node, scenario: BoawScenario describing the BOAW test
setup) following project doc style so the public API is documented.
- Around line 515-518: The loop currently swallows errors from engine.apply(tx,
rule_name, scope) which hides failures; update the ingress application loop in
tests/common/mod.rs to surface errors by either propagating the Result (use the
? operator) or explicitly asserting success (e.g., call .expect or collect
Results and assert none failed), or filter expected NoMatch variants and fail on
all other Errs; ensure you reference engine.apply, the ingress loop, and
tx/rule_name/scope so the test will fail loudly on unexpected errors instead of
silently discarding them.
- Around line 24-45: Add rustdoc comments to the public methods on XorShift64
(XorShift64::new, XorShift64::next_u64, and XorShift64::gen_range_usize)
describing their behavior and guarantees: document that new(seed: u64)
initializes the state using seed.max(1) so zero seeds become 1, that next_u64()
produces a xorshift64* u64 stream, and that gen_range_usize(upper: usize)
returns a value in [0, upper) but uses simple modulo reduction which introduces
modulo bias when upper is not a power of two (acceptable for test usage); ensure
the rustdoc is concise and placed above each public method.
- Around line 534-544: The test harness' execute_parallel currently silently
swallows the workers parameter and delegates to execute_serial; make the intent
explicit by marking the unused parameter with #[allow(unused_variables)] (or
annotate the function similarly) and add an explicit guard such as
assert_eq!(workers, 1) or a todo!("Phase 5: implement multi-worker execution for
test harness") before delegating to execute_serial so violations are visible;
target the execute_parallel method in the BoawTestHarness impl and keep the
existing delegation to self.execute_serial(base, ingress, tick) after the
assertion/todo.
In `@docs/adr/ADR-0007-BOAW-Storage.md`:
- Around line 1114-1132: Update the ADR to track the remaining Phase 6B work by
creating a persistent tracker (issue or project card) that references the two
unchecked checklist items "Wire into Engine pipeline (Phase 6B)" and "Virtual
shards for locality (Phase 6B)" and link the proven artifacts
("boaw::execute_parallel", "boaw::merge_deltas", and tests in
boaw_parallel_exec.rs) for context; change the status line near "Status: Phase
6A COMPLETE" to note "Phase 6A complete — Phase 6B work tracked (issue #<id>)"
or similar, and add the issue/board ID in the ADR so the remaining work is
discoverable.
In `@docs/book/echo/booklet-06-tour-de-code.tex`:
- Around line 16-20: Move the \appendix declaration so it precedes any appendix
content: treat \input{appendix-musings} (and any other appendix files like
\input{legal-appendix} or \input{colophon-06-tour} if they are appendices) as
content after the \appendix marker; specifically, place \appendix before
\input{appendix-musings} to ensure LaTeX treats it as an appendix section.
In `@docs/book/echo/colophon-06-tour.tex`:
- Around line 1-53: Update the path notation example in the Colophon so it
reflects the real repository layout: replace the incorrect example that maps
warp_core::scheduler::drain to crates/warp-core/src/scheduler/drain.rs with a
correct mapping (e.g., show warp_core::scheduler →
crates/warp-core/src/scheduler.rs and mention drain_for_tx() exists inside that
file), or alternatively swap in a true submodule example such as
warp_core::boaw::shard → crates/warp-core/src/boaw/shard.rs; edit the text
around "Rust paths" and the specific example reference to reference the chosen
correct module name and function (drain_for_tx or shard) so the notation is
accurate.
In `@docs/book/echo/parts/part-06-tour-de-code.tex`:
- Around line 4-10: The section include order in part-06-tour-de-code.tex is out
of sequence: \input{sections/18-tour-boaw-execution} is placed after
\input{sections/21-tour-scheduler}, breaking the intended numeric flow; reorder
the \input lines so the sections follow numeric order (move
18-tour-boaw-execution to appear between 17-tour-intent-ingestion and
19-tour-transaction) to restore correct sequencing and prevent documentation
confusion.
In `@docs/book/echo/sections/01-what-is-echo.tex`:
- Around line 20-23: The four philosophy bullet points are too cryptic; update
each \item (Tests Are The Spec, Hexagonal Domain Boundary, We Do Not Rewrite
History, Predictable Loop) to include a one-sentence clarification or a
parenthetical/glossary link that explains the intent (e.g., clarify that "Tests
Are The Spec" means tests document expected behavior and serve as living
documentation, "Hexagonal Domain Boundary" refers to isolating core domain logic
from external interfaces, remove the "Reminder:" prefix in "We Do Not Rewrite
History" or rephrase to explain immutability of commit history and its
implications, and explain "Predictable Loop" as meaning deterministic
frame/state descriptions), or alternatively append a short "See glossary" link
for each item so newcomers get actionable context.
In `@docs/book/echo/sections/15-boaw-storage.tex`:
- Around line 90-96: Update the LaTeX spec to match the implementation in
compute_commit_hash_v2(): replace the current formula with the actual inputs
used by the function — include the version tag prefix and a parents length
prefix, and list the hashed components as version_tag || parents_length ||
parents || state_root || patch_digest || policy_id; remove "schema" and "tick"
as direct inputs and note that tick-related data is condensed into patch_digest.
Ensure the text explicitly references compute_commit_hash_v2() as the
authoritative source for the commit hash construction.
In `@docs/book/echo/sections/16-tour-overview.tex`:
- Around line 42-53: The table uses wrong crate names and several incorrect
filenames; update the eight rows to match the real source files: change the two
Echo entries from crate name crates/echo-core to crates/echo-app-core and point
them at the actual engine and transaction files under crates/echo-app-core/src
(e.g., engine.rs and transaction.rs if present), in the Warp entries rename
rule_matching.rs to rule.rs and radix_scheduler.rs to scheduler.rs, keep BOAW
entries (boaw/exec.rs and boaw/shard.rs) but verify their exact paths under
crates/warp-core/src/boaw/, keep/verify tick_delta.rs, and replace or remove the
hashing row—locate where hashing actually lives in crates/warp-core/src (e.g., a
mod or util file) and use that real path; scan the two crates to confirm exact
filenames and update each table cell to the verified path.
In `@docs/book/echo/sections/17-tour-intent-ingestion.tex`:
- Around line 69-73: The doc example incorrectly passes the already-prefixed
label to make_edge_id, causing a double "edge:" prefix; update the example in
the echo tour section so make_edge_id receives the raw label (e.g.,
make_edge_id("root/sim") not make_edge_id("edge:root/sim")) and adjust the shown
CODE comment to reflect blake3("edge:" || "root/sim") to match the
implementation in ident.rs that prepends b"edge:" internally.
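A minimal sketch of the behavior described above, assuming a blake3-based derivation; the body and the [u8; 32] return type are illustrative stand-ins for the real definition in ident.rs.

// Illustrative: ident.rs prepends b"edge:" internally before hashing.
fn make_edge_id(label: &str) -> [u8; 32] {
    let mut h = blake3::Hasher::new();
    h.update(b"edge:"); // prefix added inside the function
    h.update(label.as_bytes());
    *h.finalize().as_bytes()
}

fn main() {
    // Correct: hashes blake3("edge:" || "root/sim").
    let ok = make_edge_id("root/sim");
    // Wrong: hashes blake3("edge:" || "edge:root/sim"), a double prefix.
    let wrong = make_edge_id("edge:root/sim");
    assert_ne!(ok, wrong);
}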
In `@docs/book/echo/sections/19-tour-transaction.tex`:
- Around line 9-10: Replace hardcoded source line ranges and single-line
references with symbolic references and/or automated validation: remove the
explicit "engine_impl.rs:711-719", "engine_impl.rs:962-968" and "line 715"
fragments and instead point readers to the function or symbol (e.g.,
Engine::begin(), the corresponding watchout note inside Engine::begin(), and the
other referenced function name) so docs remain valid as code moves; additionally
add a CI doc lint that detects stale "file:line" patterns or validates that any
remaining file:line spans still match the repo, or convert those references to
links that resolve to the current commit/anchor. Ensure all instances (including
the occurrences noted at lines 49-50) are updated to use symbol names or
CI-backed anchors.
- Around line 23-25: The watchout text refers to "line 715's check" which
doesn't exist in the shown listing and confuses readers; update the prose to
either remove the specific line-number reference or change it to a concrete
reference to the sentinel check in the code (e.g., mention the
initialization/guard around self.tx_counter = 1 or the TxId(0) sentinel) so
readers can map the explanation to the code, or add line numbers to the
displayed listing so "line 715" is accurate. Ensure references use the unique
symbols TxId(0) and self.tx_counter to make the link explicit.
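For readers mapping the prose to code, a small sketch of the sentinel pattern referenced above; EngineSketch and its fields are hypothetical stand-ins for the real engine state in engine_impl.rs.

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct TxId(u64);

impl TxId {
    /// TxId(0) is reserved as the "no transaction" sentinel.
    const NONE: TxId = TxId(0);
}

struct EngineSketch {
    tx_counter: u64,
}

impl EngineSketch {
    fn new() -> Self {
        // Initializing tx_counter to 1 guarantees issued IDs never
        // collide with the TxId(0) sentinel.
        Self { tx_counter: 1 }
    }

    fn begin(&mut self) -> TxId {
        let id = TxId(self.tx_counter);
        self.tx_counter += 1;
        debug_assert_ne!(id, TxId::NONE, "issued TxId must never be the sentinel");
        id
    }
}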
In `@docs/book/echo/sections/20-tour-rule-matching.tex`:
- Around line 7-8: Update the documentation entry that references
Engine::apply() so it no longer includes brittle line numbers; remove the
":730-737" suffix and instead reference the symbol and file name only (e.g.,
"engine_impl.rs") or point readers to search for Engine::apply() in the
repository/IDE, and ensure the text reads "File: engine_impl.rs" (or similar)
without any numeric ranges.
In `@docs/book/echo/sections/22-tour-delta-merge.tex`:
- Around line 13-15: Update the documentation entry so the Entry Point points to
the actual symbol merge_deltas() in the module where it is defined (the boaw
merge module, e.g. merge.rs) instead of referencing exec.rs; also remove the
incorrect commentary about WarpOp::SetAttachment fields (they are current and do
not need correction) so the snippet only documents merge_deltas() and accurate
information about WarpOp::SetAttachment.
In `@docs/book/echo/sections/23-tour-hashing.tex`:
- Around line 15-116: The section must be removed or rewritten because it
references a non-existent hashing.rs and describes wrong algorithms; update the
text to document the real functions and signatures: replace the
compute_state_root description with the reachable-node BFS implementation
signature (compute_state_root(state: &WarpState, root: &NodeKey)), change the
compute_commit_hash_v2 note to state that the domain/version is encoded as
2u16.to_le_bytes() (not b"echo-commit-v2"), and replace the patch-digest section
to describe compute_patch_digest_v2 with its actual parameters (policy_id,
rule_pack_id, commit_status, in_slots, out_slots, ops) and algorithm from the
snapshot/tick-patch implementations; ensure you point readers to the correct
functions (compute_state_root, compute_commit_hash_v2, compute_patch_digest_v2)
rather than hashing.rs.
In `@docs/book/echo/sections/24-tour-commit.tex`:
- Around line 16-56: The documented commit pipeline is inaccurate: update the
trace to match the real commit_with_receipt flow by replacing scheduler calls
with drain_for_tx() and reserve_for_receipt(), show pre-state cloning for delta
tracking, include materialization bus finalization and delta patch construction
(e.g., construct_delta_patch / build_patch), and replace apply_delta(&merged)
with apply_reserved_rewrites(); keep the subsequent hash computations
(compute_state_root, compute_patch_digest, compute_commit_hash_v2) and
finalization (tick++, parents = vec![commit_hash], live_txs.remove) or else
clearly mark the trace as a simplified/alternate path and state which functions
were elided. Ensure all function names above (commit_with_receipt, drain_for_tx,
reserve_for_receipt, apply_reserved_rewrites, construct_delta_patch,
materialization_bus.finalize, compute_commit_hash_v2) appear correctly in the
updated trace or note.
In `@docs/book/echo/sections/25-tour-call-graph.tex`:
- Line 97: Update the table row for "Drain" that currently shows "O(n log n)" to
the tighter bound "O(n)" and keep the existing qualifier text ("Radix or
comparison sort") but append a short parenthetical note indicating that for
small n a comparison sort may be used (e.g., "fallback to comparison sort for
small n"). Locate the row containing the "Drain" label and the string "Radix or
comparison sort" in sections/25-tour-call-graph.tex (the table line with "Drain
& $O(n \log n)$ & Radix or comparison sort \\") and replace the complexity entry
and add the small-n fallback note in the same table cell.
In `@docs/macros.tex`:
- Around line 5-6: Remove the unused TikZ dependency and its erroneous comment:
delete the \usepackage{tikz} and
\usetikzlibrary{positioning,calc,shapes.geometric} lines and replace the
incorrect reference to \AIONLogo with an accurate comment (or remove the comment
altogether), since the file only defines \AION, \AIONProjectURL, and \WARP and
none of those use TikZ; ensure no other code references \AIONLogo before
deleting.
- Around line 7-8: Add an explicit dependency on the url package and make the
macros robust: insert \RequirePackage{url} (or \RequirePackage{hyperref} if you
intend that) at the top of the file so \url is always defined, and change the
fragile \newcommand{\AION}{...} to \DeclareRobustCommand{\AION}{...}; likewise
make \AIONProjectURL robust (e.g.
\DeclareRobustCommand{\AIONProjectURL}{\url{https://github.com/flyingrobots/aion}})
so both \AION and \AIONProjectURL are safe in moving arguments like section
titles or captions.
In `@docs/spec-warp-core.md`:
- Around line 191-193: The doc's function signatures use incorrect borrowing for
GraphView; update the three signatures (`matcher`, `executor`,
`compute_footprint`) to take GraphView by value (matching the actual type
aliases MatchFn, ExecuteFn, FootprintFn) — e.g., use GraphView<'a> instead of
&GraphView — so they align with the rule.rs definitions where GraphView is Copy
and passed by value.
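A sketch of the by-value convention, with GraphView, NodeId, and the alias shape as simplified stand-ins for the rule.rs definitions; the real aliases may differ, but the point is that a Copy view handle crosses the call boundary by value.

// A small Copy handle over borrowed storage: passing it by value is as
// cheap as passing a reference, so &GraphView is just noise.
#[derive(Clone, Copy)]
struct GraphView<'a> {
    nodes: &'a [u64], // placeholder for the real borrowed tables
}

struct NodeId(u64);

// By-value alias shape (illustrative; the real MatchFn/ExecuteFn/FootprintFn live in rule.rs).
type MatchFn = for<'a> fn(GraphView<'a>, &NodeId) -> bool;

fn is_root(view: GraphView<'_>, node: &NodeId) -> bool {
    // Copy semantics: `view` is duplicated here; the borrow it wraps is shared.
    view.nodes.first() == Some(&node.0)
}

const MATCHER: MatchFn = is_root;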
In `@docs/study/aion.cls`:
- Around line 148-152: The "% Table of Contents formatting" placeholder and
trailing "\endinput" in aion.cls is misleading technical debt; either remove the
empty comment block or replace it with an explicit TODO explaining what's
missing. Locate the comment line "% Table of Contents formatting" and either
delete the whole placeholder block or change it to a clear TODO note (e.g., "%
TODO: implement TOC formatting - tracked in ISSUE-####") so the intent is
explicit and no silent placeholder remains.
- Around line 126-128: Wrap the unconditional prints of \paperaffiliation and
\paperorcid with the existing validation helper \AION@require so they only
render when non-empty; specifically, guard the line that emits the forced line
break and affiliation (currently using \paperaffiliation and \\) so the \\ and
affiliation are emitted only if \AION@require{\paperaffiliation} succeeds, and
similarly render the "ORCID: \paperorcid" text only when
\AION@require{\paperorcid} is true, ensuring no bare \\ or "ORCID:" label
appears for empty values.
- Around line 84-97: Replace the fragile public-macro checks with proper setter
macros and an internal empty-token check: define internal storage macros (e.g.
\AION@papertitle, \AION@papernumber, etc.) initialized to \@empty inside a
makeatletter block, provide user-facing setter commands \papertitle{...},
\papernumber{...} that assign to those internals, and change \AION@require to
test emptiness with \ifx\AION@papertitle\@empty (and similarly for other fields)
so the check is robust; finally update \AIONTitlePage to reference the internal
names (\AION@papertitle, \AION@papernumber, ...) instead of the public macros.
In `@docs/study/build-tour.py`:
- Around line 25-38: The convert_commentary_to_latex function currently wraps
matched commentary raw into a LaTeX environment (replace_commentary) without
escaping LaTeX special characters, which will break Pandoc PDF generation for
content with _, %, #, &, $, {, }, ~, ^, and backslash; modify
convert_commentary_to_latex (and its helper replace_commentary) to run the
captured inner text through a LaTeX-escaping routine before wrapping (escape
characters like _ % # & $ { } ~ ^ \) so the environment receives safe text;
additionally remove the duplicate package loading by deleting the redundant
Pandoc -V geometry:* and -V graphics/graphicx flags (or alternatively remove the
post-processing injection) so that geometry and graphicx are only loaded
once—prefer removing the Pandoc -V flags to keep post-processing package options
central.
- Around line 83-136: The packages string named packages currently reintroduces
\usepackage{graphicx} and \usepackage[margin=0.75in,letterpaper]{geometry},
which duplicates Pandoc-inserted packages and causes option clashes; update the
packages variable to remove the lines that add graphicx and geometry so it only
contains adjustbox, tcolorbox, fvextra and the claudecommentary tcolorbox
definition, and ensure the insertion logic that uses content.replace (the branch
checking r'\usepackage{amsmath' and the elif checking r'\begin{document}') does
not attempt to inject graphicx or geometry anymore (leave the conditional
insertion but only insert the cleaned packages string).
In `@docs/study/diagrams/tour-02.mmd`:
- Around line 15-18: The diagram shows Engine->>Engine: commit() followed by
separate steps (Engine->>Store: apply delta, Engine->>Engine: compute hashes,
Engine->>Hub: emit snapshot/diff) which makes them look like separate phases;
update the mermaid sequence so these three actions are represented as internal
steps of commit() (e.g., nest them under a labeled activation/alt/loop or prefix
them with "commit internals:" and/or use a note over/box around the three
messages) so it’s clear apply delta, compute hashes, and emit snapshot/diff are
internal to Engine.commit().
In `@docs/study/diagrams/tour-03.mmd`:
- Around line 1-20: The file docs/study/diagrams/tour-03.mmd is missing a
trailing newline (POSIX newline) at the end of the file; open the file and add a
single '\n' after the final line (the line containing "N3 -.- A3" / the end of
the mermaid graph) so the file ends with a newline character to satisfy POSIX
conventions.
In `@docs/study/diagrams/tour-04.mmd`:
- Line 16: Add a trailing newline at the end of the Mermaid file so the last
line ending with P2 -.->|"α[portal] = Descend('child-abc')"| C1 is terminated by
a newline character; open the diagram file and ensure the final line (containing
the α[portal] = Descend('child-abc') portal link) ends with a newline and commit
the change.
In `@docs/study/diagrams/tour-10.mmd`:
- Around line 3-14: The diagram's Partitioning subgraph defines shards S0, S1,
S2, and S255 but the Work Stealing subgraph references S3, creating a mismatch;
add an explicit node definition for S3 (e.g., S3[Shard 3]) inside the
partition_into_shards subgraph so that the referenced symbol S3 exists and the
work-stealing claims (W1 -->|claims| S3) are valid.
In `@docs/study/diagrams/tour-11.mmd`:
- Around line 1-17: Add a POSIX-compliant trailing newline to the end of the
Mermaid diagram file by ensuring the file ends with a '\n' character; locate the
end of the diagram (the last line containing "TD -->|apply ops| S2") and append
a newline so the file terminates with a newline character.
In `@docs/study/diagrams/tour-12.mmd`:
- Around line 1-16: The file ends without a trailing newline (EOF after the
final "end" / Viewer block, e.g., the last line containing V[Viewer...]); fix by
appending a single POSIX newline character at the end of that diagram file so it
ends with a trailing newline, and optionally add an .editorconfig or pre-commit
hook to enforce POSIX line endings for all diagram/text files.
In `@docs/study/diagrams/tour-13.mmd`:
- Line 2: The label "BLAKE3 bytes" is ambiguous; update the diagram edge/node
label to explicitly state the input being hashed (e.g., "Compute intent_id =
BLAKE3(intent_bytes)" or "Compute intent_id = BLAKE3(intent payload)") so the
operation on node B clearly shows it hashes the incoming intent bytes; modify
the text in the node/edge that currently reads "BLAKE3 bytes" to the chosen
explicit phrasing.
In `@docs/study/diagrams/tour-15.mmd`:
- Around line 13-17: The diagram currently uses an implicit node "Screen" in the
edge "R -->|pixels| Screen"; add an explicit node declaration for Screen to
match the other nodes (e.g., declare Screen with the same node style used
elsewhere) so the graph is consistent and readable—place the Screen node
definition (Screen[Screen] or the same bracket style as SC/D/S/L/R) near the
other node declarations and keep the "R -->|pixels| Screen" edge intact.
In `@docs/study/echo-tour-de-code-directors-cut.tex`:
- Around line 1144-1146: The sentence claiming "Rust's clone is typically
copy-on-write" is incorrect; update the text around clone() to either remove the
CoW claim or explicitly state the condition: if Echo's WarpState actually uses
shared pointers (e.g., Arc<T> or Rc<T>) or an explicit CoW type
(std::borrow::Cow), mention that WarpState uses Arc internally and that cloning
is cheap because it clones the Arc reference, otherwise delete or reword the
sentence to state that Clone performs a deep copy unless shared-wrapper types
(Arc/Rc/Cow) are used.
In `@docs/study/echo-tour-de-code-with-commentary.tex`:
- Around line 959-962: The acronym expansion for BOAW in the tour text is wrong;
locate the BOAW occurrence inside the \begin{tourguide} ... \end{tourguide}
block and replace the phrase "Bag of Asynchronous Work" with the correct
project-defined expansion "Best Of All Worlds" (preserve surrounding
punctuation/formatting such as the triple-dash and quotes around BOAW), ensuring
the sentence reads "BOAW---``Best Of All Worlds''---..." so the acronym maps
correctly to BOAW.
In `@docs/study/echo-tour-de-code.tex`:
- Around line 114-117: The document's hardcoded note "File paths and line
numbers are accurate as of 2026-01-18." will rot; update the doc to remove or
change line-number claims and implement a maintenance strategy: replace explicit
line-numbered references with symbol names (function/class names) and relative
file references where possible, split or annotate sections with per-section
timestamps instead of a single global date, and add a CI validation step that
checks referenced file snippets still match expected signatures (e.g., verify
that symbol names and small code excerpts mentioned in the text exist in the
repo) so the doc surfaces breakage during PRs.
In `@docs/study/echo-visual-atlas-with-diagrams.tex`:
- Around line 103-104: The LaTeX metadata commands \author{} and \date{} are
empty, resulting in blank PDF metadata; update the \author{...} and/or
\date{...} entries in the document (or remove them if you prefer default
behavior) so the PDF metadata contains the correct attribution and date—e.g.,
fill \author with the copyright holder/maintainer name and \date with a fixed
date or \today.
- Around line 23-25: Remove the dead pandoc-template cruft: delete the empty
\else branch following \ifPDFTeX (the comment-only block at lines with
\ifPDFTeX\else … \fi) and remove the unused counter declaration
\newcounter{none} (declared “for unnumbered tables”) since it is never
referenced; if you prefer to keep a placeholder, replace the empty \else with a
brief comment explaining it’s intentionally omitted, but the preferred fix is to
purge both the empty \else block and the \newcounter{none} declaration to avoid
unused code.
- Line 11: The document currently disables automatic section numbering via
\setcounter{secnumdepth}{-\maxdimen} while every section title includes a
hardcoded numeric prefix (e.g., "1. The Complete Tick Pipeline"), causing
duplication and maintenance pain; fix by re-enabling auto-numbering (remove or
set secnumdepth appropriately instead of -\maxdimen) and strip manual numeric
prefixes from all \section/\subsection titles (also clean up the manual numbers
referenced around the other affected titles noted at the same places as lines
119-120), or if you must keep manual numbering, add a clear top-of-file comment
explaining why auto-numbering is disabled and keep a single maintained list of
manual numbers—reference \setcounter{secnumdepth}{-\maxdimen} and the explicit
section title strings to locate and update the changes.
- Around line 107-110: The \frontmatter and \backmatter declarations in the
document are vacuous (you call \frontmatter then immediately \mainmatter, and
\backmatter is placed just before \end{document}), so remove the unnecessary
\frontmatter and \backmatter commands or populate them with actual front/back
matter content (e.g., title page, \tableofcontents, preface, index,
bibliography) so they have effect; specifically edit the occurrences of
\frontmatter, \mainmatter, and \backmatter in the LaTeX source to either delete
the empty \frontmatter/\backmatter lines or move/add the intended front/back
sections into those regions.
- Around line 122-124: The LaTeX includegraphics calls in echo-visual-atlas
(e.g., the \includegraphics[max width=\textwidth,max
height=0.4\textheight,keepaspectratio]{diagrams/echo-visual-atlas-01.pdf})
reference ten missing assets (echo-visual-atlas-01.pdf …
echo-visual-atlas-10.pdf); fix by either adding/committing the missing PDFs into
the diagrams directory with those exact names or update each \includegraphics
invocation to point to the existing filenames (e.g., replace
echo-visual-atlas-0N.pdf with the corresponding tour-*.pdf names) so the file
names match the repository assets and LaTeX can compile.
♻️ Duplicate comments (3)
crates/warp-core/tests/boaw_privacy.rs (1)
11-12: Already flagged: unused `mod common;` import.
This was identified in a previous review. The `#[allow(unused)]` suppresses the warning, which is acceptable as a placeholder until the tests are implemented.
crates/warp-core/src/snapshot_accum.rs (1)
336-368: Don’t swallow WSC write errors.
`unwrap_or_default()` hides real serialization failures. Use `expect` (or bubble up a `Result`) so bugs surface instead of producing silent empty WSC bytes.
🛠️ Suggested fix
- write_wsc_one_warp(&warp_inputs[0], schema_hash, tick).unwrap_or_default()
+ write_wsc_one_warp(&warp_inputs[0], schema_hash, tick)
+     .expect("write_wsc_one_warp should not fail for Vec-backed output")
crates/echo-dind-tests/src/rules.rs (1)
113-140: View-op IDs are still collision-prone under parallel BOAW.
`op_ix = delta.len()` is per-delta, so multiple rule matches (or worker shards) can emit identical `sim/view/op:000…` IDs, collapsing events and breaking determinism. You need a globally deterministic sequence (e.g., include `OpOrigin` fields or a merge-time canonical index), not a per-delta length.
🔧 Directional fix (requires plumbing OpOrigin into executors)
-fn emit_view_op_delta(
-    warp_id: WarpId,
-    delta: &mut TickDelta,
-    type_id: TypeId,
-    payload: &[u8],
-    op_ix: usize,
-) {
+fn emit_view_op_delta(
+    warp_id: WarpId,
+    delta: &mut TickDelta,
+    type_id: TypeId,
+    payload: &[u8],
+    origin: OpOrigin,
+) {
     let view_id = make_node_id("sim/view");
     delta.push(WarpOp::UpsertNode { /* ... */ });
-    let seq = op_ix as u64;
-    let op_id = make_node_id(&format!("sim/view/op:{:016}", seq));
+    let op_id = make_node_id(&format!(
+        "sim/view/op:{:08x}:{:08x}:{:08x}",
+        origin.rule_id, origin.match_ix, origin.op_ix
+    ));
     delta.push(WarpOp::UpsertNode { /* ... */ });
     delta.push(WarpOp::UpsertEdge { /* ... */ });
     delta.push(WarpOp::SetAttachment { /* ... */ });
 }
Also applies to: 519-571
Review continued from previous batch...
- Add trailing newlines to tour-04, 10, 11, 12, 13, 15 mmd files - Add explicit S3 node definition in tour-10.mmd - Clarify BLAKE3 label in tour-13.mmd - Add explicit Screen node in tour-15.mmd - Fix echo-visual-atlas-with-diagrams.tex: - Set author/date metadata - Remove dead pandoc cruft and unused counter - Remove vacuous frontmatter/backmatter - Map diagram references to existing tour-*.pdf files - Fix clippy pedantic warnings in warp-core - Fix imports in boaw_footprints.rs Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Actionable comments posted: 28
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
docs/book/echo/sections/01-what-is-echo.tex (1)
28-33: Grammatical incoherence: "which give it:" followed by a list of use cases.
The sentence "It uses WARP graphs, which give it:" implies the list will enumerate capabilities or properties that WARP graphs confer. Instead, the list enumerates domains (RTS Games, Fighting Games, etc.). The sentence structure is broken.
📝 Proposed fix
-Echo is designed for general use. It uses WARP graphs, which give it:
+Echo is designed for general use. Its WARP graph foundation makes it particularly suited for:
Alternatively:
-Echo is designed for general use. It uses WARP graphs, which give it:
+Echo is designed for general use. WARP graphs enable applications such as:
crates/echo-dind-harness/src/dind.rs (2)
162-168: You defined the constants. Now use them.
You've added `ELOG_VERSION`, `HASH_DOMAIN`, and `HASH_ALG` at the top of this file, yet `create_repro_bundle` still hardcodes `1`, `"DIND_STATE_HASH_V2"`, and `"BLAKE3"`. The same sin is repeated in `Commands::Record` (lines 284-289) and `Commands::Torture` (lines 311-316).
This is exactly the "scattered literals" problem the constants were meant to solve. When `ELOG_VERSION` bumps to `2`, you'll have four places to update instead of one—and you'll miss at least one.
🔧 Fix all three locations
In `create_repro_bundle` (lines 162-168):
 let actual_golden = Golden {
-    elog_version: 1,
-    schema_hash_hex: hex::encode(header.schema_hash),
-    hash_domain: "DIND_STATE_HASH_V2".to_string(),
-    hash_alg: "BLAKE3".to_string(),
+    elog_version: ELOG_VERSION,
+    schema_hash_hex: hex::encode(&header.schema_hash),
+    hash_domain: HASH_DOMAIN.to_string(),
+    hash_alg: HASH_ALG.to_string(),
     hashes_hex: actual_hashes.to_vec(),
 };
In `Commands::Record` (lines 284-289):
 let golden = Golden {
-    elog_version: 1,
-    schema_hash_hex: hex::encode(header.schema_hash),
-    hash_domain: "DIND_STATE_HASH_V2".to_string(),
-    hash_alg: "BLAKE3".to_string(),
+    elog_version: ELOG_VERSION,
+    schema_hash_hex: hex::encode(&header.schema_hash),
+    hash_domain: HASH_DOMAIN.to_string(),
+    hash_alg: HASH_ALG.to_string(),
     hashes_hex: hashes,
 };
In `Commands::Torture` (lines 311-316):
 let baseline_golden = Golden {
-    elog_version: 1,
-    schema_hash_hex: hex::encode(header.schema_hash),
-    hash_domain: "DIND_STATE_HASH_V2".to_string(),
-    hash_alg: "BLAKE3".to_string(),
+    elog_version: ELOG_VERSION,
+    schema_hash_hex: hex::encode(&header.schema_hash),
+    hash_domain: HASH_DOMAIN.to_string(),
+    hash_alg: HASH_ALG.to_string(),
     hashes_hex: baseline_hashes.clone(),
 };
169-172: Optional: Add `sync_all()` for robustness in repro bundle writes.
The test harness (coverage.rs) now calls `sync_all()` after writing golden files. For consistency and to avoid data loss on sudden termination, consider doing the same here—especially since repro bundles are debugging artifacts you really don't want to lose.
♻️ Optional: sync after write
 let f_actual = File::create(out_dir.join("actual.hashes.json"))
     .context("failed to create actual.hashes.json")?;
- serde_json::to_writer_pretty(f_actual, &actual_golden)
+ serde_json::to_writer_pretty(&f_actual, &actual_golden)
     .context("failed to serialize actual.hashes.json")?;
+ f_actual.sync_all().context("failed to sync actual.hashes.json")?;
(Passing `&f_actual` keeps the handle alive so `sync_all()` can run after serialization; `&File` implements `Write`.)
Apply similarly to the `expected.hashes.json` write.
crates/warp-core/tests/common/mod.rs (1)
672-699: `for_each_permutation` also uses `is_multiple_of(2)` — same unstable API issue.
Line 687 uses `i.is_multiple_of(2)`. This is nightly-only on `usize` as well.
🔧 Use stable modulo check
- if i.is_multiple_of(2) {
+ if i % 2 == 0 {
      items.swap(0, i);
  } else {
🤖 Fix all issues with AI agents
In `@crates/echo-dind-harness/tests/coverage.rs`:
- Around line 69-75: The golden struct is incorrectly produced by
hex::encode(SCHEMA_HASH) which double-encodes an already-hex string; update the
construction of echo_dind_harness::dind::Golden (the variable named golden) to
use the SCHEMA_HASH hex string directly (e.g., SCHEMA_HASH or
SCHEMA_HASH.to_string()) for schema_hash_hex so it matches header.schema_hash
(see SCHEMA_HASH usage in dind.rs).
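A self-contained sketch of the pitfall, assuming SCHEMA_HASH is already a hex string; the constant value here is made up for illustration.

const SCHEMA_HASH: &str = "0a0b0c0d"; // already hex

fn main() {
    let raw: [u8; 4] = [0x0a, 0x0b, 0x0c, 0x0d]; // stands in for header.schema_hash

    // Correct: encode the raw bytes once, or use the hex constant directly.
    assert_eq!(hex::encode(raw), SCHEMA_HASH);

    // Wrong: hex-encoding the hex string encodes its ASCII bytes instead,
    // yielding "3061306230633064", which will never match header.schema_hash.
    assert_ne!(hex::encode(SCHEMA_HASH), SCHEMA_HASH);
}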
In `@crates/warp-benches/Cargo.toml`:
- Around line 19-23: The dependency version specifiers in Cargo.toml are
visually inconsistent (mixing minor-start like "~0.5" with patch-start like
"~1.8.2" and "~2.1.1"); pick a consistent tilde style and normalize the entries
for criterion, blake3 and rustc-hash (e.g., all minor-start "~0.5", "~1.8",
"~2.1" or all patch-start "~0.5.0", "~1.8.2", "~2.1.1") and update the version
strings for the criterion, blake3 and rustc-hash entries accordingly so the file
is uniform.
In `@crates/warp-core/src/boaw/exec.rs`:
- Around line 120-121: Change partition_into_shards to accept a slice
(&[ExecItem]) instead of taking ownership of Vec<ExecItem> to avoid allocating
items.to_vec() in exec.rs; update the implementation in shard.rs to iterate over
the slice and copy/clone each ExecItem only when inserting into the shard
containers (since ExecItem is Copy, use simple copy) and adjust all call sites
(e.g., the call in exec.rs that currently does
partition_into_shards(items.to_vec())) to pass items directly; alternatively, if
shared ownership was intentional, document the allocation behavior in
partition_into_shards to justify the Vec API.
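A minimal sketch of the slice-based API under the stated assumption that ExecItem is Copy; the field, shard count, and routing below are illustrative, not the shard.rs implementation.

#[derive(Clone, Copy, Debug)]
struct ExecItem {
    shard: usize, // placeholder routing key
}

const NUM_SHARDS: usize = 4;

// Borrow the items; copy each one only when it lands in its shard bucket.
fn partition_into_shards(items: &[ExecItem]) -> Vec<Vec<ExecItem>> {
    let mut shards: Vec<Vec<ExecItem>> = vec![Vec::new(); NUM_SHARDS];
    for item in items {
        shards[item.shard % NUM_SHARDS].push(*item); // ExecItem is Copy
    }
    shards
}

// Call sites then pass a borrow instead of allocating:
// let shards = partition_into_shards(&items); // no items.to_vec()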
In `@crates/warp-core/src/boaw/shard.rs`:
- Around line 100-124: The public struct VirtualShard is missing a Debug derive
which makes diagnostics and test logging harder; update the struct definition
for VirtualShard to derive Debug (in addition to Default) so that its items:
Vec<ExecItem> can be formatted (ExecItem already implements Debug), then run
cargo test/build to ensure the derive is accepted—look for the VirtualShard type
and its use in partition_into_shards to verify no further changes are needed.
- Around line 88-98: The function shard_of manually constructs a [u8;8] from
scope.as_bytes(), which is verbose; replace the manual indexing with a safe
conversion using try_into() (e.g., take the first 8 bytes slice and call
try_into()) to produce the [u8;8], then call u64::from_le_bytes and apply
SHARD_MASK as before; update the shard_of function to use
scope.as_bytes()[..8].try_into() (handling the Result with expect/unreachable
comment if desired) while keeping NodeId and SHARD_MASK usage unchanged.
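A sketch of the try_into() idiom with stand-ins for NodeId and SHARD_MASK; only the slice-to-array conversion is the point here.

const SHARD_MASK: u64 = 0xFF;

struct NodeId([u8; 32]);

impl NodeId {
    fn as_bytes(&self) -> &[u8] {
        &self.0
    }
}

fn shard_of(scope: &NodeId) -> u64 {
    // First 8 bytes -> [u8; 8] without byte-by-byte construction.
    let prefix: [u8; 8] = scope.as_bytes()[..8]
        .try_into()
        .expect("NodeId has at least 8 bytes"); // unreachable: IDs are 32 bytes
    u64::from_le_bytes(prefix) & SHARD_MASK
}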
In `@crates/warp-core/src/tick_delta.rs`:
- Around line 243-247: The emit method increments next_op_ix without overflow
protection; add a defensive check using checked_add when advancing next_op_ix in
emit (and fail fast if it returns None) so we never silently wrap the op index.
Concretely, in emit (which sets origin.op_ix and then increments next_op_ix
before calling inner.emit_with_origin), compute next =
self.next_op_ix.checked_add(1) and if None panic with a clear message like
"ScopedDelta next_op_ix overflow" (or return an error if you prefer a
Result-based API), otherwise assign self.next_op_ix = next.unwrap() and proceed
to call inner.emit_with_origin(op, origin).
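A minimal sketch of the guarded increment, with ScopedDelta and OpOrigin reduced to illustrative stand-ins for the tick_delta.rs types.

struct OpOrigin {
    op_ix: u64,
}

struct ScopedDelta {
    next_op_ix: u64,
    ops: Vec<(String, OpOrigin)>, // stands in for inner.emit_with_origin
}

impl ScopedDelta {
    fn emit(&mut self, op: String, mut origin: OpOrigin) {
        origin.op_ix = self.next_op_ix;
        // Fail fast rather than silently wrapping the op index.
        self.next_op_ix = self
            .next_op_ix
            .checked_add(1)
            .expect("ScopedDelta next_op_ix overflow");
        self.ops.push((op, origin));
    }
}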
In `@docs/adr/ADR-0007-BOAW-Storage.md`:
- Around line 277-282: The ADR's commit-hash formula currently lists
"schema_hash || tick || policy_hashes" but the implementation in
compute_commit_hash_v2 uses a different ordering and fields (it hashes
version_tag, parents_len, parents, state_root, patch_digest, and policy_id);
update ADR-0007-BOAW-Storage to either (A) replace the formula on lines
describing commit_hash with the exact canonical input and order used by
compute_commit_hash_v2 (mentioning version_tag, parents_len, parents,
state_root, patch_digest, policy_id), or (B) explicitly mark the ADR entry as
out-of-date and add a short "future revision" note linking the ADR to
compute_commit_hash_v2 so readers are not misled. Ensure you reference
compute_commit_hash_v2, policy_id, and version_tag in the ADR so the mapping is
unambiguous.
In `@docs/book/echo/colophon-06-tour.tex`:
- Around line 40-45: The "\textbf{Phase 6 planning docs}" bullet in the itemize
list is vague; update the entry by replacing the text "\textbf{Phase 6 planning
docs}" with a concrete document reference (for example the actual markdown path
like the planning doc name) or remove the bullet entirely so every Further
Reading item is a specific, concrete reference; ensure you edit the
\textbf{Phase 6 planning docs} line in the itemize block in colophon-06-tour.tex
accordingly.
In `@docs/book/echo/sections/01-what-is-echo.tex`:
- Line 12: The document uses inconsistent capitalization for "state" (e.g.,
"state preservation", "game state" vs "WARP State"); pick one convention and
apply it consistently—either treat "WARP State" as a defined term and capitalize
it everywhere (and add a definition/first-use note declaring "WARP State" as the
canonical term) or normalize to lowercase "warp state"/"state" throughout;
update the string "WARP State" in the Renderer Agnosticism item to match the
chosen convention and sweep the document to make all occurrences consistent.
- Line 7: In the sentence containing \AION{} Foundations Series papers and the
quoted title "Worldline Algebra for Recursive Provenance" (WARP), remove the
stray double space between the closing quotation mark and the parenthesis so
there is only a single space before "(WARP)"; update the text in the file
section that contains that sentence to collapse the two spaces into one.
In `@docs/book/echo/sections/16-tour-overview.tex`:
- Around line 9-11: Replace the hardcoded date string "accurate as of
2026-01-18" in the text of docs/book/echo/sections/16-tour-overview.tex with a
reliable, maintainable option: either remove the date entirely and rely on the
existing "verify against HEAD" wording, or substitute it with an auto-generated
date like \today or by including a generated file via
\input{generated-date.tex}; update the sentence containing the phrase "accurate
as of 2026-01-18" accordingly so the document no longer requires manual date
updates.
- Around line 68-70: The protip contains a brittle hard-coded line-number
reference to src/boaw/exec.rs:61-83; remove the numeric range and instead point
readers to the file or a stable symbol so the reference won't break on edits —
update the \begin{protip} text to mention the file path only (src/boaw/exec.rs)
and/or a stable identifier (e.g., "the exec module" or "the relevant function in
src/boaw/exec.rs") and instruct readers to search/jump to that file or symbol
rather than a line range.
In `@docs/book/echo/sections/20-tour-rule-matching.tex`:
- Line 35: The documentation references a non-existent file and brittle line
range: replace the hard-coded "FILE:
crates/warp-core/src/rule_matching.rs:45-89" link with a cross-reference to the
actual struct definition for RewriteRule in crates/warp-core/src/rule.rs; update
the .tex to use the identifier/anchor for the RewriteRule struct (or the label
generated for that symbol) so the PDF links to the struct declaration rather
than fixed line numbers, and remove the old file:line range text.
In `@docs/book/echo/sections/22-tour-delta-merge.tex`:
- Around line 49-65: The snippet in impl WarpOp::sort_key incorrectly patterns
on WarpOp::SetAttachment { node, key, .. } though SetAttachment has fields key:
AttachmentKey and value: Option<AttachmentValue>; update the match arm to
pattern-match only the actual fields (e.g., WarpOp::SetAttachment { key, .. })
and construct the WarpOpKey using the node owner embedded in the AttachmentKey
(use key.owner as the node identifier and the AttachmentKey as the attachment
identifier) so the returned WarpOpKey::SetAttachment uses key.owner and the
attachment key from key.
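A sketch of the corrected arm, with AttachmentKey, WarpOp, and WarpOpKey pared down to illustrative stand-ins carrying only the fields relevant here.

#[derive(Clone, Copy)]
struct NodeId(u64);

#[derive(Clone, Copy)]
struct AttachmentKey {
    owner: NodeId, // the owning node is embedded in the key
    slot: u16,
}

enum WarpOp {
    SetAttachment {
        key: AttachmentKey,
        value: Option<Vec<u8>>,
    },
}

enum WarpOpKey {
    SetAttachment { node: NodeId, key: AttachmentKey },
}

impl WarpOp {
    fn sort_key(&self) -> WarpOpKey {
        match self {
            // Pattern only the real fields; take the node from key.owner.
            WarpOp::SetAttachment { key, .. } => WarpOpKey::SetAttachment {
                node: key.owner,
                key: *key,
            },
        }
    }
}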
In `@docs/book/echo/sections/23-tour-hashing.tex`:
- Around line 86-87: Update the documentation line that currently points to
snapshot.rs so it correctly references the file containing the function: change
the file reference to tick_patch.rs and ensure the Entry Point still names
compute_patch_digest_v2() so readers can locate the function in tick_patch.rs
(it lives there around the indicated location).
In `@docs/book/echo/sections/25-tour-call-graph.tex`:
- Around line 106-107: The statement "nothing quadratic or exponential" is too
strong; clarify that overall complexity bounds assume each rule's internal work
is well-behaved. Update the paragraph around the Execute = O(n · W) explanation
(where W = "work per rule") to add a qualifier such as "assuming each rule's
implementation performs at most O(W) work per input (i.e., does not itself
introduce superlinear complexity)"; adjust the following sentence to read that
the system scales linearly with work volume under that assumption (or with
well-behaved rules).
In `@docs/study/build-tour.py`:
- Line 202: The main() function is missing a return-type annotation; since the
implementation uses sys.exit() throughout, update the signature of main (the
main() function) to declare a return type of -> None (and keep sys.exit calls
as-is) and ensure any docstring that claims it returns int is updated to reflect
None/usage of sys.exit() so the annotation and docstring are consistent.
- Line 240: The print call uses an unnecessary f-string: replace print(f"\n===
Success! ===") with a regular string (e.g., print("\n=== Success! ===")) to
remove the unused f-prefix and satisfy Ruff F541; locate the print invocation in
the build-tour.py snippet and update it accordingly.
- Around line 167-171: The current re.sub call uses a brittle negative
lookbehind with the long escaped options; update the replacement to avoid that
lookbehind by matching bare \includegraphics without options and only then
inserting the sizing options: locate the re.sub invocation that operates on the
variable content (the three-argument call shown) and replace the pattern to
detect \includegraphics that does not already have bracketed options (e.g., use
a pattern like matching \includegraphics followed by no '[') or perform a
two-pass approach with an intermediate marker—either way remove the long
negative lookbehind and ensure already-processed \includegraphics[...]{...}
instances are skipped.
- Around line 25-41: In escape_latex, the current replacements list first turns
"\" into "\textbackslash{}" then later escapes "{" and "}", corrupting the
braces; change the routine to first substitute backslashes with a unique
placeholder (e.g., "__BACKSLASH_PLACEHOLDER__"), then run the existing
replacements for &, %, $, #, _, {, }, ~, ^ (or keep the replacements list but
skip backslash), and finally replace the placeholder with the literal
"\textbackslash{}"; update references to the replacements variable and the
for-loop in escape_latex to implement this three-step placeholder flow so the
backslash sequence's braces are not re-escaped.
- Around line 193-195: The current return only checks
tex_file.with_suffix('.pdf').exists() and ignores the xelatex exit code; change
the function so it returns True only when the second xelatex run succeeded and
the PDF exists—i.e., use the previously observed result.returncode (from the
second pass where you print "OK" vs "warnings") and require result.returncode ==
0 && tex_file.with_suffix('.pdf').exists() (or equivalent) instead of just
checking file existence.
In `@docs/study/diagrams/tour-10.mmd`:
- Line 27: Add a trailing newline to the end of docs/study/diagrams/tour-10.mmd
(ensure the file ends with '\n') and update repository settings to prevent
regressions: add the line '*.mmd text eol=lf' to .gitattributes and install a
pre-commit check that verifies files end with a newline (you can reuse the
suggested git ls-files + tail command as the hook). This ensures tour-10.mmd
ends with a newline and future .mmd files are normalized and validated.
In `@docs/study/diagrams/tour-13.mmd`:
- Line 7: The file ends with the line "F --> G[Return Accepted]" missing a
trailing newline; open the diagram file (docs/study/diagrams/tour-13.mmd) and
add a final newline character after that line so the file terminates with '\n'.
In `@docs/study/echo-tour-de-code-directors-cut.tex`:
- Line 819: Replace the incorrect expansion of the acronym in the sentence that
currently reads "BOAW stands for ``Bag of Asynchronous Work.''" with the correct
expansion "Best Of All Worlds" so the document's definition of BOAW matches the
PR description and the deterministic parallel execution model; update the string
"BOAW stands for ``Bag of Asynchronous Work.''" to "BOAW stands for ``Best Of
All Worlds.''" and scan other occurrences of "Bag of Asynchronous Work" or
"BOAW" in the document to ensure consistency.
In `@docs/study/echo-tour-de-code-with-commentary.tex`:
- Around line 113-114: The tcolorbox/tourguide environment uses \ding{46} before
the pifont package is loaded, so move the \usepackage{pifont} declaration to
appear before the newtcolorbox / tourguide environment definitions (so \ding is
defined when newtcolorbox evaluates), and remove the later duplicate
\usepackage{pifont} that appears after the environment to avoid redundant
loading.
- Around line 797-807: The documentation's radix-sort pass description is
inconsistent: the text states scope_hash is 128 bits (8 passes) but the
bucket-extraction table and the statement "passes 4-19" imply 16 passes (256
bits) for scope_hash, yielding 20 total passes; reconcile by matching the prose
to the actual algorithm. Update the narrative around scope_hash/rule_id/nonce
and the pass numbering so they reflect the real implementation used by the LSD
radix sort (e.g., if passes 4-19 are used for scope_bytes[0:32], state that
scope_hash covers 256 bits and adjust the counts and rationale for nonce being
passes 0-1), or change the bucket extraction table to match a 128-bit scope_hash
(8 passes). Ensure references to "passes 0-1", "passes 4-19", "scope_hash",
"rule_id", and "nonce" are consistent across the explanation and the bucket
extraction table.
In `@docs/study/echo-visual-atlas-with-diagrams.tex`:
- Around line 100-101: The document has two different hard-coded dates
(\date{2026-01-19} vs footer "generated 2026-01-18"); create a single macro
(e.g. \newcommand{\docdate}{2026-01-19}) and replace the literal date in \date
and the footer generation text with \docdate so the date is
single-source-of-truth (update occurrences around \date and the footer lines
referenced near 280–281 to use \docdate instead of literals).
- Around line 122-135: The document currently embeds duplicate diagrams; update
the \includegraphics targets so each section uses its correct unique diagram:
replace the duplicate reference in the "BOAW Parallel Execution Model" section
(heading: BOAW Parallel Execution Model) to the intended diagram file for that
model instead of diagrams/tour-10.pdf, change the "Virtual Shard Routing"
section (heading: Virtual Shard Routing) to its specific diagram file, and
likewise replace diagrams/tour-09.pdf in the "State Root Hash Computation" and
"Commit Hash v2 Structure" sections with their correct, distinct diagram
filenames; ensure the \includegraphics call parameters (max width, max height,
keepaspectratio) remain unchanged and verify the new diagram files exist in the
diagrams directory.
♻️ Duplicate comments (18)
docs/study/diagrams/tour-11.mmd (1)
17-17: Missing trailing newline—still.
The prior review flagged this exact issue. Line 17 ends without a POSIX-compliant `\n`. This file remains non-compliant.
docs/study/diagrams/tour-03.mmd (1)
20-20: Missing trailing newline—the saga continues.
Prior review flagged this. Line 20 still lacks a trailing `\n`. POSIX weeps.
docs/study/diagrams/tour-04.mmd (1)
16-16: Missing trailing newline—the quadrilogy becomes a pentalogy.
Prior review flagged this. Line 16 still ends without `\n`. At this point, consider an `.editorconfig` with `insert_final_newline = true` or a pre-commit hook, because manual discipline has demonstrably failed.
docs/study/diagrams/tour-12.mmd (1)
16-16: Missing trailing newline—are we detecting a pattern yet?
Prior review called this out. Line 16 remains newline-deficient. The pentalogy of missing newlines is now complete across tour-03, tour-04, tour-10, tour-11, and tour-12.
docs/study/diagrams/tour-10.mmd (1)
18-21: Execution subgraph is incomplete—claimed shards S1 and S3 vanish into the void.
Worker 0 claims S0 and S1. Worker 1 claims S2 and S3. Yet the Execution subgraph only shows:
S0 → D0[TickDelta 0]
S2 → D1[TickDelta 1]
Where do S1 and S3 go? If workers claim shards, those shards should produce deltas that flow into the merge. Either:
- Show S1 feeding into D0 (same worker's delta), and S3 feeding into D1, or
- Add explicit arrows for all claimed shards
As drawn, half the work-stealing claims lead nowhere.
✏️ Proposed fix: Complete the execution flow
subgraph "Execution" S0 --> D0[TickDelta 0] + S1 --> D0 S2 --> D1[TickDelta 1] + S3 --> D1 enddocs/study/echo-tour-de-code-with-commentary.tex (1)
959-961: BOAW expansion—flagged in a past review, now verified correct.
Line 960 expands BOAW as "Best Of All Worlds," which matches the PR description. The current text reads: BOAW---``Best Of All Worlds''---is where Echo's determinism meets parallelism. That is correct; no further change is needed.
docs/study/echo-tour-de-code-directors-cut.tex (1)
1144-1146: CoW claim has been properly corrected.
The previous review flagged an incorrect statement about Rust's `clone()` being "typically copy-on-write." The text now correctly explains that `WarpState` specifically uses `Arc` internally, making the clone cheap, and clarifies that Rust's `Clone` trait performs deep copies unless the type uses shared-reference wrappers.
docs/study/build-tour.py (3)
25-57: LaTeX escaping now properly implemented—past review addressed.
The `escape_latex` function correctly handles all LaTeX special characters, and `convert_commentary_to_latex` calls `escape_latex(inner)` on the captured content before wrapping. This fixes the previously flagged issue where raw commentary would break on `_`, `%`, `#`, etc.
102-145: Duplicate package loading fixed—only adjustbox, tcolorbox, and fvextra added now.
The comment on lines 103-104 explicitly documents that graphicx and geometry are already loaded by Pandoc, and the packages string no longer includes them. This addresses the previously flagged option clash issue.
47-47: Missing return type annotation on nested function.
Per Ruff ANN202: `replace_commentary` should have a return type annotation.
✏️ Add type annotation
- def replace_commentary(match):
+ def replace_commentary(match: re.Match[str]) -> str:
docs/study/aion.cls (1)
84-132: Metadata validation is broken; `\ifx#1\empty` never catches empty values.
Line 93 compares against `\empty`, which does not match `\def\foo{}`. Empty titles slide past validation, and lines 127–131 still render blank blocks. Also, requiring users to `\renewcommand` is a UX footgun. Fix this with internal storage + setter macros and an `\@empty` check (the internals are initialized with `\let` so the `\ifx` comparison against `\@empty` actually succeeds).
🔧 Proposed fix (robust storage + checks)
-% Metadata Commands
-\newcommand{\papertitle}{}
-\newcommand{\papernumber}{}
-\newcommand{\paperversion}{}
-\newcommand{\paperdate}{}
-\newcommand{\paperauthor}{}
-\newcommand{\paperaffiliation}{}
-\newcommand{\paperorcid}{}
-\newcommand{\paperdoi}{}
+% Metadata Commands
+\let\AION@papertitle\@empty
+\let\AION@papernumber\@empty
+\let\AION@paperversion\@empty
+\let\AION@paperdate\@empty
+\let\AION@paperauthor\@empty
+\let\AION@paperaffiliation\@empty
+\let\AION@paperorcid\@empty
+\let\AION@paperdoi\@empty
+
+\newcommand{\papertitle}[1]{\gdef\AION@papertitle{#1}}
+\newcommand{\papernumber}[1]{\gdef\AION@papernumber{#1}}
+\newcommand{\paperversion}[1]{\gdef\AION@paperversion{#1}}
+\newcommand{\paperdate}[1]{\gdef\AION@paperdate{#1}}
+\newcommand{\paperauthor}[1]{\gdef\AION@paperauthor{#1}}
+\newcommand{\paperaffiliation}[1]{\gdef\AION@paperaffiliation{#1}}
+\newcommand{\paperorcid}[1]{\gdef\AION@paperorcid{#1}}
+\newcommand{\paperdoi}[1]{\gdef\AION@paperdoi{#1}}
-\newcommand{\AION@require}[2]{%
-  \ifx#1\empty
+\newcommand{\AION@require}[2]{%
+  \ifx#1\@empty
     \ClassError{aion}{#2 not set}{You must define #2 before \string\AIONTitlePage}
   \fi
 }
-\AION@require{\papertitle}{\string\papertitle}
-\AION@require{\papernumber}{\string\papernumber}
-\AION@require{\paperauthor}{\string\paperauthor}
-\AION@require{\paperdate}{\string\paperdate}
+\AION@require{\AION@papertitle}{\string\papertitle}
+\AION@require{\AION@papernumber}{\string\papernumber}
+\AION@require{\AION@paperauthor}{\string\paperauthor}
+\AION@require{\AION@paperdate}{\string\paperdate}
-{\Huge\bfseries \papertitle \par}
+{\Huge\bfseries \AION@papertitle \par}
-AI$\Omega$N Foundations Series — \papernumber \par}
+AI$\Omega$N Foundations Series — \AION@papernumber \par}
-\paperauthor \par}
+\AION@paperauthor \par}
-\ifx\paperaffiliation\empty\else
-{\normalsize \paperaffiliation \par}
+\ifx\AION@paperaffiliation\@empty\else
+{\normalsize \AION@paperaffiliation \par}
 \fi
-\ifx\paperorcid\empty\else
-{\normalsize ORCID: \paperorcid \par}
+\ifx\AION@paperorcid\@empty\else
+{\normalsize ORCID: \AION@paperorcid \par}
 \fi
-\paperdate \par}
+\AION@paperdate \par}
docs/book/echo/sections/20-tour-rule-matching.tex (1)
57-57: Another brittle line-number reference.
`crates/warp-core/src/pending.rs:12-24` — remove the `:12-24` suffix. You've already been told this.
✏️ Suggested fix
-\textbf{File:} \texttt{crates/warp-core/src/pending.rs:12-24}
+\textbf{File:} \texttt{crates/warp-core/src/pending.rs}
docs/book/echo/sections/24-tour-commit.tex (1)
16-53: Call trace still diverges from the real commit path.
The trace describes `execute_parallel → merge_deltas → construct_delta_patch → apply_reserved_rewrites(&merged)`, but the actual `commit_with_receipt` flow uses `reserve_for_receipt(...)`, clones pre-state, applies reserved rewrites directly, finalizes the materialization bus, and builds the patch from `diff_state`/`WarpTickPatchV1`. Either align the trace with the actual flow or label it as a simplified/aspirational path.
crates/warp-core/tests/boaw_cow.rs (1)
16-115: Ignored + `todo!` tests are dead weight—wire or track them.
These six tests are all ignored and unimplemented, so they provide zero coverage. Either implement them now or replace the placeholder COW-00X tags with real, tracked issue IDs and a concrete plan to enable the tests. As per coding guidelines, tests are mandatory once the harness exists.
docs/study/echo-tour-de-code.tex (1)
114-118: Intro says “symbol names, not line numbers,” but the doc still embeds line anchors.
Lines 114-118 conflict with later references like `engine_impl.rs:1216`, which creates mixed expectations about stability. Either remove line numbers or update the intro to acknowledge them (and keep the generated-date note aligned).
✏️ Suggested wording tweak
-This document traces EVERY function call involved in processing a user
-action through the Echo engine. References use \textbf{symbol names}
-(functions, structs) rather than line numbers to reduce maintenance burden.
-Run \texttt{scripts/validate-tour-refs.sh} to verify all referenced symbols
-still exist in the codebase.
+This document traces EVERY function call involved in processing a user
+action through the Echo engine. References include \textbf{symbol names}
+(functions, structs) and point-in-time line anchors for precision.
+Run \texttt{scripts/validate-tour-refs.sh} to verify referenced symbols
+still exist, and refresh line anchors alongside the generated date.
crates/warp-core/tests/boaw_footprints.rs (1)
10-11: Nightly-only `is_multiple_of` in `random_footprint` will break stable builds.
Line 10 imports `random_footprint`; in `crates/warp-core/tests/common/mod.rs` that helper calls `rng.next_u64().is_multiple_of(2)`, which is nightly-only. Replace with `% 2 == 0` in all occurrences to keep tests compiling on stable.
🔧 Suggested fix (crates/warp-core/tests/common/mod.rs)
- if rng.next_u64().is_multiple_of(2) {
+ if rng.next_u64() % 2 == 0 {
(Apply the same replacement for every `is_multiple_of(2)` in that file.)
Run this to confirm remaining usages:
#!/bin/bash
rg -n 'is_multiple_of' crates/warp-core/tests/common/mod.rs
docs/book/echo/sections/22-tour-delta-merge.tex (1)
79-95: `apply_delta` pseudocode shows incorrect `SetAttachment` fields.
Same issue as the sort_key snippet: line 91 shows `{ node, key, value }` but `SetAttachment` only has `{ key, value }`. The `node` is embedded in `AttachmentKey`.
🔧 Correct the pseudocode
- +-- WarpOp::SetAttachment { node, key, value } →
- |     self.attachments.insert((node, key), value)
+ +-- WarpOp::SetAttachment { key, value } →
+ |     self.attachments.insert(key, value)
crates/warp-core/src/tick_delta.rs (1)
453-459: Ops with identical `sort_key()` values make `canonicalize_ops` input-order dependent.
Both `sort_by_key` and `sort_by` on slices are stable sorts in Rust, so equal-key ops keep whatever relative order they arrived in—which is exactly the problem: when `delta_ops` and `diff_ops` contain ops with identical `sort_key()` values at different original positions, the two canonicalized slices can end up ordered differently. This produces flaky validation failures that are painful to debug. This differs from `finalize()`, which uses insertion index as a deterministic tie-breaker.
🔧 Use a deterministic tie-breaker
fn canonicalize_ops(ops: &[WarpOp]) -> Vec<WarpOp> {
    let mut sorted = ops.to_vec();
-   sorted.sort_by_key(WarpOp::sort_key);
+   // Break sort_key ties with a total order over the op itself so the
+   // result is independent of input order (assumes an Ord impl on WarpOp,
+   // or substitute a canonical-encoding hash as the secondary key).
+   sorted.sort_by(|a, b| a.sort_key().cmp(&b.sort_key()).then_with(|| a.cmp(b)));
    sorted
}
Note: sort stability alone preserves the input order of ties, so a secondary criterion (e.g., a hash of the full op) is what buys absolute determinism regardless of input order.
- Update LaTeX documentation sections for echo tour - Fix Mermaid diagram references - Improve test and harness code quality - Remove expect() usage to satisfy clippy::expect_used - Fix needless borrows in dind.rs - Use is_multiple_of() per clippy::manual_is_multiple_of - Fix tuple-to-array conversion per clippy::tuple_array_conversions - Remove redundant clones per clippy::redundant_clone Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Replace local-only ID tracking with warp-scoped resource sets to prevent false conflict detection across warps. This is the foundation for multi-warp parallel execution. Key changes: - Add NodeSet storing BTreeSet<NodeKey> (warp_id, local_id pairs) - Add EdgeSet storing BTreeSet<EdgeKey> (warp_id, local_id pairs) - Update PortSet to store (WarpId, PortKey) tuples - Simplify scheduler has_conflict/mark_all to use warp-scoped keys directly - Update all footprint construction sites to use insert_with_warp() - Add new test: reserve_different_warps_no_conflict The warp-scoped design ensures: - Same local ID in different warps won't false-conflict - Footprint independence checks remain O(n log n) - BTreeSet ordering guarantees deterministic iteration Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Adds conditional output for \paperdoi similar to existing affiliation and ORCID rendering. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Corrects PendingRewrite location (scheduler.rs) and fields - Updates Footprint to show all 9 fields with correct types - Clarifies bloom-filter false positive behavior for factor_mask Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Disambiguates 'immutable commit ledger' to explicitly reference WARP state history rather than git commits. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Converts S2 from ellipsis placeholder to concrete shard node for consistency with its usage in Work Stealing and Execution. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Uses try_into() for proper slice to fixed-size array conversion. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Updates the ingress apply loop to allow NoMatch errors to continue rather than panicking, matching the documented behavior in the comment. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Prevents potential collisions by using all bytes of the NodeId scope instead of truncating to 16 bytes when deriving view op and edge IDs. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Adds tests for mutation behavior (create/update/delete ops), reachability filtering, and state_root determinism. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Reverts the erroneous "zero-edge nodes" hashing block that was mistakenly added. The legacy compute_state_root does NOT hash nodes with zero outgoing edges separately - they're only hashed in the nodes section, not in the edges section. This restores parity with the legacy implementation. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The "general use" framing was immediately contradicted by a list of specialized domains. Reframes as "any project where determinism and state integrity matter" which accurately bridges to the examples. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Use placeholder to prevent double-escaping in escape_latex - Simplify unnecessary negative lookbehind regex - Check xelatex return code, not just PDF existence - Add type annotation to main() - Remove extraneous f-string prefix Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Corrects the contradictory claims about scope_hash bit width and total sorting bits to match the actual implementation. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Replaces debug_assert_eq with a runtime check that fails deterministically in all builds when conflicting ops share the same sort_key. This prevents silent non-determinism in release builds. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
rule_matching.rs does not exist; the correct file is rule.rs. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
snapshot_accum: - compute_reachability() builds adjacency index once (O(E)) then BFS with O(1) lookups instead of full edge scan per node - edge_atts_index now iterates edges vector (not self.edges.keys()) - Added test_from_warp_state_roundtrip and test_build_idempotence echo-dind-tests: - drop_ball_rule: minimal footprint (ball node + attachment only) warp-core/tests/common: - todo!() replaced with safe zeroed placeholder docs: - AGENTS.md: timestamp → <ISO-8601 timestamp> placeholder - CHANGELOG.md: restructured, consolidated Phase 6B, removed dupes - TECH-DEBT-BOAW.md: scope_hex example updated to 32 bytes - .markdownlint.json: MD007 indent=4 to align with Prettier Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
dind.rs: - Add sync_all() after golden file write for crash consistency - Matches repro bundle durability guarantees (lines 173-175, 183-185) tick_patch.rs: - Expand WarpOpKey rustdoc with Invariants and Usage sections - Documents total ordering, deduplication semantics, and consumers Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
.githooks/pre-commit:
- Use git diff --cached -z --diff-filter=ACMRT to exclude deletions
- Load into array with mapfile -d '' for whitespace-safe filenames
- Expand as "${MD_FILES[@]}" for proper quoting
- Remove || true from prettier --write to surface real failures
docs/notes/pr-257-review-issues.md:
- Deleted: contained outdated issues, 11 of 13 already fixed in HEAD
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Changed >/dev/null 2>&1 to >/dev/null so prettier errors (config issues, missing files) are visible instead of silently swallowed. The exit code checking remains intact for proper flow control. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The boaw_small_scenario_serial_parallel_equivalence test was only checking commit_hash; now checks all three hashes (state_root, patch_digest, commit_hash) for full parity like the other end-to-end tests. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
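A sketch of the strengthened check, with the struct shape assumed from the commit message rather than taken from the real harness:

```rust
// Hypothetical shape: the three per-tick commitments the test now compares.
#[derive(Debug, PartialEq)]
struct TickHashes {
    state_root: [u8; 32],
    patch_digest: [u8; 32],
    commit_hash: [u8; 32],
}

// Checking all three surfaces exactly which commitment diverged when a
// parity failure occurs, instead of a single opaque commit_hash mismatch.
fn assert_full_parity(serial: &TickHashes, parallel: &TickHashes) {
    assert_eq!(serial.state_root, parallel.state_root, "state_root diverged");
    assert_eq!(serial.patch_digest, parallel.patch_digest, "patch_digest diverged");
    assert_eq!(serial.commit_hash, parallel.commit_hash, "commit_hash diverged");
}
```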
…ions

- Export WarpOpKey from warp-core public API
- Change WarpOp::sort_key() from pub(crate) to pub
- Update merge_conflict_reports_correct_key test to use direct WarpOpKey comparison instead of brittle Debug string matching

This enables external code to verify deterministic key ordering and makes tests resilient to Debug format changes.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
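The shape of that test change, sketched with a plain Ord key standing in for WarpOpKey (in the real test the keys would come from merged.iter().map(|op| op.sort_key())):

```rust
// Assert the merged op sequence is non-decreasing by its sort key, rather
// than string-matching Debug output that can change formatting at any time.
fn assert_canonical_order<K: Ord + std::fmt::Debug>(keys: &[K]) {
    assert!(
        keys.windows(2).all(|pair| pair[0] <= pair[1]),
        "ops are not in canonical key order: {keys:?}"
    );
}

fn main() {
    assert_canonical_order(&[1u32, 2, 2, 5]); // non-decreasing: passes
}
```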
Added discoverable TODO/FIXME comments with tracking ID T7.1 to four OpenPortal scheduling placeholder tests:

- openportal_creates_valid_child_warp
- openportal_footprint_declares_child_warp_write
- openportal_multiple_child_warps_same_tick
- openportal_same_attachment_slot_conflict

The #[ignore] attributes and unimplemented!() bodies remain in place as spec markers until scheduling infrastructure lands.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The two_warps_256_shard_coverage_stress test was documented as running 10k rewrites but only executed 5k. Additionally, the second store creation was redundant since make_large_test_store uses the same ID format, producing identical node IDs. Updated documentation to reflect actual behavior (5k ops) and removed the unused second store. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Added entries for:

- API changes: WarpOpKey and sort_key() now public
- Fixed: pre-commit stderr, hash checks, WarpOpKey assertion, stress test
- Docs: TODO(T7.1) tracking for OpenPortal tests

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Changed WarpOpKey doc reference to merge_deltas from intra-doc link to plain text (merge_deltas is feature-gated, breaks doc build)
- Regenerated DIND golden hashes after API changes

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Actionable comments posted: 20
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
crates/echo-dind-harness/src/dind.rs (1)
177-189: Durability inconsistency: diff.txt and scenario.eintlog are not synced.

You've diligently added sync_all() for both JSON files, but std::fs::write (line 189) and std::fs::copy (line 158) do not guarantee data reaches stable storage — they only guarantee the kernel buffer is flushed upon close, not that it survives a power failure. If the goal is crash-proof reproduction bundles (and given you're calling this "Ironclad Nightmare Drills," I assume it is), this is an incomplete fix.
🔧 Proposed fix for consistent durability
 // 1. Copy scenario
-std::fs::copy(scenario_path, out_dir.join("scenario.eintlog"))
-    .context("failed to copy scenario")?;
+{
+    let dest = out_dir.join("scenario.eintlog");
+    std::fs::copy(scenario_path, &dest).context("failed to copy scenario")?;
+    File::open(&dest)
+        .and_then(|f| f.sync_all())
+        .context("failed to sync scenario.eintlog")?;
+}

And for diff.txt:

 // 4. Write diff.txt
-std::fs::write(out_dir.join("diff.txt"), failure_msg).context("failed to write diff.txt")?;
+{
+    let diff_path = out_dir.join("diff.txt");
+    let mut f_diff = File::create(&diff_path).context("failed to create diff.txt")?;
+    std::io::Write::write_all(&mut f_diff, failure_msg.as_bytes())
+        .context("failed to write diff.txt")?;
+    f_diff.sync_all().context("failed to sync diff.txt")?;
+}
🤖 Fix all issues with AI agents
In @.githooks/pre-commit:
- Around line 133-150: The markdown formatter branch for the _auto_fmt case
currently only formats-then-aborts when enabled, causing disabled/unknown values
to skip checks; update the case for _auto_fmt in the .githooks/pre-commit to
mirror the Rust logic by adding branches for disabled (0|false|... ) and a
default/unknown branch that run `npx prettier --check "${MD_FILES[@]}"` and exit
non-zero on failures (i.e., fail the hook if files are unformatted) so that
MD_FILES, the case on _auto_fmt, and the existing prettier --write/--check
behavior are consistent across enabled/disabled/unknown values.
- Around line 126-129: The process substitution used to populate MD_FILES
(mapfile -d '' MD_FILES < <(git diff --cached --name-only -z --diff-filter=ACMRT
-- '*.md')) can hide git diff failures; change this to write git diff output to
a temp file or use readarray with explicit exit-code capture, then check git
diff's exit status and fail the hook if non-zero so markdown linting isn't
silently skipped; update the hook so after running git diff you test its $? (or
the captured exit code) and only proceed to run npx when the command succeeded,
otherwise exit non-zero with a clear error message (referencing the MD_FILES
array population logic and the git diff invocation).
In @.markdownlint.json:
- Line 5: The allowed_elements array contains an invalid HTML tag "u8" (key:
"allowed_elements") — replace "u8" with the correct underline element "u" (or
remove the entry if underline shouldn't be allowed) so the linter permits the
valid <u> element rather than a nonstandard token.
In `@crates/warp-core/src/snapshot_accum.rs`:
- Around line 554-572: The current code clones edge_list with edge_list.clone()
before sorting (in the block that builds out_edges for each node_id); replace
that clone with edge_list.to_vec() for clearer intent (or, if you prefer a
slightly more optimal refactor, sort when constructing edges_from so this
per-node sort is unnecessary). Update the loop that iterates sorted to use the
new Vec from to_vec() and keep the same behavior for pushing OutEdgeRef into
out_edges and Range into out_index.
- Line 840: Remove or update the stale inline comment "// More tests will be
added as we integrate with the engine." in
crates/warp-core/src/snapshot_accum.rs; locate that exact comment string and
either delete it or replace it with a current summary that reflects that
comprehensive tests have been added (e.g., "Comprehensive tests added for engine
integration"), ensuring the comment accurately describes the following test
block.
- Around line 1-4: The file-level inner attribute `#![allow(dead_code)]` in
snapshot_accum.rs silences dead-code warnings for the entire module; change it to an
outer/module-level allow or scope it to specific items: remove the
`#![allow(dead_code)]` line from snapshot_accum.rs and either add
`#[allow(dead_code)]` to the module declaration (`mod snapshot_accum;`) in
lib.rs if you intend the whole module to be allowed, or annotate individual
unused functions/structs inside snapshot_accum.rs (e.g., `#[allow(dead_code)]`
on the specific function/struct names) so the allowance is narrowly scoped.
- Around line 827-1351: Add a consensus-parity test that constructs the same
graph in a WarpState and in a SnapshotAccumulator and asserts
compute_state_root(...) from snapshot.rs equals SnapshotAccumulator::build(...).
Specifically: create a small WarpState (using WarpState, WarpInstance,
NodeRecord, EdgeRecord) with a known root NodeKey and schema_hash/tick, call
snapshot::compute_state_root(&warp_state, schema_hash, tick) and also produce a
SnapshotAccumulator with the identical data (either by applying the same WarpOp
sequence via SnapshotAccumulator::apply_ops or by a helper to build from
WarpState) then call SnapshotAccumulator::build(&root_key, schema_hash, tick)
and assert the two state_root values are equal; reference compute_state_root,
SnapshotAccumulator::build, WarpState, WarpOp, and NodeKey to locate code to
modify.
In `@crates/warp-core/tests/boaw_end_to_end.rs`:
- Around line 111-150: The loop in
boaw_conflicts_scenario_deterministic_across_permutations uses a hardcoded 50
iterations which is inconsistent with SHUFFLE_ITERATIONS used elsewhere; either
replace the hardcoded 50 with the existing SHUFFLE_ITERATIONS constant to keep
behavior uniform (update the for _ in 0..50 to for _ in 0..SHUFFLE_ITERATIONS)
or, if conflicts intentionally require more iterations, extract a named constant
(e.g., SHUFFLE_ITERATIONS_CONFLICTS) and use it with a brief comment explaining
why it differs so the intent is explicit.
In `@crates/warp-core/tests/boaw_merge_tripwire.rs`:
- Around line 450-507: The test currently verifies canonical ordering by
extracting AttachmentKey values, but merge_deltas orders by WarpOpKey; update
the assertion to compare each operation's WarpOp::sort_key() instead: iterate
over the merged Vec<WarpOp>, call op.sort_key() for each (matching on
WarpOp::SetAttachment only if needed) to produce a Vec of sort keys, then assert
that the sequence is non-decreasing (keys[i-1] <= keys[i]) to ensure canonical
ordering by WarpOpKey; reference the merged variable, WarpOp::SetAttachment
pattern and the WarpOp::sort_key() method to locate and change the test.
In `@crates/warp-core/tests/boaw_stress_multiwarp.rs`:
- Around line 439-441: The comment above the combined_serial/merge_deltas block
is incorrect: identical operations are deduped by merge_deltas, so the expected
merged op count is 100, not 200; update the comment to reflect that
deduplication occurs and that merge_deltas(vec![combined_serial]) yields 100
ops. Refer to execute_serial, combined_serial, and merge_deltas when making the
change.
- Around line 111-132: The test function two_warps_256_shard_coverage_stress is
misnamed and its comments refer to a non-existent second warp/store; rename the
function to something like one_warp_256_shard_coverage_stress (or
single_warp_256_shard_coverage_stress) and update comments around store1/nodes1
(the block using GraphView::new(&store1) and make_exec_items_for_warp(&nodes1,
0)) to reflect that only one store/warp is used (remove or rewrite the "second
store's nodes are just for ID generation" remark and the "Use the first store
for execution" phrasing), and scan for any other references to two_warps_* in
tests to update accordingly.
In `@docs/adr/ADR-0007-BOAW-Storage.md`:
- Around line 189-196: The nested markdown lists under the "Critical rule" block
are indented by 2 spaces causing markdownlint MD007 failures; update those
nested list items (the lines mentioning adjacency buckets `edges_from[node]`,
`edges_to[node]`, the AttachmentKey/dedicated target keys bullet, and the two
sub-bullets for indexes/caches) to use 4-space indentation for each nesting
level so they render as proper sublists; apply the same 4-space fix to the other
flagged list blocks in ADR-0007-BOAW-Storage.md.
In `@docs/adr/TECH-DEBT-BOAW.md`:
- Around line 27-45: Update the ADR to resolve the timeline/priority mismatch
for removing execute_parallel_stride(): change the action and priority to match
the intended validation plan by either (A) downgrading the removal from P1 to P2
if deletion must wait for production validation, or (B) splitting into two
staged tasks (P1: add observability/metrics and instrument
execute_parallel_stride()/sharded execution for validation; P2: delete
execute_parallel_stride() behind the parallel-stride-fallback feature after
production confirmation). Reference execute_parallel_stride(), the
parallel-stride-fallback feature gate, and the "Delete Stride Fallback
(Post-Release)" action when updating the ADR text to make the timeline and
priority consistent.
- Around line 48-65: Replace all hard-coded line-number references in this ADR
(e.g., "crates/echo-dind-tests/src/rules.rs:591") with durable references: use
the function name emit_view_op_delta() where appropriate, or annotate with a
commit SHA (e.g., "as of commit abc123") or PR reference (e.g., "as of PR
`#257`"), or simply omit the line number; apply this change to every occurrence
called out in the comment (the instances around lines 50, 72, 122, 172, 196) so
the document no longer relies on brittle file:line references.
- Around line 237-247: The Summary Statistics table under "## Summary
Statistics" has inconsistent per-item effort math (P3 per-item > P2) — either
re-estimate each item at item-level granularity (list explicit per-item
estimates for the P1/P2/P3 rows, e.g., "P2: Item A 2d, Item B 1d, Item C 3d") or
remove the numeric hours/days and replace with T-shirt sizing (S/M/L) in the
table and adjust the recommendation paragraph accordingly; update the table rows
for P1/P2/P3 and the following "**Recommendation:**" text to reflect the chosen
approach and ensure "Cross-Warp Parallelism" (and any other heavy items) is
assigned a multi-day estimate or an L/XL size.
- Around line 79-84: The snippet in the ADR is incomplete and misleading: within
the for loop over by_warp the call to execute_parallel_sharded(view, &items,
workers) actually performs per-warp parallel work but the snippet omits what
happens to the returned deltas (e.g., merge/apply) and mislabels the loop as
"Serial iteration across warps". Fix by either (A) expanding the snippet to show
how deltas is used (merge/apply/accumulate) after execute_parallel_sharded and
updating the comment to reflect per-warp parallelism, referencing the loop
variables by_warp and warp_rewrites and the returned deltas, or (B) remove the
code block entirely and keep only the prose that explains the intended behavior
of execute_parallel_sharded and how deltas are combined.
- Around line 14-22: The "Priority Legend" contains an unused P0 level; either
remove the P0 row from the Priority Legend table or update existing ADR items to
include P0 where appropriate. Locate the "Priority Legend" section and either
delete the line/row starting with "**P0**" (and adjust the table alignment) or
scan ADR entries and change their priority cells to "**P0**" for any
blocking/correctness-risk items so the legend matches actual usage.
- Around line 170-191: The ADR's example references a non-existent helper
make_node_id_from_parts; update the ADR to either (A) state explicitly that the
optimization requires adding a new helper (e.g., implement
make_node_id_from_parts or make_node_id_from_bytes in the ident module alongside
make_node_id) or (B) show a compilable alternative using the existing API by
producing a String (e.g., encode scope to hex into a String via hex::encode or
write! into a pre-allocated String) and pass that to make_node_id(label: &str);
mention which option is being demonstrated and keep the code example consistent
with the selected approach and current crate API (make_node_id in
crates/warp-core/src/ident.rs).
In `@docs/book/echo/sections/20-tour-rule-matching.tex`:
- Around line 27-74: Update the call trace and commentary to match the current
apply pipeline: replace references to rule.match_at/view matches and match_ix
with the actual matcher-based flow (Engine::apply -> self.rules.get(rule_name)
-> rule.matcher(view, scope) -> compute scope_hash ->
scheduler.enqueue(PendingRewrite { tx, rule_id: rule.id(), compact_rule: ...,
scope_hash, scope, footprint, phase }) and remove any mention of match_ix; also
correct the file reference from crates/warp-core/src/rules.rs to the file that
defines the matcher (e.g., crates/warp-core/src/rule.rs or wherever rule.matcher
is implemented) and update the PendingRewrite field list and commentary to
reflect the current fields (scope_hash, compact_rule, phase) instead of
match_ix.
In `@docs/book/echo/sections/22-tour-delta-merge.tex`:
- Around line 13-41: Update the doc to reflect the real merge_deltas API and
semantics: change the signature from returning TickDelta to merge_deltas(...) ->
Result<Vec<WarpOp>, MergeError> and remove the claim that “lower OpOrigin wins”;
instead state that when operations share the same WarpOpKey (WarpOpKey =
(op_kind, target_id)) but have conflicting effects, merge_deltas returns a
MergeError (conflicts error out); keep that operations are sorted
deterministically by (WarpOpKey, OpOrigin) for stable processing but do not
imply tie-breaking by OpOrigin to resolve conflicts, and adjust any references
to TickDelta, ops/origins collection, and deduplication accordingly (mention
WarpOp, OpOrigin, WarpOpKey, MergeError, and merge_deltas so readers can locate
the code).
♻️ Duplicate comments (4)
crates/echo-dind-tests/src/rules.rs (2)
593-648: Mark deprecated helper with a real #[deprecated] attribute.
Right now it’s only a comment; this should warn at compile time.

♻️ Recommended change
-#[allow(dead_code)]
+#[deprecated(
+    since = "0.2.0",
+    note = "Use emit_view_op_delta_scoped for parallel-safe determinism"
+)]
+#[allow(dead_code)]
 fn emit_view_op_delta(
535-591: Avoid per-byte format allocations in scope_hex.
This is hot-path friendly and easy to tighten without changing behavior.

♻️ Suggested micro-optimization
-    let scope_hex: String = scope.0.iter().map(|b| format!("{:02x}", b)).collect();
+    use std::fmt::Write as _;
+    let mut scope_hex = String::with_capacity(64);
+    for b in scope.0 {
+        let _ = write!(&mut scope_hex, "{:02x}", b);
+    }

crates/warp-core/src/snapshot_accum.rs (1)
408-439: Forward-only reachability traversal: missing clarifying comment. Previous review requested a comment explaining why only outgoing edges are followed (per ADR-0007 §4.1). The code at lines 408-439 still lacks this inline documentation. Add a brief comment above the edge iteration.
 while let Some(current) = queue.pop_front() {
-    // Follow edges from this node using the adjacency index
+    // Follow only outgoing edges for forward reachability per ADR-0007 §4.1.
+    // Reverse edges are not part of the reachability definition.
     if let Some(outgoing_edges) = edges_from.get(&(current.warp_id, current.local_id)) {

crates/warp-core/src/lib.rs (1)
108-115: Public re-exports must carry rustdoc comments; will fail cargo clippy --all-targets -- -D missing_docs.

Lines 104-115 re-export public APIs without any documentation. Each public re-export — execute_parallel_stride, execute_parallel, execute_serial, shard_of, ExecItem, MergeConflict, NUM_SHARDS, merge_deltas, and MergeError — must have a /// rustdoc comment explaining its purpose, usage, and any invariants. This is non-negotiable per the project's mandatory guidelines.

The feature flags themselves (delta_validate, parallel-stride-fallback) are correctly documented in Cargo.toml, but the conditional re-exports gating them are silent. Document all of them before merge.
# 9) Markdown formatting and linting for staged .md files
# Use -z and --diff-filter=ACMRT to exclude deletions and handle whitespace safely
mapfile -d '' MD_FILES < <(git diff --cached --name-only -z --diff-filter=ACMRT -- '*.md')
if [[ ${#MD_FILES[@]} -gt 0 ]] && command -v npx >/dev/null 2>&1; then
🧹 Nitpick | 🔵 Trivial
Process substitution swallows git diff failures.
< <(git diff ...) does not propagate the subshell's exit code—pipefail is ineffective for process substitution. If git diff fails (corrupt index, signal, OOM), the hook silently skips markdown linting instead of failing fast.
Capture and check explicitly:
🔧 Proposed hardening
-mapfile -d '' MD_FILES < <(git diff --cached --name-only -z --diff-filter=ACMRT -- '*.md')
+MD_FILES=()
+if ! mapfile -d '' MD_FILES < <(git diff --cached --name-only -z --diff-filter=ACMRT -- '*.md'); then
+ echo "pre-commit: failed to enumerate staged markdown files" >&2
+ exit 1
+fi

Alternatively, use a temp file or readarray with explicit exit-code capture if you want belt-and-suspenders.
| case "${_auto_fmt}" in | ||
| 1|true|TRUE|yes|YES|on|ON) | ||
| # Check formatting; suppress stdout (file list) but preserve stderr for errors | ||
| if npx prettier --check "${MD_FILES[@]}" >/dev/null; then | ||
| : # Already formatted | ||
| else | ||
| echo "pre-commit: running prettier on markdown files" >&2 | ||
| # Write changes; suppress stdout but preserve stderr for errors | ||
| if ! npx prettier --write "${MD_FILES[@]}" >/dev/null; then | ||
| echo "pre-commit: prettier failed" >&2 | ||
| exit 1 | ||
| fi | ||
| echo "pre-commit: prettier updated markdown files. Aborting commit to preserve index integrity." >&2 | ||
| echo "Hint: review changes, restage (e.g., 'git add -p' or 'git add -A'), then commit again." >&2 | ||
| exit 1 | ||
| fi | ||
| ;; | ||
| esac |
Asymmetric behavior: disabled/unknown _auto_fmt skips prettier entirely.
The Rust formatting section (lines 72-78) has three branches:
- enabled → format-and-abort-if-changed
- disabled → check-only (fail if unformatted)
- unknown → check-only
This markdown section only has branch (1). When ECHO_AUTO_FMT=0 or an unrecognized value, prettier --check never runs—malformatted markdown silently passes the hook while Rust would fail. Either this is intentional (and should be documented), or you need parity:
🔧 Proposed fix for parity
case "${_auto_fmt}" in
1|true|TRUE|yes|YES|on|ON)
# Check formatting; suppress stdout (file list) but preserve stderr for errors
if npx prettier --check "${MD_FILES[@]}" >/dev/null; then
: # Already formatted
else
echo "pre-commit: running prettier on markdown files" >&2
# Write changes; suppress stdout but preserve stderr for errors
if ! npx prettier --write "${MD_FILES[@]}" >/dev/null; then
echo "pre-commit: prettier failed" >&2
exit 1
fi
echo "pre-commit: prettier updated markdown files. Aborting commit to preserve index integrity." >&2
echo "Hint: review changes, restage (e.g., 'git add -p' or 'git add -A'), then commit again." >&2
exit 1
fi
;;
+ 0|false|FALSE|no|NO|off|OFF)
+ # Check-only mode: fail if markdown is not formatted
+ if ! npx prettier --check "${MD_FILES[@]}" >/dev/null; then
+ echo "pre-commit: markdown formatting check failed. Run: npx prettier --write <files>" >&2
+ exit 1
+ fi
+ ;;
+ *)
+ # Unknown value → safest is check-only
+ if ! npx prettier --check "${MD_FILES[@]}" >/dev/null; then
+ echo "pre-commit: markdown formatting check failed. Run: npx prettier --write <files>" >&2
+ exit 1
+ fi
+ ;;
esac
| "MD013": false, | ||
| "MD033": { | ||
| "allowed_elements": ["u8", "br", "p", "img"] | ||
| "allowed_elements": ["u8", "br", "p", "img", "a", "strong", "sub"] |
🧩 Analysis chain
🏁 Script executed:
# First, let's look at the .markdownlint.json file
cat .markdownlint.json

Repository: flyingrobots/echo
Length of output: 256
🌐 Web query:
What is the HTML u element and is it valid HTML5?
💡 Result:
The HTML <u> element marks a span of inline text that should be rendered as having a non‑textual, “unarticulated” annotation (visually underlined by default). It was deprecated in HTML4 but reinstated in HTML5 with that semantic meaning — so yes, <u> is valid in HTML5. Use it only for that semantic purpose (e.g., marking misspellings or certain proper‑name conventions); for purely presentational underlining use CSS (text-decoration). [1][2]
Sources:
[1] MDN Web Docs — <u> element.
[2] HTML5 specification — the u element.
🌐 Web query:
Is u8 a valid HTML element in HTML5 specification?
💡 Result:
No. The HTML5/WHATWG spec and the canonical element lists do not include an element named <u8>; only elements like <u> (underline) exist. [1][2]
Sources:
[1] WHATWG HTML Standard — element reference. (html.spec.whatwg.org)
[2] MDN Web Docs — HTML elements reference (alphabetical). (developer.mozilla.org)
Citations:
- 1: https://html.spec.whatwg.org/dev/grouping-content.html?utm_source=openai
- 2: https://developer.mozilla.org/en-US/docs/Web/HTML/Element?utm_source=openai
u8 is not a valid HTML element — this is a typo.
<u8> does not exist in the HTML5 specification (confirmed against WHATWG). You likely meant <u> — the underline element, which is a valid HTML5 element with semantic meaning for unarticulated annotations (e.g., misspellings).
u8 sitting in the allowed elements list does exactly nothing useful; browsers render unknown elements as inline spans with zero semantics. The linter will silently allow <u8> tags that contribute no real value.
🔧 Fix
"MD033": {
- "allowed_elements": ["u8", "br", "p", "img", "a", "strong", "sub"]
+ "allowed_elements": ["u", "br", "p", "img", "a", "strong", "sub"]
}
// SPDX-License-Identifier: Apache-2.0
// © James Ross Ω FLYING•ROBOTS <https://github.com/flyingrobots>

#![allow(dead_code)]
#![allow(dead_code)] is an inner attribute — it blankets the entire module, not just the items that need it.
Line 4 uses #! (inner attribute syntax), which means this allow(dead_code) covers every item in snapshot_accum.rs, silencing the lint even for code you may want flagged. You almost certainly want #[allow(dead_code)] (outer attribute) at the module declaration in lib.rs, or better, apply it only to specific items within this file.
🐛 Fix: Use outer attribute or scoped allows
Option A: Remove line 4 entirely and annotate specific unused items:
-#![allow(dead_code)]

Option B: Move to lib.rs at module declaration (if intentional for entire module):
#[allow(dead_code)]
mod snapshot_accum;
let start = out_edges.len() as u64;
if let Some(edge_list) = edges_from.get(&node_id) {
    // Sort by EdgeId for canonical ordering
    let mut sorted: Vec<_> = edge_list.clone();
    sorted.sort_by_key(|(eid, _)| *eid);

    for (edge_id, edge_ix) in sorted {
        out_edges.push(OutEdgeRef {
            edge_ix_le: (edge_ix as u64).to_le(),
            edge_id: edge_id.0,
        });
    }
}
let len = out_edges.len() as u64 - start;
out_index.push(Range {
    start_le: start.to_le(),
    len_le: len.to_le(),
});
}
🧹 Nitpick | 🔵 Trivial
Edge list clone for sorting still present — minor inefficiency.
Line 557 clones edge_list just to sort it:
let mut sorted: Vec<_> = edge_list.clone();

This was flagged in a previous review. While the fix suggestion to restructure iteration was complex, you could at least use edge_list.to_vec() for clarity (same cost but more idiomatic) or move sorting into the edges_from map construction. Low priority since edge counts per node are typically small.
♻️ Avoid clone by consuming edges_from
- for key in self.nodes.keys() {
+ // Convert edges_from to owned for in-place sorting
+ let mut edges_from_owned: BTreeMap<NodeId, Vec<(EdgeId, usize)>> = edges_from;
+
+ for key in self.nodes.keys() {
if key.warp_id != warp_id || !reachable_nodes.contains(key) {
continue;
}
let node_id = key.local_id;
let start = out_edges.len() as u64;
- if let Some(edge_list) = edges_from.get(&node_id) {
- let mut sorted: Vec<_> = edge_list.clone();
- sorted.sort_by_key(|(eid, _)| *eid);
+ if let Some(mut edge_list) = edges_from_owned.remove(&node_id) {
+ edge_list.sort_by_key(|(eid, _)| *eid);
- for (edge_id, edge_ix) in sorted {
+ for (edge_id, edge_ix) in edge_list {
```rust
for (warp_id, warp_rewrites) in by_warp {
    // Serial iteration across warps
    let deltas = execute_parallel_sharded(view, &items, workers);
}
```
🧹 Nitpick | 🔵 Trivial
Code snippet provides incomplete context.
The snippet shows:
for (warp_id, warp_rewrites) in by_warp {
// Serial iteration across warps
let deltas = execute_parallel_sharded(view, &items, workers);
}But this is misleading because:
- The comment "Serial iteration across warps" implies this is the problem, but...
execute_parallel_sharded()is called inside the loop, which actually provides per-warp parallelism- The snippet doesn't show what happens to
deltasafter it's computed
Either show the complete logic (including how deltas are merged/applied) or remove the snippet entirely and rely on the prose explanation. Incomplete code snippets breed confusion.
### 8. Reduce `format!` Allocations in `emit_view_op_delta_scoped`

**Location:** `crates/echo-dind-tests/src/rules.rs:556-558`

**Issue:** The current implementation uses `format!` and string collection:

```rust
let scope_hex: String = scope.0.iter().map(|b| format!("{:02x}", b)).collect();
let op_id = make_node_id(&format!("sim/view/op:{}", scope_hex));
```

**Opportunity:** Use a fixed-size buffer and `write!` to avoid heap allocations:

```rust
let mut buf = [0u8; 64];
hex::encode_to_slice(&scope.0, &mut buf).unwrap();
let op_id = make_node_id_from_parts(b"sim/view/op:", &buf);
```

**Rationale:** P3 because this is in test rules (DIND), not production code, and the allocation overhead is negligible compared to graph operations.
🧩 Analysis chain
🏁 Script executed:
# Search for make_node_id_from_parts function definition
rg -n "make_node_id_from_parts" --type rustRepository: flyingrobots/echo
Length of output: 43
🏁 Script executed:
# Also check the current implementation in the referenced file
rg -n "fn make_node_id" --type rust -A 3Repository: flyingrobots/echo
Length of output: 339
🏁 Script executed:
# Look at the actual code in rules.rs around line 556-558
fd "rules.rs" --type f -x cat -n {} | grep -A 10 -B 10 "emit_view_op_delta_scoped\|scope_hex"Repository: flyingrobots/echo
Length of output: 3716
Suggested code example references non-existent function that will not compile.
The optimization suggestion (lines 183–187) proposes using make_node_id_from_parts, which does not exist in the codebase. The only available function is make_node_id(label: &str) in crates/warp-core/src/ident.rs:93, which requires a string parameter, not binary slices.
Either:
- Clearly mark this as requiring a new helper function to be implemented, or
- Provide a working alternative using the existing
make_node_id API:
let scope_hex: String = scope.0.iter().map(|b| format!("{:02x}", b)).collect();
let op_id = make_node_id(&format!("sim/view/op:{}", scope_hex));

An ADR's code examples must be compilable or explicitly flagged as pseudo-code/aspirational design.
## Summary Statistics

| Priority | Count | Estimated Effort |
| -------- | ----- | ---------------- |
| P1       | 2     | ~2 hours         |
| P2       | 3     | ~2-4 days        |
| P3       | 4     | ~1-2 weeks       |

**Recommendation:** Address P1 items in the next cleanup pass. P2 items should be data-driven (benchmark first, then optimize). P3 items are exploratory and should only be pursued if profiling reveals bottlenecks.
🧹 Nitpick | 🔵 Trivial
Effort estimates don't align with item complexity.
The estimates claim:
- P1 (2 items): ~2 hours → 1 hour per item
- P2 (3 items): ~2-4 days → 0.7-1.3 days per item
- P3 (4 items): ~1-2 weeks → 2-3 days per item
This is backwards. Per-item effort for P3 (2-3 days each) exceeds P2 (less than 1 day each), yet P2 is supposedly "medium effort" vs. P3 "nice to have."
Either:
- Re-estimate with item-level granularity (P2 Item 3 "Cross-Warp Parallelism" alone justifies multi-day effort), or
- Remove specific hour/day estimates and use T-shirt sizing (S/M/L/XL)
Shipping obviously wrong math in an ADR undermines credibility.
\begin{verbatim}
Engine::apply(tx, rule_name, scope)
 |
 +-- self.rules.get(rule_name) → Option<&Rule>
 |   FILE: crates/warp-core/src/rules.rs
 |   Returns None if rule not registered
 |
 +-- rule.match_at(view, scope) → Vec<Match>
 |   FILE: crates/warp-core/src/rule.rs
 |   |
 |   +-- FOR pattern IN rule.patterns:
 |       |
 |       +-- pattern.try_match(view, scope) → Option<Match>
 |       |   Evaluates pattern predicates against graph
 |       |
 |       +-- IF matched: matches.push(Match { ... })
 |
 +-- FOR (idx, m) IN matches.enumerate():
     |
     +-- self.scheduler.enqueue(PendingRewrite {
           tx,
           rule_id: rule.id(),
           scope: *scope,
           match_ix: idx,
           footprint: m.footprint.clone(),
         })
\end{verbatim}

\section{PendingRewrite Structure}

\textbf{File:} \texttt{crates/warp-core/src/scheduler.rs}

\begin{lstlisting}[language=Rust]
pub(crate) struct PendingRewrite {
    pub rule_id: Hash,               // Identifier of the rule
    pub compact_rule: CompactRuleId, // In-process handle
    pub scope_hash: Hash,            // For deterministic ordering
    pub scope: NodeKey,              // Scope node from apply()
    pub footprint: Footprint,        // Independence checks
    pub phase: RewritePhase,         // State machine phase
}
\end{lstlisting}

\begin{commentary}
The \texttt{match\_ix} field is subtle but important. A single rule can match multiple times at the same scope (e.g., a ``connect all neighbors'' rule). Each match becomes a separate \texttt{PendingRewrite} with a different \texttt{match\_ix}.

This index is part of the canonical sort key, ensuring deterministic ordering even when multiple matches occur.
\end{commentary}
Call trace and commentary no longer match the actual apply pipeline.
Lines 27–53 still describe rule.match_at/matches and enqueue match_ix, and Lines 70–74 explain a match_ix field that no longer exists. The real flow uses rule.matcher(view, scope) -> bool, computes scope_hash, and enqueues PendingRewrite without match_ix. Also, rules.rs is a wrong file reference for this path.
✏️ Suggested doc alignment
-+-- self.rules.get(rule_name) → Option<&Rule>
-| FILE: crates/warp-core/src/rules.rs
+-- self.rules.get(rule_name) → Option<&RewriteRule>
+| FILE: crates/warp-core/src/engine_impl.rs (apply_in_warp)
@@
-+-- rule.match_at(view, scope) → Vec<Match>
-| FILE: crates/warp-core/src/rule.rs
-| |
-| +-- FOR pattern IN rule.patterns:
-| |
-| +-- pattern.try_match(view, scope) → Option<Match>
-| | Evaluates pattern predicates against graph
-| |
-| +-- IF matched: matches.push(Match { ... })
+-- rule.matcher(view, scope) → bool
+| FILE: crates/warp-core/src/rule.rs
@@
-+-- FOR (idx, m) IN matches.enumerate():
- |
- +-- self.scheduler.enqueue(PendingRewrite {
- tx,
- rule_id: rule.id(),
- scope: *scope,
- match_ix: idx,
- footprint: m.footprint.clone(),
- })
+ +-- if matched: compute footprint + enqueue PendingRewrite {
+ rule_id: rule.id,
+ compact_rule: CompactRuleId,
+ scope_hash: scope_hash(...),
+ scope: NodeKey,
+ footprint: compute_footprint(...),
+ phase: Matched,
+ })
@@
-\begin{commentary}
-The \texttt{match\_ix} field is subtle but important. ...
-\end{commentary}
+% match_ix no longer exists in PendingRewrite; remove or replace with current fields.🤖 Prompt for AI Agents
In `@docs/book/echo/sections/20-tour-rule-matching.tex` around lines 27 - 74,
Update the call trace and commentary to match the current apply pipeline:
replace references to rule.match_at/view matches and match_ix with the actual
matcher-based flow (Engine::apply -> self.rules.get(rule_name) ->
rule.matcher(view, scope) -> compute scope_hash ->
scheduler.enqueue(PendingRewrite { tx, rule_id: rule.id(), compact_rule: ...,
scope_hash, scope, footprint, phase }) and remove any mention of match_ix; also
correct the file reference from crates/warp-core/src/rules.rs to the file that
defines the matcher (e.g., crates/warp-core/src/rule.rs or wherever rule.matcher
is implemented) and update the PendingRewrite field list and commentary to
reflect the current fields (scope_hash, compact_rule, phase) instead of
match_ix.
\textbf{Entry Point:} \texttt{merge\_deltas()} \\
\textbf{File:} \texttt{crates/warp-core/src/boaw/merge.rs}

\begin{verbatim}
merge_deltas(deltas: Vec<TickDelta>) → TickDelta
 |
 +-- let mut all_ops: Vec<(WarpOp, OpOrigin)> = Vec::new()
 |
 +-- FOR delta IN deltas:
 |   +-- all_ops.extend(delta.ops.into_iter().zip(delta.origins))
 |
 +-- all_ops.sort_by_key(|(op, origin)| {
 |     (op.sort_key(), origin.clone())
 |   })
 |   |
 |   +-- Sort key: (WarpOpKey, OpOrigin)
 |       +-- WarpOpKey: (op_kind, target_id)
 |       +-- OpOrigin: (intent_id, rule_id, match_ix, op_ix)
 |
 +-- Dedupe identical ops (same key + same effect)
 |
 +-- RETURN TickDelta { ops, origins }
\end{verbatim}

\begin{commentary}
The sort key is crucial. \texttt{WarpOpKey} groups operations by kind and target (e.g., all \texttt{SetAttachment} ops on node X come together). \texttt{OpOrigin} breaks ties deterministically---the rewrite that was admitted first (lower intent/rule/match IDs) wins.

Why dedupe? Multiple workers might emit identical operations (e.g., two rules both setting the same attribute to the same value). We keep only one.
merge_deltas signature/semantics are wrong in the doc.
The real API is merge_deltas(...) -> Result<Vec<WarpOp>, MergeError> and conflicts error out; no “lower OpOrigin wins” behavior. This is a functional misdescription.
✏️ Suggested correction
-merge_deltas(deltas: Vec<TickDelta>) → TickDelta
+merge_deltas(deltas: Vec<TickDelta>) → Result<Vec<WarpOp>, MergeError>
@@
-+-- Dedupe identical ops (same key + same effect)
-|
-+-- RETURN TickDelta { ops, origins }
++-- Dedupe identical ops (same key + same effect)
++-- If same-key ops differ → Err(MergeError::Conflict)
++-- If same-tick write to new warp → Err(MergeError::WriteToNewWarp)
++-- RETURN Vec<WarpOp>
@@
-OpOrigin breaks ties deterministically---the rewrite that was admitted first ... wins.
+OpOrigin is used for deterministic ordering and error attribution; conflicting
+same-key ops are rejected rather than “winning.”
BOAW: Best Of All Worlds
This is what programming without concurrency problems feels like. Lockless, mathematically safe. BOOM!
Lil' humble brag: Just built the thing almost nobody builds correctly: a parallel system that's deterministic by construction.
Most engines pick two:
We're taking all four by doing the adult thing:
This is the "free money" because once you have canonical merge, scaling workers doesn't change truth---it only changes throughput.
Summary (Phase 6B)
- execute_parallel_sharded() into engine_impl.rs::apply_reserved_rewrites()
- emit_view_op_delta_scoped() derives IDs from intent scope
- ECHO_WORKERS env var and EngineBuilder::workers(n) for configuration
- TECH-DEBT-BOAW.md
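A hedged sketch of the configuration precedence implied by that list (only the ECHO_WORKERS name and EngineBuilder::workers(n) come from this PR; the resolution order shown is an assumption): an explicit builder value wins, then the env var, then a serial fallback.

```rust
// Assumed precedence: builder setting > ECHO_WORKERS > serial default.
// Because results are deterministic by construction, this knob only
// affects throughput, never the produced hashes.
fn resolve_workers(builder_setting: Option<usize>) -> usize {
    builder_setting
        .or_else(|| {
            std::env::var("ECHO_WORKERS")
                .ok()
                .and_then(|v| v.parse::<usize>().ok())
        })
        .unwrap_or(1)
}

fn main() {
    std::env::set_var("ECHO_WORKERS", "4");
    assert_eq!(resolve_workers(None), 4);
    assert_eq!(resolve_workers(Some(8)), 8);
}
```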
Test plan

- cargo test -p warp-core --features delta_validate
- cargo test -p echo-dind-harness (golden hashes regenerated)

Closes #258