
Conversation

rickstaa (Member) commented Jan 12, 2026

What does this pull request do? Explain your changes. (required)

Allow a 64–96 byte auxData field to support additional on-chain metadata. This was part of the protocol work to distinguish different job types on-chain (see this tech spec: https://www.notion.so/On-Chain-Ticket-Job-Type-Distinction-2bf660222d08814d89b1fa7221ad3291).

This proposal was originally drafted as a protocol change that would not materially increase gas costs or code complexity. However, given the new requirement to also encode payment clearinghouse information on-chain, please review this approach with @mehrdadmms, @yondonfu, and @j0sh to confirm whether this is still the preferred solution.

Specific updates (required)

  • Extended the auxData field to accept up to 96 bytes.
  • Updated tests.

How did you test each of these updates (required)

Not fully tested, but I made sure the existing tests pass.

Does this pull request close any open issues?

Checklist:

  • README and other documentation updated
  • All tests using yarn test pass

rickstaa force-pushed the feat/extend-auxdata branch from e0df306 to 9c71618 on January 12, 2026 at 11:44
Allow 64–96 byte auxData to support additional on-chain metadata.

Co-authored-by: Yondon Fu <yondon.fu@gmail.com>
rickstaa force-pushed the feat/extend-auxdata branch from 9c71618 to 8f548fa on January 12, 2026 at 12:24

codecov bot commented Jan 12, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 100.00000%. Comparing base (d03671e) to head (8f548fa).

Additional details and impacted files

@@               Coverage Diff               @@
##                delta         #652   +/-   ##
===============================================
  Coverage   100.00000%   100.00000%           
===============================================
  Files              29           29           
  Lines            1331         1331           
  Branches          223          223           
===============================================
  Hits             1331         1331           
Files with missing lines Coverage Δ
contracts/pm/mixins/MixinTicketProcessor.sol 100.00000% <100.00000%> (ø)

Continue to review full report in Codecov by Sentry.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update d03671e...8f548fa.



j0sh commented Jan 29, 2026

Thanks for checking in on this @rickstaa

Question: does the auxData need a strict upper bound check? Is there an issue with allowing unbounded auxData through? That would allow us to add more data to tickets as needed, without on-chain changes.

The on-chain clearinghouse data is still TBD but I imagine it'd be somewhere in the neighborhood of 8-32 additional bytes, depending on the final encoding. We'd probably want to encode one or two fields, eg a clearinghouse "customer ID" and maybe a "user ID" of the customer's own customer.

We will probably need some way of signaling which types of metadata are encoded in the auxData, because ordinary gateways don't need to publish clearinghouse data that isn't relevant to them.

One simple way of signaling is to use a bit string indicating which metadata types are present. This can be a variable-length encoding, eg using the high bit as a continuation flag. So the overhead for that metadata signaling would be just 1 byte, to begin with. If we ever have more than 127 types of metadata, then we can set the high bit and add another byte.

The interpretation of that bit string (and the metadata bytes associated with each bit) can happen off-chain; I don't think this is anything the on-chain protocol needs to be aware of.
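
To make that concrete, here's a rough Go sketch of how an off-chain client might read such a flag string (illustrative only; the package and function name are made up, and nothing like this exists in the clients today):

```go
// Hypothetical helper for parsing the proposed metadata flag string.
package auxdata

import "errors"

// parseMetadataFlags reads a variable-length flag string from the front of
// data. The low 7 bits of each byte carry flag bits; the high bit signals
// that another flag byte follows. It returns the accumulated flags and the
// number of bytes consumed. A uint64 comfortably holds the single flag byte
// we'd start with (and up to 9 flag bytes total).
func parseMetadataFlags(data []byte) (flags uint64, n int, err error) {
	for i, b := range data {
		flags |= uint64(b&0x7f) << (7 * uint(i))
		n++
		if b&0x80 == 0 { // high bit clear: this was the last flag byte
			return flags, n, nil
		}
	}
	return 0, 0, errors.New("truncated metadata flag string")
}
```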

The actual metadata fields would naturally need to be encoded in the same order they are signaled, and there would need to be consensus among clients (eg, a registry) to keep new fields from stomping on one another.

So for example:

On-chain job type: 0b1
Clearinghouse customer ID: 0b10
Clearinghouse user ID: 0b100

Encoding all these would be:

<Current 64-byte auxData>
0b00000111 ( 0x07 )
<32-bit job metadata>
<8-byte clearinghouse customer ID>
<8-byte clearinghouse user ID>
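
And a matching Go sketch of how a sender could assemble that layout (again just an illustration: the flag constants and field widths come from the example above, while the helper name and byte order are my own choices for the sketch):

```go
// Hypothetical encoder for the proposed auxData extension.
package auxdata

import "encoding/binary"

// Flag bits from the example registry above (not finalized).
const (
	FlagJobType          = 0b001 // 32-bit job metadata
	FlagClearinghouseCID = 0b010 // 8-byte clearinghouse customer ID
	FlagClearinghouseUID = 0b100 // 8-byte clearinghouse user ID
)

// buildAuxData appends the 1-byte flag string and the signaled fields to the
// current 64-byte auxData. With all three example fields present the result
// is 64 + 1 + 4 + 8 + 8 = 85 bytes, inside the proposed 96-byte cap.
func buildAuxData(base []byte, jobMeta uint32, customerID, userID uint64) []byte {
	out := make([]byte, 0, 96)
	out = append(out, base...) // current 64-byte auxData
	out = append(out, byte(FlagJobType|FlagClearinghouseCID|FlagClearinghouseUID)) // 0b00000111 (0x07)
	out = binary.BigEndian.AppendUint32(out, jobMeta)    // 32-bit job metadata
	out = binary.BigEndian.AppendUint64(out, customerID) // clearinghouse customer ID
	out = binary.BigEndian.AppendUint64(out, userID)     // clearinghouse user ID
	return out
}
```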
