
Conversation

@YannByron
Contributor

Purpose

Linked issue: close #2376

Brief change log

Tests

API and Format

Documentation

@YannByron force-pushed the main-spark branch 2 times, most recently from 02d49b2 to cbd062d on January 18, 2026 14:44
Member

@wuchong left a comment


@YannByron thanks, I left some comments.

Comment on lines 132 to 135
FlussUpsertInputPartition(
tableBucket,
snapshotIdOpt.getAsLong,
logOffsetOpt.getAsLong
Member


Since this is a batch InputPartition, we should add an end offset to make the log split bounded. The latest end offset can be obtained from the OffsetsInitializer.latest().getBucketOffsets(..) method.

We should:

  1. Fetch the latest kvSnapshots; this is a map<bucket, snapshot_id & log_start_offset>.
  2. Fetch the latest offsets from OffsetsInitializer.latest; this is a map<bucket, log_end_offset>.
  3. Join the kvSnapshots with the OffsetsInitializer.latest result to generate the input partition list, one partition per bucket.
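
Roughly, that join could look like the following sketch. It assumes the two inputs have already been flattened into plain maps keyed by bucket id, that FlussUpsertInputPartition is extended with a stopping-offset field, and that BucketSnapshot and the sentinel values are hypothetical stand-ins rather than real client types:

```scala
// Hypothetical holder for the per-bucket snapshot metadata.
case class BucketSnapshot(snapshotId: Long, logStartOffset: Long)

def planUpsertPartitions(
    kvSnapshots: Map[Int, BucketSnapshot], // bucket -> (snapshot_id, log_start_offset)
    latestOffsets: Map[Int, Long]          // bucket -> log_end_offset, from OffsetsInitializer.latest()
): Seq[FlussUpsertInputPartition] = {
  latestOffsets.toSeq.sortBy(_._1).map { case (bucket, endOffset) =>
    kvSnapshots.get(bucket) match {
      case Some(snap) =>
        // Bucket has a snapshot: read it, then the log in [logStartOffset, endOffset).
        FlussUpsertInputPartition(bucket, snap.snapshotId, snap.logStartOffset, endOffset)
      case None =>
        // No snapshot yet: read the log from the beginning up to endOffset
        // (-1L / 0L are placeholder sentinels, not real constants).
        FlussUpsertInputPartition(bucket, -1L, 0L, endOffset)
    }
  }
}
```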

Contributor Author

@YannByron Jan 21, 2026


OK, then we should move OffsetsInitializer-related code to fluss-client first.

}

// Poll for more log records
val scanRecords: ScanRecords = logScanner.poll(POLL_TIMEOUT)
Member


logScanner.poll() is a best-effort API: it may return an empty result due to transient issues (e.g., network glitches) even when unread log records remain on the server. Therefore, we should poll in a loop until we reach the known end offset.

The end offset should be determined at job startup using OffsetsInitializer.latest().getBucketOffsets(...), which gives us the high-watermark for each bucket at the beginning of the batch job.

Since there’s no built-in API to read a bounded log split, we must manually:

  • Skip any records with offsets beyond the precomputed end offset, and
  • Signal that there is no next record once all buckets have reached their respective end offsets.
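
A minimal sketch of such a bounded poll loop, assuming the stopping offsets were captured at planning time via OffsetsInitializer.latest(); the ScanRecords accessors used here (buckets(), records(bucket), logOffset()) are assumptions about the client API, not its exact shape:

```scala
import scala.collection.mutable
import scala.jdk.CollectionConverters._

// Drain the log scanner until every bucket has reached its precomputed end offset.
def pollUntilBounded(
    logScanner: LogScanner,
    stoppingOffsets: Map[TableBucket, Long],
    pollTimeout: java.time.Duration): Seq[ScanRecord] = {
  val finished = mutable.Set.empty[TableBucket]
  val buffered = mutable.ArrayBuffer.empty[ScanRecord]
  while (finished.size < stoppingOffsets.size) {
    // Best-effort poll: an empty result does not mean we are done.
    val scanRecords = logScanner.poll(pollTimeout)
    scanRecords.buckets().asScala.foreach { bucket =>
      val endOffset = stoppingOffsets(bucket)
      scanRecords.records(bucket).asScala.foreach { record =>
        // Skip anything at or beyond the bound; it belongs to a later batch.
        if (record.logOffset() < endOffset) buffered += record
        // The bucket is exhausted once we have seen the last offset before the bound.
        if (record.logOffset() >= endOffset - 1) finished += bucket
      }
    }
  }
  buffered.toSeq
}
```

A real reader would also need to mark buckets whose start offset already equals their end offset as finished up front, and could unsubscribe buckets as they complete; those details are omitted here.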

Contributor Author


Just to confirm: is BatchScanner.pollBatch() also a best-effort API?

Member


@YannByron the difference is that LogScanner is unbounded and BatchScanner is bounded. So when org.apache.fluss.client.table.scanner.log.LogScanner#poll returns an empty ScanRecords, it doesn't mean the source is finished. When BatchScanner.pollBatch() returns a null iterator, it means it has reached the end of the batch.
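
Illustratively, the two loop shapes differ like this; the pollBatch signature, the iterator type, and the helper names are assumptions made only to show the contrast in termination conditions:

```scala
// Unbounded changelog scan: an empty poll is NOT a termination signal,
// so the reader must stop on an externally tracked end offset.
var done = false
while (!done) {
  val records = logScanner.poll(POLL_TIMEOUT) // may be empty even though data remains on the server
  done = reachedAllEndOffsets(records)        // hypothetical bookkeeping against the planned bounds
}

// Bounded batch scan: a null result IS the termination signal.
var batch = batchScanner.pollBatch(POLL_TIMEOUT) // signature assumed for illustration
while (batch != null) {
  batch.forEachRemaining(row => handleRow(row))  // handleRow is a placeholder
  batch = batchScanner.pollBatch(POLL_TIMEOUT)
}
```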

Contributor Author


Understood. Thank you.

logRecords = bucketRecords.iterator()
if (logRecords.hasNext) {
val scanRecord = logRecords.next()
currentRow = convertToSparkRow(scanRecord)
Member


The LogRecord is a changelog that contains -D (delete) and -U (update-before) records. To produce a consistent view, we need to merge these changes with the KV snapshot data in a union-read fashion—just like how we combine data lake snapshots with changelogs.

Fortunately, the KV snapshot scan is already sorted by primary key. We can leverage this by:

  1. Materializing the delta changes into a temporary delta table;
  2. Sorting the delta table by primary key using org.apache.fluss.row.encode.KeyEncoder#of(...);
  3. Performing a sort-merge between the sorted KV snapshot reader and the sorted delta table reader.

This enables an efficient and correct merge without requiring random lookups or hash-based joins.
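
A sketch of the merge step, assuming both sides are exposed as iterators of (encodedKey, row) pairs sorted by the same KeyEncoder output, that the snapshot holds at most one entry per key, and that the last delta entry for a key decides its fate; isDelete stands in for the -D change-type check and is not a real API:

```scala
import java.util.Arrays
import org.apache.spark.sql.catalyst.InternalRow

// Merge a sorted KV snapshot with sorted, buffered changelog entries into one consistent view.
def sortMergeUnionRead(
    snapshotIter: Iterator[(Array[Byte], InternalRow)], // sorted, one entry per key
    deltaIter: Iterator[(Array[Byte], InternalRow)],    // sorted by the same key encoding
    isDelete: InternalRow => Boolean): Iterator[InternalRow] = new Iterator[InternalRow] {

  private val snap = snapshotIter.buffered
  private val delta = deltaIter.buffered
  private var pending: InternalRow = _

  // Collapse consecutive delta entries for one key down to the last (winning) change.
  private def lastDeltaForKey(): (Array[Byte], InternalRow) = {
    var cur = delta.next()
    while (delta.hasNext && Arrays.equals(delta.head._1, cur._1)) cur = delta.next()
    cur
  }

  private def advance(): Unit = {
    pending = null
    while (pending == null && (snap.hasNext || delta.hasNext)) {
      if (!delta.hasNext) {
        pending = snap.next()._2                              // snapshot-only tail
      } else if (!snap.hasNext) {
        val (_, row) = lastDeltaForKey()                      // delta-only tail
        if (!isDelete(row)) pending = row
      } else {
        val cmp = Arrays.compareUnsigned(snap.head._1, delta.head._1)
        if (cmp < 0) pending = snap.next()._2                 // key only in the snapshot
        else if (cmp > 0) {                                   // key only in the delta
          val (_, row) = lastDeltaForKey()
          if (!isDelete(row)) pending = row
        } else {                                              // key in both: the delta wins
          snap.next()
          val (_, row) = lastDeltaForKey()
          if (!isDelete(row)) pending = row
        }
      }
    }
  }

  advance()
  override def hasNext: Boolean = pending != null
  override def next(): InternalRow = { val row = pending; advance(); row }
}
```

The delta side has to be fully buffered and sorted before the merge starts, which is where the spill concern discussed below comes from.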

Contributor Author


So, I should fetch and keep all the changes between the starting offset and the stopping offset since the delta changes are not sorted, then sort them, and execute a sort-merge with the kv snapshot, right?
And if there are too many changes, spilling will be necessary in the future.

Member


@YannByron you are right. But the Flink connector also hasn't implemented spill logic yet.

@wuchong merged commit 29a071b into apache:main Jan 21, 2026
14 of 16 checks passed

Development

Successfully merging this pull request may close these issues.

[spark] Support Spark Batch Read
