Fix: Large table causes sql error #2242
base: feature/speedup-value-formatting
Conversation
enjeck left a comment:
Testing locally: with 10k rows, I don't notice any speedups before vs. after. How are you measuring the timing? And 100k rows fails for both.
```php
$row->setLastEditAt($sleeve['last_edit_at']);
$row->setTableId((int)$sleeve['table_id']);

$cachedCells = json_decode($sleeve['cached_cells'] ?? '{}', true);
```
Are we falling back to the normal way somewhere else if nothing is returned from cached_cells?
No, it's not needed: we cache all rows during the migration (see OCA\Tables\Migration\CacheSleeveCells) and we update cached_cells on every cell edit. So I have no use case where cached_cells can be missing or outdated.
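For illustration, the read path described above could look like this minimal sketch; decodeCachedCells() is a hypothetical helper name, not code from this PR:

```php
// Sketch only: cached_cells is assumed to hold a JSON map of
// columnId => cached cell value, refreshed on every cell edit and
// backfilled for existing rows by OCA\Tables\Migration\CacheSleeveCells.
// decodeCachedCells() is an illustrative helper, not actual PR code.
private function decodeCachedCells(array $sleeve): array {
    // the '{}' default keeps json_decode() from receiving null for rows
    // where the column is absent
    $cells = json_decode($sleeve['cached_cells'] ?? '{}', true);
    // guard against malformed JSON, which json_decode() returns as null
    return is_array($cells) ? $cells : [];
}
```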
```php
foreach ($columnIds as $columnId) {
    $columns[$columnId] = $this->columnMapper->find($columnId);
    $mappers[$columnId] = $this->columnsHelper->getCellMapperFromType($columns[$columnId]->getType());
```
Looks like we run into the n+1 problem?
Not sure this is a big problem here: we call ColumnMapper::preloadColumns() before the Row2Mapper::getRows() call, and it memoizes the loaded columns.
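To make the memoization point concrete, here is a rough sketch of the pattern, assuming preloadColumns() fills an in-memory cache that find() consults first; findAllByIds() and loadFromDb() are illustrative names, not the actual mapper code:

```php
// Illustrative memoization sketch, not the real ColumnMapper implementation.
class ColumnMapper {
    /** @var array<int, Column> in-memory cache of already loaded columns */
    private array $columnCache = [];

    public function preloadColumns(array $columnIds): void {
        // one batched query loads all requested columns up front...
        foreach ($this->findAllByIds($columnIds) as $column) {
            $this->columnCache[$column->getId()] = $column;
        }
    }

    public function find(int $columnId): Column {
        // ...so later per-column lookups are served from memory and the
        // foreach loop in the diff above does not issue one query per
        // column (no n+1)
        return $this->columnCache[$columnId] ??= $this->loadFromDb($columnId);
    }
}
```

With that in place, the loop in the diff only touches the database for columns that were not preloaded.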
hey @enjeck!
I'm really surprised that you haven't seen any performance improvement with my PRs. I've spent some time and recorded a screencast for you with the comparison:
nextcloud-tables-performance-2026-01-27_17.42.33.mp4
I'm sorry, but are you sure that you switched to the correct branch? Can you share the xlsx file with the table data, so I can import it on my instance and re-test? BTW, since we're talking about importing large files, I highly recommend merging another PR of mine, #1801 😃
This PR is built on top of #2238.
Closes #1490.
It fixes loading of large tables. Right now it is not possible to load a table with 30k rows and 8 columns due to an error:
The PR contains two commits: the fix itself and a performance improvement.
I've measured latency for various scenarios for the /row/table/{tableId} endpoint:

Next step: move filtering/sorting/pagination to the BE side and speed up the FE. But this is a completely separate topic which will be implemented in an upcoming PR.
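As a side note on the fix itself: a common cause of SQL errors when loading tens of thousands of rows at once is exceeding the database's limit on bound parameters in a single statement. Whether or not that is the exact failure here, the usual remedy is to chunk the ID list; a minimal sketch under that assumption (findRows() and the chunk size are illustrative, not code from this PR):

```php
// Hedged sketch: split a huge row-ID list into chunks so no single query
// binds more placeholders than the database allows. findRows() and the
// 1000-row chunk size are illustrative assumptions.
private function findRowsChunked(array $rowIds): array {
    $rows = [];
    foreach (array_chunk($rowIds, 1000) as $chunk) {
        // each query now binds at most 1000 parameters
        $rows = array_merge($rows, $this->findRows($chunk));
    }
    return $rows;
}
```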