
Conversation

@0xbigapple
Owner

What does this PR do?

  • Optimize memory allocation for the JSON-RPC filter interface (eth_newFilter, eth_newBlockFilter)
  • Add the config item node.jsonrpc.maxBlockFilterNum
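
For reference, a minimal sketch of how the new item might appear in the node config file (the HOCON layout follows java-tron's config.conf convention; the default value shown is an assumption, not taken from the PR):

```hocon
node {
  jsonrpc {
    # Hypothetical default; the PR defines the actual default value.
    maxBlockFilterNum = 1000
  }
}
```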

Why are these changes required?

This PR has been tested by:

  • Unit Tests
  • Manual Testing

Follow up

Extra details


@github-actions github-actions bot left a comment


📋 Review Summary

This pull request introduces optimizations for the JSON-RPC filter interface by adding caching to reduce memory allocation, which is a significant improvement. It also adds a configurable limit for the number of block filters. The changes are well-implemented, but there are a few minor areas for improvement regarding resource management and code consistency.

🔍 General Feedback

  • Good Caching Implementation: The use of Guava's Cache to reduce object creation for LogFilterElement and block hashes is a good performance enhancement.
  • New Exception: A new exception JsonRpcExceedLimitException was added but not used. It should be used to provide more specific error information.
  • Resource Cleanup: Not all new caches are being invalidated in the close() method, which could lead to memory leaks.
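
The caching idea under discussion can be sketched with a stdlib stand-in (the PR itself uses Guava's Cache; the ConcurrentHashMap here only illustrates the interning pattern and lacks Guava's maximumSize/expiry eviction, and the Element record is a hypothetical simplification of LogFilterElement):

```java
import java.util.concurrent.ConcurrentHashMap;

// Stdlib stand-in for the PR's Guava-based interning cache.
public class InterningSketch {

  // Hypothetical element with the value semantics the cache relies on
  // (equals/hashCode over all fields, which a record provides).
  public record Element(String blockHash, int logIndex) {}

  private static final ConcurrentHashMap<Element, Element> CACHE = new ConcurrentHashMap<>();

  // Returns a canonical instance, so equal elements share one object
  // instead of each RPC response allocating its own copy.
  public static Element intern(Element e) {
    Element prev = CACHE.putIfAbsent(e, e);
    return prev != null ? prev : e;
  }

  public static void main(String[] args) {
    Element a = intern(new Element("0xabc", 0));
    Element b = intern(new Element("0xabc", 0));
    System.out.println(a == b); // prints true: both lookups yield the same instance
  }
}
```

Guava's Cache adds bounded size and time-based eviction on top of this, which is why the PR's choice is the safer one for a long-running node.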

return result;
}

@Override


🟡 It seems like `blockHashCache` is not invalidated in the `close()` method, which could lead to a memory leak. You should invalidate it along with `logElementCache`.
Suggested change
logElementCache.invalidateAll();
blockHashCache.invalidateAll();

} else {
blockFilter2Result = blockFilter2ResultSolidity;
}
if (blockFilter2Result.size() >= maxBlockFilterNum) {


🟡 You've introduced a new exception `JsonRpcExceedLimitException`, but you're throwing `JsonRpcInvalidParamsException` here. It would be more consistent and specific to use the new exception you've created.
Suggested change
if (blockFilter2Result.size() >= maxBlockFilterNum) {
  throw new JsonRpcExceedLimitException(
      "exceed max block filters: " + maxBlockFilterNum + ", try again later");
}

} catch (ExecutionException e) {
logger.error("Getting/loading blockHash from cache failed", e); //never happen
cachedBlockHash = originalBlockHash;
}


🟢 The comment `//never happen` isn't very descriptive. It would be better to explain *why* it will never happen. For example, you could explain that the callable for the cache does not throw a checked exception.
Suggested change
// The callable cannot throw a checked exception, so this should never happen.
logger.error("Getting/loading blockHash from cache failed", e);

Comment on lines +514 to +515
result = result + (removed ? 1 : 0);
return result;


🟢 The `hashCode()` implementation can be made more concise and readable by using `java.util.Objects.hash()`.
Suggested change
@Override
public int hashCode() {
  return Objects.hash(blockHash, transactionHash, transactionIndex, logIndex, removed);
}

@0xbigapple 0xbigapple closed this Jan 20, 2026
@0xbigapple 0xbigapple reopened this Jan 20, 2026

@github-actions github-actions bot left a comment


📋 Review Summary

This PR introduces memory optimizations for JSON-RPC filters by implementing caching for LogFilterElement and block hashes, which is a solid improvement. It also adds a limit to the number of block filters to prevent resource exhaustion.

🔍 General Feedback

  • Unused Exception: The new JsonRpcExceedLimitException class is added but not utilized in the implementation.
  • Scope Creep: There are several changes unrelated to the PR's stated objective, such as the P2P service startup order change in TronNetService.java and new unrelated tests in NodePersistServiceTest.java and ShieldedReceiveTest.java. These should ideally be separated out or explicitly justified.
  • Production Code Comments: There are "TODO fix test" comments added to production code (e.g., SnapshotManager.java) which indicate that production code is being modified to workaround test issues.
  • Configuration: Consider exposing the cache sizes as configuration parameters for better tunability.

blockFilter2Result = blockFilter2ResultSolidity;
}
if (blockFilter2Result.size() >= maxBlockFilterNum) {
throw new JsonRpcInvalidParamsException(


🟡 The class JsonRpcExceedLimitException was added in this PR but is not used here. You are throwing JsonRpcInvalidParamsException instead.

If you intend to use JsonRpcExceedLimitException, please ensure you:

  1. Update this throw statement.
  2. Update the newBlockFilter method signature in TronJsonRpc.java to throw JsonRpcExceedLimitException.
  3. Add the corresponding @JsonRpcError annotation in TronJsonRpc.java.

private static final int maxBlockFilterNum = Args.getInstance().getJsonRpcMaxBlockFilterNum();
private static final Cache<LogFilterElement, LogFilterElement> logElementCache =
CacheBuilder.newBuilder()
.maximumSize(300_000L) // 300s * tps(1000) * 1 log/tx ≈ 300_000


🟢 The cache maximum sizes are hardcoded. Consider making these configurable via Args or CommonParameter, similar to jsonRpcMaxBlockFilterNum. This would provide flexibility for nodes with different memory constraints.
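
The pattern this comment suggests could look like the following sketch; JsonRpcCacheConfig, the property key, and the fallback mirroring the hardcoded 300_000 are hypothetical names for illustration, not java-tron's actual Args API:

```java
import java.util.Properties;

// Hypothetical config holder: reads the cache size from configuration with a
// fallback default instead of hardcoding it in the cache builder.
public class JsonRpcCacheConfig {
  private final long logElementCacheSize;

  public JsonRpcCacheConfig(Properties props) {
    // Falls back to the PR's hardcoded 300_000 when the key is absent.
    this.logElementCacheSize =
        Long.parseLong(props.getProperty("node.jsonrpc.logElementCacheSize", "300000"));
  }

  public long getLogElementCacheSize() {
    return logElementCacheSize;
  }

  public static void main(String[] args) {
    Properties props = new Properties();
    props.setProperty("node.jsonrpc.logElementCacheSize", "50000");
    System.out.println(new JsonRpcCacheConfig(props).getLogElementCacheSize()); // prints 50000
  }
}
```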


private void deleteCheckpoint() {
if(checkTmpStore == null) {
// only occurs in mock test. TODO fix test


🟢 Adding comments about mock tests in production code is generally discouraged. If this null check is required for safety in production, the comment might be misleading. If it is only for tests, the tests should ideally be fixed to set up the environment correctly rather than patching production code.

effectiveCheckService.init();

// Move P2P service start to the end
setP2pConfig(getConfig());


🟡 This change alters the startup order of services and seems unrelated to the PR's objective of optimizing memory allocation for JSON-RPC. While postponing P2P start might be a valid fix, it should ideally be in a separate PR or clearly documented in the PR description to ensure it gets proper visibility and testing regarding side effects.

}

@Test
public void testIsMining() {


🟢 This test case seems unrelated to the PR's scope.

@0xbigapple 0xbigapple closed this Jan 20, 2026
@0xbigapple 0xbigapple reopened this Jan 20, 2026

@github-actions github-actions bot left a comment


📋 Review Summary

The PR successfully implements memory optimization for JSON-RPC filters by introducing a caching mechanism for LogFilterElement and block hashes, and it adds a configurable limit to the number of block filters. These are valuable improvements for stability and resource usage.

🔍 General Feedback

  • Scope & Risk: There is a significant change to the service startup sequence in TronNetService that appears unrelated to the PR's objective. This poses a potential regression risk and should be scrutinized or reverted.
  • Unrelated Changes: Several files (NodePersistServiceTest, ShieldedReceiveTest) contain changes that do not align with the PR title. It is best practice to keep PRs focused on a single responsibility.
  • Defensive Coding: Some null checks were added to production code explicitly to fix test failures ("only occurs in mock test"). While safe, it is preferable to fix the tests themselves to avoid polluting production logic.

Comment on lines +102 to +105
// Move P2P service start to the end
setP2pConfig(getConfig());
p2pService.start(p2pConfig);
p2pService.register(p2pEventHandler);


🟠 This change modifies the service startup sequence by moving p2pService initialization to the end. This falls outside the scope of the PR's objective ("Optimize memory allocation for JSON-RPC").

Changing the startup order can have significant side effects if other services (like PeerManager, syncService, etc.) rely on the P2P service being active during their initialization.

Please revert this change unless it is strictly necessary for the memory optimization, in which case, please provide a detailed explanation of the dependency.


@Override
public void close() throws IOException {
logElementCache.invalidateAll();


🟢 For consistency and to ensure all resources are released, consider invalidating the blockHashCache here as well.

Suggested change
logElementCache.invalidateAll();
blockHashCache.invalidateAll();

@@ -0,0 +1,40 @@
package org.tron.core.net.service.nodepersist;


🟢 This new test file appears unrelated to the PR's objective of optimizing JSON-RPC memory allocation. It would be better to include this in a separate PR to keep changes focused.


private void deleteCheckpoint() {
if(checkTmpStore == null) {
// only occurs in mock test. TODO fix test


🟢 While this check prevents the test failure, leaving a TODO in production code regarding a test fix is technical debt. It is recommended to fix the test setup to ensure checkTmpStore is properly initialized rather than patching the production code.

}
int queueSize = 0;
if (eventListeners == null || eventListeners.isEmpty()) {
//only occurs in mock test. TODO fix test


🟢 Similar to the comment in SnapshotManager, try to address the root cause in the test environment instead of modifying production code with test-specific fixes.

@0xbigapple 0xbigapple closed this Jan 20, 2026
@0xbigapple 0xbigapple reopened this Jan 20, 2026

@github-actions github-actions bot left a comment


📋 Review Summary

This PR introduces a configurable limit for JSON-RPC block filters to prevent excessive memory usage, which is a good improvement. The use of Guava's Cache for managing filter elements and block hashes is also a solid choice for performance and memory management.

🔍 General Feedback

  • The PR includes some unrelated changes (e.g., in TronNetService, and some test files). It would be best to move these to separate PRs to keep this one focused on the JSON-RPC filter optimization.
  • There are some minor improvements that can be made, like using the newly created exception and simplifying some code.
  • Please address the new TODO comments.

Overall, the core changes are good, but the PR would be stronger if it was more focused.


🟢 The new test file NodePersistServiceTest.java seems unrelated to the PR's goal, which is to optimize memory allocation for the filter interface of JSON-RPC. Please consider moving this to a separate PR to keep this PR focused.

public JsonRpcExceedLimitException(String message, Throwable cause) {
super(message, cause);
}
}
\ No newline at end of file


🟢 Nitpick: Please add a newline at the end of the file.


🟢 The changes in this file, moving the P2P service start to the end, seem unrelated to the PR's goal of optimizing memory allocation for the JSON-RPC filter interface. Could you please clarify why these changes are included in this PR or move them to a separate one?

Comment on lines +1446 to 1447
}


🟡 You've created a specific exception JsonRpcExceedLimitException, but here you are throwing a generic JsonRpcInvalidParamsException. It would be more consistent to use the new exception.

Suggested change
throw new JsonRpcExceedLimitException(
    "exceed max block filters: " + maxBlockFilterNum + ", try again later");


🟢 This new test testIsMining seems unrelated to the PR's goal of optimizing the JSON-RPC filter interface. Please consider moving it to a separate, more relevant PR.

Comment on lines +482 to +517

@Override
public boolean equals(Object o) {
if (this == o) {
return true;
}
if (o == null || this.getClass() != o.getClass()) {
return false;
}
LogFilterElement item = (LogFilterElement) o;
if (!Objects.equals(blockHash, item.blockHash)) {
return false;
}
if (!Objects.equals(transactionHash, item.transactionHash)) {
return false;
}
if (!Objects.equals(transactionIndex, item.transactionIndex)) {
return false;
}
if (!Objects.equals(logIndex, item.logIndex)) {
return false;
}
return removed == item.removed;
}

@Override
public int hashCode() {
int result = 0;
result = 31 * result + (blockHash == null ? 0 : blockHash.hashCode());
result = 31 * result + (transactionHash == null ? 0 : transactionHash.hashCode());
result = 31 * result + (transactionIndex == null ? 0 : transactionIndex.hashCode());
result = 31 * result + (logIndex == null ? 0 : logIndex.hashCode());
result = result + (removed ? 1 : 0);
return result;
}


🟢 The equals and hashCode implementations can be simplified using java.util.Objects.

Suggested change
@Override
public boolean equals(Object o) {
  if (this == o) {
    return true;
  }
  if (o == null || getClass() != o.getClass()) {
    return false;
  }
  LogFilterElement that = (LogFilterElement) o;
  return removed == that.removed && Objects.equals(blockHash, that.blockHash)
      && Objects.equals(transactionHash, that.transactionHash)
      && Objects.equals(transactionIndex, that.transactionIndex)
      && Objects.equals(logIndex, that.logIndex);
}

@Override
public int hashCode() {
  return Objects.hash(blockHash, transactionHash, transactionIndex, logIndex, removed);
}

}

private void deleteCheckpoint() {
if(checkTmpStore == null) {


🟢 It's good practice to address TODO comments. Could you please fix the test as the comment suggests? If not in this PR, please create a ticket to track this.

@0xbigapple 0xbigapple closed this Jan 20, 2026
@0xbigapple 0xbigapple reopened this Jan 20, 2026

@github-actions github-actions bot left a comment


📋 Review Summary

This pull request introduces performance optimizations for JSON-RPC filter interfaces by adding caching for log elements and block hashes, and it also imposes a configurable limit on the number of active block filters to prevent resource exhaustion. Additionally, the PR includes some unrelated changes, such as new tests and reordering of service initialization.

🔍 General Feedback

  • The use of Guava caches is a good approach for optimizing memory and performance. The configuration and usage patterns are generally sound.
  • The addition of a limit for block filters is a valuable safeguard for the node's stability.
  • The PR could be improved by separating the unrelated changes (e.g., new tests for different packages, P2P service initialization reordering) into their own pull requests for better review focus and clearer commit history.
  • The newly implemented hashCode method in LogFilterElement is functional but could be simplified and made more robust by using Objects.hash().
  • A new exception JsonRpcExceedLimitException was added but not used. This appears to be dead code.
  • Remember to invalidate all caches in the close() method.

public JsonRpcExceedLimitException(String message, Throwable cause) {
super(message, cause);
}
}
\ No newline at end of file


🟡 This new exception class is not used anywhere in the codebase. In TronJsonRpcImpl.java, a JsonRpcInvalidParamsException is thrown when the block filter limit is exceeded.

Consider either using this new exception there (and updating the interface and error mappings accordingly) or removing this class to avoid dead code.

Comment on lines +509 to +516
int result = 0;
result = 31 * result + (blockHash == null ? 0 : blockHash.hashCode());
result = 31 * result + (transactionHash == null ? 0 : transactionHash.hashCode());
result = 31 * result + (transactionIndex == null ? 0 : transactionIndex.hashCode());
result = 31 * result + (logIndex == null ? 0 : logIndex.hashCode());
result = result + (removed ? 1 : 0);
return result;
}


🟡 The custom hashCode implementation could be improved. While it is functional, using java.util.Objects.hash() would be more concise, less error-prone, and is generally considered a best practice for generating hash codes from multiple fields.

Suggested change
@Override
public int hashCode() {
  return Objects.hash(blockHash, transactionHash, transactionIndex, logIndex, removed);
}

return result;
}

@Override


🟢 For consistency and to ensure all cache resources are released upon closing, you should also invalidate blockHashCache.

Suggested change
logElementCache.invalidateAll();
blockHashCache.invalidateAll();

@0xbigapple 0xbigapple closed this Jan 20, 2026
@0xbigapple 0xbigapple reopened this Jan 20, 2026

@github-actions github-actions bot left a comment


📋 Review Summary

This pull request effectively optimizes memory allocation for JSON-RPC filters by introducing a configurable limit and using a cache for filter elements. The core logic is sound and should improve performance and stability under heavy loads.

🔍 General Feedback

  • Good Optimization: The use of Guava's Cache is a good choice for managing the lifecycle of filter elements and block hashes, preventing potential memory leaks.
  • Unrelated Changes: The PR includes some changes that seem unrelated to the main goal, such as modifications to the P2P service startup and the addition of unrelated tests. It's generally better to keep PRs focused on a single concern.
  • Code Clarity: The new code is generally clear and well-written. Using the newly defined JsonRpcExceedLimitException would further improve the clarity of error handling.

} else {
blockFilter2Result = blockFilter2ResultSolidity;
}
if (blockFilter2Result.size() >= maxBlockFilterNum) {


🟡 The newly created `JsonRpcExceedLimitException` should be used here instead of `JsonRpcInvalidParamsException` to provide a more specific error when the block filter limit is exceeded. This improves error handling clarity.
Suggested change
if (blockFilter2Result.size() >= maxBlockFilterNum) {
  throw new JsonRpcExceedLimitException(
      "exceed max block filters: " + maxBlockFilterNum + ", try again later");
}

Comment on lines +508 to +514
public int hashCode() {
int result = 0;
result = 31 * result + (blockHash == null ? 0 : blockHash.hashCode());
result = 31 * result + (transactionHash == null ? 0 : transactionHash.hashCode());
result = 31 * result + (transactionIndex == null ? 0 : transactionIndex.hashCode());
result = 31 * result + (logIndex == null ? 0 : logIndex.hashCode());
result = result + (removed ? 1 : 0);


🟢 For better readability and conciseness, you can use `Objects.hash()` to implement the `hashCode` method.
Suggested change
public int hashCode() {
  return Objects.hash(blockHash, transactionHash, transactionIndex, logIndex, removed);
}

public JsonRpcExceedLimitException(String message, Throwable cause) {
super(message, cause);
}
}
\ No newline at end of file


🟢 It's a good practice to add a newline at the end of the file. Most Java style guides recommend it.

Comment on lines 98 to +101
PeerManager.init();
relayService.init();
effectiveCheckService.init();


🟡 The changes in this file, particularly moving the P2P service initialization, seem unrelated to the primary goal of this pull request, which is optimizing JSON-RPC filter memory allocation. Could you clarify the reason for this change or consider moving it to a separate PR to maintain focus and clarity?

@@ -0,0 +1,40 @@
package org.tron.core.net.service.nodepersist;


🟡 This new test file seems unrelated to the JSON-RPC filter optimization. To keep pull requests focused on a single concern, it would be better to submit this in a separate PR.

Comment on lines +231 to +233
@Test
public void testIsMining() {
Assert.assertTrue(wallet.isMining());


🟡 The new test `testIsMining` seems unrelated to the main purpose of this PR. Please consider moving unrelated changes to a separate pull request to ensure each PR addresses a single, specific issue.

@0xbigapple 0xbigapple closed this Jan 21, 2026
@0xbigapple 0xbigapple reopened this Jan 21, 2026

@github-actions github-actions bot left a comment


📋 Review Summary

This pull request introduces optimizations and hardening for the JSON-RPC service, primarily by adding caching for log filters and block hashes, and limiting the number of active block filters to prevent resource exhaustion. The changes appear to be safe and improve the robustness of the service.

🔍 General Feedback

  • The introduction of caching with reasonable expiration and size limits is a good performance enhancement.
  • The startup order change in TronNetService improves lifecycle safety by ensuring dependent services are initialized before the P2P service starts.
  • The addition of unit tests for new functionality is appreciated.

Comment on lines +485 to +512
if (this == o) {
return true;
}
if (o == null || this.getClass() != o.getClass()) {
return false;
}
LogFilterElement item = (LogFilterElement) o;
if (!Objects.equals(blockHash, item.blockHash)) {
return false;
}
if (!Objects.equals(transactionHash, item.transactionHash)) {
return false;
}
if (!Objects.equals(transactionIndex, item.transactionIndex)) {
return false;
}
if (!Objects.equals(logIndex, item.logIndex)) {
return false;
}
return removed == item.removed;
}

@Override
public int hashCode() {
int result = 0;
result = 31 * result + (blockHash == null ? 0 : blockHash.hashCode());
result = 31 * result + (transactionHash == null ? 0 : transactionHash.hashCode());
result = 31 * result + (transactionIndex == null ? 0 : transactionIndex.hashCode());


🟢 These equals and hashCode implementations can be simplified using java.util.Objects.

Suggested change
@Override
public boolean equals(Object o) {
  if (this == o) {
    return true;
  }
  if (o == null || getClass() != o.getClass()) {
    return false;
  }
  LogFilterElement that = (LogFilterElement) o;
  return removed == that.removed &&
      Objects.equals(blockHash, that.blockHash) &&
      Objects.equals(transactionHash, that.transactionHash) &&
      Objects.equals(transactionIndex, that.transactionIndex) &&
      Objects.equals(logIndex, that.logIndex);
}

@Override
public int hashCode() {
  return Objects.hash(blockHash, transactionHash, transactionIndex, logIndex, removed);
}

public JsonRpcExceedLimitException(String message, Throwable cause) {
super(message, cause);
}
}
\ No newline at end of file


🟢 It's a good practice to have a newline at the end of the file.

Suggested change
}
