refactor: message callbacks -> node state machine #156
Conversation
Hey - I've found 3 issues, and left some high level feedback:
- In `runtime_control.py`, `_lock` is defined on `RuntimeControl` but never used; either remove it or use it to guard shared state such as `_recognition_stack` and `_reco_id_map` if you expect concurrent modifications from non-UI threads.
- Several broad `Any` type annotations were introduced (e.g. `detail: Any` in `on_resource_loading` and the various `ContextEventSink` handlers), which lose the structure of the Maa types; consider tightening these back to the concrete sink detail types to preserve type safety and editor assistance.
- The new state-machine and event-sink layer introduces many `print`-based debug logs; it would be more controllable to route these through a logger or guard them behind a debug flag so normal runs are not flooded with console output (see the sketch below).
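A minimal sketch of the logger-plus-debug-flag approach for the last point, assuming a module-level logger and an environment variable `MAADBG_DEBUG` as the switch (both the logger name and the variable are illustrative, not taken from this PR):

```python
import logging
import os

# Module-level logger; the application configures handlers and formatting once.
logger = logging.getLogger("MaaDebugger.launch_graph")

# Hypothetical debug switch: verbose output only when explicitly requested.
if os.environ.get("MAADBG_DEBUG"):
    logger.setLevel(logging.DEBUG)


def on_graph_change(msg: dict) -> None:
    # Replaces: print(f"[DEBUG on_graph_change] msg_type=..., msg=...")
    logger.debug("on_graph_change: msg_type=%s, msg=%s", msg.get("msg", ""), msg)
```

With this, normal runs stay quiet and the same call sites become verbose only when the switch is set.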
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- In `runtime_control.py`, `_lock` is defined on `RuntimeControl` but never used; either remove it or use it to guard access to shared state like `_recognition_stack` and `_reco_id_map` if you expect concurrent modifications from non-UI threads.
- There are several broad `Any` type annotations introduced (e.g. `detail: Any` in `on_resource_loading` and the various `ContextEventSink` handlers) which lose the useful structure of the Maa types; consider tightening these back to the concrete sink detail types to preserve type safety and editor assistance.
- The new state-machine and event-sink layer introduces many `print`-based debug logs; it would be more controllable to route these through a logger or guard them behind a debug flag so normal runs are not flooded with console output.
## Individual Comments
### Comment 1
<location> `src/MaaDebugger/webpage/index_page/runtime_control.py:263-267` </location>
<code_context>
+ msg_type = msg.get("msg", "")
+ print(f"[DEBUG on_graph_change] msg_type={msg_type}, msg={msg}")
+
+ # 将消息放入队列
+ self._pending_messages.put(msg)
+
+ # 使用 background_tasks 在主线程中处理消息
+ background_tasks.create(self._process_pending_messages())
+
+ async def _process_pending_messages(self):
</code_context>
<issue_to_address>
**issue (bug_risk):** Multiple background tasks may process the same queue concurrently and interleave message handling unexpectedly.
`on_graph_change` starts a new `background_tasks.create(self._process_pending_messages())` on every call, but `_process_pending_messages` drains the whole queue. With frequent events, you end up with multiple processors racing over `_pending_messages`, causing out-of-order UI updates and redundant work.
To avoid this, either ensure only one `_process_pending_messages` task runs at a time (e.g. with a `_processing_messages` flag), or switch to a single long-lived consumer coroutine that waits on the queue instead of spawning a new task per message.
</issue_to_address>
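A minimal sketch of the flag-guarded variant described above; the `_processing_messages` attribute and the `_handle_message` helper are illustrative, and a lock (or always enqueuing from the event loop) would still be needed if `on_graph_change` can be called from another thread:

```python
import queue

from nicegui import background_tasks


class GraphMessagePump:
    """Illustrative stand-in for the message-handling part of RuntimeControl."""

    def __init__(self) -> None:
        self._pending_messages: queue.Queue = queue.Queue()
        self._processing_messages = False  # guard: at most one drain task at a time

    def on_graph_change(self, msg: dict) -> None:
        self._pending_messages.put(msg)
        if not self._processing_messages:
            self._processing_messages = True
            background_tasks.create(self._process_pending_messages())

    async def _process_pending_messages(self) -> None:
        try:
            while not self._pending_messages.empty():
                self._handle_message(self._pending_messages.get_nowait())
        finally:
            self._processing_messages = False

    def _handle_message(self, msg: dict) -> None:
        ...  # hypothetical per-message UI update
```

An alternative with the same effect is a single long-lived consumer created once (e.g. on page setup) that awaits an `asyncio.Queue`, so `on_graph_change` only ever enqueues.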
### Comment 2
<location> `src/MaaDebugger/webpage/index_page/runtime_control.py:73-76` </location>
<code_context>
self.row_len = 0
self.data = defaultdict(dict)
self.list_data_map: dict[int, ListData] = {}
+ self._pending_messages: Queue = Queue()
+ self._lock = Lock()
+ # 追踪当前正在处理的识别项栈(用于嵌套)
+ # 栈顶是当前正在执行的识别项
</code_context>
<issue_to_address>
**suggestion (bug_risk):** The `_lock` attribute is currently unused and might indicate missing synchronization around shared state.
`self._lock = Lock()` is defined but never used. Since `dispatch` may be called off the UI thread and `on_graph_change` updates `_recognition_stack` and `_reco_id_map`, either remove `_lock` or use it to consistently guard these shared structures, especially for any future non‑NiceGUI access.
```suggestion
self.list_data_map: dict[int, ListData] = {}
self._pending_messages: Queue = Queue()
# 追踪当前正在处理的识别项栈(用于嵌套)
```
</issue_to_address>
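If the lock is kept rather than removed, a minimal sketch of what consistently guarding the shared structures could look like; only the attribute names come from the comment above, and the value types and helper methods are illustrative:

```python
import threading
from typing import Optional


class RecognitionState:
    """Illustrative holder for the shared structures named in the comment."""

    def __init__(self) -> None:
        self._lock = threading.Lock()
        self._recognition_stack: list[int] = []
        self._reco_id_map: dict[int, int] = {}

    def push_recognition(self, reco_id: int, row_index: int) -> None:
        # Every mutation goes through the same lock, so callers off the UI
        # thread cannot interleave with updates made on the UI thread.
        with self._lock:
            self._recognition_stack.append(reco_id)
            self._reco_id_map[reco_id] = row_index

    def pop_recognition(self) -> Optional[int]:
        with self._lock:
            return self._recognition_stack.pop() if self._recognition_stack else None
```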
### Comment 3
<location> `src/MaaDebugger/maafw/launch_graph.py:332-341` </location>
<code_context>
+ else:
+ print(f"[LaunchGraph] Drop msg: {msg_type}, tracker type: {tracker.type}")
+
+ elif msg_type in ("RecognitionNode.Succeeded", "RecognitionNode.Failed"):
+ if tracker.type == ScopeType.RECO_NODE:
+ tracker.msg = msg
+ tracker.status = (
+ GeneralStatus.SUCCESS
+ if msg_type == "RecognitionNode.Succeeded"
+ else GeneralStatus.FAILED
+ )
+ current.depth -= 1
+ elif (
+ tracker.type == ScopeType.RECO and hasattr(tracker, "reco_detail") is False
+ ):
</code_context>
<issue_to_address>
**issue (bug_risk):** The `hasattr(tracker, "reco_detail") is False` condition is always false for Scope and appears to be dead code.
In this branch:
```python
elif tracker.type == ScopeType.RECO and hasattr(tracker, "reco_detail") is False:
current.depth -= 1
```
`Scope` always has a `reco_detail` attribute from the dataclass, so `hasattr(tracker, "reco_detail")` is always true and this branch is unreachable. If you meant to distinguish based on value/shape (e.g. `tracker.reco_detail is None`), please update the condition accordingly or remove this branch as dead code.
</issue_to_address>
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.
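A minimal sketch of the value-based check suggested in Comment 3, with a simplified `Scope` whose `reco_detail` defaults to `None` (the field types and the helper name are illustrative):

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Any, Optional


class ScopeType(Enum):
    RECO = auto()
    RECO_NODE = auto()


@dataclass
class Scope:
    type: ScopeType
    reco_detail: Optional[Any] = None  # always present, so hasattr() is always True


def should_unwind(tracker: Scope) -> bool:
    # Check the value rather than the attribute's existence.
    return tracker.type == ScopeType.RECO and tracker.reco_detail is None
```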
self.list_data_map: dict[int, ListData] = {}
self._pending_messages: Queue = Queue()
self._lock = Lock()
# 追踪当前正在处理的识别项栈(用于嵌套)
suggestion (bug_risk): The `_lock` attribute is currently unused and might indicate missing synchronization around shared state.
`self._lock = Lock()` is defined but never used. Since `dispatch` may be called off the UI thread and `on_graph_change` updates `_recognition_stack` and `_reco_id_map`, either remove `_lock` or use it to consistently guard these shared structures, especially for any future non-NiceGUI access.
Suggested change:
self.list_data_map: dict[int, ListData] = {}
self._pending_messages: Queue = Queue()
# 追踪当前正在处理的识别项栈(用于嵌套)
elif msg_type in ("RecognitionNode.Succeeded", "RecognitionNode.Failed"):
if tracker.type == ScopeType.RECO_NODE:
tracker.msg = msg
tracker.status = (
GeneralStatus.SUCCESS
if msg_type == "RecognitionNode.Succeeded"
else GeneralStatus.FAILED
)
current.depth -= 1
elif (
issue (bug_risk): The `hasattr(tracker, "reco_detail") is False` condition is always false for `Scope` and appears to be dead code.
In this branch:
elif tracker.type == ScopeType.RECO and hasattr(tracker, "reco_detail") is False:
    current.depth -= 1
`Scope` always has a `reco_detail` attribute from the dataclass, so `hasattr(tracker, "reco_detail")` is always true and this branch is unreachable. If you meant to distinguish based on value/shape (e.g. `tracker.reco_detail is None`), please update the condition accordingly or remove this branch as dead code.
Pull request overview
This PR refactors the callback-based event handling to a state machine pattern for tracking the MaaFramework task execution lifecycle. The implementation is inspired by a TypeScript reference implementation.
Changes:
- Introduced a new `LaunchGraph` state machine to track task execution hierarchies
- Replaced callback-based event sinks (`MyContextEventSink`) with state machine-driven event sinks (`LaunchGraphContextEventSink`, `LaunchGraphTaskerEventSink`)
- Added support for nested recognition tracking with UI expansion components
Reviewed changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 38 comments.
| File | Description |
|---|---|
| src/MaaDebugger/maafw/launch_graph.py | New file implementing the state machine core with LaunchGraph, Scope, and reducer functions |
| src/MaaDebugger/maafw/__init__.py | Replaced MyContextEventSink with state machine-driven event sinks and added LaunchGraphManager |
| src/MaaDebugger/webpage/index_page/runtime_control.py | Updated to use state machine subscription pattern with nested recognition support |
Comments suppressed due to low confidence (1)
src/MaaDebugger/webpage/index_page/runtime_control.py:223
- Using `# type: ignore` suppresses type checking. Consider defining a proper type annotation or refactoring the code to avoid the need for this suppression.
  with ui.item(on_click=lambda data=data: self.on_click_item(data)):  # type: ignore
self.data = defaultdict(dict)
self.list_data_map: dict[int, ListData] = {}
self._pending_messages: Queue = Queue()
self._lock = Lock()
Copilot AI (Jan 14, 2026)
The _lock field is initialized but never used anywhere in the code. This suggests incomplete implementation or dead code that should be removed.
)
current.depth -= 1
elif (
tracker.type == ScopeType.RECO and hasattr(tracker, "reco_detail") is False
Copilot AI (Jan 14, 2026)
Using hasattr(tracker, 'reco_detail') is False is non-idiomatic Python. Use not hasattr(tracker, 'reco_detail') instead. Additionally, since reco_detail is defined in the Scope dataclass with a default value of None, this check will always return False - the attribute always exists. Consider checking tracker.reco_detail is None if you want to check for the absence of a value.
将消息放入队列,由 NiceGUI 主线程处理
"""
msg_type = msg.get("msg", "")
print(f"[DEBUG on_graph_change] msg_type={msg_type}, msg={msg}")
Copilot AI (Jan 14, 2026)
Multiple debug print statements are scattered throughout the code. Consider using Python's logging module instead for better control over debug output levels and consistent formatting.
if msg_type == "NextList.Starting":
name = msg.get("name", "")
next_names = msg.get("next_list", [])
print(f"[DEBUG] NextList.Starting: name={name}, next_list={next_names}")
Copilot AI (Jan 14, 2026)
Multiple debug print statements are scattered throughout the code. Consider using Python's logging module instead for better control over debug output levels and consistent formatting.
elif msg_type == "RecognitionNode.Starting":
name = msg.get("name", "")
node_id = msg.get("node_id", 0)
print(f"[DEBUG] RecognitionNode.Starting: name={name}, node_id={node_id}")
Copilot AI (Jan 14, 2026)
Multiple debug print statements are scattered throughout the code. Consider using Python's logging module instead for better control over debug output levels and consistent formatting.
def _create_nested_item(self, data: ItemData):
"""创建嵌套的识别项 UI"""
with ui.item(on_click=lambda data=data: self.on_click_item(data)).classes("ml-4"):  # type: ignore
Copilot AI (Jan 14, 2026)
Using # type: ignore suppresses type checking. Consider defining a proper type annotation or refactoring the code to avoid the need for this suppression.
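A minimal sketch of one way to drop the suppression, binding the argument with `functools.partial` instead of a defaulted lambda; the stand-in `ItemData` and handler are illustrative, and this assumes NiceGUI's usual zero-argument handler call:

```python
from dataclasses import dataclass
from functools import partial

from nicegui import ui


@dataclass
class ItemData:  # stand-in for the real dataclass
    name: str


def on_click_item(data: ItemData) -> None:
    ui.notify(data.name)


data = ItemData(name="demo")
# partial() binds `data` in a way type checkers can follow, so no `# type: ignore` is needed.
ui.item("demo", on_click=partial(on_click_item, data)).classes("ml-4")
```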
with (
ui.expansion(value=False)
.classes("w-full nested-reco")
.props("dense header-class='text-caption'") as expansion
):
Copilot AI (Jan 14, 2026)
The expansion component is created with value=False (collapsed) and then immediately set to invisible with expansion.set_visibility(False). Consider whether both states are necessary or if this could be simplified.
try:
    callback(self._graph, msg)
except Exception as e:
    print(f"[LaunchGraphManager] Subscriber error: {e}")
Copilot AI (Jan 14, 2026)
Error handling that only prints the error message swallows exceptions. Consider using logging and potentially re-raising or collecting errors for better debugging.
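A minimal sketch of routing subscriber failures through a logger with full tracebacks instead of a bare `print`; the function and logger names are illustrative:

```python
import logging
from typing import Any, Callable

logger = logging.getLogger("MaaDebugger.launch_graph")


def notify_subscribers(
    subscribers: list[Callable[[Any, dict], None]], graph: Any, msg: dict
) -> None:
    failures = 0
    for callback in subscribers:
        try:
            callback(graph, msg)
        except Exception:
            # logger.exception records the full traceback, not just str(e),
            # while still isolating one faulty subscriber from the others.
            failures += 1
            logger.exception("Subscriber %r failed on message %r", callback, msg.get("msg"))
    if failures:
        logger.warning("%d subscriber(s) failed for this message", failures)
```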
if item.status == Status.PENDING and item.name == name:
    item.reco_id = reco_id
    item.status = Status.SUCCEEDED if hit else Status.FAILED
    matched_item = item
Copilot AI (Jan 14, 2026)
Variable matched_item is not used.
if item.status == Status.PENDING and item.name == name:
    item.reco_id = reco_id
    item.status = Status.SUCCEEDED if hit else Status.FAILED
    matched_item = item
Copilot AI (Jan 14, 2026)
This assignment to 'matched_item' is unnecessary as it is redefined before this value is used.
The implementation references https://github.com/neko-para/maa-support-extension/blob/main/pkgs/webview/src/launch/states/launch.ts
The code comes from vibe coding; the feature is basically working, and merging can be considered once maintainability is confirmed.
Summary by Sourcery
Refactor MaaDebugger runtime event handling to use a central launch graph state machine that tracks task, pipeline, recognition, and action node lifecycles, and update the UI runtime control to consume incremental state changes and visualize nested recognitions.
New Features:
- Add a `LaunchGraph` state machine that tracks the lifecycle of task, pipeline, recognition, and action nodes.
Enhancements:
- Replace the direct `ContextEventSink` callbacks with state-machine-driven event sinks, enabling message-based incremental UI updates.