
@weinibuliu (Member) commented Jan 14, 2026

The code is based on https://github.com/neko-para/maa-support-extension/blob/main/pkgs/webview/src/launch/states/launch.ts
The code was produced via vibe coding.

The feature is basically functional; merging can be considered once maintainability has been confirmed.

  • Review the code to ensure maintainability
  • Remove debug output (show it only under --DEBUG)


Summary by Sourcery

Refactor MaaDebugger runtime event handling to use a central launch graph state machine that tracks task, pipeline, recognition, and action node lifecycles, and update the UI runtime control to consume incremental state changes and visualize nested recognitions.

New Features:

  • Introduce a LaunchGraph state machine and manager to model MaaFramework task execution, including tasks, pipeline nodes, recognition nodes, action nodes, and next-list scopes.
  • Add support for nested recognition visualization in the runtime control UI, showing hierarchical recognition items under parent recognitions.

Enhancements:

  • Replace direct ContextEventSink callbacks with launch-graph-driven context and tasker event sinks, allowing message-based, incremental UI updates.
  • Make MaaFW more robust by lazily attaching context, resource, and tasker sinks only when present and routing their events through the new state machine.

@sourcery-ai sourcery-ai bot left a comment

Hey - I've found 3 issues, and left some high level feedback:

  • In runtime_control.py, _lock is defined on RuntimeControl but never used; either remove it or use it to guard access to shared state like _recognition_stack and _reco_id_map if you expect concurrent modifications from non-UI threads.
  • There are several broad Any type annotations introduced (e.g. detail: Any in on_resource_loading and the various ContextEventSink handlers) which lose the useful structure of the Maa types; consider tightening these back to the concrete sink detail types to preserve type safety and editor assistance.
  • The new state-machine and event-sink layer introduces many print-based debug logs; it would be more controllable to route these through a logger or guard them behind a debug flag so normal runs are not flooded with console output.
Prompt for AI Agents
Please address the comments from this code review:

## Overall Comments
- In `runtime_control.py`, `_lock` is defined on `RuntimeControl` but never used; either remove it or use it to guard access to shared state like `_recognition_stack` and `_reco_id_map` if you expect concurrent modifications from non-UI threads.
- There are several broad `Any` type annotations introduced (e.g. `detail: Any` in `on_resource_loading` and the various `ContextEventSink` handlers) which lose the useful structure of the Maa types; consider tightening these back to the concrete sink detail types to preserve type safety and editor assistance.
- The new state-machine and event-sink layer introduces many `print`-based debug logs; it would be more controllable to route these through a logger or guard them behind a debug flag so normal runs are not flooded with console output.
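The logging suggestion in the last point could be sketched as follows. This is a minimal illustration, not code from the PR: the logger name, the `configure_logging` helper, and the `--DEBUG` flag wiring are all assumptions.

```python
import logging

# Module-level logger instead of scattered print() calls;
# the logger name here is illustrative, not from the PR.
logger = logging.getLogger("MaaDebugger.launch_graph")


def configure_logging(debug: bool) -> None:
    """Show debug output only when a flag such as --DEBUG is set."""
    level = logging.DEBUG if debug else logging.WARNING
    logging.basicConfig(level=level, format="%(name)s %(levelname)s: %(message)s")
    logger.setLevel(level)


def on_graph_change(msg: dict) -> None:
    # Replaces: print(f"[DEBUG on_graph_change] msg_type=..., msg=...")
    logger.debug("on_graph_change msg_type=%s msg=%s", msg.get("msg", ""), msg)


configure_logging(debug=False)
on_graph_change({"msg": "NextList.Starting"})   # silent at WARNING level
```

With this shape, normal runs stay quiet and `configure_logging(debug=True)` restores the verbose trace without touching call sites.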

## Individual Comments

### Comment 1
<location> `src/MaaDebugger/webpage/index_page/runtime_control.py:263-267` </location>
<code_context>
+        msg_type = msg.get("msg", "")
+        print(f"[DEBUG on_graph_change] msg_type={msg_type}, msg={msg}")
+
+        # Put the message into the queue
+        self._pending_messages.put(msg)
+
+        # Use background_tasks to process the messages on the main thread
+        background_tasks.create(self._process_pending_messages())
+
+    async def _process_pending_messages(self):
</code_context>

<issue_to_address>
**issue (bug_risk):** Multiple background tasks may process the same queue concurrently and interleave message handling unexpectedly.

`on_graph_change` starts a new `background_tasks.create(self._process_pending_messages())` on every call, but `_process_pending_messages` drains the whole queue. With frequent events, you end up with multiple processors racing over `_pending_messages`, causing out-of-order UI updates and redundant work.

To avoid this, either ensure only one `_process_pending_messages` task runs at a time (e.g. with a `_processing_messages` flag), or switch to a single long-lived consumer coroutine that waits on the queue instead of spawning a new task per message.
</issue_to_address>
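The flag-guarded variant suggested above might look like the following self-contained sketch. Class and attribute names (`MessageProcessor`, `_processing`, `handled`) are illustrative stand-ins, not the PR's actual API.

```python
import asyncio
from queue import Empty, Queue


class MessageProcessor:
    """Sketch of the single-processor fix: at most one drain task at a time."""

    def __init__(self) -> None:
        self._pending: Queue = Queue()
        self._processing = False          # True while a drain task is running
        self.handled: list = []           # stand-in for the real UI updates

    def on_graph_change(self, msg: dict) -> None:
        self._pending.put(msg)
        if not self._processing:          # spawn at most one drain task
            self._processing = True
            asyncio.ensure_future(self._drain())

    async def _drain(self) -> None:
        try:
            while True:
                try:
                    msg = self._pending.get_nowait()
                except Empty:
                    break
                self.handled.append(msg)  # FIFO order, single consumer
        finally:
            self._processing = False      # let the next event respawn a task


async def demo() -> "MessageProcessor":
    p = MessageProcessor()
    for i in range(5):
        p.on_graph_change({"msg": f"event-{i}"})
    await asyncio.sleep(0)                # yield so the drain task runs
    return p


processor = asyncio.run(demo())
print([m["msg"] for m in processor.handled])
# → ['event-0', 'event-1', 'event-2', 'event-3', 'event-4']
```

Because the flag is flipped synchronously before the task is scheduled, frequent events cannot race two drain tasks over the same queue; the long-lived consumer coroutine mentioned in the review is an equally valid alternative.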

### Comment 2
<location> `src/MaaDebugger/webpage/index_page/runtime_control.py:73-76` </location>
<code_context>
         self.row_len = 0
         self.data = defaultdict(dict)
         self.list_data_map: dict[int, ListData] = {}
+        self._pending_messages: Queue = Queue()
+        self._lock = Lock()
+        # Track the stack of recognition items currently being processed (for nesting)
+        # The top of the stack is the recognition item currently executing
</code_context>

<issue_to_address>
**suggestion (bug_risk):** The `_lock` attribute is currently unused and might indicate missing synchronization around shared state.

`self._lock = Lock()` is defined but never used. Since `dispatch` may be called off the UI thread and `on_graph_change` updates `_recognition_stack` and `_reco_id_map`, either remove `_lock` or use it to consistently guard these shared structures, especially for any future non‑NiceGUI access.

```suggestion
        self.list_data_map: dict[int, ListData] = {}
        self._pending_messages: Queue = Queue()
        # Track the stack of recognition items currently being processed (for nesting)
```
</issue_to_address>

### Comment 3
<location> `src/MaaDebugger/maafw/launch_graph.py:332-341` </location>
<code_context>
+        else:
+            print(f"[LaunchGraph] Drop msg: {msg_type}, tracker type: {tracker.type}")
+
+    elif msg_type in ("RecognitionNode.Succeeded", "RecognitionNode.Failed"):
+        if tracker.type == ScopeType.RECO_NODE:
+            tracker.msg = msg
+            tracker.status = (
+                GeneralStatus.SUCCESS
+                if msg_type == "RecognitionNode.Succeeded"
+                else GeneralStatus.FAILED
+            )
+            current.depth -= 1
+        elif (
+            tracker.type == ScopeType.RECO and hasattr(tracker, "reco_detail") is False
+        ):
</code_context>

<issue_to_address>
**issue (bug_risk):** The `hasattr(tracker, "reco_detail") is False` condition is always false for Scope and appears to be dead code.

In this branch:
```python
elif tracker.type == ScopeType.RECO and hasattr(tracker, "reco_detail") is False:
    current.depth -= 1
```
`Scope` always has a `reco_detail` attribute from the dataclass, so `hasattr(tracker, "reco_detail")` is always true and this branch is unreachable. If you meant to distinguish based on value/shape (e.g. `tracker.reco_detail is None`), please update the condition accordingly or remove this branch as dead code.
</issue_to_address>
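The always-true behaviour is easy to confirm with a minimal stand-in (this `Scope` is an illustrative subset, not the PR's actual dataclass):

```python
from dataclasses import dataclass
from typing import Any, Optional


@dataclass
class Scope:
    # Illustrative subset of the PR's Scope dataclass.
    type: str = "RECO"
    reco_detail: Optional[Any] = None   # default value => attribute always exists


tracker = Scope()

# The PR's condition is always False: a dataclass field with a default
# is always present as an instance attribute, so the branch never runs.
print(hasattr(tracker, "reco_detail") is False)   # → False

# The likely intent is a value check instead:
print(tracker.reco_detail is None)                # → True until a detail is set
```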


Comment on lines 73 to 76
self.list_data_map: dict[int, ListData] = {}
self._pending_messages: Queue = Queue()
self._lock = Lock()
# Track the stack of recognition items currently being processed (for nesting)

suggestion (bug_risk): The _lock attribute is currently unused, which may indicate missing synchronization around shared state.

self._lock = Lock() is defined but never used. Since dispatch may be called off the UI thread and on_graph_change updates _recognition_stack and _reco_id_map, either remove _lock or use it to consistently guard these shared structures, especially for any future non-NiceGUI access.

Suggested change
self.list_data_map: dict[int, ListData] = {}
self._pending_messages: Queue = Queue()
self._lock = Lock()
# Track the stack of recognition items currently being processed (for nesting)
self.list_data_map: dict[int, ListData] = {}
self._pending_messages: Queue = Queue()
# Track the stack of recognition items currently being processed (for nesting)

Comment on lines 332 to 341
elif msg_type in ("RecognitionNode.Succeeded", "RecognitionNode.Failed"):
if tracker.type == ScopeType.RECO_NODE:
tracker.msg = msg
tracker.status = (
GeneralStatus.SUCCESS
if msg_type == "RecognitionNode.Succeeded"
else GeneralStatus.FAILED
)
current.depth -= 1
elif (

issue (bug_risk): For Scope, the condition hasattr(tracker, "reco_detail") is False is always false, so this appears to be dead code.

In this branch:

elif tracker.type == ScopeType.RECO and hasattr(tracker, "reco_detail") is False:
    current.depth -= 1

Scope always has a reco_detail attribute from its dataclass definition, so hasattr(tracker, "reco_detail") is always true and this branch is unreachable. If you meant to distinguish based on value/shape (e.g. tracker.reco_detail is None), update the condition accordingly, or remove this branch as dead code.


Contributor

Copilot AI left a comment

Pull request overview

This PR refactors the callback-based event handling to a state machine pattern for tracking MaaFramework task execution lifecycle. The implementation is inspired by a TypeScript reference implementation.

Changes:

  • Introduced a new LaunchGraph state machine to track task execution hierarchies
  • Replaced callback-based event sinks (MyContextEventSink) with state machine-driven event sinks (LaunchGraphContextEventSink, LaunchGraphTaskerEventSink)
  • Added support for nested recognition tracking with UI expansion components

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 38 comments.

File Description
src/MaaDebugger/maafw/launch_graph.py New file implementing the state machine core with LaunchGraph, Scope, and reducer functions
src/MaaDebugger/maafw/__init__.py Replaced MyContextEventSink with state machine-driven event sinks and added LaunchGraphManager
src/MaaDebugger/webpage/index_page/runtime_control.py Updated to use state machine subscription pattern with nested recognition support
Comments suppressed due to low confidence (1)

src/MaaDebugger/webpage/index_page/runtime_control.py:223

  • Using # type: ignore suppresses type checking. Consider defining a proper type annotation or refactoring the code to avoid the need for this suppression.
        with ui.item(on_click=lambda data=data: self.on_click_item(data)):  # type: ignore


self.data = defaultdict(dict)
self.list_data_map: dict[int, ListData] = {}
self._pending_messages: Queue = Queue()
self._lock = Lock()
Copilot AI Jan 14, 2026

The _lock field is initialized but never used anywhere in the code. This suggests incomplete implementation or dead code that should be removed.
)
current.depth -= 1
elif (
tracker.type == ScopeType.RECO and hasattr(tracker, "reco_detail") is False
Copilot AI Jan 14, 2026

Using hasattr(tracker, 'reco_detail') is False is non-idiomatic Python. Use not hasattr(tracker, 'reco_detail') instead. Additionally, since reco_detail is defined in the Scope dataclass with a default value of None, this check will always return False - the attribute always exists. Consider checking tracker.reco_detail is None if you want to check for the absence of a value.
Put the message into the queue; the NiceGUI main thread will process it
"""
msg_type = msg.get("msg", "")
print(f"[DEBUG on_graph_change] msg_type={msg_type}, msg={msg}")
Copilot AI Jan 14, 2026

Multiple debug print statements are scattered throughout the code. Consider using Python's logging module instead for better control over debug output levels and consistent formatting.
if msg_type == "NextList.Starting":
name = msg.get("name", "")
next_names = msg.get("next_list", [])
print(f"[DEBUG] NextList.Starting: name={name}, next_list={next_names}")
Copilot AI Jan 14, 2026

Multiple debug print statements are scattered throughout the code. Consider using Python's logging module instead for better control over debug output levels and consistent formatting.
elif msg_type == "RecognitionNode.Starting":
name = msg.get("name", "")
node_id = msg.get("node_id", 0)
print(f"[DEBUG] RecognitionNode.Starting: name={name}, node_id={node_id}")
Copilot AI Jan 14, 2026

Multiple debug print statements are scattered throughout the code. Consider using Python's logging module instead for better control over debug output levels and consistent formatting.

def _create_nested_item(self, data: ItemData):
"""创建嵌套的识别项 UI"""
with ui.item(on_click=lambda data=data: self.on_click_item(data)).classes("ml-4"): # type: ignore
Copilot AI Jan 14, 2026

Using # type: ignore suppresses type checking. Consider defining a proper type annotation or refactoring the code to avoid the need for this suppression.
Comment on lines +238 to +242
with (
ui.expansion(value=False)
.classes("w-full nested-reco")
.props("dense header-class='text-caption'") as expansion
):
Copilot AI Jan 14, 2026

The expansion component is created with value=False (collapsed) and then immediately set to invisible with expansion.set_visibility(False). Consider whether both states are necessary or if this could be simplified.
try:
callback(self._graph, msg)
except Exception as e:
print(f"[LaunchGraphManager] Subscriber error: {e}")
Copilot AI Jan 14, 2026

Error handling that only prints the error message swallows exceptions. Consider using logging and potentially re-raising or collecting errors for better debugging.
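A sketch of the log-and-collect alternative suggested here; the `errors` list and method names are illustrative assumptions, not the PR's actual `LaunchGraphManager` API.

```python
import logging

logger = logging.getLogger("MaaDebugger.launch_graph")


class LaunchGraphManager:
    """Sketch: log subscriber failures with a traceback and collect them,
    rather than print()-and-swallow."""

    def __init__(self) -> None:
        self._subscribers = []
        self.errors: list = []            # collected for later inspection

    def subscribe(self, callback) -> None:
        self._subscribers.append(callback)

    def notify(self, graph, msg) -> None:
        for callback in self._subscribers:
            try:
                callback(graph, msg)
            except Exception as exc:
                # logger.exception records the full traceback at ERROR level
                logger.exception("Subscriber %r failed on %s", callback, msg)
                self.errors.append(exc)   # keep the error instead of losing it


def bad_subscriber(graph, msg):
    raise ValueError("boom")


mgr = LaunchGraphManager()
mgr.subscribe(bad_subscriber)
mgr.notify(None, {"msg": "Task.Started"})
print(len(mgr.errors))  # → 1
```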
if item.status == Status.PENDING and item.name == name:
item.reco_id = reco_id
item.status = Status.SUCCEEDED if hit else Status.FAILED
matched_item = item
Copilot AI Jan 14, 2026

Variable matched_item is not used.
if item.status == Status.PENDING and item.name == name:
item.reco_id = reco_id
item.status = Status.SUCCEEDED if hit else Status.FAILED
matched_item = item
Copilot AI Jan 14, 2026

This assignment to 'matched_item' is unnecessary as it is redefined before this value is used.
@weinibuliu weinibuliu changed the title from refactor: 回调 -> 节点状态机 ("callbacks -> node state machine") to refactor: 消息回调 -> 节点状态机 ("message callbacks -> node state machine") Jan 17, 2026
@weinibuliu weinibuliu merged commit 104ce7c into main Jan 17, 2026
3 checks passed
@weinibuliu weinibuliu deleted the dev/fix/RunRecognition branch January 17, 2026 14:21