
Conversation

@constantinius
Contributor

Description

Previously, only text parts were extracted. Now the full range of part types is covered.

Issues

Closes https://linear.app/getsentry/issue/TET-1638/redact-images-google-genai

@constantinius constantinius requested a review from a team as a code owner January 5, 2026 11:16
@linear

linear bot commented Jan 5, 2026

Base automatically changed from constantinius/fix/redact-message-parts-type-blob to master January 13, 2026 09:56
@github-actions
Contributor

github-actions bot commented Jan 13, 2026

Semver Impact of This PR

🟢 Patch (bug fixes)

📋 Changelog Preview

This is how your changes will appear in the changelog.
Entries from this PR are highlighted with a left border (blockquote style).


New Features ✨

  • feat(ai): add parse_data_uri function to parse a data URI by constantinius in #5311
  • feat(asyncio): Add on-demand way to enable AsyncioIntegration by sentrivana in #5288
  • feat(openai-agents): Inject propagation headers for HostedMCPTool by alexander-alderman-webb in #5297

Bug Fixes 🐛

  • fix(ai): redact message parts content of type blob by constantinius in #5243
  • fix(clickhouse): Guard against module shadowing by alexander-alderman-webb in #5250
  • fix(gql): Revert signature change of patched gql.Client.execute by alexander-alderman-webb in #5289
  • fix(grpc): Derive interception state from channel fields by alexander-alderman-webb in #5302
  • fix(integrations): google-genai: reworked gen_ai.request.messages extraction from parameters by constantinius in #5275
  • fix(litellm): Guard against module shadowing by alexander-alderman-webb in #5249
  • fix(pure-eval): Guard against module shadowing by alexander-alderman-webb in #5252
  • fix(ray): Guard against module shadowing by alexander-alderman-webb in #5254
  • fix(threading): Handle channels shadowing by sentrivana in #5299
  • fix(typer): Guard against module shadowing by alexander-alderman-webb in #5253
  • fix: Stop suppressing exception chains in AI integrations by alexander-alderman-webb in #5309
  • fix: Send client reports for span recorder overflow by sentrivana in #5310

Documentation 📚

  • docs(metrics): Remove experimental notice by alexander-alderman-webb in #5304
  • docs: Update Python versions banner in README by sentrivana in #5287

Internal Changes 🔧

Release

  • ci(release): Bump Craft version to fix issues by BYK in #5305
  • ci(release): Switch from action-prepare-release to Craft by BYK in #5290

Other

  • chore(gen_ai): add auto-enablement for google genai by shellmayr in #5295
  • chore: Add type for metric units by sentrivana in #5312
  • ci: Update tox and handle generic classifiers by sentrivana in #5306

🤖 This preview updates automatically when you update the PR.

Comment on lines 387 to 397
if isinstance(function_response, dict):
    tool_call_id = function_response.get("id")
    tool_name = function_response.get("name")
    response_dict = function_response.get("response") or {}
    # Prefer "output" key if present, otherwise use entire response
    output = response_dict.get("output", response_dict)
else:
    # FunctionResponse object
    tool_call_id = getattr(function_response, "id", None)
    tool_name = getattr(function_response, "name", None)
    response_obj = getattr(function_response, "response", None) or {}

I've seen this .get() vs getattr pattern a lot in our AI integrations. Feels like introducing a helper function that would try both at once would potentially deduplicate a lot of code.

Not something that needs to be done in this PR, mostly thinking out loud.
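To make the suggestion concrete: a minimal sketch of such a helper, with a hypothetical name (`get_attr_or_key` is not an existing sentry_sdk function):

```python
def get_attr_or_key(obj, name, default=None):
    """Look up ``name`` as a dict key or as an object attribute.

    Collapses the ``.get()``-vs-``getattr`` branching seen above into
    one call that works for both dicts and SDK response objects.
    """
    if isinstance(obj, dict):
        return obj.get(name, default)
    return getattr(obj, name, default)
```

The snippet above would then shrink to `tool_call_id = get_attr_or_key(function_response, "id")` and so on, with a single code path for both shapes.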

…AI messages

Add transform_content_part() and transform_message_content() functions
to standardize content part handling across all AI integrations.

These functions transform various SDK-specific formats (OpenAI, Anthropic,
Google, LangChain) into a unified format:
- blob: base64-encoded binary data
- uri: URL references (including file URIs)
- file: file ID references

Also adds get_modality_from_mime_type() helper to infer content modality
(image/audio/video/document) from MIME types.
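A rough sketch of what such a helper could look like, assuming a simple MIME-prefix check (the actual implementation in the PR may map types differently):

```python
def get_modality_from_mime_type(mime_type):
    # Infer content modality from the MIME type's top-level prefix.
    if not mime_type:
        return None
    for prefix, modality in (
        ("image/", "image"),
        ("audio/", "audio"),
        ("video/", "video"),
    ):
        if mime_type.startswith(prefix):
            return modality
    # Everything else (application/pdf, text/plain, ...) is a document.
    return "document"
```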
…rmats

Replace inline_data and file_data dict handling with the shared
transform_content_part function. Keep Google SDK object handling
and PIL.Image support local since those are Google-specific.
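As an illustration, here is a hedged sketch of how dict-shaped `inline_data` and `file_data` parts could map onto the unified format. The function name `transform_dict_part` and the `BLOB_DATA_SUBSTITUTE` value are assumptions; the shared `transform_content_part` in the PR covers more formats than this:

```python
BLOB_DATA_SUBSTITUTE = "[Filtered]"  # assumed placeholder value

def transform_dict_part(part):
    # Hypothetical dict-only subset of the shared transform helper.
    inline_data = part.get("inline_data")
    if inline_data is not None:
        # Redact base64-encoded binary data, keep only metadata.
        return {
            "type": "blob",
            "mime_type": inline_data.get("mime_type"),
            "content": BLOB_DATA_SUBSTITUTE,
        }
    file_data = part.get("file_data")
    if file_data is not None:
        # URL references carry no payload, so the URI can be kept.
        return {
            "type": "uri",
            "mime_type": file_data.get("mime_type"),
            "file_uri": file_data.get("file_uri"),
        }
    return None  # not a binary part; handled elsewhere
```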
"type": "blob",
"mime_type": mime_type,
"file_uri": file_uri,
}

Missing modality field in blob outputs from Part handling

Medium Severity

Blob outputs for Part objects (file_data, inline_data), File objects, and PIL images are missing the modality field. However, dict parts handled via transform_content_part do include modality. The get_modality_from_mime_type function was imported but never used in these code paths, suggesting the intent was to include modality but it was forgotten. This creates inconsistent output format where some blobs have modality and others don't.
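A sketch of the fix the bot suggests: always attach a `modality` when building a blob part, mirroring what `transform_content_part` does for dict parts. The helper name `make_blob_part` and the inlined modality logic are assumptions for illustration:

```python
def make_blob_part(mime_type, file_uri=None):
    # Infer modality from the MIME prefix (mirrors the intent of
    # get_modality_from_mime_type described in the commit message).
    modality = "document"
    for prefix in ("image/", "audio/", "video/"):
        if (mime_type or "").startswith(prefix):
            modality = prefix.rstrip("/")
    part = {
        "type": "blob",
        "mime_type": mime_type,
        "modality": modality,  # now present on every blob output
    }
    if file_uri is not None:
        part["file_uri"] = file_uri
    return part
```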

Additional Locations (2)


"type": "blob",
"mime_type": mime_type,
"content": BLOB_DATA_SUBSTITUTE,
}

Part object inline_data with non-bytes data silently dropped

Low Severity

For Part objects with inline_data, only bytes data is handled. If data is not bytes (e.g., a base64 string), the condition at line 340 fails and the function falls through to return None, silently dropping the data. However, dict inline_data with string data is preserved via transform_content_part. This asymmetric behavior could lead to silent data loss when processing Part objects with non-bytes inline_data.
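One way to remove the asymmetry is to accept both raw bytes and already-encoded strings before redacting. A minimal sketch, assuming the `BLOB_DATA_SUBSTITUTE` value and the helper name `redact_inline_data` (both hypothetical):

```python
BLOB_DATA_SUBSTITUTE = "[Filtered]"  # assumed placeholder value

def redact_inline_data(data, mime_type):
    # Treat base64 strings like bytes instead of silently dropping them,
    # matching how dict inline_data with string data is handled.
    if isinstance(data, (bytes, bytearray, str)):
        return {
            "type": "blob",
            "mime_type": mime_type,
            "content": BLOB_DATA_SUBSTITUTE,
        }
    return None  # genuinely unknown payload types are still skipped
```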

