Conversation

@NicoGrande NicoGrande commented Jan 20, 2026

Description

Adds optional support for chat templates in vllm_decode.py. With the template applied, MaxText-on-vLLM decoding produces responses much closer to those of the native vLLM model implementation.
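As a rough illustration of what enabling `--use_chat_template` changes, the sketch below shows the effect of wrapping a raw prompt in a ChatML-style conversation turn (the framing Qwen3 models expect). This is a hypothetical, simplified stand-in; the actual vllm_decode.py change presumably delegates to the tokenizer's own chat template rather than hard-coding the framing, and the function and flag names here are illustrative only.

```python
# Illustrative sketch only: mimics what a chat template does to a raw prompt.
# The real implementation would use the tokenizer's bundled template
# (e.g. Hugging Face's tokenizer.apply_chat_template); this hard-coded
# ChatML framing is a hypothetical stand-in for explanation purposes.

def format_prompt(prompt: str, use_chat_template: bool) -> str:
  """Optionally wrap a raw user prompt in a ChatML-style user turn."""
  if not use_chat_template:
    return prompt  # previous behavior: feed the raw prompt directly
  # ChatML framing: a user turn followed by an opened assistant turn,
  # so the model continues generation as the assistant.
  return (
      "<|im_start|>user\n"
      f"{prompt}<|im_end|>\n"
      "<|im_start|>assistant\n"
  )

formatted = format_prompt("Suggest some famous landmarks in London.", True)
print(formatted)
```

Without the template, an instruction-tuned model sees a bare completion prompt and may drift from the behavior of the reference vLLM setup, which is why the diff against the native model shrinks when the template is enabled.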

FIXES: b/476253050

Tests

Running the command below produced the following diff with respect to the native vLLM model:

python3 -m MaxText.vllm_decode --model_name=qwen3-8b --load_parameters_path=$CHECKPOINT_PATH --tokenizer_path=Qwen/Qwen3-8B --hf_model_name=Qwen/Qwen3-8B --max_target_length=1024  --prompt="Suggest some famous landmarks in London." --decode_sampling_temperature=0.0 --decode_sampling_nucleus_p=1.0 --decode_sampling_top_k=0 --hf_config_path=src/MaxText/integration/vllm/maxtext_vllm_adapter --use_chat_template=true

https://diff.googleplex.com/#key=j8DNmAFF934Q

Checklist

Before submitting this PR, please make sure (put X in square brackets):

  • I have performed a self-review of my code. For an optional AI review, add the gemini-review label.
  • I have necessary comments in my code, particularly in hard-to-understand areas.
  • I have run end-to-end tests and provided workload links above if applicable.
  • I have made or will make corresponding changes to the doc if needed, including adding new documentation pages to the relevant Table of Contents (toctree directive) as explained in our documentation.

codecov bot commented Jan 20, 2026

Codecov Report

❌ Patch coverage is 0% with 28 lines in your changes missing coverage. Please review.

Files with missing lines Patch % Lines
src/MaxText/vllm_decode.py 0.00% 28 Missing ⚠️

