# LM Studio Quick Start
## Install LM Studio
- Visit https://lmstudio.ai/ and download the installer for your platform (Windows, macOS, Linux)
**System Requirements:**
- Windows/Linux: 24GB+ VRAM recommended, or a comparable amount of unified memory on an ARM system
- An Apple Silicon Mac with 24GB+ of unified memory is also a viable option.
> [!NOTE]
> Just as with Ollama, the amount, type, and speed of memory your LLM is loaded into (alongside other system specs) will greatly affect not only translation speed, but also which model (and its capabilities) you can feasibly run.
## Download and Load a Model
- Launch LM Studio and use the search page (Purple button) to find a model that will work with your system (e.g., "qwen3-32b"). Reasoning models above 14B parameters are the bare minimum for small blueprint graphs.
- Click the model and download an appropriate quantization (Q4_K_M offers a good quality/size balance).
- If you're not sure which one to select, follow LM Studio's recommendations, as it will try to auto-detect what your system can run.
- Once you have finished downloading the model, click the `Select a model to load` button at the top-center, and then toggle on `Manually choose model load parameters` at the bottom of that dialogue.
- When you click on the model you want to load, make sure you set the context length to at least 8000 tokens -- the higher you can go the better, as this determines how many blueprint nodes you can translate in a single request (a rough way to estimate this is sketched below).
- Click the `Remember settings for ...` checkbox at the bottom left so you don't have to set this again.
- You have to click the `Load Model` button for the `Remember settings for ...` setting to save.
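If you're unsure whether a given context length is enough, a quick heuristic is to estimate tokens from character count before translating. This is a minimal sketch, not the model's real tokenizer: the ~4 characters-per-token ratio and the 2000-token output reserve are illustrative assumptions.

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate using the common ~4 characters per token heuristic."""
    return len(text) // 4

def fits_in_context(blueprint_text: str, context_length: int = 8000,
                    reserve_for_output: int = 2000) -> bool:
    """Guess whether a serialized blueprint fits, leaving room for the reply."""
    return estimate_tokens(blueprint_text) <= context_length - reserve_for_output

# Example: paste your exported blueprint/graph text here.
graph_text = "..."
print(f"~{estimate_tokens(graph_text)} tokens")
print("Likely fits in an 8000-token context:", fits_in_context(graph_text))
```

If a graph doesn't fit, raise the context length (memory permitting) or translate a smaller selection of nodes per request.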
## Start the Local Server

**Option A: Through the LM Studio app's UI**
- Make sure you have `Power User` enabled at the bottom left of the window.
- Go to the `Developer` page (Green button).
- Click the toggle at the top left to start the server if it is not already started. You should see `Status: Running`.
- Click the `Settings` button to the right of that toggle and make sure `Just-In-Time Model Loading` is enabled.
- Make sure your local server address is `http://127.0.0.1:1234` at the top right of the window.
**Option B: Use the CLI**
- Run `lms server start` (requires CLI bootstrap: `~/.lmstudio/bin/lms bootstrap`)
> [!NOTE]
> By default, the server will run on http://localhost:1234
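Once the server is running (via either option), you can confirm it is reachable before configuring the plugin. The sketch below assumes the default endpoint and uses only the Python standard library; LM Studio serves an OpenAI-compatible API, and `GET /v1/models` lists the exact identifiers of the models it can load.

```python
import json
import urllib.request

# Default LM Studio endpoint; change this if you moved the server.
ENDPOINT = "http://localhost:1234"

# GET /v1/models returns the models LM Studio can serve.
with urllib.request.urlopen(f"{ENDPOINT}/v1/models") as response:
    models = json.load(response)

for model in models["data"]:
    print(model["id"])
```

The printed IDs are also handy for filling in the `Model Name` field in the next step.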
## Configure the Plugin
- In the Unreal Engine Editor, go to Edit → Project Settings → Plugins → Node to Code
- Set LLM Provider to "LM Studio"
- Configure LM Studio Settings:
  - `Model Name`: Enter the exact model name (e.g., "qwen3-32b") that you see in the `Models` (Red button) page of LM Studio. You can also click the `...` button to the right of the model, click `Copy Default Identifier`, and paste it into the `Model Name` field in the plugin settings.
  - `Server Endpoint`: `http://localhost:1234` (default)
  - `Prepended Model Command`: Optional commands like `/no_think` for faster responses with Qwen3, at the expense of potential translation quality degradation (see the request sketch after this list).
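To make the settings above concrete, here is a minimal sketch of the kind of request that ends up at LM Studio's OpenAI-compatible `/v1/chat/completions` endpoint, with a prepended command placed at the front of the prompt. The prompt text, temperature, and the assumption that the prepended command is simply prefixed to the prompt are illustrative; Node to Code's actual prompt is internal to the plugin.

```python
import json
import urllib.request

ENDPOINT = "http://localhost:1234/v1/chat/completions"
MODEL = "qwen3-32b"      # must match an ID listed by /v1/models
PREPEND = "/no_think"    # optional; trades reasoning depth for speed on Qwen3

# Hypothetical prompt; the plugin's real prompt and blueprint payload differ.
prompt = f"{PREPEND}\nTranslate this Blueprint graph to C++: ..."

request = urllib.request.Request(
    ENDPOINT,
    data=json.dumps({
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }).encode(),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(request) as response:
    reply = json.load(response)

print(reply["choices"][0]["message"]["content"])
```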
## Translate a Blueprint
- Load any Blueprint in the Blueprint Editor
- Click the Node to Code button in the toolbar
- Select "Translate Blueprint"
- Your translation will be processed locally using LM Studio!
## Additional Resources
- LM Studio Documentation - Official setup guides and API reference
- LM Studio CLI Guide - Command-line interface documentation