Commit 7cbf2d9

Doc: LM Studio guide (OpenHands#2875)

1 parent e45d46c commit 7cbf2d9

1 file changed: 70 additions & 6 deletions

docs/modules/usage/llms/localLLMs.md
````diff
@@ -53,6 +53,7 @@ docker run \
     -e SANDBOX_USER_ID=$(id -u) \
     -e LLM_API_KEY="ollama" \
     -e LLM_BASE_URL="http://host.docker.internal:11434" \
+    -e LLM_OLLAMA_BASE_URL="http://host.docker.internal:11434" \
     -e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE \
     -v $WORKSPACE_BASE:/opt/workspace_base \
     -v /var/run/docker.sock:/var/run/docker.sock \
````
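Before starting the container, it may be worth confirming that Ollama is actually listening where that base URL points. A minimal host-side check, assuming Ollama's default port 11434 and its `/api/tags` model-listing endpoint:

```bash
# On the host: a JSON list of installed models means the base URL is reachable.
curl http://localhost:11434/api/tags
```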
````diff
@@ -68,12 +69,16 @@ Use the instructions in [Development.md](https://github.com/OpenDevin/OpenDevin/
 Make sure `config.toml` is there by running `make setup-config`, which will create one for you. In `config.toml`, enter the following:
 
 ```
-LLM_MODEL="ollama/codellama:7b"
-LLM_API_KEY="ollama"
-LLM_EMBEDDING_MODEL="local"
-LLM_BASE_URL="http://localhost:11434"
-WORKSPACE_BASE="./workspace"
-WORKSPACE_DIR="$(pwd)/workspace"
+[core]
+workspace_base="./workspace"
+
+[llm]
+model="ollama/codellama:7b"
+api_key="ollama"
+embedding_model="local"
+base_url="http://localhost:11434"
+ollama_base_url="http://localhost:11434"
+
 ```
 
 Replace `LLM_MODEL` with the model of your choice if you need to.
````
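To check that a dev setup with this `config.toml` can reach the named model, the guide's own debugging call (from the troubleshooting section further down) can be run directly against the host. This assumes `codellama:7b` is the installed model; substitute whatever `ollama list` reports:

```bash
# Expect a streamed JSON response if the model is being served.
curl http://localhost:11434/api/generate -d '{"model":"codellama:7b","prompt":"hi"}'
```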
````diff
@@ -142,3 +147,62 @@ ollama list # get list of installed models
 docker ps # get list of running docker containers; for the most accurate test, choose the OpenDevin sandbox container.
 docker exec [CONTAINER ID] curl http://host.docker.internal:11434/api/generate -d '{"model":"[NAME]","prompt":"hi"}'
 ```
+
+
+# Local LLM with LM Studio
+
+Steps to set up LM Studio:
+1. Open LM Studio.
+2. Go to the Local Server tab.
+3. Click the "Start Server" button.
+4. Select the model you want to use from the dropdown.
+
+
+Set the following configs:
+```bash
+LLM_MODEL="openai/lmstudio"
+LLM_BASE_URL="http://localhost:1234/v1"
+CUSTOM_LLM_PROVIDER="openai"
+```
````
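Once the Local Server is running, a quick smoke test of the base URL may help before wiring it into OpenDevin. A sketch, assuming LM Studio's server exposes the standard OpenAI-style `/v1/models` route:

```bash
# Expect a JSON object listing the currently loaded model(s).
curl http://localhost:1234/v1/models
```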
````diff
+
+### Docker
+
+```bash
+docker run \
+    -it \
+    --pull=always \
+    -e SANDBOX_USER_ID=$(id -u) \
+    -e LLM_MODEL="openai/lmstudio" \
+    -e LLM_BASE_URL="http://host.docker.internal:1234/v1" \
+    -e CUSTOM_LLM_PROVIDER="openai" \
+    -e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE \
+    -v $WORKSPACE_BASE:/opt/workspace_base \
+    -v /var/run/docker.sock:/var/run/docker.sock \
+    -p 3000:3000 \
+    ghcr.io/opendevin/opendevin:main
+```
+
+You should now be able to connect to `http://localhost:3000/`.
````
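If the UI loads but the agent cannot reach the model, the containerized check the guide already uses for Ollama can plausibly be adapted to LM Studio (`[CONTAINER ID]` stays a placeholder, as above):

```bash
docker ps  # find the OpenDevin sandbox container
docker exec [CONTAINER ID] curl http://host.docker.internal:1234/v1/models
```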
````diff
+
+In the development environment, you can set the following configs in the `config.toml` file:
+
+```
+[core]
+workspace_base="./workspace"
+
+[llm]
+model="openai/lmstudio"
+base_url="http://localhost:1234/v1"
+custom_llm_provider="openai"
+```
+
+Done! You can now start Devin with `make run`, without Docker, and should be able to connect to `http://localhost:3000/`.
````
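For an end-to-end check of the OpenAI-compatible route, a chat completion can be requested directly. A sketch; the `model` value here is a placeholder, since LM Studio typically answers with whichever model is currently loaded:

```bash
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model":"lmstudio","messages":[{"role":"user","content":"hi"}]}'
```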
````diff
+
+# Note:
+
+For WSL, run the following commands in cmd to set the networking mode to mirrored:
+
+```
+python -c "print('[wsl2]\nnetworkingMode=mirrored',file=open(r'%UserProfile%\.wslconfig','w'))"
+wsl --shutdown
+```
````
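After WSL restarts, one way to confirm mirrored networking took effect is to hit the Windows-hosted LM Studio server from inside WSL on `localhost` (a sketch; assumes the server from the steps above is still running on port 1234):

```bash
# Run inside WSL; with mirrored networking, localhost reaches services on the Windows host.
curl http://localhost:1234/v1/models
```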
