First, make sure Docker is running:
```bash
docker ps # this should exit successfully
```

Then pull our latest image [here](https://github.com/opendevin/OpenDevin/pkgs/container/sandbox):
```bash
docker pull ghcr.io/opendevin/sandbox:v0.1
```

Then copy `config.toml.template` to `config.toml` and add an API key to it.
(See below for how to use different models.)
```toml
OPENAI_API_KEY="..."
WORKSPACE_DIR="..."
```

Next, start the backend.
We manage Python packages and the virtual environment with `pipenv`.
Make sure you have Python >= 3.10.
```bash
python -m pip install pipenv
pipenv install -v
pipenv shell

python -m pip install -r requirements.txt
uvicorn opendevin.server.listen:app --port 3000
```

Then, in a second terminal, start the frontend:
```bash
cd frontend
npm install
npm start
```

### Picking a Model
We use LiteLLM, so you can run OpenDevin with any foundation model, including OpenAI, Claude, and Gemini.
LiteLLM has a [full list of providers](https://docs.litellm.ai/docs/providers).

To change the model, set `LLM_MODEL` and `LLM_API_KEY` in `config.toml`.

For example, to run Claude:
```toml
LLM_API_KEY="your-api-key"
LLM_MODEL="claude-3-opus-20240229"
```

You can also set the base URL for local/custom models:
```toml
LLM_BASE_URL="https://localhost:3000"
```

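As a sketch of how these settings flow into LiteLLM, here is a hypothetical helper (not part of OpenDevin) that maps `config.toml` entries onto the keyword arguments `litellm.completion()` accepts:

```python
def llm_kwargs(config: dict) -> dict:
    """Map config.toml entries onto litellm.completion() keyword arguments.

    Hypothetical helper for illustration; key names follow the examples above.
    """
    kwargs = {
        "model": config["LLM_MODEL"],
        "api_key": config["LLM_API_KEY"],
    }
    if "LLM_BASE_URL" in config:
        kwargs["api_base"] = config["LLM_BASE_URL"]  # LiteLLM's name for a custom endpoint
    return kwargs

# Usage sketch: litellm.completion(messages=[...], **llm_kwargs(config))
```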
And you can customize which embeddings are used for the vector database storage:
```toml
LLM_EMBEDDING_MODEL="llama2" # can be "llama2", "openai", "azureopenai", or "local"
```

### Running the app