Setting Up Ollama with Zed IDE
Ollama
To set up Ollama using Docker, run the following command:
docker run -d --gpus=all -v ./ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
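If the container started correctly, the Ollama API should answer on port 11434. A quick way to check (assuming the default port mapping in the command above) is to query the version endpoint:
curl http://localhost:11434/api/version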
Download pre-trained models from the Ollama website with this command:
docker exec -it ollama ollama pull [Model name available on ollama website]
For example, to download llama3.2:latest, run:
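docker exec -it ollama ollama pull llama3.2:latest
To confirm the download finished, you can list the models installed in the container:
docker exec -it ollama ollama list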
Zed
To configure Zed for Ollama, open the settings.json file by pressing Command + , (comma).
Add or update the following sections as needed:
Replace your-machine-ip with 127.0.0.1 if you're running Ollama locally, or with your remote machine's IP address.
//...
"language_models": {
  "ollama": {
    "api_url": "http://your-machine-ip:11434",
    "low_speed_timeout_in_seconds": 30
  }
},
"assistant": {
  "default_model": {
    "provider": "ollama",
    "model": "llama3.2:latest"
  },
  "version": "2",
  "enabled": true,
  "dock": "right",
  "provider": null
}
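Before restarting Zed, it's worth verifying that the machine running Zed can actually reach the Ollama API. One way is to request the list of installed models from the configured api_url:
curl http://your-machine-ip:11434/api/tags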
My Experience
I've had success using mistral-nemo for refactoring, optimizing, and writing JSDoc for TypeScript code. It works quickly and is usually reliable.
Both the llama3.1 and llama3.2 models are great for rephrasing and summarizing content, much like the JSDoc use case above.
To open the assistant panel in Zed (docked on the right, per the config above), press Command + R.
To start working with your current document, type /tab in the assistant panel to load the file into the context. The file is read once when /tab runs, so if you change it afterwards, you'll need to run /tab again to reload it.