
Ollama

LLMs

Simple local LLM runtime with a clean developer experience.

120k stars · 10k forks · Go

About

Ollama lets you run, version, and distribute LLMs locally through a simple CLI and a Modelfile format for packaging model configuration. It abstracts hardware acceleration and model setup behind a local REST API, including OpenAI-compatible endpoints.
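A minimal sketch of calling the local REST API from Python, using only the standard library. The `/api/generate` endpoint, the default port 11434, and the `model`/`prompt`/`stream` fields come from Ollama's API; the model name `llama3` is an assumption and should be any model you have already pulled.

```python
import json
from urllib import request

# Default address of a locally running `ollama serve` instance.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_payload(model: str, prompt: str) -> dict:
    # stream=False asks the server for one JSON object
    # instead of a stream of newline-delimited chunks.
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    # POST the payload and return the model's text completion.
    body = json.dumps(build_generate_payload(model, prompt)).encode()
    req = request.Request(
        OLLAMA_URL,
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Inspect the request body without needing a running server.
print(build_generate_payload("llama3", "Why is the sky blue?"))
```

Calling `generate("llama3", "Why is the sky blue?")` against a running `ollama serve` returns the completion text from the `response` field.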

Key Features

  • Local models
  • REST API
  • Modelfiles
  • GPU support
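A minimal Modelfile sketch showing how a model is packaged: the base model name, parameter value, and system prompt below are illustrative assumptions, not defaults.

```
# Base model to build on (must already be available via `ollama pull`).
FROM llama3
# Sampling parameter override.
PARAMETER temperature 0.7
# System prompt baked into the packaged model.
SYSTEM You are a concise technical assistant.
```

Building and running it uses the CLI: `ollama create my-assistant -f Modelfile`, then `ollama run my-assistant`.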

Tags

Local LLM · Inference · CLI · Self-hosted
