Large Language Models
Inference engines, frameworks, and tooling for LLM-powered applications.
7 resources available
🦜 LangChain
Framework for building LLM-powered applications using composable primitives.
Tags: LLM, RAG, Agents
Python / TypeScript · 98k stars
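The "composable primitives" idea can be sketched in plain Python: a prompt template, a model call, and an output parser are each small callables chained into one pipeline. This is an illustration of the concept only, not LangChain's actual API; all names below are invented.

```python
# Illustrative sketch (NOT LangChain's API): prompt template -> model ->
# output parser, composed left to right as interchangeable steps.
def prompt_template(topic):
    # Build the prompt string from user input.
    return f"List one fact about {topic}."

def fake_model(prompt):
    # Stand-in for an LLM call; returns a canned, messy reply.
    return "  Paris is the capital of France.  "

def output_parser(text):
    # Normalize the raw model output.
    return text.strip()

def pipeline(value, steps):
    # Compose the primitives: each step's output feeds the next.
    for step in steps:
        value = step(value)
    return value

result = pipeline("France", [prompt_template, fake_model, output_parser])
```

Because every step shares the same call shape, swapping the parser or model is a one-line change, which is the appeal of building from primitives.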
🦙 llama.cpp
Run LLaMA-class models efficiently on consumer hardware.
Tags: LLaMA, Inference, Local LLM
C++ · 72k stars
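A key trick behind running large models on consumer hardware is weight quantization: storing weights as small integers plus a scale factor. The toy sketch below shows symmetric int8 quantization; llama.cpp's real GGUF block formats are more elaborate, and this code is only the general idea.

```python
# Toy symmetric int8 quantization (NOT llama.cpp's actual scheme):
# floats are mapped to integers in [-127, 127] plus one scale factor,
# cutting memory to roughly a quarter of float32.
def quantize_int8(weights):
    # Scale so the largest-magnitude weight maps to +/-127.
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    # Recover approximate floats at inference time.
    return [x * scale for x in q]

weights = [0.12, -0.5, 0.33, 1.27]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
```

The recovered values are close to the originals; the small rounding error is the price paid for the memory savings.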
🦙 Ollama
Simple local LLM runtime with a clean developer experience.
Tags: Local LLM, Inference, CLI
Go · 105k stars
⚡ vLLM
High-throughput, memory-efficient LLM serving engine.
Tags: Inference, Serving, Performance
Python · 25k stars
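One ingredient of high-throughput serving is batching: running many requests through a single forward pass amortizes per-step overhead. The sketch below shows only static batching with invented names; vLLM itself goes much further (continuous batching, PagedAttention), and this is not its API.

```python
# Toy static batching (NOT vLLM's API): group requests so one
# "forward pass" serves many of them at once.
def fake_batched_forward(prompts):
    # Stand-in for one batched model forward pass.
    return [f"reply to: {p}" for p in prompts]

def serve(requests, batch_size=4):
    # Process requests in groups instead of one at a time.
    replies = []
    for i in range(0, len(requests), batch_size):
        batch = requests[i:i + batch_size]
        replies.extend(fake_batched_forward(batch))
    return replies

out = serve([f"q{n}" for n in range(6)], batch_size=4)
# Six requests handled in two batched passes instead of six single ones.
```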
🔀 LiteLLM
Unified API interface for 100+ LLM providers.
Tags: LLM, API, Multi-provider
Python · 15k stars
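The unified-API pattern can be sketched as one call signature with a provider prefix in the model name selecting the backend. Everything below (the `completion` function, the provider table, the model name) is invented for illustration and is not LiteLLM's internals.

```python
# Hypothetical sketch of a unified multi-provider interface:
# "provider/model-name" picks the backend; the call shape never changes.
PROVIDERS = {
    "openai": lambda model, prompt: f"[openai:{model}] {prompt}",
    "anthropic": lambda model, prompt: f"[anthropic:{model}] {prompt}",
}

def completion(model, prompt):
    # Split "provider/model" and dispatch to the matching backend.
    provider, _, name = model.partition("/")
    if provider not in PROVIDERS:
        raise ValueError(f"unknown provider: {provider}")
    return PROVIDERS[provider](name, prompt)

reply = completion("openai/gpt-x", "hello")
```

Application code depends only on `completion`, so switching providers is a change to the model string, not to the call sites.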
🎯 Guidance
Constrained generation and structured output for LLMs.
Tags: Structured output, JSON, Constrained generation
Python · 19k stars
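The goal of structured output is model text that reliably matches a schema. Guidance enforces this during decoding by constraining which tokens may be generated; the much simpler sketch below only checks the output after the fact, and its function and field names are invented.

```python
import json

# Post-hoc structure check (far weaker than Guidance's constrained
# decoding): parse the output as JSON and verify required fields exist.
def check_structure(raw, required_keys):
    data = json.loads(raw)  # raises if the output is not valid JSON
    missing = [k for k in required_keys if k not in data]
    if missing:
        raise ValueError(f"missing keys: {missing}")
    return data

reply = '{"name": "Ada", "age": 36}'
record = check_structure(reply, ["name", "age"])
```

Constrained decoding removes the failure mode entirely, instead of detecting it and retrying, which is why it is preferred for production pipelines.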
DSPy
Programming framework for self-improving LLM pipelines.
Tags: Prompt optimization, Agents, RAG
Python · 19k stars
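Prompt optimization, the idea behind self-improving pipelines, can be sketched as a search: score candidate prompts on a small dev set and keep the best. DSPy's real optimizers are far more sophisticated; the model, prompts, and data below are all invented for illustration.

```python
# Toy prompt-optimization loop (NOT DSPy's actual optimizers).
def fake_llm(prompt, question):
    # Stand-in model: only the more verbose prompt answers correctly.
    return "4" if "step by step" in prompt and question == "2+2?" else "?"

def score(prompt, dev_set):
    # Fraction of dev examples the prompted model gets right.
    hits = sum(fake_llm(prompt, q) == a for q, a in dev_set)
    return hits / len(dev_set)

def optimize(candidates, dev_set):
    # Keep the candidate prompt with the best dev-set score.
    return max(candidates, key=lambda p: score(p, dev_set))

dev = [("2+2?", "4")]
best = optimize(["Answer:", "Think step by step, then answer:"], dev)
```

The pipeline "improves itself" in the sense that the prompt is chosen by measurement rather than written by hand.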