Large Language Models
Inference engines, frameworks, and tooling for LLM-powered applications.
4 resources available
🦜 LangChain
Framework for building LLM-powered applications using composable primitives.
Tags: LLM, RAG, Agents
Python / TypeScript · 98k stars
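As a sketch of the "composable primitives" style, a prompt, a chat model, and an output parser can be piped into a single chain (assumes the `langchain-core` and `langchain-openai` packages and an OpenAI API key; the model name is illustrative):

```python
# Minimal LangChain chain: prompt | model | parser.
# Assumes OPENAI_API_KEY is set; "gpt-4o-mini" is an illustrative model name.
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template("Summarize in one sentence: {text}")
chain = prompt | ChatOpenAI(model="gpt-4o-mini") | StrOutputParser()

# Each primitive implements the same Runnable interface, so they compose with |.
print(chain.invoke({"text": "LangChain composes prompts, models, and parsers."}))
```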
🦙 llama.cpp
Run LLaMA-class models efficiently on consumer hardware.
Tags: LLaMA, Inference, Local LLM
C++ · 72k stars
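From Python, llama.cpp is commonly driven through the community `llama-cpp-python` bindings; a minimal sketch, assuming a quantized GGUF model file is already downloaded (the path is a placeholder):

```python
# Load a local GGUF model via llama.cpp bindings and run one completion.
# The model path is a placeholder assumption; any GGUF file works.
from llama_cpp import Llama

llm = Llama(model_path="./models/llama-3-8b-instruct.Q4_K_M.gguf", n_ctx=2048)
out = llm("Q: What is quantization? A:", max_tokens=64, stop=["Q:"])
print(out["choices"][0]["text"])
```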
🦙 Ollama
Simple local LLM runtime with a clean developer experience.
Tags: Local LLM, Inference, CLI
Go · 105k stars
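A minimal sketch using the official `ollama` Python client, assuming the Ollama daemon is running locally and the model has already been pulled (e.g. with `ollama pull llama3`):

```python
# Chat against a locally running Ollama daemon.
# Assumes the "llama3" model has been pulled beforehand.
import ollama

resp = ollama.chat(
    model="llama3",
    messages=[{"role": "user", "content": "Explain local inference briefly."}],
)
print(resp["message"]["content"])
```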
⚡ vLLM
High-throughput, memory-efficient LLM serving engine.
Tags: Inference, Serving, Performance
Python · 25k stars
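Beyond its OpenAI-compatible server, vLLM exposes an offline batch API; a sketch assuming a CUDA GPU and network access to download weights (the model name is illustrative):

```python
# Offline batch generation with vLLM's Python API.
# "facebook/opt-125m" is a small illustrative model; any HF model ID works.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")
params = SamplingParams(temperature=0.8, max_tokens=64)

# generate() batches prompts through PagedAttention-backed inference.
for out in llm.generate(["What does an LLM serving engine do?"], params):
    print(out.outputs[0].text)
```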