PEFT
Training & Fine-tuning
Parameter-efficient fine-tuning methods by Hugging Face.
17k stars · 1.7k forks · Python
About
PEFT (Parameter-Efficient Fine-Tuning) provides LoRA, Prefix Tuning, IA³, and other methods that reduce the compute and memory cost of adapting large models.
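To illustrate why these methods are cheap, here is a minimal NumPy sketch of the LoRA idea (the math only, not the PEFT API; all shapes and names below are illustrative assumptions): instead of updating a full d_out × d_in weight W, LoRA trains two small low-rank factors B and A, and the adapted weight is W + (α/r)·BA.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 1024, 1024, 8, 16  # assumed toy dimensions

W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable low-rank factor
B = np.zeros((d_out, r))                   # starts at zero: no change at init

def adapted_forward(x):
    # base path plus the low-rank update, scaled by alpha / r
    return W @ x + (alpha / r) * (B @ (A @ x))

full_params = W.size            # 1,048,576 weights in the full matrix
lora_params = A.size + B.size   # 16,384 trainable weights (~64x fewer)
```

Because B is initialized to zero, the adapted model reproduces the base model exactly at the start of training, and only the small A and B matrices receive gradient updates.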
Key Features
- LoRA
- Prefix tuning
- Adapter layers
- Quantization-aware training
Tags
LoRA · Fine-tuning · Efficient · Hugging Face