PEFT

Training & Fine-tuning

Parameter-efficient fine-tuning methods by Hugging Face.

18k stars · 1.8k forks · Python

About

PEFT provides LoRA, Prefix Tuning, IA³, and other methods that reduce the compute and memory cost of adapting large models to new tasks.

Key Features

  • LoRA
  • Prefix tuning
  • Adapter layers
  • Quantization-aware training
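To make the core idea behind LoRA concrete, here is a minimal NumPy sketch (not the peft API itself): rather than updating a full weight matrix W, LoRA trains a low-rank update B @ A and adds it to the frozen weight, which shrinks the trainable parameter count dramatically. All dimensions below are illustrative assumptions.

```python
import numpy as np

# Sketch of the LoRA idea: a frozen weight W (d_out x d_in) plus a trainable
# low-rank update B @ A with rank r << min(d_out, d_in). The effective weight
# is W + (alpha / r) * B @ A.

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 768, 768, 8, 16   # illustrative sizes

W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))                   # trainable up-projection, zero init

def lora_forward(x):
    # Base path plus scaled low-rank path; because B is zero at init,
    # the adapted model starts out identical to the frozen one.
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

full_params = d_out * d_in          # what full fine-tuning would train
lora_params = r * (d_in + d_out)    # what LoRA trains instead
print(f"full fine-tune params: {full_params:,}")
print(f"LoRA params:           {lora_params:,} "
      f"({100 * lora_params / full_params:.1f}% of full)")
```

With these sizes LoRA trains about 2% of the parameters of a full fine-tune; in the actual library, `LoraConfig` and `get_peft_model` wrap this pattern around selected submodules of a pretrained model.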

Tags

LoRA · Fine-tuning · Efficient · Hugging Face