# LoRA Training on AMD

## Overview

### `shell.nix`

Provides a reproducible development environment via Nix. It handles:
- Installing PyTorch optimized for AMD ROCm (`rocm7.2`).
- Installing `unsloth` and `unsloth-zoo` for efficient fine-tuning.
- Installing `marimo` and `ipython` as quality-of-life tools for interactive development.

### `train.py`

A `marimo` script that executes the fine-tuning process:
- Loads the `unsloth/Qwen3.5-0.8B` model in 16-bit precision.
- Prepares a sample dataset (`unified_chip2.jsonl` from `laion/OIG`).
- Configures Parameter-Efficient Fine-Tuning (PEFT) with LoRA at rank 16.
- Sets up an `SFTTrainer` with the 8-bit AdamW optimizer, trains the model for 100 steps, and saves results to `outputs_qwen35`.
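The dataset-preparation step reshapes each raw record into a prompt/response pair before training. A minimal sketch of that idea, assuming each `unified_chip2` record carries a single text field with `<human>:` / `<bot>:` turns (the helper name and field layout are assumptions, not read from `train.py`):

```python
def split_chip2_turns(text: str) -> dict:
    """Split a unified_chip2-style record into prompt and response.

    Hypothetical helper: assumes the record interleaves turns as
    "<human>: ...\n<bot>: ...". Not taken from train.py itself.
    """
    human_tag, bot_tag = "<human>: ", "<bot>: "
    # Everything after the first bot tag is the response; the rest is the prompt.
    prompt_part, _, response = text.partition(bot_tag)
    prompt = prompt_part.replace(human_tag, "", 1).strip()
    return {"prompt": prompt, "response": response.strip()}


record = "<human>: What does LoRA stand for?\n<bot>: Low-Rank Adaptation."
pair = split_chip2_turns(record)
# pair["prompt"]   -> "What does LoRA stand for?"
# pair["response"] -> "Low-Rank Adaptation."
```

In practice this kind of function is mapped over the dataset (e.g. with `datasets.Dataset.map`) before being handed to the trainer.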
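The rank chosen for LoRA directly sets how many trainable parameters each adapted weight matrix gains. A quick back-of-the-envelope sketch (the dimensions below are illustrative, not read from the model config):

```python
def lora_param_count(d_in: int, d_out: int, rank: int) -> int:
    # Each adapted weight W (d_out x d_in) is augmented with two low-rank
    # factors A (rank x d_in) and B (d_out x rank), adding
    # rank * (d_in + d_out) trainable parameters while W stays frozen.
    return rank * (d_in + d_out)


# Illustrative: one square 1024x1024 projection adapted at rank 16
count = lora_param_count(1024, 1024, 16)  # 16 * 2048 = 32768
```

This is why rank-16 adapters stay tiny relative to the base model: the adapter cost grows linearly with rank and with the layer dimensions, not with their product.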