Working in AI Workspaces: Hands-On Tutorial#

This hands-on tutorial demonstrates how to work effectively within AMD AI Workbench workspaces using JupyterLab notebooks. You’ll learn how to fine-tune large language models and take advantage of the integrated development environment.

Prerequisites#

  • Access to AMD AI Workbench with an active JupyterLab workspace

  • Basic familiarity with Python and Jupyter notebooks

  • A Hugging Face account (for dataset access; see the authentication sketch below)

For instructions on launching JupyterLab in AI Workbench, see how to deploy and run inference.
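
Because the fine-tuning recipe pulls its dataset (and, if you use the gated Meta checkpoints, the model weights) from Hugging Face, it is convenient to authenticate from inside the notebook before starting. Below is a minimal sketch using the huggingface_hub library, assuming it is available in the workspace image:

```python
# A minimal sketch: authenticate with Hugging Face from inside the notebook.
# Assumes the huggingface_hub package is available in the workspace image.
from huggingface_hub import notebook_login

# Opens an interactive login widget in the notebook and caches the token,
# so subsequent dataset (and gated model) downloads authenticate automatically.
notebook_login()
```

For a non-interactive flow, huggingface_hub.login(token=...) or the HF_TOKEN environment variable can be used instead.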

Fine-Tuning Llama-3.1 8B with LLaMA-Factory#

This tutorial guides you through fine-tuning the Llama-3.1 8B large language model (LLM) on AMD ROCm GPUs using LLaMA-Factory. LLaMA-Factory is a user-friendly, unified framework for training and fine-tuning large language models with minimal setup.
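
To make the workflow concrete, the sketch below shows what a minimal LoRA-based supervised fine-tuning run can look like when driven from a notebook cell. The configuration keys follow common LLaMA-Factory examples, but the model ID, dataset name, and hyperparameters are illustrative placeholders rather than the tutorial's tested values:

```python
# Illustrative only: a minimal LoRA supervised fine-tuning (SFT) configuration
# in the style of LLaMA-Factory's example configs. Assumes LLaMA-Factory has
# already been installed as described in the linked ROCm tutorial.
import subprocess
import yaml  # PyYAML

config = {
    "model_name_or_path": "meta-llama/Llama-3.1-8B-Instruct",  # gated model; needs HF access
    "stage": "sft",                      # supervised fine-tuning
    "do_train": True,
    "finetuning_type": "lora",           # parameter-efficient fine-tuning
    "lora_target": "all",
    "dataset": "identity",               # placeholder: a dataset registered with LLaMA-Factory
    "template": "llama3",
    "cutoff_len": 2048,
    "output_dir": "outputs/llama31-8b-lora-sft",
    "per_device_train_batch_size": 1,
    "gradient_accumulation_steps": 8,
    "learning_rate": 1.0e-4,
    "num_train_epochs": 3.0,
    "bf16": True,                        # bfloat16 is well supported on ROCm GPUs
}

# Write the config and launch training through the LLaMA-Factory CLI,
# keeping the entire run inside the JupyterLab workspace.
with open("llama31_lora_sft.yaml", "w") as f:
    yaml.safe_dump(config, f)

subprocess.run(["llamafactory-cli", "train", "llama31_lora_sft.yaml"], check=True)
```

Running the cell writes the configuration file into the workspace and starts training; checkpoints and logs are written to the output_dir shown above.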

The complete fine-tuning tutorial is available in the ROCm AI Tutorials.

Tip

You can jump directly to step 4 in the “Prepare the training environment” section. AI Workbench provides JupyterLab notebooks out of the box, so no additional environment setup is required.
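
Before launching training, it is still worth confirming that the notebook can see the AMD GPU. ROCm builds of PyTorch expose AMD GPUs through the familiar torch.cuda API, so a quick check (assuming PyTorch is installed in the workspace image) looks like this:

```python
# Quick sanity check that the JupyterLab workspace can see the AMD GPU.
# On ROCm builds of PyTorch, AMD GPUs are exposed through the torch.cuda API.
import torch

print(f"PyTorch: {torch.__version__}")  # ROCm wheels typically carry a '+rocm' suffix

if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        print(f"GPU {i}: {torch.cuda.get_device_name(i)}")
else:
    print("No GPU visible; check that the workspace was launched with a GPU attached.")
```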