Supported GPU Models

Every AIM supports several GPU models at varying support levels, determined by the types of profiles included in that AIM. The profile types, from most to least optimized, are:

  • "optimized": Performance-tuned profiles with benchmarked configurations for specific model/hardware combinations

  • "preview": Performance-tuned profiles that do not reach the same level of performance as "optimized" profiles, intended for early access to new configurations

  • "unoptimized": Basic profiles with default or minimal tuning, suitable as starting points for experimentation

  • "general": Generic profiles applicable across multiple models, providing baseline configurations when model-specific profiles are unavailable

If an AIM contains at least one optimized profile for a specific GPU model, then the support level for that GPU model is also optimized. If there are no optimized profiles but at least one preview profile, then the support level is preview. If there are no optimized or preview profiles but there are model-specific unoptimized profiles, then the support level is unoptimized. Otherwise, the support level is general.
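The precedence rule above can be sketched in a few lines of Python. This is a hypothetical illustration (the function name and input shape are assumptions, not part of any AIM API): given the set of profile types an AIM provides for a GPU model, it returns the resulting support level.

```python
def support_level(profile_types):
    """Return the support level implied by a set of profile types.

    Precedence mirrors the documented rule:
    optimized > preview > unoptimized > general.
    """
    for level in ("optimized", "preview", "unoptimized"):
        if level in profile_types:
            return level
    # No model-specific profiles: fall back to the generic level.
    return "general"

# Example: an AIM with preview and general profiles for a GPU model
# has preview support for that GPU model.
print(support_level({"preview", "general"}))  # preview
```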

The supported GPU models and their support levels for each AIM are based on the latest public release and are summarized in the table below.

| # | AIM | MI250X | MI300X | MI325X | MI350X | MI355X |
|---|-----|--------|--------|--------|--------|--------|
| 1 | CohereLabs/command-a-reasoning-08-2025 | unoptimized | optimized | unoptimized | optimized | optimized |
| 2 | Qwen/Qwen3-235B-A22B | general | optimized | unoptimized | optimized | optimized |
| 3 | Qwen/Qwen3-32B | general | optimized | unoptimized | optimized | optimized |
| 4 | Qwen/Qwen3-Coder-Next | general | optimized | unoptimized | optimized | optimized |
| 5 | Qwen/Qwen3-VL-235B-A22B-Instruct | general | optimized | unoptimized | optimized | optimized |
| 6 | Qwen/Qwen3-VL-235B-A22B-Thinking | general | optimized | unoptimized | optimized | optimized |
| 7 | deepseek-ai/DeepSeek-R1 | general | optimized | unoptimized | optimized | optimized |
| 8 | deepseek-ai/DeepSeek-R1-0528 | general | optimized | unoptimized | optimized | optimized |
| 9 | deepseek-ai/DeepSeek-V3.1 | general | optimized | unoptimized | optimized | optimized |
| 10 | deepseek-ai/DeepSeek-V3.1-Terminus | general | optimized | unoptimized | optimized | optimized |
| 11 | google/gemma-3-27b-it | general | optimized | unoptimized | optimized | optimized |
| 12 | meta-llama/Llama-3.1-405B-Instruct | general | optimized | unoptimized | optimized | optimized |
| 13 | meta-llama/Llama-3.1-8B-Instruct | general | optimized | unoptimized | optimized | optimized |
| 14 | meta-llama/Llama-3.2-1B-Instruct | general | preview | unoptimized | optimized | optimized |
| 15 | meta-llama/Llama-3.2-3B-Instruct | general | optimized | unoptimized | optimized | optimized |
| 16 | meta-llama/Llama-3.3-70B-Instruct | general | optimized | unoptimized | optimized | optimized |
| 17 | mistralai/Ministral-3-14B-Instruct-2512 | general | unoptimized | unoptimized | optimized | optimized |
| 18 | mistralai/Ministral-3-14B-Reasoning-2512 | general | unoptimized | unoptimized | optimized | optimized |
| 19 | mistralai/Mistral-Large-3-675B-Instruct-2512 | general | optimized | unoptimized | optimized | optimized |
| 20 | mistralai/Mistral-Small-3.2-24B-Instruct-2506 | general | optimized | unoptimized | optimized | optimized |
| 21 | mistralai/Mixtral-8x22B-Instruct-v0.1 | general | optimized | unoptimized | optimized | optimized |
| 22 | mistralai/Mixtral-8x7B-Instruct-v0.1 | general | optimized | unoptimized | optimized | optimized |
| 23 | openai/gpt-oss-120b | unoptimized | optimized | unoptimized | optimized | optimized |
| 24 | openai/gpt-oss-20b | unoptimized | optimized | unoptimized | optimized | optimized |

The table should be read as follows:

  • The AIM column links to each AIM's Docker images, which are publicly available on Docker Hub.

  • The GPU model columns (MI250X, MI300X, MI325X, MI350X, MI355X, …) show the support level for that GPU model in the given AIM.