Available AIM containers
Qwen
Qwen/Qwen3-32B (stable)
Qwen is the series of large language models and large multimodal models developed by the Qwen Team at Alibaba Group.
Meta-llama
meta-llama/Llama-3.1-405B-Instruct (stable)
Massive instruction-tuned version of Llama 3.1 with 405B parameters for the most demanding tasks.
meta-llama/Llama-3.1-8B-Instruct (stable)
Instruction-tuned version of Llama 3.1 8B optimized for chat and instruction following.
meta-llama/Llama-3.2-1B-Instruct (stable)
Compact instruction-tuned Llama 3.2 model with 1B parameters for edge deployment.
meta-llama/Llama-3.2-3B-Instruct (stable)
Balanced instruction-tuned Llama 3.2 model with 3B parameters.
meta-llama/Llama-3.3-70B-Instruct (preview)
Latest instruction-tuned Llama 3.3 model with 70B parameters and improved performance.
Mistralai
mistralai/Mistral-Small-3.2-24B-Instruct-2506
Mistral-Small-3.2-24B-Instruct-2506 is a minor update of Mistral-Small-3.1-24B-Instruct-2503. Small-3.2 improves in the following areas:
Instruction following: Small-3.2 is better at following precise instructions.
Repetition errors: Small-3.2 produces fewer infinite generations and repetitive answers.
Function calling: Small-3.2's function calling template is more robust.
docker pull docker.io/amdenterpriseai/aim-mistralai-mistral-small-3-2-24b-instruct-2506:0.8.4
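Once a container like the one pulled above is running, inference servers of this kind are commonly queried over an OpenAI-compatible HTTP API. The sketch below builds such a chat-completion request payload; the endpoint URL, port 8000, and the `/v1/chat/completions` route are assumptions for illustration, not documented behavior of these containers.

```python
import json

# Hypothetical endpoint: assumes the running container serves an
# OpenAI-compatible API on localhost:8000 (an assumption, not documented here).
AIM_URL = "http://localhost:8000/v1/chat/completions"

def build_chat_request(model: str, prompt: str, max_tokens: int = 256) -> dict:
    """Build an OpenAI-style chat-completion payload for a served model."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

# Example payload for the Mistral-Small container pulled above.
payload = build_chat_request(
    "mistralai/Mistral-Small-3.2-24B-Instruct-2506",
    "Summarize this model's improvements in one sentence.",
)
body = json.dumps(payload)
```

If the server really does expose this route, the payload could be sent with any HTTP client (e.g. `requests.post(AIM_URL, json=payload)`); verify the actual port and API surface against the container's own documentation.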
mistralai/Mixtral-8x22B-Instruct-v0.1
Mixture-of-experts model with 8 experts of 22B parameters each for efficient scaling.
mistralai/Mixtral-8x7B-Instruct-v0.1 (stable)
The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts.