Model Inference#

Training and developing your own AI models are essential parts of an AI workflow. Inference is the complementary step: applying a trained model to new data to solve problems, make predictions, and process inputs. You can use the platform for all of these use cases.

In short, an AI model is first trained on data so it can learn patterns. During inference, the model applies those learned patterns to new inputs and generates predictions, classifications, or other responses. The results can then be used to make decisions and power services.
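
To make the train-then-infer distinction concrete, here is a minimal sketch using scikit-learn as a stand-in. AMD AI Workbench is not tied to this library, and the data, model, and values below are purely illustrative.

```python
# Minimal sketch of the train-then-infer pattern (illustrative only).
from sklearn.linear_model import LogisticRegression

# Training: the model learns patterns from labeled data.
X_train = [[0.1], [0.4], [0.8], [0.9]]
y_train = [0, 0, 1, 1]
model = LogisticRegression().fit(X_train, y_train)

# Inference: the trained model is applied to new, unseen inputs
# to produce predictions that downstream services can act on.
X_new = [[0.2], [0.85]]
print(model.predict(X_new))  # e.g. [0 1]
```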

The platform's features are designed to be user-friendly, with a focus on experimenting with and deploying AI models.

Chat#

The Chat page lets you experiment with the models you have access to. You can adjust several parameters to see how they affect the model's responses, test and compare chat models, and quickly switch between views. For more details, see the Chat or Compare pages.
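
As a rough illustration of the kind of parameters the Chat page exposes, the sketch below sends a chat request through an OpenAI-compatible Python client. The endpoint URL, API key, and model name are placeholders, not actual Workbench values; the same sampling parameters (temperature, top_p, max_tokens) are what you would tune in the UI.

```python
# Illustrative chat request with adjustable sampling parameters.
# base_url, api_key, and model are placeholders; substitute the values
# for your own endpoint and model.
from openai import OpenAI

client = OpenAI(
    base_url="https://example.com/v1",  # hypothetical endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="my-chat-model",  # hypothetical model name
    messages=[{"role": "user", "content": "Summarize what inference means."}],
    temperature=0.7,   # higher values make responses more varied
    top_p=0.9,         # nucleus sampling cutoff
    max_tokens=256,    # upper bound on the length of the reply
)
print(response.choices[0].message.content)
```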

Model deployment and inference#

Read our tutorial on how to deploy a model and run inference to get started quickly with these features.

Other tutorials and examples#