Create prompt development workflow
Prompt development, sometimes called prompt engineering, is the process of designing and refining inputs to guide AI models, such as large language models (LLMs), to produce high-quality, task-specific outputs. A prompt development workflow begins with uploading a text dataset, selecting an LLM, and crafting system and/or user prompts to tailor the model's response. With prompt versioning, users can iterate on their designs, enabling continuous improvement of AI-driven results.
Prerequisite
Enable required models via the Foundation Model Suite to set up and manage external models. Learn more about using external models.
Create a prompt development workflow
Upload input dataset
- Navigate to the Datasets page.
- Select Upload new dataset.
- Select the train split when uploading.
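The upload steps above assume your dataset carries a split column so the train split can be selected. As an illustrative sketch only (the column names here are assumptions, not the platform's required schema), a CSV input dataset might look like this:

```python
# Illustrative only: a CSV input dataset with a "split" column, so the
# train split can be chosen at upload time. Column names are assumptions.
import csv
import io

data = """text,label,split
The battery dies fast,negative,train
Great screen!,positive,train
Camera is blurry,negative,valid
"""

rows = list(csv.DictReader(io.StringIO(data)))
# Keep only the rows belonging to the train split.
train_rows = [r for r in rows if r["split"] == "train"]
```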
Create a prompt workflow
- Go to the Prompts page.
- Select Create Prompt.
- Name your workflow.
- Associate it with an input dataset.
You will see the initial prompt workflow page.
Select model
- Choose an LLM from the dropdown menu.
- Switch models as needed to optimize results.
Enter and run prompts
- Configure your prompts using system prompts, user prompts, or both as needed. Toggle between the System and User prompt tabs to make changes to each.
For more about prompts, see Prompt development overview.
- Run your prompt on the entire dataset or a subset of it.
- Review responses for each input in single data point or table view.
- To improve responses, iterate on your prompts or LLM settings and re-run the workflow.
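The run step above combines the system prompt, the user prompt, and each dataset row into a single model call. The following is a minimal sketch of that flow with the model call stubbed out; the function names and message format are illustrative assumptions, not the platform's actual API:

```python
# Sketch of a prompt run: a system prompt and a user prompt template are
# combined with each dataset row. The model function is stubbed; in a real
# workflow the platform sends these messages to the selected LLM.

def build_messages(system_prompt, user_template, row):
    """Assemble the chat messages sent to the LLM for one data point."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_template.format(**row)},
    ]

def run_prompt(dataset, system_prompt, user_template, model_fn, subset=None):
    """Run the prompt over the full dataset, or only the given row indices."""
    rows = dataset if subset is None else [dataset[i] for i in subset]
    return [model_fn(build_messages(system_prompt, user_template, r)) for r in rows]

# Stubbed model that just echoes the user message it received.
dataset = [{"text": "The battery dies fast."}, {"text": "Great screen!"}]
stub_model = lambda messages: f"LLM saw: {messages[-1]['content']}"

responses = run_prompt(
    dataset,
    system_prompt="Classify the review sentiment as positive or negative.",
    user_template="Review: {text}",
    model_fn=stub_model,
    subset=[0],  # run on a subset first to iterate quickly
)
```

Running on a subset first keeps iteration cheap; once a prompt looks promising, re-run it over the full dataset.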
Manage prompt versions
Select the time back icon to access version history and compare prompt versions, runs, and responses over time.
Favorite and rename prompt versions
Use the prompt versioning feature to star and assign custom names to prompt versions, making it easier to compare prompts and responses over time.
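Conceptually, a version history snapshots each prompt and model you run, and lets you star and rename the promising ones. The sketch below is a hypothetical illustration of that bookkeeping; the class, field names, and model name are assumptions, not the platform's actual data model:

```python
# Hypothetical sketch of what a prompt version history tracks: each run
# snapshots the prompt text and model, and versions can be starred and
# renamed to make later comparisons easier.

class PromptVersionHistory:
    def __init__(self):
        self.versions = []

    def save(self, system_prompt, user_prompt, model):
        """Snapshot a prompt configuration and return its version id."""
        self.versions.append({
            "id": len(self.versions) + 1,
            "system_prompt": system_prompt,
            "user_prompt": user_prompt,
            "model": model,
            "starred": False,
            "name": None,
        })
        return self.versions[-1]["id"]

    def star(self, version_id):
        # Favorite a promising version.
        self.versions[version_id - 1]["starred"] = True

    def rename(self, version_id, name):
        # Give the version a memorable custom name.
        self.versions[version_id - 1]["name"] = name

    def starred(self):
        # Shortlist of favorites for side-by-side comparison.
        return [v for v in self.versions if v["starred"]]

history = PromptVersionHistory()
v1 = history.save("Be concise.", "Summarize: {text}", "example-model")
v2 = history.save("Be concise and factual.", "Summarize: {text}", "example-model")
history.star(v2)
history.rename(v2, "concise-factual")
```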

Incorporate SME feedback to improve prompts
- Select the Create new batch icon to send a batch of prompts and responses to SMEs for annotation.
- View ground truth provided by SMEs directly from the Develop prompt page to improve your prompts. The SME feedback is displayed next to the LLM response, and you can view ground truth provided on both the input dataset and the LLM responses.
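One common way to act on SME feedback is to compare each LLM response against the SME-provided label and focus prompt iteration on the disagreements. The sketch below illustrates that idea; the record fields are assumptions, not the platform's actual schema:

```python
# Hedged sketch: surface the data points where the LLM response disagrees
# with the SME-provided ground truth, so prompt iteration can target them.

def find_mismatches(records):
    """Return records where the LLM response disagrees with the SME label."""
    return [r for r in records if r["llm_response"] != r["sme_label"]]

records = [
    {"input": "Battery dies fast", "llm_response": "negative", "sme_label": "negative"},
    {"input": "Screen is okay I guess", "llm_response": "positive", "sme_label": "negative"},
]
to_review = find_mismatches(records)
```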
Filter data
Select the funnel icon to filter your input data, LLM responses, and annotations by input dataset ground truth, slice, or field value.
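The filters above narrow the visible rows by ground-truth label, slice membership, or a field value. As an illustrative sketch only (the record layout is an assumption, not the platform's actual schema), the combined filtering logic could look like this:

```python
# Illustrative sketch of the kinds of filters the funnel icon applies:
# narrowing rows by ground-truth label, slice membership, or a field value.

def filter_rows(rows, ground_truth=None, slice_name=None, field=None, value=None):
    """Apply any combination of the three filters; None means 'no filter'."""
    out = rows
    if ground_truth is not None:
        out = [r for r in out if r.get("ground_truth") == ground_truth]
    if slice_name is not None:
        out = [r for r in out if slice_name in r.get("slices", [])]
    if field is not None:
        out = [r for r in out if r.get(field) == value]
    return out

rows = [
    {"text": "Bad battery", "ground_truth": "negative", "slices": ["short"]},
    {"text": "Love it", "ground_truth": "positive", "slices": ["short"]},
    {"text": "The screen cracked after a week", "ground_truth": "negative", "slices": []},
]
# Combine two filters: negative ground truth AND membership in the "short" slice.
negatives_in_short = filter_rows(rows, ground_truth="negative", slice_name="short")
```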
