Version: 25.7

Create prompt development workflow

Prompt development, sometimes called prompt engineering, is the process of designing and refining inputs to guide AI models, such as large language models (LLMs), to produce high-quality, task-specific outputs. A prompt development workflow begins with uploading a text dataset, selecting an LLM, and crafting system and/or user prompts to tailor the model's response. With prompt versioning, users can iterate on their designs, enabling continuous improvement of AI-driven results.
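Conceptually, a prompt workflow pairs a system prompt and a user prompt with each row of the input dataset. Here is a minimal sketch, assuming a chat-style message format; the function name and message layout are illustrative, not Snorkel's API:

```python
def build_messages(system_prompt: str, user_prompt: str, text: str) -> list:
    """Combine a system prompt and a user prompt with one dataset row's text."""
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": user_prompt.format(text=text)})
    return messages

msgs = build_messages(
    system_prompt="You are a concise support-ticket classifier.",
    user_prompt="Classify the following ticket: {text}",
    text="My invoice total is wrong.",
)
# msgs now holds one system message plus one user message with the row's text filled in.
```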

Prerequisite

Enable required models via the Foundation Model Suite to set up and manage external models. Learn more about using external models.

Create a prompt development workflow

Upload input dataset

  1. Navigate to the Datasets page.
  2. Select Upload new dataset.
  3. Select the train split when uploading.

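The dataset you upload is typically a table of text inputs. A sketch of assembling one as CSV before upload; the `uid` and `text` column names are assumptions for illustration:

```python
import csv
import io

# Illustrative rows for a text dataset.
rows = [
    {"uid": 1, "text": "My invoice total is wrong."},
    {"uid": 2, "text": "How do I reset my password?"},
]

buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=["uid", "text"])
writer.writeheader()
writer.writerows(rows)
csv_data = buffer.getvalue()  # upload this file as the train split
```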
Create a prompt workflow

  1. Go to the Prompts page.

  2. Select Create Prompt.

  3. Name your workflow.

  4. Associate it with an input dataset.

    You will see the initial prompt workflow page:

    Initial prompt workflow page. View and page through your input dataset.

Select model

  1. Choose an LLM from the dropdown menu.
  2. Switch models as needed to optimize results.

Enter and run prompts

  1. Configure prompts, using system prompts, user prompts, or both as needed. Toggle between the system and user prompt tabs to make changes to each. For more about prompts, see Prompt development overview.
  2. Run your prompt on the entire dataset or a subset of it.
  3. Review responses for each input in single-datapoint or table view.
  4. To improve responses, iterate on your prompts or LLM settings and re-run the workflow.
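What a run amounts to, sketched with a stubbed function standing in for the real LLM call; all names here are illustrative:

```python
def stub_llm(prompt: str) -> str:
    # Stand-in for a real LLM call; echoes a canned response.
    return f"response to: {prompt}"

def run_prompt(dataset, user_prompt, llm, subset=None):
    """Run the prompt over every row, or only the rows whose indices are in subset."""
    indices = range(len(dataset)) if subset is None else subset
    return [
        {"input": dataset[i], "response": llm(user_prompt.format(**dataset[i]))}
        for i in indices
    ]

data = [{"text": "row one"}, {"text": "row two"}, {"text": "row three"}]
results = run_prompt(data, "Summarize: {text}", stub_llm, subset=[0, 2])
```

Re-running after a prompt change is just another call to `run_prompt` with the revised template.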

Manage prompt versions

Select the time back icon (Time back icon) to access version history and compare prompt versions, runs, and responses over time.


Favorite and rename prompt versions

Use the prompt versioning feature to add stars and custom names for prompt versions to help you compare prompts and responses over time.
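A version history is, in essence, an ordered list of prompt snapshots with optional star and name fields. A minimal sketch; the record layout is an assumption, not Snorkel's schema:

```python
from dataclasses import dataclass

@dataclass
class PromptVersion:
    version: int
    prompt: str
    starred: bool = False
    name: str = ""

history = [
    PromptVersion(1, "Classify the ticket: {text}"),
    PromptVersion(2, "Classify the ticket into billing/tech/other: {text}"),
]

# Star and rename the version that produced the best responses.
history[1].starred = True
history[1].name = "three-way labels"
```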


Incorporate SME feedback to improve prompts

  1. Select the Create new batch icon to send a batch of prompts and responses to SMEs for annotation.

  2. View ground truth provided by SMEs directly from the Develop prompt page to improve your prompts. The SME feedback is displayed next to the LLM response. You can view ground truth provided for both the input dataset and the LLM responses.

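Comparing SME ground truth with the model's responses is essentially a join on datapoint id. A sketch with illustrative field names:

```python
responses = [
    {"uid": 1, "response": "billing"},
    {"uid": 2, "response": "tech"},
]
ground_truth = {1: "billing", 2: "other"}  # SME annotations keyed by uid

# Pair each response with its ground truth so mismatches stand out.
review = [
    {
        "uid": r["uid"],
        "response": r["response"],
        "ground_truth": ground_truth.get(r["uid"]),
        "match": r["response"] == ground_truth.get(r["uid"]),
    }
    for r in responses
]
```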

Filter data

Select the funnel icon (Funnel icon) to filter your input data, LLM responses, and annotations by input dataset ground truth, by slice, and by field value.


Run prompt on filtered subset of data

You can run your prompt on a filtered subset of your data.

  1. Select the funnel icon (Funnel icon).

  2. Select the filter you want to apply.

  3. Once the filter is applied, select Run (or Re-Run).

  4. Select Only current filtered subset before starting the run.
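Running on a filtered subset is equivalent to selecting the matching rows first and then running the prompt only on those. A sketch of filtering by a field value; names are illustrative:

```python
data = [
    {"text": "invoice issue", "category": "billing"},
    {"text": "login broken", "category": "tech"},
    {"text": "refund request", "category": "billing"},
]

def filter_rows(rows, field, value):
    """Keep only rows whose field equals the given value."""
    return [row for row in rows if row.get(field) == value]

filtered = filter_rows(data, "category", "billing")
# Run the prompt only on `filtered` rather than on all of `data`.
```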


Export prompt output and template

For your currently selected prompt version and run, select Export.

  • Choose Export output dataset as CSV to export data from the current prompt run, including inputs, model information, and the LLM's response, as a CSV file. Use this option if you care about the response for this particular run.
  • Choose Export prompt as JSON to export a prompt template containing the model, system and evaluation prompts, and metadata for a prompt version, as a JSON file. Use this option if you want to reuse this prompt with all the same settings.
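An exported prompt template is the prompt's settings serialized as JSON. This sketch shows one plausible shape; the exact field names in Snorkel's export are assumptions:

```python
import json

template = {
    "model": "example-llm",
    "system_prompt": "You are a concise support-ticket classifier.",
    "user_prompt": "Classify the following ticket: {text}",
    "metadata": {"version": 3, "name": "three-way labels"},
}

exported = json.dumps(template, indent=2)
restored = json.loads(exported)  # loading the template restores the same settings
```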


Enhance prompt with ground truth

Add ground truth annotations to your prompt to quickly transfer good examples from your SMEs to the LLM. Adding examples of expected output to the prompt context is called one-shot or few-shot learning. This is a well-known method to help the LLM better understand the expected output.

Snorkel lets you add examples that your annotators have already completed to the prompt with the click of a button. Each datapoint added as an example includes the input columns, the LLM's response from the current run, and the ground truth.
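As described in the steps that follow, the examples land at the end of the prompt under a `#### Examples` marker. Roughly, in a sketch whose function name and example layout are illustrative:

```python
def add_examples_to_prompt(prompt: str, examples: list) -> str:
    """Append ground-truth examples to the end of a prompt under '#### Examples'."""
    lines = [prompt.rstrip(), "", "#### Examples"]
    for ex in examples:
        lines.append(f"Input: {ex['input']}")
        lines.append(f"Response: {ex['response']}")
        lines.append(f"Ground truth: {ex['ground_truth']}")
        lines.append("")
    return "\n".join(lines)

enhanced = add_examples_to_prompt(
    "Classify the ticket: {text}",
    [{"input": "invoice issue", "response": "billing", "ground_truth": "billing"}],
)
```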

Add one example at a time

Follow these steps to add examples as you review each datapoint:

  1. From the Prompts page, select the prompt where you want to add ground truth.

  2. View each annotation in the Annotations pane on the right. Use the right and left arrows to navigate to the annotation example(s) that you want to add to the prompt.


  3. Select Add as example to prompt at the bottom of the Annotations pane; you may need to scroll down. Snorkel appends the example to the end of your prompt in the left pane, starting with the text #### Examples. The GUI confirms: Prompt enhanced! 1 example has been added to the end of your prompt.

  4. Select Save prompt after you are satisfied with the examples added.

Add multiple examples

You can also select multiple datapoints at a time to add as examples:

  1. From the Prompts page, select the prompt where you want to add ground truth.

  2. Select the Table view icon (Table view icon) to view multiple datapoints at once.


  3. Select all the datapoints you want to add as examples.

  4. Select Add as examples at the top of the table.

  5. Select Save prompt after you are satisfied with the examples added.

Select label schemas

After selecting datapoints to add as examples, select one or more label schemas where you want to include ground truth annotations.
