snorkelai.sdk.develop.Criteria
- class snorkelai.sdk.develop.Criteria(*args, **kwargs)
Bases: BaseModel
A criteria represents a specific characteristic or feature being evaluated as part of a benchmark.
Criteria define which aspects of a model or AI application's performance are measured, such as accuracy, relevance, and safety. Each criteria is associated with a benchmark and has an evaluator that assesses whether a model's output satisfies that criteria.
At the heart of each criteria is its associated label schema, which defines exactly what the criteria measures and maps each option to an integer.
For example, a criteria that measures accuracy might have a label schema that defines the following labels:
INCORRECT: 0
CORRECT: 1
A criteria that measures readability might have a label schema that defines the following labels:
POOR: 0
ACCEPTABLE: 1
EXCELLENT: 2
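In the SDK, a schema like this is supplied as the label_map argument when creating a criteria. A minimal sketch (per create() below, values must be consecutive integers starting from 0, and an "UNKNOWN" key with value -1 is added automatically):
# Readability schema from the example above, expressed as a label_map
readability_labels = {"POOR": 0, "ACCEPTABLE": 1, "EXCELLENT": 2}

criteria = Criteria.create(
    benchmark_uid=100,
    name="Readability",
    label_map=readability_labels,
)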
Read more in the Evaluation overview.
Parameters

| Name | Type | Default | Info |
| --- | --- | --- | --- |
| benchmark_uid | int | | The unique identifier of the parent Benchmark. The benchmark_uid is visible in the URL of the benchmark page in the Snorkel GUI. For example, https://YOUR-SNORKEL-INSTANCE/benchmarks/100/ indicates a benchmark with a benchmark_uid of 100. |
| criteria_uid | int | | The unique identifier for this criteria. |
| name | str | | The name of the criteria. |
| metric_label_schema_uid | int | | The ID of the schema defining the metric labels. |
| description | str | "" | A detailed description of what the criteria measures. |
| rationale_label_schema_uid | Optional[int] | None | The ID of the schema defining rationale labels (if applicable). |

Examples
Create a new criteria:
# Create a new criteria
criteria = Criteria.create(
    benchmark_uid=100,
    name="Accuracy",
    description="Measures response accuracy",
    label_map={"Correct": 1, "Incorrect": 0},
    requires_rationale=True
)

Get an existing criteria:

# Get existing criteria
criteria = Criteria.get(criteria_uid=456, benchmark_uid=123)

- __init__(*args, **kwargs)
Methods

| Method | Description |
| --- | --- |
| __init__(*args, **kwargs) | |
| create(benchmark_uid, name, label_map[, ...]) | Create a new criteria for a benchmark. |
| get(criteria_uid) | Get an existing criteria by its UID. |
| get_evaluator() | Retrieves the evaluator associated with this criteria. |

Attributes
description
rationale_label_schema_uid
benchmark_uid
criteria_uid
name
metric_label_schema_uid
- classmethod create(benchmark_uid, name, label_map, description='', requires_rationale=False)
Create a new criteria for a benchmark.
Your label_map must use consecutive integers starting from 0. For example, if you have three labels, you must use the values 0, 1, and 2.

Parameters

| Name | Type | Default | Info |
| --- | --- | --- | --- |
| benchmark_uid | int | | The unique identifier of the parent Benchmark. |
| name | str | | The name of the criteria. |
| label_map | Dict[str, int] | | A dictionary mapping user-friendly labels to numeric values. The key "UNKNOWN" will always be added with value -1. Dictionary values must be consecutive integers starting from 0. |
| description | str | "" | A detailed description of what the criteria measures. |
| requires_rationale | bool | False | Whether the criteria requires rationale. |

Returns

A new Criteria object representing the created criteria.

Return type

Criteria

Raises

ValueError – If label_map is empty or has invalid values.

Example
criteria = Criteria.create(
    benchmark_uid=123,
    name="Accuracy",
    description="Measures response accuracy",
    label_map={"Correct": 1, "Incorrect": 0},
    requires_rationale=True
)
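Conversely, a label_map whose values are not consecutive integers starting from 0 is rejected. A hedged sketch of the failure mode (the label names here are illustrative):

# Invalid: values {0, 2} skip 1, so create() raises ValueError
Criteria.create(
    benchmark_uid=123,
    name="Readability",
    label_map={"POOR": 0, "EXCELLENT": 2}
)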
- classmethod get(criteria_uid)
Get an existing criteria by its UID.
Parameters

| Name | Type | Default | Info |
| --- | --- | --- | --- |
| criteria_uid | int | | The unique identifier for the criteria. |

Returns

A Criteria object representing the existing criteria.

Return type

Criteria

Raises

ValueError – If the criteria is not found.

Example
criteria = Criteria.get(criteria_uid=456, benchmark_uid=123)
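Because get() raises ValueError when no matching criteria exists, lookups can be guarded; a minimal sketch reusing the UIDs from the example above:

try:
    criteria = Criteria.get(criteria_uid=456, benchmark_uid=123)
except ValueError:
    print("Criteria 456 not found")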
- get_evaluator()
Retrieves the evaluator associated with this criteria.
An evaluator is a prompt or code snippet that assesses whether a model’s output satisfies the criteria. Each criteria has one evaluator that assesses each datapoint against the criteria’s label schema and chooses the most appropriate label, in the form of the associated integer.
The evaluator can be either a code evaluator (using custom Python functions) or a prompt evaluator (using LLM prompts).
Example

Get the evaluator for a criteria and check its type:
# Import paths assumed: CodeEvaluator and PromptEvaluator are taken to live
# alongside Criteria in snorkelai.sdk.develop
from snorkelai.sdk.develop import CodeEvaluator, Criteria, PromptEvaluator

criteria = Criteria.get(criteria_uid=456, benchmark_uid=123)
evaluator = criteria.get_evaluator()
if isinstance(evaluator, CodeEvaluator):
    print("This is a code evaluator")
elif isinstance(evaluator, PromptEvaluator):
    print("This is a prompt evaluator")
- benchmark_uid: int
- criteria_uid: int
- description: str = ''
- metric_label_schema_uid: int
- name: str
- rationale_label_schema_uid: Optional[int] = None
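Once a criteria is loaded, these attributes can be read directly; a small sketch reusing the UIDs from the examples above:

criteria = Criteria.get(criteria_uid=456, benchmark_uid=123)
print(criteria.name)                        # e.g. "Accuracy"
print(criteria.metric_label_schema_uid)     # ID of the metric label schema
print(criteria.rationale_label_schema_uid)  # None if no rationale schema is set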