
snorkelai.sdk.client.evaluation.create_evaluation_report

warning

This SDK function is not supported in Snorkel AI Data Development Platform v25.5 and later versions and will be removed in a future release. For assistance finding alternative approaches, please contact your Snorkel support team.

snorkelai.sdk.client.evaluation.create_evaluation_report(dataset, metric_schemas, split=None, slices=None, models=None)

Start a job to compute metrics for a dataset split.

Parameters

| Name | Type | Default | Info |
| --- | --- | --- | --- |
| dataset | Union[str, int] | required | Name or UID of the dataset to load the unlabeled data from. |
| metric_schemas | List[MetricSchema] | required | List of built-in MetricSchema objects specifying the metrics to compute. |
| split | Optional[str] | None | [DEPRECATED] Name of the split to evaluate. |
| slices | Optional[List[int]] | None | UIDs of the slices for which to compute metrics. If no slices are provided, all slices in the given dataset will be evaluated. |
| models | Optional[List[int]] | None | UIDs of the models for which to compute metrics. If no models are provided, metrics for all models will be computed. All models must map to sources associated with datasources in the given dataset. |

Returns

A dictionary containing the evaluation results.

Return type

Dict[str, Any]
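
Example

A minimal usage sketch, assuming the function is imported from the module path shown above. The dataset name and the construction of the MetricSchema list are illustrative placeholders, not part of the documented API; consult the SDK reference for the actual MetricSchema fields.

```python
from snorkelai.sdk.client.evaluation import create_evaluation_report

# Placeholder: build the List[MetricSchema] your evaluation needs.
# The exact MetricSchema constructor arguments are an assumption here.
metric_schemas = [...]

# Compute metrics over every slice and model in the dataset
# (slices=None and models=None evaluate all of them).
results = create_evaluation_report(
    dataset="my-dataset",            # dataset name, or its integer UID
    metric_schemas=metric_schemas,
    slices=None,
    models=None,
)

print(results)  # Dict[str, Any] containing the evaluation results
```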