snorkelai.sdk.client.evaluation.create_evaluation_report
warning
This SDK function is not supported in Snorkel AI Data Development Platform v25.5 and later versions and will be removed in a future release. For assistance finding alternative approaches, please contact your Snorkel support team.
- snorkelai.sdk.client.evaluation.create_evaluation_report(dataset, metric_schemas, split=None, slices=None, models=None)
Start a job to compute metrics for a dataset split.
Parameters
Returns
A dictionary containing the evaluation results
Return type
Dict[str, Any]
| Name | Type | Default | Info |
| --- | --- | --- | --- |
| dataset | Union[str, int] | | Name or UID of the dataset to load the unlabeled dataset from. |
| metric_schemas | List[MetricSchema] | | List of built-in MetricSchema objects. |
| split | Optional[str] | None | [DEPRECATED] Name of the split to evaluate. |
| slices | Optional[List[int]] | None | UIDs of the slices for which to compute metrics. If no slices are provided, all slices in the given dataset will be evaluated. |
| models | Optional[List[int]] | None | UIDs of the models for which to compute metrics. If no models are provided, metrics for all models will be computed. All models must map to sources associated with datasources in the given dataset. |
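A minimal usage sketch of the documented signature. Because the real function lives in the proprietary `snorkelai.sdk.client.evaluation` module (and is deprecated in v25.5+), this sketch substitutes a stand-in stub with the same parameters and return type; the stub's body and the example dataset/metric names are illustrative assumptions, not SDK behavior.

```python
from typing import Any, Dict, List, Optional, Union

# Stand-in stub mirroring the documented signature. In the real SDK,
# snorkelai.sdk.client.evaluation.create_evaluation_report starts a
# server-side job that computes metrics and returns the results.
def create_evaluation_report(
    dataset: Union[str, int],
    metric_schemas: List[dict],          # List[MetricSchema] in the real SDK
    split: Optional[str] = None,         # [DEPRECATED]
    slices: Optional[List[int]] = None,  # None -> evaluate all slices
    models: Optional[List[int]] = None,  # None -> compute metrics for all models
) -> Dict[str, Any]:
    # Placeholder result echoing the inputs; the real call returns
    # a dictionary containing the computed evaluation results.
    return {
        "dataset": dataset,
        "metrics": [schema.get("name") for schema in metric_schemas],
    }

# Hypothetical call: evaluate every slice and model in the dataset.
report = create_evaluation_report(
    dataset="my-dataset",
    metric_schemas=[{"name": "accuracy"}],
)
print(report["metrics"])  # ['accuracy']
```

Passing `slices=None` and `models=None` (the defaults) requests metrics for all slices and all models in the dataset, per the parameter table above.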