snorkelai.sdk.develop
Object-oriented interfaces of the Snorkel SDK
Classes
| Class | Description |
| --- | --- |
| `Batch(name, uid, dataset_uid, label_schemas, ...)` | The `Batch` object represents an annotation batch in Snorkel Flow. |
| `Benchmark(benchmark_uid, name, created_at, ...)` | A benchmark is the collection of characteristics you care about for a particular GenAI application, together with the measurements you use to assess performance against those characteristics. |
| `BenchmarkExecution(benchmark_uid, ...)` | Represents a single execution run of a benchmark for a dataset. |
| `Cluster(cluster_uid, error_analysis_uid, name)` | Provides methods for viewing and updating clusters and for viewing the datapoints assigned to a cluster. |
| `CodeEvaluator(benchmark_uid, criteria_uid, ...)` | An evaluator that uses custom Python code to assess an AI application's responses. |
| `Criteria(benchmark_uid, criteria_uid, name, ...)` | A `Criteria` object represents a specific characteristic or feature evaluated as part of a benchmark. |
| `CsvExportConfig([sep, quotechar, escapechar])` | Benchmark execution CSV export configuration. |
| `Dataset(name, uid, mta_enabled)` | The `Dataset` object represents a dataset in Snorkel Flow. |
| `ErrorAnalysis(provenance, error_analysis_run_uid)` | Provides methods for creating, monitoring, and retrieving results from error analysis clustering runs. |
| `Evaluator(benchmark_uid, criteria_uid, ...)` | Base class for all evaluators. |
| `JsonExportConfig()` | Benchmark execution JSON export configuration. |
| `LabelSchema(name, uid, dataset_uid, ...[, ...])` | The `LabelSchema` object represents a label schema in Snorkel Flow. |
| `PromptEvaluator(benchmark_uid, criteria_uid, ...)` | An evaluator that uses LLM prompts to assess model outputs. |
| `Slice(dataset, slice_uid, name[, ...])` | Represents a slice within a Snorkel dataset for identifying and managing subsets of datapoints. |
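
For orientation, here is a minimal sketch constructing two of the simpler objects, `Dataset` and `Slice`, whose required parameters are fully listed above. The uid values, names, and keyword-argument style are placeholders and assumptions, not verified against a live Snorkel Flow instance:

```python
# Illustrative sketch only: uid values and names are placeholders, and this
# assumes the documented constructor parameters can be passed as keywords.
# In a real deployment these handles are typically tied to a connected
# Snorkel Flow instance rather than built from hard-coded uids.
from snorkelai.sdk.develop import Dataset, Slice

# A dataset handle built from the three documented parameters.
dataset = Dataset(name="support-tickets", uid=101, mta_enabled=False)

# A slice identifies a subset of the dataset's datapoints; the remaining
# constructor parameters are optional per the signature above.
long_tickets = Slice(dataset=dataset, slice_uid=3, name="long-tickets")
```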
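
The two export configuration classes differ only in their options: `CsvExportConfig` accepts optional CSV dialect parameters, while `JsonExportConfig` takes none. A sketch under the same caveats, with illustrative values for the documented keywords:

```python
from snorkelai.sdk.develop import CsvExportConfig, JsonExportConfig

# CSV export with explicit dialect options; all three parameters are
# optional per the signature above, and these values are illustrative.
csv_config = CsvExportConfig(sep=",", quotechar='"', escapechar="\\")

# JSON export takes no configuration options.
json_config = JsonExportConfig()
```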