snorkelai.sdk.develop
(Beta) Object-oriented SDK for Snorkel Flow. Illustrative usage sketches follow the class table below.
Classes
| Class | Description |
| --- | --- |
| `Batch(name, uid, dataset_uid, label_schemas, ...)` | The `Batch` object represents an annotation batch in Snorkel Flow. |
| `Benchmark(*args, **kwargs)` | A benchmark is the collection of characteristics you care about for a particular GenAI application, along with the measurements used to assess performance against those characteristics. |
| `BenchmarkExecution(*args, **kwargs)` | Represents a single execution run of a benchmark for a dataset. |
| `BenchmarkExecutionExportFormat(value)` | Enumeration of export formats for benchmark execution results. |
| `BenchmarkExportFormat(value)` | Enumeration of export formats for benchmarks. |
| `CodeEvaluator(*args, **kwargs)` | An evaluator that uses custom Python code to assess an AI application's responses. |
| `Criteria(*args, **kwargs)` | A `Criteria` object represents a specific characteristic or feature evaluated as part of a benchmark. |
| `CsvExportConfig([sep, quotechar, escapechar])` | Configuration for exporting data as CSV. |
| `Dataset(name, uid, mta_enabled)` | The `Dataset` object represents a dataset in Snorkel Flow. |
| `Evaluator(*args, **kwargs)` | Base class for all evaluators. |
| `JsonExportConfig()` | Configuration for exporting data as JSON. |
| `LabelSchema(name, uid, dataset_uid, ...[, ...])` | The `LabelSchema` object represents a label schema in Snorkel Flow. |
| `ModelNode(uid, application_uid, config)` | The `ModelNode` class represents a model node. |
| `Node(uid, application_uid, config)` | The `Node` object represents an atomic data processing unit in Snorkel Flow. |
| `OperatorNode(uid, application_uid, config)` | The `OperatorNode` class represents a non-model, operator node. |
| `Slice(dataset, slice_uid, name[, ...])` | The `Slice` object represents a slice (a named subset) of a dataset in Snorkel Flow. |
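As a quick orientation, the sketch below constructs a `Dataset` and a `Slice` directly from the constructor signatures listed above. The argument values are placeholders; in a live workspace these objects mirror server-side state, so the SDK may also provide lookup helpers not covered in this table.

```python
# Illustrative sketch only: uid values and names are placeholders, and
# direct construction is assumed from the signatures in the table above.
from snorkelai.sdk.develop import Dataset, Slice

# Dataset(name, uid, mta_enabled); mta_enabled presumably toggles
# multi-task annotation (an assumption based on the parameter name).
dataset = Dataset(name="support-tickets", uid=123, mta_enabled=False)

# Slice(dataset, slice_uid, name[, ...]): a named subset of the dataset.
urgent = Slice(dataset, slice_uid=45, name="urgent-tickets")
```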
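The node classes share the signature `(uid, application_uid, config)`. A minimal sketch follows; the structure of `config` is not documented in this table, so empty dictionaries stand in for real configurations.

```python
# Illustrative sketch; the empty config payloads stand in for real node
# configurations, whose structure this table does not document.
from snorkelai.sdk.develop import ModelNode, OperatorNode

# A model node and an operator node attached to the same application.
model_node = ModelNode(uid=1, application_uid=10, config={})
op_node = OperatorNode(uid=2, application_uid=10, config={})
```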
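For exports, the two config classes can be constructed as shown below. All three `CsvExportConfig` parameters are optional per its signature; the values shown are conventional CSV characters, assumed here for illustration rather than taken from the library's defaults.

```python
# Illustrative sketch; parameter values are assumptions, not documented
# library defaults.
from snorkelai.sdk.develop import CsvExportConfig, JsonExportConfig

# CsvExportConfig([sep, quotechar, escapechar]): all parameters optional.
csv_config = CsvExportConfig(sep=",", quotechar='"', escapechar="\\")

# JsonExportConfig() takes no constructor arguments.
json_config = JsonExportConfig()
```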