
Foundation model suite

Overview

The Foundation Model (FM) Suite is a collection of FM-based features that are incorporated into the end-to-end Snorkel Flow workflow. These features enable you to distill, adapt, and fine-tune foundation models using the data-centric development workflow and to train specialized, enterprise-ready production models.

What are foundation models?

Foundation models, a category that includes large language models (LLMs), are extremely large models trained on massive amounts of data, forming a general-purpose foundation that can be adapted to more specific tasks.

Snorkel Flow provides the bridge for these powerful (but generic) models to be applied to real-world enterprise AI use cases.

For more general background, check out the Snorkel AI blog post, Foundation models: a guide.

Use cases

The current FM Suite focuses on predictive AI use cases. Predictive AI is critical for deriving value from AI in enterprise software, especially when automating mission-critical processes such as underwriting, know your customer (KYC), and document intelligence.

Because real-world use cases in enterprise software are typically complex and performance-critical, “generalist” foundation models often struggle to drive value out of the box: they lack domain-specific knowledge. This is where the data-centric FM Suite can help you use modern foundation models to accelerate the development of deployable “specialist” models tailored to the specific use case at hand, all within Snorkel Flow.

What is in the FM Suite?

The FM Suite contains three main features:

  1. Prompt Builder: Explore and label data through natural language prompts, using FM knowledge to generate labels that fit your weakly supervised learning use cases. See Prompt builder for more information.
  2. Warm Start: Auto-label training data during onboarding using the power of foundation models plus state-of-the-art zero/few-shot learning techniques. This gets you to a strong “first pass” baseline with minimal human effort. See Warm start for more information.
  3. Fine-tuning: Use labeled training data (programmatically or manually created) to train production models at whatever scale you prefer, from small, easy-to-deploy models like RoBERTa to large FMs like GPT-4. See Foundation model fine-tuning for more information. (A minimal sketch of this distill-and-specialize pattern follows this list.)
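
To make the pattern concrete, here is a minimal, illustrative sketch of the same idea using plain Hugging Face APIs rather than the Snorkel Flow SDK: a generalist foundation model produces first-pass labels zero-shot (as Warm Start does), and those labels would then supervise a small specialist model. The model name, label set, and example documents below are assumptions for illustration only.

```python
# Illustrative sketch only: plain Hugging Face APIs, not the Snorkel Flow SDK.
from transformers import pipeline

# 1. Warm Start-style auto-labeling: a generalist FM assigns first-pass
#    labels zero-shot, with no task-specific training.
zero_shot = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

candidate_labels = ["underwriting", "KYC", "other"]  # assumed label space
documents = [
    "Applicant requests a $500k mortgage with 20% down.",
    "Verify the customer's passport and proof of address.",
]

auto_labeled = []
for doc in documents:
    result = zero_shot(doc, candidate_labels=candidate_labels)
    auto_labeled.append((doc, result["labels"][0]))  # labels sorted by score

# 2. Fine-tuning: the auto-labeled data would then train a small,
#    easy-to-deploy "specialist" model such as RoBERTa. In Snorkel Flow,
#    steps 1 and 2 correspond to Warm Start and Fine-tuning.
for doc, label in auto_labeled:
    print(f"{label}: {doc}")
```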

Infrastructure requirements

The requirements below apply to the 2024.R1 LTS release (v0.91).

Snorkel-hosted

  • Warm Start: Models are downloaded upon upgrade and are readily available for use in Snorkel Flow. Infrastructure: 1 GPU and 16 GB of memory. If GPUs are unavailable, contact Snorkel to assess possible alternative options and trade-offs.
  • Prompt Builder: Does not require a GPU, since it can run on external infrastructure (HuggingFace, OpenAI, VertexAI). Requires a valid account with HuggingFace, OpenAI, or VertexAI. If external connections are not possible, contact the Snorkel team to explore alternative options.
  • Fine-tuning: HuggingFace models are accessible in the Model Zoo in Snorkel Flow. Infrastructure: 1 GPU is recommended; it is possible to run on CPU, but expect significantly longer run times. OpenAI models require connecting to a dedicated OpenAI account (credentials are managed via the SDK). The actual model training runs on OpenAI, so no GPUs are required.

Customer-hosted (on-prem and private cloud)

  • Warm Start: Requires internet access to download models on first use; if an internet connection is unavailable, contact the Snorkel team for support. Infrastructure: 1 GPU and 16 GB of memory. If GPUs are unavailable, contact Snorkel to assess possible alternative options and trade-offs.
  • Prompt Builder: FM inference is widely supported for connections outside of the Snorkel platform (HuggingFace, OpenAI, VertexAI). If an internet connection is unavailable, contact the Snorkel team to explore alternative options.
  • Fine-tuning: HuggingFace models are accessible in the UI. Infrastructure: 1 GPU is recommended; it is possible to run on CPU, but expect significantly longer run times. OpenAI models require connecting to a dedicated OpenAI account (credentials are managed via the in-platform Jupyter notebook). The actual model training runs on OpenAI, so no GPUs are required.
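
Before running Warm Start or fine-tuning on self-managed hardware, a quick environment check can confirm that the requirements above are met. The following is a generic PyTorch sketch, not part of the Snorkel Flow SDK; the environment-variable check for OpenAI credentials is likewise only an illustration (in Snorkel Flow, credentials are managed via the SDK or the in-platform Jupyter notebook, as noted above).

```python
# Generic environment sanity check (assumes PyTorch; not a Snorkel Flow API).
import os

import torch

if torch.cuda.is_available():
    name = torch.cuda.get_device_name(0)
    # Total memory of GPU 0, reported in GB.
    total_gb = torch.cuda.get_device_properties(0).total_memory / 1e9
    print(f"GPU available: {name} ({total_gb:.1f} GB)")
else:
    # Warm Start and fine-tuning can fall back to CPU, but expect
    # significantly longer run times (see the requirements above).
    print("No GPU detected; expect significantly longer run times on CPU.")

# OpenAI-based fine-tuning runs on OpenAI's infrastructure, so no local GPU
# is needed, only valid credentials. This env-var check is illustrative.
if not os.environ.get("OPENAI_API_KEY"):
    print("OPENAI_API_KEY is not set; OpenAI fine-tuning would be unavailable.")
```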