Version: 25.2

snorkelflow.client.fm_suite.run_lf_inference

snorkelflow.client.fm_suite.run_lf_inference(node, lf_uid, inference_splits=None, sync=False)

Run the LF on new data. Note: if inference has already been computed for any datapoints, the cached results are reused.

Parameters:
  • node (int) – Node uid which contains the LF and data to run over.

  • lf_uid (int) – The LF for which new predictions will be computed.

  • inference_splits (Optional[List[str]], default: None) – Dataset splits to run inference over. Defaults to all splits: ["train", "dev", "valid", "test"].

  • sync (bool, default: False) – If True, the method blocks until the inference job is complete. Note that job progress can always be monitored manually with sf.poll_job_status(job_uid).

Returns:

job_uid – The uid of the Warm Start job, which can be used to monitor progress with sf.poll_job_status(job_uid).

Return type:

str

Examples

>>> sf.run_lf_inference(NODE_UID, LF_UID, ["train", "test"])
Note the job progress can be monitored with sf.poll_job_status('123')
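When calling with sync=False (the default), the returned job_uid can be polled until the job reaches a terminal state. As a minimal sketch of that pattern, the helper below wraps the polling loop; the status fields ("state", "completed", "failed") and the dict return shape of poll_job_status are assumptions for illustration, not confirmed SDK details.

```python
import time

def wait_for_job(client, job_uid, poll_interval=2.0, timeout=600.0):
    """Poll a job until it reports a terminal state.

    Assumes (hypothetically) that client.poll_job_status(job_uid)
    returns a dict with a "state" key such as "running", "completed",
    or "failed".
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = client.poll_job_status(job_uid)
        if status.get("state") in ("completed", "failed"):
            return status
        time.sleep(poll_interval)
    raise TimeoutError(f"Job {job_uid} did not finish within {timeout}s")
```

A typical call would then be job_uid = sf.run_lf_inference(NODE_UID, LF_UID) followed by wait_for_job(sf, job_uid); passing sync=True achieves the same effect without the helper.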