What are the different runtimes in MLRun?

Question:

I’m trying to get a feel for how MLRun executes my Python code. What runtimes are supported, and when would I use one versus another?

Asked By: Nick Schenone


Answers:

MLRun has several different ways to run a piece of code. At this time, the following runtimes are supported:

  • Batch runtimes
    • local – execute a Python or shell program in your local environment (i.e. Jupyter, IDE, etc.)
    • job – run the code in a Kubernetes Pod
    • dask – run the code as a Dask Distributed job (over Kubernetes)
    • mpijob – run distributed jobs (e.g. Horovod) over the MPI job operator, used mainly for deep learning jobs
    • spark – run the job as a Spark job (using Spark Kubernetes Operator)
    • remote-spark – run the job on a remote Spark service/cluster (e.g. Iguazio Spark service)
  • Real-time runtimes
    • nuclio – real-time serverless functions over Nuclio
    • serving – higher level real-time Graph (DAG) over one or more Nuclio functions

If you are interested in learning more about each runtime, see the documentation.
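
As a quick illustration, the same piece of code can be packaged once and executed with a different runtime just by changing the `kind` parameter. A minimal sketch using `mlrun.code_to_function` (the file name `my_code.py` and its `handler` entry point are placeholders, not part of the original answer):

```python
import mlrun

# Package a Python file as an MLRun function.
# "my_code.py" and "handler" are illustrative placeholders.
fn = mlrun.code_to_function(
    name="my-function",
    filename="my_code.py",
    kind="job",          # batch runtime: runs in a Kubernetes Pod
    image="mlrun/mlrun",
    handler="handler",
)

# Execute locally (e.g. in Jupyter or an IDE) for quick iteration ...
fn.run(local=True)

# ... or submit the same code to the cluster as a Kubernetes job.
fn.run()
```

Swapping `kind="job"` for `dask`, `mpijob`, `spark`, `nuclio`, etc. (subject to each runtime's additional configuration) is how you move the same logic between execution engines.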

Answered By: Nick Schenone