
Reproducible ML workflows for teams

Introduction to dstack

dstack allows you to define your ML workflows as code, and run them in a configured cloud via the CLI. It takes care of managing workflow dependencies, provisioning cloud infrastructure, and versioning data.


  • Workflows as code: Define your ML workflows as code, and run them in a configured cloud via a single CLI command.
  • Reusable artifacts: Save data, models, and environments as workflow artifacts, and reuse them across projects.
  • Built-in containers: Workflow containers are pre-built with Conda, Python, etc. No Docker is needed.

You can use the dstack CLI from both your IDE and your CI/CD pipelines. The dstack CLI automatically tracks your current Git revision, including uncommitted local changes.


For debugging purposes, you can run workflows locally, or attach interactive dev environments (e.g. VS Code or JupyterLab) to them.

How does it work?

  1. Install dstack CLI locally
  2. Configure the cloud credentials locally (e.g. via ~/.aws/credentials)
  3. Define ML workflows in YAML files inside the .dstack/workflows directory (within your project)
  4. Run ML workflows via the dstack run CLI command
  5. Use other dstack CLI commands to manage runs, artifacts, etc.
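As a sketch of steps 3 and 4, a minimal workflow definition placed in `.dstack/workflows/` might look like the following. The exact schema (the `workflows`, `provider`, and `commands` fields) is an assumption here; verify it against the dstack documentation for your version.

```yaml
# .dstack/workflows/hello.yaml
# Assumed schema; check the dstack docs for your installed version.
workflows:
  - name: hello          # name used with `dstack run <name>`
    provider: bash       # run a sequence of shell commands
    commands:
      - echo "Hello, world!"
```

You would then launch it with `dstack run hello`, and dstack would provision the cloud resources and execute the commands.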

When you run a workflow via the dstack CLI, it provisions the required compute resources (in a configured cloud account), sets up the environment (such as Python, Conda, CUDA, etc.), fetches your code, downloads dependencies, saves artifacts, and tears down the compute resources when the run finishes.


Get started in 30 min

Getting your first ML workflow up and running should take 30 minutes or less.

Subscribe to the newsletter to get notified about updates.