The `code` provider launches a VS Code dev environment.
It comes with Python and Conda pre-installed.
If a GPU is requested, the provider pre-installs the CUDA driver too.
```yaml
workflows:
  - name: ide
    provider: code
    artifacts:
      - path: output
    resources:
      interruptible: true
      gpu: 1
```
The following properties are optional:
- `before_run` - (Optional) The list of shell commands to run before running the Python file
- `requirements` - (Optional) The path to the requirements file
- `python` - (Optional) The major version of Python
- `environment` - (Optional) The list of environment variables
- `artifacts` - (Optional) The list of output artifacts
- `resources` - (Optional) The hardware resources required by the workflow
- `working_dir` - (Optional) The path to the working directory
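For instance, several of the optional properties above can be combined in a single workflow. A minimal sketch follows; the Python version, environment variable, commands, and directory name are all illustrative values, not defaults:

```yaml
workflows:
  - name: ide
    provider: code
    python: 3.9
    environment:
      WANDB_PROJECT: my-project
    before_run:
      - pip install -r requirements.txt
    working_dir: src
```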
The `artifacts` property accepts a list of items, each with the following properties:
- `path` - (Required) The relative path of the folder that must be saved as an output artifact
- `mount` - (Optional) `true` if the artifact files must be saved in real time. Use it only when real-time access to the artifacts is important: for storing checkpoints (e.g. if interruptible instances are used) and event files (e.g. TensorBoard event files). By default, it's `false`.
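Since `artifacts` is a list, more than one folder can be saved. A short sketch (the folder names are illustrative):

```yaml
workflows:
  - name: ide
    provider: code
    artifacts:
      - path: output
      - path: checkpoints
```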
The `resources` property describes the hardware resources required by the workflow:

- `cpu` - (Optional) The number of CPU cores
- `memory` - (Optional) The size of RAM memory (e.g. `32GB`)
- `gpu` - (Optional) The number of GPUs, their model name, and memory
- `shm_size` - (Optional) The size of shared memory (e.g. `8GB`)
- `interruptible` - (Optional) `true` if the workflow can run on interruptible instances. By default, it's `false`.
If your workflow uses parallel communicating processes (e.g. dataloaders in PyTorch), you may need to configure the size of the shared memory (the `/dev/shm` filesystem) via the `shm_size` property.
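Putting the resource properties together, a workflow that requests specific hardware might look like the following sketch (all amounts are illustrative, not recommended values):

```yaml
workflows:
  - name: ide
    provider: code
    resources:
      cpu: 4
      memory: 32GB
      shm_size: 8GB
      gpu: 1
      interruptible: true
```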
The `gpu` property specifies the number of GPUs, their model name, and memory:

- `count` - (Optional) The number of GPUs
- `memory` - (Optional) The size of GPU memory (e.g. `16GB`)
- `name` - (Optional) The name of the GPU model (e.g. `V100`)
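When a plain count (as in `gpu: 1` above) isn't enough, the `gpu` property can be expanded into its sub-properties. A sketch, with an illustrative count, memory size, and model name:

```yaml
workflows:
  - name: ide
    provider: code
    resources:
      gpu:
        count: 2
        memory: 16GB
        name: V100
```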