Network volumes offer persistent storage that exists independently of your compute resources. Your data is retained even when your Pods are terminated or your Serverless workers are scaled to zero. You can use them to share data and maintain datasets across multiple machines and Runpod products. Network volumes are backed by high-performance NVMe SSDs connected via high-speed networks. Transfer speeds typically range from 200-400 MB/s, with peak speeds up to 10 GB/s depending on location and network conditions.

When to use network volumes

Consider using network volumes when you need:
  • Persistent data that outlives compute resources: Your data remains accessible even after Pods are terminated or Serverless workers stop.
  • Shareable storage: Share data across multiple Pods or Serverless endpoints by attaching the same network volume.
  • Portable storage: Move your working environment and data between different compute resources.
  • Efficient data management: Store frequently used models or large datasets to avoid re-downloading them for each new Pod or worker, saving time, bandwidth, and reducing cold start times.

Pricing

Network volumes are billed at $0.07 per GB per month for the first 1TB, and $0.05 per GB per month after that.
If your account lacks sufficient funds to cover storage costs, your network volume may be terminated. Once terminated, the disk space is immediately freed for other users, and Runpod cannot recover lost data. Ensure your account remains funded to prevent data loss.

Create a network volume

You can create a network volume from the web console or through the REST API (see the example sketch after these steps). To create a new network volume using the web console:
  1. Navigate to the Storage page in the Runpod console.
  2. Select New Network Volume.
  3. Configure your volume:
    • Select a datacenter for your volume. Datacenter location does not affect pricing, but determines which GPU types and endpoints your network volume can be used with.
    • Provide a descriptive name for your volume (e.g., “project-alpha-data” or “shared-models”).
    • Specify the desired size for the volume in gigabytes (GB).
Network volume size can be increased later, but cannot be decreased.
  4. Select Create Network Volume.
You can edit and delete your network volumes using the Storage page.
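
If you prefer the REST API route mentioned above, the sketch below shows the general shape of the request in Python. The endpoint path, field names, and header format are assumptions for illustration; confirm them against the Runpod API reference before relying on them.

# Hypothetical sketch: create a network volume via the Runpod REST API.
# The endpoint path and field names below are assumptions; check the API reference.
import os
import requests

API_KEY = os.environ["RUNPOD_API_KEY"]  # your Runpod API key

response = requests.post(
    "https://rest.runpod.io/v1/networkvolumes",  # assumed endpoint
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "name": "shared-models",    # descriptive volume name
        "size": 100,                # size in GB
        "dataCenterId": "EU-RO-1",  # datacenter to create the volume in
    },
)
response.raise_for_status()
print(response.json())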

Network volumes for Serverless

When attached to a Serverless endpoint, a network volume is mounted at /runpod-volume within the worker environment. This allows all workers on that endpoint to access shared data.

Attach to an endpoint

To enable workers on an endpoint to use a network volume:
  1. Navigate to the Serverless section of the Runpod console.
  2. Select an existing endpoint and click Manage, then select Edit Endpoint.
  3. In the endpoint configuration menu, scroll down and expand the Advanced section.
  4. Click Network Volume and select the network volume you want to attach to the endpoint.
  5. Configure any other fields as needed, then select Save Endpoint.
Data from the network volume will be accessible to all workers for that endpoint from the /runpod-volume directory. Use this path to read and write shared data in your handler function.
Writing to the same network volume from multiple endpoints or workers simultaneously may result in conflicts or data corruption. Ensure your application logic handles concurrent access appropriately for write operations.
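
For reference, here is a minimal sketch of a handler that reads and writes shared data through this mount point using the Runpod Python SDK. The config and output file names under /runpod-volume are placeholders rather than a required layout, and per-job output files are used to sidestep the concurrent-write concern above.

# Minimal sketch of a handler that reads and writes shared data on the
# attached network volume. File names under /runpod-volume are placeholders.
import os
import json
import runpod

VOLUME_PATH = "/runpod-volume"  # network volumes are mounted here in Serverless workers

def handler(job):
    job_input = job["input"]

    # Read shared configuration that was uploaded to the volume earlier.
    config_path = os.path.join(VOLUME_PATH, "config.json")
    with open(config_path) as f:
        config = json.load(f)

    # Write results to a per-job file so workers never write to the same path.
    output_path = os.path.join(VOLUME_PATH, "outputs", f"{job['id']}.json")
    os.makedirs(os.path.dirname(output_path), exist_ok=True)
    with open(output_path, "w") as f:
        json.dump({"prompt": job_input.get("prompt"), "config": config}, f)

    return {"output_file": output_path}

runpod.serverless.start({"handler": handler})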

Benefits for Serverless

Using network volumes with Serverless provides several advantages:
  • Reduced cold starts: Store large models or datasets on a network volume so workers can access them quickly without downloading on each cold start (see the caching sketch below).
  • Cost efficiency: Network volume storage costs less than frequently re-downloading large files.
  • Simplified data management: Centralize your datasets and models for easier updates and management across multiple workers and endpoints.
If you use network volumes with your Serverless endpoint, your deployments will be constrained to the datacenter where the volume is located. This may impact GPU availability and failover options.
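
A common way to realize the cold-start benefit above is to cache a model on the volume the first time a worker needs it, then load it from the volume on every later start. The sketch below uses only the Python standard library; the model file name and download URL are placeholders.

# Sketch: download a model to the network volume once, then reuse it on later
# cold starts. MODEL_URL and the file name are placeholders for illustration.
import os
import urllib.request

VOLUME_PATH = "/runpod-volume"
MODEL_PATH = os.path.join(VOLUME_PATH, "models", "example-model.safetensors")
MODEL_URL = "https://example.com/example-model.safetensors"  # placeholder URL

def ensure_model():
    """Download the model to the volume only if it is not already cached."""
    if not os.path.exists(MODEL_PATH):
        os.makedirs(os.path.dirname(MODEL_PATH), exist_ok=True)
        urllib.request.urlretrieve(MODEL_URL, MODEL_PATH)
    return MODEL_PATH

# Run at module import time so the download (if needed) happens once per worker,
# outside the per-request handler path.
model_path = ensure_model()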

Network volumes for Pods

When attached to a Pod, a network volume replaces the Pod’s default disk volume and is typically mounted at /workspace.
Network volumes are only available for Pods in the Secure Cloud. For more information, see Pod types.

Attach to a Pod

Network volumes must be attached during Pod deployment. They cannot be attached to a previously-deployed Pod, nor can they be detached later without deleting the Pod. To deploy a Pod with a network volume attached:
  1. Navigate to the Pods section of the Runpod console.
  2. Select Deploy.
  3. Select Network Volume and choose the network volume you want to attach from the dropdown list.
  4. Select a GPU type. The console automatically shows which GPUs are available in the selected network volume's datacenter.
  5. Select a Pod Template.
  6. If you wish to change where the volume mounts, select Edit Template and adjust the Volume Mount Path.
  7. Configure any other fields as needed, then select Deploy On-Demand.
Data from the network volume will be accessible to the Pod from the volume mount path (default: /workspace). Use this directory to upload, download, and manipulate data that you want to share with other Pods.

Share data between Pods

You can attach a network volume to multiple Pods, allowing them to share data seamlessly. Multiple Pods can read files from the same volume concurrently, but you should avoid writing to the same file simultaneously to prevent conflicts or data corruption.
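
How you coordinate writers is up to your application, but one simple, illustrative pattern is to give each Pod its own output file on the shared volume so writes never collide. The sketch below assumes the default /workspace mount path and uses the RUNPOD_POD_ID environment variable as a per-Pod key; adjust both to match your setup.

# Sketch: avoid concurrent writes to the same file by giving each Pod its own
# output file on the shared volume, keyed by the Pod ID. Paths are placeholders.
import os
import json

VOLUME_PATH = "/workspace"                          # default mount path on Pods
POD_ID = os.environ.get("RUNPOD_POD_ID", "local")   # unique per Pod

def write_result(result: dict) -> str:
    """Write this Pod's result to its own file so writers never collide."""
    out_dir = os.path.join(VOLUME_PATH, "results")
    os.makedirs(out_dir, exist_ok=True)
    out_path = os.path.join(out_dir, f"{POD_ID}.json")
    with open(out_path, "w") as f:
        json.dump(result, f)
    return out_path

def read_all_results() -> list:
    """Any Pod can safely read every result file once writes have completed."""
    out_dir = os.path.join(VOLUME_PATH, "results")
    results = []
    for name in sorted(os.listdir(out_dir)):
        with open(os.path.join(out_dir, name)) as f:
            results.append(json.load(f))
    return results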

Network volumes for Instant Clusters

Network volumes for Instant Clusters work the same way as they do for Pods. They must be attached during cluster creation, and by default are mounted at /workspace within each node in the cluster.

Attach to an Instant Cluster

To enable workers on an Instant Cluster to use a network volume:
  1. Navigate to the Instant Clusters section of the Runpod console.
  2. Click Create Cluster.
  3. Click Network Volume and select the network volume you want to attach to the cluster.
  4. Configure any other fields as needed, then click Deploy Cluster.

S3-compatible API

Runpod provides an S3-compatible API that allows you to access and manage files on your network volumes directly, without needing to launch a Pod or run a Serverless worker for file management. This is particularly useful for:
  • Uploading large datasets or models before launching compute resources.
  • Managing files remotely without maintaining an active connection.
  • Automating data workflows using standard S3 tools and libraries.
  • Reducing costs by avoiding the need to keep compute resources running for file management.
  • Pre-populating volumes to reduce worker initialization time and improve cold start performance.
The S3-compatible API supports standard S3 operations including file uploads, downloads, listing, and deletion. You can use it with popular tools like the AWS CLI and Boto3 (Python).
The S3-compatible API is currently available for network volumes in the following datacenters: EUR-IS-1, EU-RO-1, EU-CZ-1, US-KS-2, US-CA-2.
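
As an illustration of the Boto3 route, the following sketch uploads a file to a network volume and lists its contents. The endpoint URL format, credentials, and the use of the volume ID as the bucket name are assumptions for illustration; check the S3-compatible API reference for the exact values for your datacenter and volume.

# Sketch: manage files on a network volume with Boto3 through the S3-compatible API.
# The endpoint URL, credentials, and bucket name below are placeholder assumptions;
# consult the S3-compatible API docs for the exact values for your volume.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3api-eu-ro-1.runpod.io",  # assumed per-datacenter endpoint
    aws_access_key_id="YOUR_S3_ACCESS_KEY",
    aws_secret_access_key="YOUR_S3_SECRET_KEY",
    region_name="EU-RO-1",
)

VOLUME_ID = "your-network-volume-id"  # the volume is addressed like a bucket

# Upload a local model file to the volume before launching compute resources.
s3.upload_file("model.safetensors", VOLUME_ID, "models/model.safetensors")

# List what is currently stored on the volume.
response = s3.list_objects_v2(Bucket=VOLUME_ID, Prefix="models/")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])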

Migrate files

You can migrate files between network volumes (including between data centers) using the following methods:

Using runpodctl

The simplest way to migrate files between network volumes is to use runpodctl send and receive on two running Pods. Before you begin, you’ll need:
  • A source network volume containing the data you want to migrate.
  • A destination network volume (which can be empty or contain existing data).

Step 1: Deploy Pods with network volumes attached

Deploy two Pods using the default Runpod PyTorch template. Each Pod should have one network volume attached.
  1. Deploy the first Pod in the source data center and attach the source network volume.
  2. Deploy the second Pod in the target data center and attach the target network volume.
  3. Start the web terminal in both Pods.

Step 2: Open the source volume

Using your source Pod’s web terminal, navigate to the network volume directory (usually /workspace):
cd /workspace

Step 3: Start the transfer

Use runpodctl send to start the transfer. To transfer the entire volume:
runpodctl send *
You can also specify specific files or directories instead of *.

Step 4: Copy the receive command

After running the send command, copy the receive command from the output. It will look something like this:
runpodctl receive 8338-galileo-collect-fidel

Step 5: Open the destination volume

Using your destination Pod’s web terminal, navigate to the network volume directory (usually /workspace):
cd /workspace

Step 6: Receive your files

Paste and run the receive command you copied earlier:
runpodctl receive 8338-galileo-collect-fidel
The transfer will begin and show progress as it copies files from the source to the destination volume.

Using rsync over SSH

For faster transfer speeds and greater reliability with large migrations, you can use rsync over SSH between two running Pods. Before you begin, you’ll need:
  • A network volume in the source data center containing the data you want to migrate.
  • A network volume in the target data center (which can be empty or contain existing data).

Step 1: Deploy Pods with network volumes attached

Deploy two Pods using the default Runpod PyTorch template. Each Pod should have one network volume attached.
  1. Deploy the first Pod in the source data center and attach the source network volume.
  2. Deploy the second Pod in the target data center and attach the target network volume.
  3. Start the web terminal in both Pods.

Step 2: Set up SSH keys on the source Pod

On the source Pod, install required packages and generate an SSH key pair:
apt update && apt install -y vim rsync && \
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -N "" -q && \
cat ~/.ssh/id_ed25519.pub
Copy the public key that appears in the terminal output.

Step 3: Configure the destination Pod

On the destination Pod, install required packages and add the source Pod’s public key to authorized_keys:
apt update && apt install -y vim rsync && \
ip=$(printenv RUNPOD_PUBLIC_IP) && \
port=$(printenv RUNPOD_TCP_PORT_22) && \
echo "rsync -avzP --inplace -e \"ssh -p $port\" /workspace/ root@$ip:/workspace" && \
vi ~/.ssh/authorized_keys
In the editor that opens, paste the public key you copied from the source Pod, then save and exit (press Esc, type :wq, and press Enter). The command above also prints the rsync command you’ll need to run on the source Pod; copy it for the next step.

Step 4: Run the rsync command

On the source Pod, run the rsync command from the previous step. If you didn’t copy it, you can construct it manually using the destination Pod’s IP address and port number.
# Replace DESTINATION_PORT and DESTINATION_IP with values from the destination Pod
rsync -avzP --inplace -e "ssh -p DESTINATION_PORT" /workspace/ root@DESTINATION_IP:/workspace

# Example:
rsync -avzP --inplace -e "ssh -p 18598" /workspace/ root@157.66.254.13:/workspace
The rsync command displays progress as it transfers files. Depending on the size of your data, this may take some time.

Step 5: Verify the transfer

After the rsync command completes, verify the data transfer by checking disk usage on both Pods:
du -sh /workspace
The destination Pod should show similar disk usage to the source Pod if all files transferred successfully.
If the transfer is interrupted, you can run the rsync command again; the --inplace flag lets rsync reuse partially transferred files instead of starting them from scratch.