
Transferring DigitalOcean Spaces Between Regions (Rclone)

Generating a DigitalOcean Spaces API Key

Before installing and configuring Rclone to copy objects between Spaces, you need some information about your DigitalOcean account. Specifically, you need a Spaces API key, and you need to know the regions and names of both the source and destination Spaces.

If you need assistance managing your DigitalOcean account, our expert teams are available 24 hours a day to set things up on your behalf. Please check out our DigitalOcean management plan for more details.

To Generate an API Key:

To create a DigitalOcean Spaces API key, follow the "Creating an Access Key" section of the Spaces API key documentation.

Save the access key and the secret key; they will be needed when configuring Rclone at a later stage.
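As an aside, rclone's s3 backend can also read credentials from the standard AWS-style environment variables instead of the configuration file (this only applies if you set env_auth = true in the remote definition; the key values below are placeholders):

```shell
# Standard AWS-style environment variables that rclone's s3 backend honors
# when env_auth = true. Replace the placeholder values with your real keys.
export AWS_ACCESS_KEY_ID="your_spaces_access_key"
export AWS_SECRET_ACCESS_KEY="your_spaces_secret_key"
```

In this guide we store the keys in the configuration file instead, so this step is optional.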

Manage Administrative Access to Spaces

  • Spaces is DigitalOcean's S3-compatible object storage, which allows you to store and manage enormous amounts of data.
  • A Space is similar to a bucket in which we can store and use files.
  • We can share access to a Space by creating an access key.
  • Navigate to the access keys section, then select the "Generate New Key" option.
  • A Name text box will appear on the screen; enter a name for the access key that lets you identify who or what uses the key, then click the checkmark.
  • After specifying the name, you will see the access key, with the secret key on the line below it.
  • The secret key is not shown again later, so copy it to a safe place or file before proceeding.
  • If the secret key is lost or forgotten, you can regenerate it: open the More menu, click the edit button, and choose to regenerate the token, which creates a new key.
  • Whenever a key is regenerated, you need to reconfigure anything that uses its secret value.

Finding the Spaces' S3-Compatible Endpoints

The next step is to find the endpoint for each Space. From the DigitalOcean control panel, you can view a Space's endpoint by selecting the Settings tab.

The endpoint is always the region in which you created the Space, followed by digitaloceanspaces.com. Make a note of the endpoint for both Spaces; we use this information when creating the Rclone configuration.
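For example, Spaces created in the sfo2 and nyc3 regions (example regions used for illustration throughout this guide) would have the endpoints:

```
sfo2.digitaloceanspaces.com
nyc3.digitaloceanspaces.com
```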

The next step is installing Rclone. You can install Rclone on your local system, or you can install it on a droplet located in the source region.

Go to the download section, select the zipped binary that matches your computer's operating system, and download it.

Once you have downloaded the zip file to your system, follow the section below that matches your platform.


If you are running Ubuntu or Debian, update the local package index and install the unzip utility:

$ sudo apt update
$ sudo apt install unzip

If you are running CentOS or Fedora, install unzip with:

$ sudo yum install unzip

Once unzip is installed, change into the directory where you downloaded the Rclone zip file:

$ cd ~/Downloads
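If you are installing on a headless droplet rather than your local machine, you can fetch the archive directly with curl. The URL below is rclone's published alias for the current 64-bit Linux build; adjust the architecture if yours differs:

```shell
# Download the latest stable Rclone zip for 64-bit Linux.
# "rclone-current-linux-amd64" is rclone's alias for the newest release.
RCLONE_ZIP="rclone-current-linux-amd64.zip"
curl -fsSLO "https://downloads.rclone.org/${RCLONE_ZIP}" \
  || echo "Download failed; check your network or visit rclone.org/downloads"
```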

Next, extract the archive and move into the newly created directory:

$ unzip rclone*
$ cd rclone-v*

From here, copy the binary to the /usr/local/bin directory:

$ sudo cp rclone /usr/local/bin

Next, add the manual page to the system, so that we can easily look up the command syntax and the available options. Ensure that the local manual directory exists, then copy the rclone.1 file:

$ sudo mkdir -p /usr/local/share/man/man1
$ sudo cp rclone.1 /usr/local/share/man/man1

Update the man database to add the new manual page to the system:

$ sudo mandb

Finally, create the Rclone configuration directory and open the configuration file:

$ mkdir -p ~/.config/rclone
$ nano ~/.config/rclone/rclone.conf

This opens a new blank file in a text editor. Skip ahead to the Configuring Rclone section to continue.


If you are running macOS, begin in the directory where you downloaded the rclone zip file:

$ cd ~/Downloads

Next, unzip the file and move into the new directory:

$ unzip -a rclone*
$ cd rclone-v*

Next, ensure that the /usr/local/bin directory exists, then copy the rclone binary into it:

$ sudo mkdir -p /usr/local/bin
$ sudo cp rclone /usr/local/bin

The final step is to create the configuration directory and open the configuration file:

$ mkdir -p ~/.config/rclone
$ nano ~/.config/rclone/rclone.conf

This opens a text editor with a new blank file. Skip ahead to the Configuring Rclone section to continue.


If you are running Windows, begin in the Downloads directory in Windows File Explorer. Select the rclone zip file, right-click it, and in the menu that appears click Extract All…

Follow the prompts to extract the files from the zip. The rclone.exe binary must be run from the command line.

Then open a new command prompt (the cmd.exe program): click the Windows button in the lower-left corner, type cmd, and select Command Prompt.

Change into the path of the Rclone files you extracted by typing:

c:\>cd "%HOMEPATH%\Downloads\rclone*\rclone*"

List the directory contents to verify you are in the correct location:

c:\> dir

Note: on macOS and Linux the binary is named rclone; on Windows it is rclone.exe.

Next, create the configuration directory, then open the configuration file to define the S3-compatible Spaces credentials:

c:\> mkdir "%HOMEPATH%\.config\rclone"
c:\> notepad "%HOMEPATH%\.config\rclone\rclone.conf"

This opens a text editor with a new blank file. Go ahead and define the remotes for each region in the configuration file.

Configuring Rclone

Configure the two DigitalOcean Spaces regions as Rclone "remotes" in the Rclone configuration file. Paste the following block into the configuration file to define the first region:

[spaces-sfo2]
type = s3
env_auth = false
access_key_id = your_spaces_access_key
secret_access_key = your_spaces_secret_key
endpoint = sfo2.digitaloceanspaces.com
acl = private
Because we define the Spaces access credentials directly in the configuration file, we set env_auth to false.

Next, set the access_key_id and secret_access_key variables to the Spaces access key and secret key. Make sure to change the values to your own account's credentials.

Set the endpoint to the Spaces endpoint you noted earlier.

Finally, set acl to private to protect the assets until we choose to share them.

Next, make a duplicate of the configuration block, then update the name and the endpoint to reflect the second region:

[spaces-nyc3]
type = s3
env_auth = false
access_key_id = your_spaces_access_key
secret_access_key = your_spaces_secret_key
endpoint = nyc3.digitaloceanspaces.com
acl = private

Once you are done with the configuration, save and close the file.

On macOS and Linux, lock down the permissions on the configuration file, since the credentials are inside:

$ chmod 600 ~/.config/rclone/rclone.conf
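Putting it together, the entire configuration file can also be written in one step from the shell. The remote names and regions below (spaces-sfo2/sfo2, spaces-nyc3/nyc3) are examples; substitute your own regions and real keys:

```shell
# Create the config directory and write both remotes in one go.
# Key values and regions are placeholders; replace them with your own.
mkdir -p ~/.config/rclone
cat > ~/.config/rclone/rclone.conf <<'EOF'
[spaces-sfo2]
type = s3
env_auth = false
access_key_id = your_spaces_access_key
secret_access_key = your_spaces_secret_key
endpoint = sfo2.digitaloceanspaces.com
acl = private

[spaces-nyc3]
type = s3
env_auth = false
access_key_id = your_spaces_access_key
secret_access_key = your_spaces_secret_key
endpoint = nyc3.digitaloceanspaces.com
acl = private
EOF
# Lock down permissions, since the credentials are inside.
chmod 600 ~/.config/rclone/rclone.conf
```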

Copying the Objects Between Spaces

Once the configuration is done, we are ready to transfer the files.

Check the configured Rclone remotes before you begin:

$ rclone listremotes
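Assuming two remotes named spaces-sfo2 and spaces-nyc3 (substitute whatever names you used in your configuration file), listremotes prints each remote name followed by a colon:

```
spaces-sfo2:
spaces-nyc3:
```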



Now you can view the available Spaces in the first remote:

$ rclone lsd spaces-sfo2:


          -1 2020-12-21 13:07:55        -1 source-space

Repeat the same process to view the Spaces in the other region:

$ rclone lsd spaces-nyc3:


          -1 2020-12-21 13:09:45        -1 destination-space

You can use the tree subcommand to view the contents of a Space:

$ rclone tree spaces-sfo2:source-space

To copy the files between the Spaces:

$ rclone sync spaces-sfo2:source-space spaces-nyc3:destination-space

Check the objects that have been transferred by using the tree subcommand:

$ rclone tree spaces-nyc3:destination-space

Use the check subcommand to compare the objects in both regions:

$ rclone check spaces-sfo2:source-space spaces-nyc3:destination-space

This compares the checksum values of each object in both remotes. You may receive a notification indicating that some values could not be compared; in such cases you can rerun the command.
