Migrate AWS S3 to UpCloud's Managed Object Storage
Migrating your existing data from Amazon S3 to UpCloud’s Managed Object Storage service can be accomplished through API calls. To get started, you will need an UpCloud user account with API credentials. Instructions for creating these can be found in the Getting started using the UpCloud API guide.
You’ll also need an API client, or a way to make API calls. You can use any API client that you’re comfortable with, such as Yaak or the curl command line tool.
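For example, with curl you can quickly confirm that your API credentials work by requesting your account details. This is only a sketch using HTTP basic authentication; the username and password placeholders are your UpCloud API credentials, not the object storage keys:

curl -s -u "<upcloud_api_username>:<upcloud_api_password>" https://api.upcloud.com/1.3/account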
The process involves creating copy jobs via the API to transfer data from your AWS S3 buckets to the target Managed Object Storage. The system handles the data transfer automatically while providing status updates and detailed progress information.
It’s worth noting that the migration process is non-destructive, meaning any files or buckets that exist only in the destination will be left untouched. Your source data in AWS S3 remains unchanged throughout the migration process, so you can continue using your S3 buckets until the migration is complete and verified.
Before migrating
Before beginning the migration, make sure you have the following:
AWS S3 credentials. You’ll need your AWS access key and secret access key with sufficient permissions to read from your source AWS S3 buckets. If you need to create new access credentials, follow these steps in the AWS Management Console:
- Sign in to AWS Management Console
- Search for “IAM” in the top search bar and click on it
- In the left sidebar menu, click on “Users”
- Click on the username (or create a new user if needed)
- Click on the “Security credentials” tab
- Scroll to “Access keys” section and click “Create access key”
- When prompted for use case, select “Command Line Interface (CLI)” and click “Next”
- IMPORTANT: Save your Access Key and Secret Access Key securely - you’ll only see the Secret Access Key once
- To add the required S3 permissions, go to the user’s “Permissions” tab, click “Add permissions”, and either attach AmazonS3ReadOnlyAccess or create a custom policy with the minimum permissions (s3:ListBucket and s3:GetObject) for the buckets you want to migrate, as sketched below.
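If you go the custom policy route, a minimal policy could look like the sketch below. The bucket name is a placeholder; add a pair of Resource entries for every bucket you plan to migrate:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetObject"],
      "Resource": [
        "arn:aws:s3:::<your-bucket-name>",
        "arn:aws:s3:::<your-bucket-name>/*"
      ]
    }
  ]
}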
AWS S3 endpoint URL. This is the public endpoint for your S3 buckets. The endpoint URL follows the format https://s3.{region}.amazonaws.com. For example:
- US East (N. Virginia): https://s3.us-east-1.amazonaws.com
- Europe (Ireland): https://s3.eu-west-1.amazonaws.com
- Asia Pacific (Singapore): https://s3.ap-southeast-1.amazonaws.com
You can find your bucket’s region in the AWS S3 Console under the bucket’s Properties tab.
UpCloud Managed Object Storage credentials. This is the access key and secret key for the destination. If you don’t have these, you’ll need to generate new keys from the UpCloud Control Panel by opening the Managed Object Storage page and clicking the User tab. If you don’t yet have a user, create one and then click “+ ACCESS KEYS” to generate new credentials for the user.
It is also important to ensure that your user has full access to all storage buckets in the Managed Object Storage. If you haven’t done so already, you can do this by attaching the ‘ECSS3FullAccess - v1’ policy to the user’s account as shown below:
UpCloud Managed Object Storage endpoint URL. This is the S3 public access endpoint for the destination object storage. You can find your endpoint URL from the Public access section in the Overview tab of the Managed Object Storage.
Optional: a list of buckets to be migrated. If you plan to do a selective migration, it helps to prepare a list of all the buckets you intend to migrate to the Managed Object Storage.
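If you have the AWS CLI installed and configured with the source credentials, one way to draft this list is to print all bucket names in the source account:

aws s3api list-buckets --query "Buckets[].Name" --output text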
Creating a full migration job
To migrate all of your source S3 buckets at once, create a copy job using the POST request below:
POST https://api.upcloud.com/1.3/object-storage-2/jobs
{
"type": "copy",
"source": {
"access_key_id": "<aws_access_key_id>",
"secret_access_key": "<aws_secret_access_key>",
"endpoint_url": "https://s3.<region>.amazonaws.com"
},
"target": {
"access_key_id": "<upcloud_access_key>",
"secret_access_key": "<upcloud_secret_key>",
"endpoint_url": "<upcloud_endpoint>"
}
}
Note that <region> in the source endpoint URL should be replaced with your AWS region (e.g., us-east-1, eu-west-1, etc.). Also, make sure to use AWS credentials with appropriate S3 read permissions and the correct region where your buckets are located.
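If you are using curl, the same request can be sent as in the sketch below. The basic authentication placeholders are your UpCloud API username and password, and the other placeholders match the JSON above:

curl -s -X POST https://api.upcloud.com/1.3/object-storage-2/jobs \
  -u "<upcloud_api_username>:<upcloud_api_password>" \
  -H "Content-Type: application/json" \
  -d '{
    "type": "copy",
    "source": {
      "access_key_id": "<aws_access_key_id>",
      "secret_access_key": "<aws_secret_access_key>",
      "endpoint_url": "https://s3.<region>.amazonaws.com"
    },
    "target": {
      "access_key_id": "<upcloud_access_key>",
      "secret_access_key": "<upcloud_secret_key>",
      "endpoint_url": "<upcloud_endpoint>"
    }
  }'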
In an API client, such as Insomnia, the call above will look like this:
The returned response will include the operational status of the migration job as well as its UUID. Take note of the job UUID, as you’ll need it to check on the progress of the migration later.
Creating a selective migration job
The migration tool can also be used to migrate specific buckets. This can be done using the POST request below:
POST https://api.upcloud.com/1.3/object-storage-2/jobs
{
"type": "copy",
"source": {
"access_key_id": "<aws_access_key_id>",
"secret_access_key": "<aws_secret_access_key>",
"endpoint_url": "https://s3.<region>.amazonaws.com",
"bucket": "<aws-source-bucket-name>"
},
"target": {
"access_key_id": "<upcloud_access_key>",
"secret_access_key": "<upcloud_secret_key>",
"endpoint_url": "<upcloud_endpoint>",
"bucket": "<upcloud-target-bucket-name>"
}
}
Note that <aws-source-bucket-name> should be the name of your existing AWS S3 bucket, and <upcloud-target-bucket-name> will be the name of the new bucket in UpCloud’s Managed Object Storage. The target bucket will be created automatically if it doesn’t exist. Remember to replace <region> in the source endpoint URL with your AWS region (e.g., us-east-1, eu-west-1, etc.).
The migration is non-destructive, meaning any files or buckets that exist only in the destination will be left untouched.
Here is how the call looks in an API client:
Like before, the response will include details of the job, including its status and the job UUID.
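If you prepared a list of buckets earlier, you can queue one job per bucket with a small shell loop like the sketch below. The buckets.txt file (one source bucket name per line) and the choice to reuse the source bucket names as target names are illustrative assumptions:

while read -r bucket; do
  curl -s -X POST https://api.upcloud.com/1.3/object-storage-2/jobs \
    -u "<upcloud_api_username>:<upcloud_api_password>" \
    -H "Content-Type: application/json" \
    -d "{
      \"type\": \"copy\",
      \"source\": {
        \"access_key_id\": \"<aws_access_key_id>\",
        \"secret_access_key\": \"<aws_secret_access_key>\",
        \"endpoint_url\": \"https://s3.<region>.amazonaws.com\",
        \"bucket\": \"$bucket\"
      },
      \"target\": {
        \"access_key_id\": \"<upcloud_access_key>\",
        \"secret_access_key\": \"<upcloud_secret_key>\",
        \"endpoint_url\": \"<upcloud_endpoint>\",
        \"bucket\": \"$bucket\"
      }
    }"
done < buckets.txt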
Monitoring the progress of a migration
As explained earlier, every time you initiate a migration job, you’ll receive a response containing the job UUID:
{
...
"updated_at": "2024-10-30T05:31:44.21534Z",
"uuid": "12dab7d5-12c5-4c31-a881-10917ba10e6a"
}
You can check on the status of a job using its job UUID in the following GET request:
GET https://api.upcloud.com/1.3/object-storage-2/jobs/{uuid}
The response will include the state of the migration job, as well as other pieces of related information, such as the amount of data transferred and the duration of the operation.
Jobs progress through the following operational states:
- pending - The job has been queued for processing
- configuring - The job is going through initial setup. This can take 1-2 minutes
- running - The migration is still running, and data is being transferred
- completed - The migration has finished successfully
- failed - An error occurred during the execution of the job. Check output.error in the job status response for more information
- cancelled - The job was manually terminated. See below for how to do this
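To follow a job from the command line, you can poll the status endpoint until it leaves the pending, configuring, and running states. The sketch below assumes curl and jq are available and that the state is returned in a top-level state field (check the actual response body to confirm the field name):

JOB_UUID="12dab7d5-12c5-4c31-a881-10917ba10e6a"   # replace with your job UUID
while true; do
  STATE=$(curl -s -u "<upcloud_api_username>:<upcloud_api_password>" \
    "https://api.upcloud.com/1.3/object-storage-2/jobs/$JOB_UUID" | jq -r '.state')
  echo "Job state: $STATE"
  case "$STATE" in
    completed|failed|cancelled) break ;;
  esac
  sleep 30
done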
Cancelling an ongoing migration
If needed, you can cancel an existing migration job using the API call below:
DELETE https://api.upcloud.com/1.3/object-storage-2/jobs/{job-uuid}
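With curl, cancelling a job looks like the sketch below; replace the placeholder with the UUID of the job you want to stop:

curl -s -X DELETE -u "<upcloud_api_username>:<upcloud_api_password>" \
  https://api.upcloud.com/1.3/object-storage-2/jobs/<job-uuid>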
Post-migration
Once you’ve finished copying your data from AWS S3, take time to verify that everything has transferred correctly to UpCloud’s Managed Object Storage. This includes checking:
- All buckets and objects are present
- File contents are identical
- Object metadata and tags have been preserved
- Folder structures are maintained
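One quick, non-exhaustive way to spot-check a bucket is to list it on both sides with the AWS CLI and compare object counts. The sketch below assumes the CLI is configured with two profiles, here called aws-source and upcloud-target, holding the respective credentials:

# Count objects in the source bucket on AWS
aws s3 ls "s3://<bucket-name>" --recursive --profile aws-source | wc -l

# Count objects in the same bucket on UpCloud's Managed Object Storage
aws s3 ls "s3://<bucket-name>" --recursive --profile upcloud-target --endpoint-url "<upcloud_endpoint>" | wc -l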
You’ll also need to update any applications that use the AWS S3 endpoints to use the new UpCloud Managed Object Storage endpoint instead.
If everything works as expected, you can consider deleting the original AWS S3 buckets. However, it’s usually good practice to keep them for a little while after the migration, just in case you need to roll back or verify anything.