In Enterprise Edition, data can be stored in any S3-compatible storage.
How we store files¶
By default, Supervisely uses
/supervisely/data to keep caches, the database, and so on. But here we are interested in the
storage subfolder, where generated content such as uploaded images and neural networks is stored.
You will find two subfolders there:
That's because we maintain the same structure in local storage as if you were using S3. In that case those two folders are buckets with different permissions:
<something>-public/ - Read access for anonymous users; listing disallowed for anonymous users; read/write access for authorized users
<something>-private/ - Read/write access for authorized users
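For reference, the public bucket's permission model corresponds to an S3 bucket policy roughly like the one sketched below. This is an illustration only (the bucket name is hypothetical), not the exact policy Supervisely generates: anonymous principals get s3:GetObject on individual objects, and because no statement grants s3:ListBucket, listing stays disallowed.

```python
import json

# Hypothetical bucket name; substitute your real <something>-public bucket.
BUCKET = "example-public"

# Grant anonymous read on individual objects only. Because no statement
# grants s3:ListBucket, anonymous users cannot enumerate the bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AnonymousObjectRead",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
        }
    ],
}

print(json.dumps(policy, indent=2))
```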
We need read access for anonymous users so that the web browser can load images directly from S3. We generate unique hashes for image names, so there is no way someone can guess a URL.
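The hash-based naming idea can be sketched as follows. The actual hash function Supervisely uses is not stated here, so SHA-256 in this example is an assumption:

```python
import hashlib

def image_object_name(data: bytes) -> str:
    # Content hash as the object key: effectively unguessable without the
    # original bytes, so anonymous object read does not expose other images.
    # SHA-256 is an assumption; Supervisely's actual scheme may differ.
    return hashlib.sha256(data).hexdigest() + ".jpg"

name = image_object_name(b"\xff\xd8\xff fake jpeg bytes")
print(name)
```

The key property is determinism: the same bytes always map to the same name, while guessing a valid name without the bytes amounts to guessing a 256-bit hash.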
Migration from local storage to S3¶
Send us a request and we will generate a configuration that looks for files in the S3 cloud rather than in local storage. Replace your current configuration with the new one, as described in the upgrade guide.
You will need to provide the
STORAGE_SECRET_KEY credentials in the
.env file. Also, the following values will be used as the default S3 endpoint:
If you use something other than AWS S3, change them too.
Now, copy your current storage to S3. As mentioned above, because we maintain the same structure in the local filesystem, a plain copy is enough.
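To see why a plain copy suffices, here is a minimal sketch (with hypothetical folder names) showing that recursively mirroring a local bucket folder preserves every object key unchanged, so services find files at the same paths after migration:

```python
import shutil
import tempfile
from pathlib import Path

# Hypothetical layout; in a real migration these are
# <DATA_STORAGE_FROM_HOST>/<your-buckets-prefix>-{public,private}.
root = Path(tempfile.mkdtemp())
local_bucket = root / "example-public"
(local_bucket / "images").mkdir(parents=True)
(local_bucket / "images" / "abc123.jpg").write_bytes(b"fake image")

# The "upload" is just a recursive copy: each file's path relative to the
# local bucket folder becomes its S3 object key, unchanged.
s3_mirror = root / "s3" / "example-public"
shutil.copytree(local_bucket, s3_mirror)

local_keys = sorted(p.relative_to(local_bucket) for p in local_bucket.rglob("*") if p.is_file())
mirror_keys = sorted(p.relative_to(s3_mirror) for p in s3_mirror.rglob("*") if p.is_file())
print(local_keys == mirror_keys)
```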
We suggest using the minio/mc Docker image to copy the files. Run it and execute the following commands:

mc config host add s3 https://s3.amazonaws.com <YOUR-ACCESS-KEY> <YOUR-SECRET-KEY>
mc cp --recursive <DATA_STORAGE_FROM_HOST>/<your-buckets-prefix>-public s3/<your-buckets-prefix>-public/
mc cp --recursive <DATA_STORAGE_FROM_HOST>/<your-buckets-prefix>-private s3/<your-buckets-prefix>-private/
Finally, restart the services to apply the new configuration:
docker-compose up -d