In Enterprise Edition we support data storage in any S3 compatible storage.
How we store files
Supervisely keeps its data in a folder configured in the `.env` file (it defaults to `/supervisely/data`): caches, the database, and so on. Here we are interested in the `storage` subfolder, where generated content such as uploaded images and neural networks is stored.
You can find two subfolders here. That's because we maintain the same structure in local storage as if you were using S3. In that case, those two folders are buckets with different permissions:

- `<something>-public/` - read access for anonymous users (listing is disallowed for them); read/write access for authorized users
- `<something>-private/` - read/write access for authorized users
We need read access for anonymous users so that the web browser can load images directly from S3. Image names are unique generated hashes, so there is no way to guess a URL.
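As a rough illustration of why such names are unguessable, here is a sketch of deriving an object name from file content. The file, the choice of SHA-256, and the bucket prefix are all illustrative; Supervisely's actual naming scheme is not described here.

```shell
# Illustration only: derive an unguessable object name from file content.
printf 'fake image bytes' > cat.jpg            # stand-in file for the example
hash=$(sha256sum cat.jpg | cut -d' ' -f1)      # 64 hex characters
echo "stored as: <something>-public/${hash}.jpg"
```

A content hash like this is 64 hex characters, so guessing a valid object URL by brute force is infeasible in practice.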
Configure Supervisely to use S3
All the settings live in the `.env` configuration file; you can find its location by running the `supervisely where` command.
Change `STORAGE_PROVIDER` from `http` (local hard drive) to `minio` (S3 storage backend). You also need to provide the `STORAGE_ACCESS_KEY` and `STORAGE_SECRET_KEY` credentials, along with the endpoint of your S3 storage.
For example, here is how your `.env` settings could look for Amazon S3:

```
STORAGE_PROVIDER=minio
STORAGE_ENDPOINT=s3.amazonaws.com
STORAGE_PORT=80
STORAGE_ACCESS_KEY=<hidden>
STORAGE_SECRET_KEY=<hidden>
```
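If you prefer to script the change, here is a minimal sketch that flips the provider with `sed`. It operates on a throwaway file, `env.example`, whose name and contents are fabricated for illustration; in reality you would edit the `.env` file that `supervisely where` points to, ideally after making a backup.

```shell
# Sketch: switch the storage backend in a copy of the settings file.
printf 'STORAGE_PROVIDER=http\nSTORAGE_PORT=80\n' > env.example
sed -i 's/^STORAGE_PROVIDER=.*/STORAGE_PROVIDER=minio/' env.example
grep '^STORAGE_PROVIDER=' env.example    # prints: STORAGE_PROVIDER=minio
```

Anchoring the pattern with `^STORAGE_PROVIDER=` ensures only that one key is rewritten, even if other lines mention the same word.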
Migration from local storage to S3
Now copy your current storage to S3. As mentioned before, because we maintain the same structure in the local filesystem, copying the files is enough.
We suggest using minio/mc to copy the files. Run the minio/mc Docker image and execute the following commands:

```
mc config host add s3 https://s3.amazonaws.com <YOUR-ACCESS-KEY> <YOUR-SECRET-KEY>
mc cp --recursive <DATA_STORAGE_FROM_HOST>/<your-buckets-prefix>-public s3/<your-buckets-prefix>-public/
mc cp --recursive <DATA_STORAGE_FROM_HOST>/<your-buckets-prefix>-private s3/<your-buckets-prefix>-private/
```
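A cheap sanity check after copying is to compare object counts between the local bucket folders and the S3 side (which you can list with `mc ls --recursive`). Here is a local sketch of the counting half, run against a fabricated demo tree so the example is self-contained; your real folders follow the `<your-buckets-prefix>-public`/`-private` naming.

```shell
# Build a small fabricated tree standing in for the real storage folders.
mkdir -p storage/demo-public storage/demo-private
touch storage/demo-public/a.jpg storage/demo-public/b.jpg
touch storage/demo-private/model.tar

# Count files per bucket-like folder.
find storage/demo-public -type f | wc -l     # prints: 2
find storage/demo-private -type f | wc -l    # prints: 1
```

Matching counts on both sides is a quick way to confirm that the recursive copy picked up everything.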
Finally, restart the services to apply the new configuration by running `supervisely up -d`.