Supervisely uses the DATA_PATH folder from .env (defaults to /supervisely/data) to keep caches, the database, and so on. Here we are interested in the storage subfolder, where generated content such as uploaded images and neural networks is stored. It contains two subfolders: <something>-public/ and <something>-private/.
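On a default installation the layout looks roughly like this (a sketch; the exact folder names depend on your instance):

```
/supervisely/data/          # DATA_PATH
├── ...                     # caches, database, etc.
└── storage/                # generated content
    ├── <something>-public/
    └── <something>-private/
```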
To switch to Amazon S3, edit the .env configuration file (you can find it by running the supervisely where command). Change STORAGE_PROVIDER from http (local hard drive) to minio (S3 storage backend), and provide the STORAGE_ACCESS_KEY and STORAGE_SECRET_KEY credentials along with the endpoint of your S3 storage, for example STORAGE_ENDPOINT=s3.amazonaws.com and STORAGE_PORT=443. The resulting .env settings could look like the example below.
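This is only a sketch; the credential values are placeholders and only the storage-related variables are shown:

```
STORAGE_PROVIDER=minio
STORAGE_ENDPOINT=s3.amazonaws.com
STORAGE_PORT=443
STORAGE_ACCESS_KEY=<your-access-key>
STORAGE_SECRET_KEY=<your-secret-key>
```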
Run sudo supervisely up -d to apply the new settings.

To switch to Azure Blob Storage, edit the .env configuration file (you can find it by running the supervisely where command). Change STORAGE_PROVIDER from http (local hard drive) to azure (Azure storage backend), and provide the STORAGE_ACCESS_KEY (your storage account name) and STORAGE_SECRET_KEY (secret key) credentials along with the endpoint of your blob storage. The resulting .env settings could look like the example below.
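Again a sketch; the endpoint format and all values are placeholders for your own storage account:

```
STORAGE_PROVIDER=azure
STORAGE_ENDPOINT=<account-name>.blob.core.windows.net
STORAGE_ACCESS_KEY=<account-name>
STORAGE_SECRET_KEY=<account-secret-key>
```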
Run sudo supervisely up -d to apply the new settings.

To switch to Google Cloud Storage, edit the .env configuration file (you can find it by running the supervisely where command). Change STORAGE_PROVIDER from http (local hard drive) to google (GCS backend), and provide STORAGE_CREDENTIALS_PATH, the path to the credentials file generated by Google. The resulting .env settings could look like the example below.
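A sketch; the path is a placeholder for wherever you keep the JSON key file generated by Google:

```
STORAGE_PROVIDER=google
STORAGE_CREDENTIALS_PATH=/path/to/google-credentials.json
```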
You also need to mount that credentials file into the containers: edit docker-compose.override.yml under cd $(sudo supervisely where), as in the sketch below.
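A sketch of such an override, assuming the key file lives at /path/to/google-credentials.json on the host; the service name below is a placeholder, so check your docker-compose.yml for the services that actually read STORAGE_CREDENTIALS_PATH:

```yaml
services:
  # placeholder service name; repeat the volume for every service that needs the key
  <service-name>:
    volumes:
      - /path/to/google-credentials.json:/path/to/google-credentials.json:ro
```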
Run sudo supervisely up -d to apply the new settings.

To migrate data that is already stored on the local filesystem to your S3 backend, run the minio/mc docker image and execute commands along the lines of the sketch below.
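A sketch of such a migration; the local storage path, alias, bucket name, and credentials are placeholders:

```bash
# start the MinIO client with the local Supervisely storage mounted inside the container
docker run --rm -it -v /supervisely/data/storage:/storage --entrypoint /bin/sh minio/mc

# inside the container: register the S3 endpoint under an alias and copy everything over
mc alias set s3target https://s3.amazonaws.com <ACCESS_KEY> <SECRET_KEY>
mc mirror /storage s3target/<your-bucket>
```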
When the copy is complete, run supervisely up -d.

IAM roles are supported as well: if you set STORAGE_IAM_ROLE=<role_name> in the .env file, then the STORAGE_ACCESS_KEY and STORAGE_SECRET_KEY variables can be omitted. The corresponding part of the configuration (docker-compose.override.yml file) could look like the sketch below.
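A sketch, assuming the role is passed to the storage-related services as an environment variable through the override file; the service name is a placeholder:

```yaml
services:
  # placeholder service name; apply to the services that talk to S3
  <service-name>:
    environment:
      STORAGE_IAM_ROLE: <role_name>
```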
If your files are already accessible via direct links (for example, https://s3-us-west-2.amazonaws.com/test1/abc, and you can open it in your web browser directly), then you can stop reading and start uploading.

To link a local file such as /data/datasets/persons/image1.jpg, use the following format in the API, SDK, or the corresponding application: fs://local-datasets/persons/image1.jpg.
For this to work, you first need to mount the folder with your local files into the Supervisely containers. Edit docker-compose.override.yml under cd $(sudo supervisely where); the resulting docker-compose.override.yml could look like the sketch below.
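A sketch, assuming the host folder /data/datasets should be exposed under the local-datasets prefix used in the fs:// links above; the service name and in-container mount path are placeholders, so check your installation's docker-compose.yml:

```yaml
services:
  # placeholder service name; mount the folder into every service that resolves fs:// links
  <service-name>:
    volumes:
      - /data/datasets:/local-datasets:ro
```

As with the other storage changes, run sudo supervisely up -d afterwards to apply it.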