[Enhancement]: Move away from MinIO Gateway
Is there an existing issue for this?
- [X] I have searched the existing issues
What would you like to be added?
Would it be possible to move away from the MinIO Gateway with regard to Azure? GCP and S3 both allow direct connections thanks to the S3 API, but Azure currently doesn't support that. The issue is that the MinIO Gateway has been deprecated and is no longer a good solution.
Why is this needed?
A direct connection to Azure, or a non-deprecated third-party service like SeaweedFS or s3proxy.
Anything else?
No response
Good suggestion! We definitely want to integrate Azure storage directly. Marking as good first issue.
There is a project, https://github.com/datafuselabs/opendal, that supports many object storage backends; adding Go and C++ language bindings for it may be a good approach.
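For illustration only, a minimal cgo sketch of the shape such a Go binding could take. The `opendal_read` function below is a hypothetical stand-in defined inline in the preamble, not OpenDAL's actual C API:

```go
package main

/*
#include <stdlib.h>
#include <string.h>

// Hypothetical stand-in for an OpenDAL-style C ABI; a real binding would
// link against a library exporting a function like this instead.
static char* opendal_read(const char* path) {
	return strdup("object contents");
}
*/
import "C"

import (
	"fmt"
	"unsafe"
)

// read wraps the (hypothetical) C call and converts the result to Go types.
func read(path string) string {
	cPath := C.CString(path)
	defer C.free(unsafe.Pointer(cPath))

	cData := C.opendal_read(cPath)
	defer C.free(unsafe.Pointer(cData))

	return C.GoString(cData)
}

func main() {
	fmt.Println(read("bucket/segment/0001"))
}
```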
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen.
bump
Bump - Azure storage deeply appreciated
We should do it for sure.
Anyone volunteered to implement a C++ Azure storage connector?
I am quite interested in this issue. Does this mean not using MinIO anymore and using another project like OpenDAL to support storage?
Hi Goldnen Sheep, it actually means removing the gateway and connecting directly to Azure storage, GCP, and S3 with the S3 APIs.
However, any contribution on the storage layer is welcome; OpenDAL might be one of the choices. HDFS support is another thing we have always wanted to work on.
I am working on SeaweedFS. Would be nice to integrate better with Milvus. Let me know if need any help from me.
Nice! Any contribution is highly welcome. I'm also a big fan of SeaweedFS!
Do you think any queries can be pushed down to the storage layer?
We don't do much on the storage tier; the only requests are put and get. If SeaweedFS can support efficient appends, that would help us a lot.
SeaweedFS does support appending, but it would be efficient to append chunks, not line by line. Would that work?
Should work in most cases, because the insertion is usually done in batches~
Good. But the S3 API does not have an append operation, so this would need to append to SeaweedFS via its HTTP API.
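As a sketch of what chunked appends could look like from Go, assuming a SeaweedFS filer on localhost:8888 and its `op=append` upload parameter (the path and helper name here are illustrative, not Milvus code):

```go
package main

import (
	"bytes"
	"fmt"
	"mime/multipart"
	"net/http"
)

// appendChunk POSTs one chunk to a filer path with op=append, so repeated
// calls grow the same file chunk by chunk instead of line by line.
func appendChunk(filerURL, path string, chunk []byte) error {
	var body bytes.Buffer
	w := multipart.NewWriter(&body)
	part, err := w.CreateFormFile("file", "chunk")
	if err != nil {
		return err
	}
	if _, err = part.Write(chunk); err != nil {
		return err
	}
	if err = w.Close(); err != nil {
		return err
	}

	resp, err := http.Post(filerURL+path+"?op=append", w.FormDataContentType(), &body)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 300 {
		return fmt.Errorf("append failed: %s", resp.Status)
	}
	return nil
}

func main() {
	// Illustrative path; one call per batch of inserted rows.
	_ = appendChunk("http://localhost:8888", "/milvus/segment-0001.log", []byte("a batch of rows"))
}
```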
Bump - would love to be able to use Azure storage!
Azure Blob support is already on the roadmap.
Any ETA on this? Thanks!
I would expect this to be done before 7.30
/assign @jaime0815 Could you help with the support of Azure Blob storage?
Which Go storage connector should be implemented? I can give it a try.
The blob storage should be in C++~~ @PowderLi, could you work with shunjiezhao to figure it out?
Here is an example of accessing Blobs with Go: https://github.com/Azure-Samples/storage-blobs-go-quickstart
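For anyone picking this up, here is a minimal upload/download round trip with the azblob SDK; the connection string comes from the environment, and the container and blob names are placeholders:

```go
package main

import (
	"context"
	"fmt"
	"io"
	"log"
	"os"

	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
)

func main() {
	// Placeholder: a real connection string comes from the storage account.
	connStr := os.Getenv("AZURE_STORAGE_CONNECTION_STRING")

	client, err := azblob.NewClientFromConnectionString(connStr, nil)
	if err != nil {
		log.Fatal(err)
	}
	ctx := context.Background()

	// Upload a small payload to container "milvus", blob "hello.txt".
	if _, err := client.UploadBuffer(ctx, "milvus", "hello.txt", []byte("hello azure"), nil); err != nil {
		log.Fatal(err)
	}

	// Download it back and print the contents.
	resp, err := client.DownloadStream(ctx, "milvus", "hello.txt", nil)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	data, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(data))
}
```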
Thanks, I will take a look 😄
Hello, what types of authorization should we support?
The following two types:
- identity and access management (useIAM=true): Azure Active Directory (Azure AD); see details in "Assign an Azure role for access to blob data"
- other (useIAM=false): shared key; see details in "Authorize with Shared Key"
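A sketch of how the two modes could map onto the azblob and azidentity SDKs; `newBlobClient` and the account/URL values are illustrative, not the final Milvus code:

```go
package main

import (
	"log"
	"os"

	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
)

// newBlobClient builds a client for either auth mode; serviceURL is a
// placeholder of the form https://<account>.blob.core.windows.net/.
func newBlobClient(useIAM bool, serviceURL, accountName, accountKey string) (*azblob.Client, error) {
	if useIAM {
		// useIAM=true: Azure AD via the default credential chain
		// (environment, managed identity, CLI login, ...).
		cred, err := azidentity.NewDefaultAzureCredential(nil)
		if err != nil {
			return nil, err
		}
		return azblob.NewClient(serviceURL, cred, nil)
	}
	// useIAM=false: shared key authorization.
	cred, err := azblob.NewSharedKeyCredential(accountName, accountKey)
	if err != nil {
		return nil, err
	}
	return azblob.NewClientWithSharedKeyCredential(serviceURL, cred, nil)
}

func main() {
	client, err := newBlobClient(false,
		"https://myaccount.blob.core.windows.net/",
		"myaccount", os.Getenv("AZURE_STORAGE_KEY"))
	if err != nil {
		log.Fatal(err)
	}
	_ = client
}
```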
Hi, I have some questions. 😄
- When useIAM is enabled, should the user provide roleName and rolePassword in milvus.yaml?
- Should the Azure configuration be separate from the MinIO configuration in milvus.yaml?
- How do I add this image to the test environment? https://learn.microsoft.com/en-us/azure/storage/common/storage-use-azurite?tabs=docker-hub#run-azurite (see the sketch after this list)
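On the Azurite question: the emulator ships with a well-known development account, so tests can point the same azblob client at it with a local connection string (assuming the default blob port 10000 from the linked doc):

```go
package main

import (
	"context"
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
)

// Azurite's well-known development account and key (from the linked doc),
// assuming the emulator is listening on the default blob port 10000.
const azuriteConnStr = "DefaultEndpointsProtocol=http;" +
	"AccountName=devstoreaccount1;" +
	"AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;" +
	"BlobEndpoint=http://127.0.0.1:10000/devstoreaccount1;"

func main() {
	client, err := azblob.NewClientFromConnectionString(azuriteConnStr, nil)
	if err != nil {
		log.Fatal(err)
	}
	// Smoke test against the emulator: create a container.
	if _, err := client.CreateContainer(context.Background(), "milvus-test", nil); err != nil {
		log.Fatal(err)
	}
}
```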
Good morning~~ I have a suggestion.
While adding support for Azure Blob Storage, I found a lot of duplicated code and test code similar to minio_chunk_manager. Can I use minio_chunk_manager as a framework, extract some base methods into a new interface, and replace minio.Client with that interface? The interface would look like:
```go
type ObjectStorage interface {
	CreateBucket() error
	Stat(ctx context.Context, name string) (fileInfo, error)
	Read(ctx context.Context, name string) ([]byte, error)
	Write(ctx context.Context, name string, data []byte) error
	Delete(ctx context.Context, name string) error
	ListWithPrefix(ctx context.Context, prefix string, recursive bool) ([]fileInfo, error)
	ReadAt(ctx context.Context, filePath string, off int64, length int64) ([]byte, error)
}

type MinioChunkManager struct {
	ObjectStorage

	bucketName string
	rootPath   string
}
```
This way, I can use minio_chunk_manager as the framework and implement the interface with both minio.Client and azblob.Client, and there is no need to rewrite Azure's test code.
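To make the proposal concrete, a sketch of an Azure-backed implementation of two of those methods on top of azblob.Client; the type name is illustrative, not the final Milvus code:

```go
package storage

import (
	"context"
	"io"

	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
)

// AzureObjectStorage is an illustrative backend for the proposed
// ObjectStorage interface on top of azblob.Client; the bucket name is
// mapped to an Azure container. Only Read and Write are shown.
type AzureObjectStorage struct {
	client     *azblob.Client
	bucketName string
}

func (a *AzureObjectStorage) Read(ctx context.Context, name string) ([]byte, error) {
	resp, err := a.client.DownloadStream(ctx, a.bucketName, name, nil)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return io.ReadAll(resp.Body)
}

func (a *AzureObjectStorage) Write(ctx context.Context, name string, data []byte) error {
	_, err := a.client.UploadBuffer(ctx, a.bucketName, name, data, nil)
	return err
}
```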