convoy
Azure blob storage support
It would be nice to have more cloud storage options than just S3. Would it be hard to adapt the Azure support currently used in Docker Distribution as a volume driver?
If I have some time in the evening in the coming weeks I can try to do this.
Thank you @prelegalwonder ! We'd love to see more storage options. And I think it shouldn't be hard to add Azure.
You can take a look at s3/s3.go and vfs/vfs_objectstore.go to find examples for objectstore code. VFS example is more straightforward compared to S3.
The testing code is in tests/integration/test_main.py, in test_vfs_objectstore() and test_s3_objectstore(). You can also take a look there for some information on how to start Convoy development.
If you have any questions or suggestions, please feel free to ask. :)
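For anyone picking this up, the vfs/s3 objectstore code the maintainer points at follows a register-a-constructor pattern. Below is a minimal, self-contained sketch of that pattern; the `ObjectStoreDriver` interface, `RegisterDriver` helper, and `azure` driver here are illustrative stand-ins, not Convoy's exact API (the real interface has more methods for file read/write, list, remove, etc.):

```go
package main

import "fmt"

// ObjectStoreDriver is a trimmed, hypothetical version of the interface a
// Convoy objectstore backend would satisfy.
type ObjectStoreDriver interface {
	Kind() string
}

// initFunc mirrors the per-driver constructor pattern: given a destination
// URL, it returns a ready-to-use driver.
type initFunc func(destURL string) (ObjectStoreDriver, error)

var drivers = map[string]initFunc{}

// RegisterDriver sketches how s3/vfs-style backends register themselves
// at package init time.
func RegisterDriver(kind string, f initFunc) {
	drivers[kind] = f
}

// azureDriver is a placeholder for a future Azure Blob backend.
type azureDriver struct{ destURL string }

func (d *azureDriver) Kind() string { return "azure" }

func init() {
	RegisterDriver("azure", func(destURL string) (ObjectStoreDriver, error) {
		return &azureDriver{destURL: destURL}, nil
	})
}

func main() {
	d, err := drivers["azure"]("azure://container/backups")
	if err != nil {
		panic(err)
	}
	fmt.Println(d.Kind()) // azure
}
```

A new backend would slot in by registering its own kind string and constructor alongside "s3" and "vfs".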
+1
Is anyone working on this?
As far as I know, not yet.
I was looking over the s3 driver and it looks fairly simple to do ... I just don't know if I quite grok it yet. I'm fairly new to Go, but I'd love to give it a shot.
@withinboredom That would be most welcome! Feel free to ask if you have any questions.
ok, some (hopefully) dumb questions to make sure I understand what's going on:
The s3.go is the interface to s3 from convoy, and s3_service.go is the interface that actually does the heavy lifting to s3, the adapter/client so to speak?
Is the initFunc called only once when the plugin first initializes, when a volume is created, or on every call? https://github.com/rancher/convoy/blob/ffd9a41520b0cdbd206ca31649394d0b5cb0a47e/s3/s3.go#L32-L72 I guess I'm asking what the lifecycle of these functions is.
I might have more questions, but still just reading the code at this point.
Yes, s3.go is the interface to the Convoy framework, and s3_service.go provides the interface to talk to the S3 service.
S3 in Convoy so far only acts as an objectstore, which we store snapshot backups in. So it's not related to volume creation/deletion; it's related to backup creation/deletion.
The initFunc is called every time a URL needs to be parsed; see https://github.com/rancher/convoy/blob/master/objectstore/driver.go#L51
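So the lifecycle is per-URL, not once-per-process: each backup destination URL gets parsed and dispatched to the matching driver's constructor. A small sketch of that parse step using the standard library's net/url (the `parseDest` name and the `azure://container/path` scheme are assumptions for illustration):

```go
package main

import (
	"fmt"
	"net/url"
)

// parseDest splits an objectstore destination URL such as
// "azure://mycontainer/backups" into scheme (used to pick the driver),
// host (e.g. container or bucket), and path.
func parseDest(dest string) (scheme, host, path string, err error) {
	u, err := url.Parse(dest)
	if err != nil {
		return "", "", "", err
	}
	return u.Scheme, u.Host, u.Path, nil
}

func main() {
	s, h, p, err := parseDest("azure://mycontainer/backups")
	if err != nil {
		panic(err)
	}
	fmt.Println(s, h, p) // azure mycontainer /backups
}
```

The scheme string is what would map a URL onto the Azure driver's initFunc on every backup/restore call.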
Interesting. I'm looking at the ebs driver now.
There are three paths to take:
- Azure Blobs: similar to EBS, though they can't be mounted the way they can in AWS. If there were some way to intercept the reads/writes to the fs, this would be the preferred way to connect to Azure, since it's 60 MB/s per blob... Would be great for snapshots/objectstore immediately.
- Azure Files: mount a share using CIFS/SMB. But it's 60 MB/s per share, not per blob, so bandwidth could become a bottleneck, as there'd likely be one share for the whole system. Dunno, going to have to think about that one.
- Mount a VHD: currently what I'm doing. There's a limit to how many VHDs a VM can mount, and it's an exclusive mount, meaning VHDs cannot be shared across instances. However, with striping you can max out the storage account's bandwidth, and with convoy-nfs this works; it's just a pain to set up.
I think option 1 can be done, which would be immediately useful to those of us on Azure.
A general CIFS/SMB driver could work as well ... but I really like having something and Something be two different things, and SMB is case-insensitive. node_modules would be completely broken over SMB ... so I'm not sure that's a viable option.
As for the objectstore, I think Azure Blobs should be good enough. It sounds pretty similar to S3 (rather than EBS, I think).
But I don't think there's a way to intercept fs read/write calls, so Azure Blobs can't be used as a Convoy driver or a Docker volume driver. It seems it can only be used as an objectstore driver in Convoy.
+1
+1
+1
@yasker Hi, if I want to back up the storage account, what methods can be implemented? Can this be achieved with AzCopy? Looking forward to your reply.