
Do you support the S3 protocol?

Nurdich opened this issue 11 months ago

I mean cloud storage like MinIO, AWS S3, and OneDrive. I thought there would be a problem with cache delay, but some services have delay and some don't, so I'd like to know. Also, would it work if the storage is not mounted as a disk?

Nurdich avatar Jan 28 '25 07:01 Nurdich

Thanks Nurdich,

I'd like to implement that, but it won't be for a while. Maybe some time this year.

Cheers, Fidel

fiddyschmitt avatar Jan 28 '25 09:01 fiddyschmitt

Hi @fiddyschmitt,

Maybe you don't need to reinvent the wheel in this case. There are open source tools, such as rclone with FUSE and its mount feature, that can make almost any cloud or storage provider appear as a network drive, folder, or mount point on your PC. These mounted folders could then host the channel files and synchronize them with the storage provider.

The only issue is that rclone mount only starts syncing a file once the handle or file lock on it is released, which, the way FT currently works, doesn't happen while FT is running. So rclone only syncs when I close FT, meaning no changes to the channel files are synced while FT is running.

Therefore, we'd need a CLI option in FT that frequently releases the handle / unlocks the channel file (in SendPump()?) so that rclone recognizes the change and syncs it to the cloud storage. The same would apply on the receiving side, I think (in ReceivePump()?).
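
Roughly the idea, as a minimal sketch (hypothetical names, not FT's actual code):

    // Append a chunk to the channel file, then release the handle so the
    // sync client (e.g. rclone mount) sees the change and can upload it.
    using System.IO;

    static void WriteChunk(string channelFile, byte[] chunk)
    {
        using var fs = new FileStream(channelFile, FileMode.Append,
                                      FileAccess.Write, FileShare.Read);
        fs.Write(chunk, 0, chunk.Length);
        // Disposing the stream closes the handle; the file stays unlocked
        // until the next chunk, giving the sync client a window to upload.
    }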

Would something like that make sense?

BR CfK

CfKu avatar Jun 07 '25 09:06 CfKu

Thanks @CfKu, awesome idea! I'm currently working on exactly that - a new mode called --upload-download which writes a file (and releases the handle), then waits for it to be deleted. This would allow tunneling through things like FTP and S3. I'll try it with rclone and post here when it's ready :)
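
Conceptually, the sender side could look something like this (a simplified sketch, not the actual implementation):

    using System.IO;
    using System.Threading;

    // Write a block and release the handle immediately, so the underlying
    // transport (FTP, S3, an rclone mount, ...) can pick the file up.
    static void SendBlock(string path, byte[] data)
    {
        File.WriteAllBytes(path, data);

        // The counterpart deletes the file once it has consumed it,
        // which signals that the next block can be written.
        while (File.Exists(path))
            Thread.Sleep(100);
    }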

fiddyschmitt avatar Jun 07 '25 12:06 fiddyschmitt

@fiddyschmitt awesome! Looking forward to it! My current scenario tries to use FT to tunnel through Dropbox mounted with rclone. It would be nice to see it working, since Dropbox might be more generally available to everyone than S3 - Dropbox is free and whitelisted almost everywhere.

CfKu avatar Jun 07 '25 13:06 CfKu

> @fiddyschmitt awesome! Looking forward to it! My current scenario tries to use FT to tunnel through Dropbox mounted with rclone. It would be nice to see it working, since Dropbox might be more generally available to everyone than S3 - Dropbox is free and whitelisted almost everywhere.

I tried it. It seems that the cache may cause synchronization problems.

Nurdich avatar Jun 07 '25 17:06 Nurdich

Yes, with the current FT implementation it won't work, since FT keeps the channel file locked so it can't be synced. But @fiddyschmitt is working on a feature to make it work. Is there an ETA for the feature?

CfKu avatar Jun 07 '25 18:06 CfKu

Aiming for this week :)

fiddyschmitt avatar Jun 08 '25 01:06 fiddyschmitt

I've been testing with Dropbox + rclone today. It works, but the latency is too high to be usable (it's currently 15 seconds).

This is the mount command I'm using:

rclone.exe mount dropbox: P: --vfs-cache-mode writes --poll-interval=1s --dir-cache-time=1s --vfs-write-back 1s

Any suggestions to make it more responsive? I'll try more things throughout the week.

fiddyschmitt avatar Jun 08 '25 13:06 fiddyschmitt

Do these parameters help?

rclone mount dropbox: P: 
       --vfs-cache-mode writes 
       --vfs-write-back 1s 
       --vfs-cache-max-age 30s 
       --dropbox-chunk-size 4M 
       --tpslimit 12 --tpslimit-burst 12 
       --poll-interval 1s --dir-cache-time 1s

CfKu avatar Jun 08 '25 13:06 CfKu

Thanks! It resulted in the same latency though (15 seconds).

I'm off to sleep, I'll try more things tomorrow night :)

fiddyschmitt avatar Jun 08 '25 14:06 fiddyschmitt

I also quickly asked ChatGPT for other approaches. For FT it had a couple of recommendations, which I was not able to assess for sure:

  • Close the handle after each N KB written - target: 256 KB – 512 KB or ≥ 1 s of traffic.
  • Re-open the same file with O_APPEND - no rename, no new inode ⇒ the reading peer can keep its read() loop alive; it just sees the length grow.
  • Add a sequence number in the first 8 bytes of every block - the reader can detect a partial block (file closed in the middle of an upload) and wait until the sequence flips.

Like I said, I cannot assess if this makes sense at all.

Another recommendation was to switch the Dropbox client to maestral (it can apparently sync files even while they are locked - not tested). Changing the client for Dropbox might make sense, but I was wondering whether rclone would still work with FTP or S3 then, or whether this is an rclone limitation, which would be sad, since rclone is a Swiss Army knife for connecting to cloud storage. The FTP scenario should be easy to test as well; there are plenty of tiny, lightweight FTP servers out there.

What does your current approach look like? Are you creating multiple files or a file sequence for a channel instead of extending a single one?

CfKu avatar Jun 09 '25 09:06 CfKu

Thanks for the interest & research!

  • Close the handle after each N KB written - target: 256 KB – 512 KB or ≥ 1 s of traffic.
  • Re-open the same file with O_APPEND - no rename, no new inode ⇒ the reading peer can keep its read() loop alive; it just sees the length grow.
  • Add a sequence number in the first 8 bytes of every block - the reader can detect a partial block (file closed in the middle of an upload) and wait until the sequence flips.

File Tunnel does a combination of those things. It reads from the TCP stream for --read-duration milliseconds or --purge-size bytes, whichever comes first.

In normal mode, it appends that data to the file. The other side then reads from the file.

In upload-download mode, it writes to a file and closes the handle. The other side reads it and deletes it, signifying it's processed it.
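
A rough sketch of that batching loop (the signature and buffer size are made up; only --read-duration and --purge-size are real options):

    using System;
    using System.IO;
    using System.Net.Sockets;
    using System.Threading;

    // Drain the TCP stream until the time or size threshold is hit,
    // whichever comes first; the batch then goes to the file writer.
    static byte[] ReadBatch(NetworkStream tcp, int readDurationMs, int purgeSizeBytes)
    {
        using var batch = new MemoryStream();
        var deadline = DateTime.UtcNow.AddMilliseconds(readDurationMs);
        var buffer = new byte[64 * 1024];

        while (DateTime.UtcNow < deadline && batch.Length < purgeSizeBytes)
        {
            if (!tcp.DataAvailable) { Thread.Sleep(1); continue; }
            int read = tcp.Read(buffer, 0, buffer.Length);
            if (read == 0) break;   // connection closed
            batch.Write(buffer, 0, read);
        }
        return batch.ToArray();
    }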

> Another recommendation was to switch the Dropbox client to maestral (it can apparently sync files even while they are locked - not tested).

Testing other clients makes sense. But I do feel rclone just needs the right set of args to work the way we need.

> Changing the client for Dropbox might make sense, but I was wondering whether rclone would still work with FTP or S3 then, or whether this is an rclone limitation, which would be sad, since rclone is a Swiss Army knife for connecting to cloud storage. The FTP scenario should be easy to test as well; there are plenty of tiny, lightweight FTP servers out there.

I hope rclone is the answer too. I have added an FTP client to the latest version of FT. If needed, I'll do the same for Dropbox and S3.

> What does your current approach look like? Are you creating multiple files or a file sequence for a channel instead of extending a single one?

Ultimately each side has to detect when a file is ready to be read, plus signal that it has done so. My testing suggests it doesn't matter if one or multiple files are used - the throughput and latency remain the same.
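
For upload-download mode, the receiving side of that handshake might look like this (a sketch mirroring the sender above; names are made up):

    using System.IO;
    using System.Threading;

    static byte[] ReceiveBlock(string path)
    {
        // Detect that a file is ready: it simply appears once the
        // counterpart has written it and released the handle.
        while (!File.Exists(path))
            Thread.Sleep(100);

        var data = File.ReadAllBytes(path);

        // Deleting the file signals "processed" back to the writer.
        File.Delete(path);
        return data;
    }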

fiddyschmitt avatar Jun 09 '25 12:06 fiddyschmitt

Update: I just added a Dropbox client to FT. The results are looking a bit more promising, but the latency is still high (7 seconds).

Here is a breakdown of how long each operation is taking:

9/06/2025 11:22:10 PM: [2.dat] Write file content took 1 attempts (1.956 seconds)
9/06/2025 11:22:12 PM: [2.dat] Wait for file to be deleted by counterpart took 4 attempts (1.615 seconds)
9/06/2025 11:22:13 PM: [1.dat] Wait for file to exist took 11 attempts (5.071 seconds)
9/06/2025 11:22:14 PM: [1.dat] Read file contents took 1 attempts (0.843 seconds)
9/06/2025 11:22:15 PM: [1.dat] Delete processed file took 1 attempts (0.999 seconds)
9/06/2025 11:22:18 PM: [2.dat] Write file content took 1 attempts (2.227 seconds)
9/06/2025 11:22:18 PM: [2.dat] Wait for file to be deleted by counterpart took 2 attempts (0.697 seconds)
9/06/2025 11:22:20 PM: [1.dat] Wait for file to exist took 12 attempts (4.909 seconds)
9/06/2025 11:22:21 PM: [1.dat] Read file contents took 1 attempts (0.712 seconds)
9/06/2025 11:22:22 PM: [1.dat] Delete processed file took 1 attempts (0.946 seconds)
9/06/2025 11:22:24 PM: [2.dat] Write file content took 1 attempts (2.494 seconds)

So it looks like waiting for the file to exist takes the longest time.

Given that we're now using the native client, there might not be anything we can do to improve the latency. I'll have a think about what options we have.

fiddyschmitt avatar Jun 09 '25 13:06 fiddyschmitt

Thanks @fiddyschmitt !

> So it looks like waiting for the file to exist takes the longest time.

If this is the bottleneck, then creating and deleting the file might not be the best approach. I read that the Dropbox native client can sync deltas at block level even if the file is already open. However, the native Dropbox client is apparently not an option for me since I plan to run it on a Raspberry Pi with CLI only.

Do you think it would be possible to keep your initial approach of using two channel files, but release the handle after a time or data threshold has been reached, to allow the downstream software to perform the sync? I don't know if this would work with lower latency, but it might be worth a try?

CfKu avatar Jun 09 '25 18:06 CfKu

> If this is the bottleneck, then creating and deleting the file might not be the best approach. I read that the Dropbox native client can sync deltas at block level even if the file is already open.

Interesting! I'll see if we can use this block-level sync.

> However, the native Dropbox client is apparently not an option for me since I plan to run it on a Raspberry Pi with CLI only.

Oh, I didn't mean the client executable from Dropbox. I meant the official Dropbox C# library provided by the Dropbox developers. It gets compiled into FT and works fine on Linux, Windows, and macOS.

> Do you think it would be possible to keep your initial approach of using two channel files, but release the handle after a time or data threshold has been reached, to allow the downstream software to perform the sync?

Yes that's exactly what it's doing - releasing the handle after a time or data threshold is reached, allowing the underlying sync.

fiddyschmitt avatar Jun 09 '25 23:06 fiddyschmitt

Hey guys, I'm in Iran, and the internet here has been shut down almost completely; only google.com can be accessed, so right now I can only use Google Drive. I want to use Google Drive as a way to bypass the shutdown. Can I use this tool to do such a thing? I have rclone ready, can anyone help me?

Don't have much time; at any moment my connection could go out :') helppp

parvizx3 avatar Jun 22 '25 09:06 parvizx3

@fiddyschmitt I have an idea. A calls a C# web API to upload files, while B calls the same web API to download at the same time. By checking a certain parameter in the HTTP request, A and B can be associated with each other. This requires an intermediate server, or one of A/B can act as the server.
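
For illustration, a bare-bones in-memory relay along those lines (just a sketch, not part of FT; the port and the "ch" parameter are made up):

    using System.Collections.Concurrent;
    using System.IO;
    using System.Net;
    using System.Threading.Tasks;

    // A POSTs blocks to http://host:8080/?ch=<id>; B GETs from the same URL.
    // The "ch" query parameter is what associates A with B.
    var queues = new ConcurrentDictionary<string, BlockingCollection<byte[]>>();
    var listener = new HttpListener();
    listener.Prefixes.Add("http://localhost:8080/");
    listener.Start();

    while (true)
    {
        var ctx = listener.GetContext();
        _ = Task.Run(() =>  // handle concurrently so a waiting GET can't block a POST
        {
            var channel = ctx.Request.QueryString["ch"] ?? "default";
            var queue = queues.GetOrAdd(channel, key => new BlockingCollection<byte[]>());

            if (ctx.Request.HttpMethod == "POST")
            {
                using var ms = new MemoryStream();
                ctx.Request.InputStream.CopyTo(ms);
                queue.Add(ms.ToArray());   // A uploads a block
            }
            else
            {
                var block = queue.Take();  // B waits for the next block
                ctx.Response.OutputStream.Write(block, 0, block.Length);
            }
            ctx.Response.Close();
        });
    }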

gitlsl avatar Jul 25 '25 08:07 gitlsl

Thanks @gitlsl, nice one. Yes, I'd like to eventually add a generic HTTP server as an option :)

fiddyschmitt avatar Jul 29 '25 12:07 fiddyschmitt

> Thanks @gitlsl, nice one. Yes, I'd like to eventually add a generic HTTP server as an option :)

I used the earliest beacon scheme for this. That is, store the commands in a file served over HTTP, and have the other end read the file and then execute the relevant commands, including file transfers and so on; everything is relayed. But as a network protocol it's definitely not good; when I tried it, it was very, very slow.

Nurdich avatar Aug 14 '25 12:08 Nurdich

Hi @Nurdich, @CfKu,

Tunneling through S3 and Dropbox now works in v3.0.0.

S3 instructions here. Dropbox instructions here.

Please let me know if it works for you :)

fiddyschmitt avatar Nov 11 '25 13:11 fiddyschmitt

Great, I'll give it a try.

CfKu avatar Nov 13 '25 22:11 CfKu