feat(upload): Add transfer_speed for downloading and uploading files
Package Changes Through aa7b4c39ce1d3893f6fb72a06046f738f956fe52
There are 7 changes which include upload with minor, upload-js with minor, sql-js with patch, clipboard-manager with patch, sql with patch, fs-js with patch, and log-plugin with patch.
Planned Package Versions
The following package releases are planned based on the context of changes in this pull request.
| package | current | next |
|---|---|---|
| api-example | 2.0.4 | 2.0.5 |
| api-example-js | 2.0.1 | 2.0.2 |
| clipboard-manager | 2.0.1 | 2.0.2 |
| fs-js | 2.0.1 | 2.0.2 |
| log-plugin | 2.0.1 | 2.0.2 |
| sql | 2.0.1 | 2.0.2 |
| sql-js | 2.0.0 | 2.0.1 |
| upload | 2.0.1 | 2.1.0 |
| upload-js | 2.0.0 | 2.1.0 |
Add another change file through the GitHub UI by following this link.
Read about change files, or see the docs at github.com/jbolda/covector
Thanks for the PR! I think this is a pretty solid start. My 2 main issues here are
- I don't see a reason to split up the payload types. I'd much rather implement the feature for both apis.
- I'm not a fan of all the mutex / thread stuff tbh. Also, with your setTimeout comment in mind, Rust's sleep method isn't that much better: there's no guarantee that it sleeps exactly one second, just that it's at least one second (and then add the potential mutex waiting time on top).
- And a small one, maybe just "speed" instead of "progress_speed"? Or transfer_speed? Idk, something about progress_speed triggers me 😂 Maybe let's wait on other opinions on that topic.
For 2) I think a better approach would again come down to the performance.now() suggestion in the issue (of course meaning an equivalent in Rust). You don't really need to know how much data was sent in a whole second; knowing how much data was sent in X milliseconds (or how long Y chunks took) should be enough. Of course you could still collect a few chunks first before reporting, or store the timings of the last idk 10 chunks in an array and report the average of those values. All of that should work without a mutex or threads.
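For illustration only, here is a minimal sketch of the rolling-window idea described above, assuming hypothetical `next_chunk()` and `send()` helpers (they are not part of the plugin):

```rust
use std::collections::VecDeque;
use std::time::Instant;

// Keep the timings of the last 10 chunks and report their average;
// no mutex or extra thread is required.
fn track_chunk_timings(mut next_chunk: impl FnMut() -> bool, mut send: impl FnMut(u128)) {
    let mut timings: VecDeque<u128> = VecDeque::with_capacity(10);
    let mut last = Instant::now();

    while next_chunk() {
        let now = Instant::now();
        timings.push_back(now.duration_since(last).as_millis());
        last = now;

        if timings.len() > 10 {
            timings.pop_front(); // drop the oldest entry to keep a window of 10
        }
        if timings.len() == 10 {
            // Report the average time per chunk over the last 10 chunks (in ms).
            send(timings.iter().sum::<u128>() / 10);
        }
    }
}
```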
-
Yeah, I thought speed for the upload could be done in another PR, but since you pointed it out, I'll implement it here as well.
-
This is also a valid point, if Rust's sleep isn't guaranteed to run exactly every second .. (But it ran the code more accurately than setInterval; using setInterval wasn't even working from the user's perspective.)
Yeah, let me try this performance.now() Rust approach. Thanks for the detailed description. Previously, I couldn't see how it could be done without threads and sleeps.
yes, speed or transfer_speed makes more sense
```rust
use std::time::Instant;
use std::collections::VecDeque;

fn main() {
    let mut time_list: VecDeque<u128> = VecDeque::with_capacity(10);
    let mut start_time = Instant::now();

    while next_chunk() { // Assuming next_chunk() is defined elsewhere
        let now = Instant::now();
        let it_took = now.duration_since(start_time).as_millis();
        start_time = now;

        time_list.push_back(it_took);
        if time_list.len() == 10 {
            let avg_time: u128 = time_list.iter().sum::<u128>() / 10;
            // Report the average time
            send(avg_time); // Assuming send() is defined elsewhere
            // Reset the time list
            time_list.clear();
        }
    }
}
```
Here is some rough pseudocode I have in mind; I'm just suggesting it.
@FabianLars I've rewritten the logic for the transfer_speed payload field.
I've created a new struct that decouples the transfer speed calculation from the download and upload functions.
- Added a granularity of 500 milliseconds for TransferStats, so the transfer speed is updated every 500 milliseconds (see the sketch below).
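For context, a minimal sketch of what such a decoupled helper might look like on the Rust side, mirroring the JS class shown later in this thread; the names and fields here are assumptions, not the exact code from this PR:

```rust
use std::time::Instant;

// Hypothetical sketch: accumulate chunk sizes and elapsed time, and
// recompute the speed once `granularity` milliseconds have elapsed.
pub struct TransferStats {
    accumulated_chunk_len: usize, // bytes transferred in the current period
    accumulated_time: u128,       // elapsed time in the current period (ms)
    pub transfer_speed: u64,      // last computed speed in bytes per second
    start_time: Instant,          // start of the current measurement period
    granularity: u32,             // period over which the speed is recalculated (ms)
}

impl TransferStats {
    pub fn new(granularity: u32) -> Self {
        Self {
            accumulated_chunk_len: 0,
            accumulated_time: 0,
            transfer_speed: 0,
            start_time: Instant::now(),
            granularity,
        }
    }

    // Record a transferred chunk and update the speed if the period has elapsed.
    pub fn record_chunk_transfer(&mut self, chunk_len: usize) {
        let now = Instant::now();
        self.accumulated_time += now.duration_since(self.start_time).as_millis();
        self.accumulated_chunk_len += chunk_len;

        if self.accumulated_time >= u128::from(self.granularity) {
            // bytes per second = bytes / (elapsed ms / 1000)
            self.transfer_speed =
                (self.accumulated_chunk_len as u128 * 1000 / self.accumulated_time) as u64;
            self.accumulated_chunk_len = 0;
            self.accumulated_time = 0;
        }

        self.start_time = now;
    }
}
```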
Existing issues
- If the while loop that saves the chunk and reports to TransferStats stops running continuously (due to network problems or some other reason), or the gap between two consecutive loop iterations is larger than the granularity, then transfer_speed might get stuck at a stale value for a brief period of time. (I haven't observed it, since my network connection is fine, but it's something we might consider for the user experience.)
Feel free to suggest better naming
Thank you! And sorry for the delay (I was out sick the whole last month)
A few last small things and then we can merge this:
- we need a changefile
- Please add `#[serde(rename_all = "camelCase")]` above `struct ProgressPayload {}` so that `transfer_speed` becomes `transferSpeed` on the js side (only)
- Add `transferSpeed` to the ProgressPayload type in the javascript file
- run the formatter so CI is happy :)
The existing issue is not a blocker for me but rather something to track over time. As long as we're happy with the user facing api we can change the implementation later.
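To make the serde point above concrete, here is a minimal sketch of the requested change; the fields other than `transfer_speed` are assumptions about what ProgressPayload contains:

```rust
use serde::Serialize;

#[derive(Clone, Serialize)]
#[serde(rename_all = "camelCase")]
struct ProgressPayload {
    progress: u64,       // assumed existing field
    total: u64,          // assumed existing field
    transfer_speed: u64, // emitted as `transferSpeed` on the JS side
}
```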
@FabianLars Okay I am making these changes.
Can you please clarify what a changefile is? You mentioned it in the first point ...
Check the .changes folder for examples. Don't sweat over it though, I can add this when I merge the pr :)
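For reference, a changefile in this repo is a small markdown file under .changes with YAML frontmatter listing the packages and their bump levels; a minimal sketch for this PR might look like the following (the description text is an assumption):

```md
---
"upload": minor
"upload-js": minor
---

Add `transferSpeed` to the `ProgressPayload` emitted by the upload and download APIs.
```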
@FabianLars I've made the changes. Works fine in my current downloader application made with tauri 💯 .
https://github.com/user-attachments/assets/12bbc978-87bd-45b6-a72b-fce86e965ada
Also, I tried to do the same thing on the JS side, with equivalent code:
```typescript
export class TransferStats {
  accumulatedChunkLen: number; // Total length of chunks transferred in the current period
  accumulatedTime: number; // Total time taken for the transfers in the current period (in ms)
  transferSpeed: number; // Calculated transfer speed in bytes per second
  startTime: number; // Time when the current period started (in ms)
  granularity: number; // Time period (in milliseconds) over which the transfer speed is calculated

  constructor(granularity: number) {
    this.accumulatedChunkLen = 0;
    this.accumulatedTime = 0;
    this.transferSpeed = 0;
    this.startTime = performance.now();
    this.granularity = granularity;
  }

  // Records the transfer of a data chunk and updates the transfer speed if the granularity period has elapsed.
  recordChunkTransfer(chunkLen: number): void {
    const now = performance.now();
    const timeElapsed = now - this.startTime;

    this.accumulatedChunkLen += chunkLen;
    this.accumulatedTime += timeElapsed;

    // If the accumulated time exceeds the granularity, calculate the transfer speed.
    if (this.accumulatedTime >= this.granularity) {
      this.transferSpeed = Math.floor(
        this.accumulatedChunkLen / (this.accumulatedTime / 1000)
      ); // bytes per second
      this.accumulatedChunkLen = 0;
      this.accumulatedTime = 0;
    }

    // Reset the start time for the next period.
    this.startTime = now;
  }

  // Provides a default instance of TransferStats with a granularity of 500 milliseconds.
  static default(): TransferStats {
    return new TransferStats(500); // Default granularity is 500ms
  }
}
```
And it worked as expected. I think it's also important for us to know that this transfer_speed problem could have been solved from the JS side as well ...
> @FabianLars I've made the changes. Works fine in my current downloader application made with tauri 💯 . 2024-11-05.01-05-36.mp4

This is nice!