
Persist the total completed downloads of a torrent

Open josecelano opened this issue 10 months ago • 2 comments

Relates to: https://github.com/torrust/torrust-index-gui/discussions/521

We import this info from the tracker:

pub struct TorrentBasicInfo {
    pub info_hash: String,
    pub seeders: i64,
    pub completed: i64,
    pub leechers: i64,
}

But we only store seeders and leechers in the database:

CREATE TABLE "torrust_torrent_tracker_stats" (
	"torrent_id"	INTEGER NOT NULL,
	"tracker_url"	VARCHAR(256) NOT NULL,
	"seeders"	INTEGER NOT NULL DEFAULT 0,
	"leechers"	INTEGER NOT NULL DEFAULT 0,
	"updated_at"	TEXT DEFAULT 1000-01-01 00:00:00,
	FOREIGN KEY("torrent_id") REFERENCES "torrust_torrents"("torrent_id") ON DELETE CASCADE,
	UNIQUE("torrent_id","tracker_url"),
	PRIMARY KEY("torrent_id")
);

We could also persist the completed field to show that info on the frontend.
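
A minimal sketch of the schema change, assuming we simply mirror the existing seeders and leechers columns (the column name and default are illustrative, not a final design; this is the sqlite3 flavour, and the MySQL migration would be analogous):

ALTER TABLE "torrust_torrent_tracker_stats"
ADD COLUMN "completed" INTEGER NOT NULL DEFAULT 0;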

This is the function that imports and stores that data.

    /// Import torrents statistics not updated recently.
    ///
    /// # Errors
    ///
    /// Will return an error if the database query failed.
    pub async fn import_torrents_statistics_not_updated_since(
        &self,
        datetime: DateTime<Utc>,
        limit: i64,
    ) -> Result<(), database::Error> {
        debug!(target: LOG_TARGET, "Importing torrents statistics not updated since {} limited to a maximum of {} torrents ...", datetime.to_string().yellow(), limit.to_string().yellow());

        let torrents = self
            .database
            .get_torrents_with_stats_not_updated_since(datetime, limit)
            .await?;

        if torrents.is_empty() {
            return Ok(());
        }

        info!(target: LOG_TARGET, "Importing {} torrents statistics from tracker {} ...", torrents.len().to_string().yellow(), self.tracker_url.yellow());

        // Import stats for all torrents in one request

        let info_hashes: Vec<String> = torrents.iter().map(|t| t.info_hash.clone()).collect();

        let torrent_info_vec = match self.tracker_service.get_torrents_info(&info_hashes).await {
            Ok(torrents_info) => torrents_info,
            Err(err) => {
                let message = format!("Error getting torrents tracker stats. Error: {err:?}");
                error!(target: LOG_TARGET, "{}", message);
                // todo: return a service error that can be a tracker API error or a database error.
                return Ok(());
            }
        };

        // Update stats for all torrents

        for torrent in torrents {
            match torrent_info_vec.iter().find(|t| t.info_hash == torrent.info_hash) {
                None => {
                    // No stats for this torrent in the tracker
                    drop(
                        self.database
                            .update_tracker_info(torrent.torrent_id, &self.tracker_url, 0, 0)
                            .await,
                    );
                }
                Some(torrent_info) => {
                    // Update torrent stats for this tracker
                    drop(
                        self.database
                            .update_tracker_info(
                                torrent.torrent_id,
                                &self.tracker_url,
                                torrent_info.seeders,
                                torrent_info.leechers,
                            )
                            .await,
                    );
                }
            }
        }

        Ok(())
    }
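
To persist the new value, the importer would also have to pass completed down to the database layer. A rough sketch, assuming the existing update_tracker_info method of the Database trait gains one extra parameter (the exact signature below is illustrative, not the final design):

    /// Hypothetical extension of the method that stores tracker stats:
    /// it also receives the total completed downloads reported by the tracker.
    async fn update_tracker_info(
        &self,
        torrent_id: i64,
        tracker_url: &str,
        seeders: i64,
        leechers: i64,
        completed: i64,
    ) -> Result<(), database::Error>;

The importer would then forward the value when updating each torrent:

    drop(
        self.database
            .update_tracker_info(
                torrent.torrent_id,
                &self.tracker_url,
                torrent_info.seeders,
                torrent_info.leechers,
                torrent_info.completed,
            )
            .await,
    );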

Subtasks

  • Add the new field to the database and store the value in the statistics importer.
  • Add the new field to the API endpoints (torrent list and details), as sketched below.
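
A hypothetical sketch of the API change; the struct and field names below are illustrative and do not necessarily match the real response models in the codebase:

use serde::Serialize;

/// Tracker statistics attached to a torrent in the list and details responses.
#[derive(Serialize)]
pub struct TorrentStats {
    pub seeders: i64,
    pub leechers: i64,
    // New field: total completed downloads reported by the tracker.
    pub completed: i64,
}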

josecelano · Apr 11 '24 12:04

Hi @josecelano. Do you generate the files in the migrations folder, or do you manually edit them?

hungfnt · May 11 '24 03:05

> Hi @josecelano. Do you generate the files in the migrations folder, or do you manually edit them?

Hi @ngthhu, you can do it manually if you want. What I usually do is:

  • Run sqlx-cli to generate the new migration, for example: sqlx migrate add torrust_add_field_xxx (we use the torrust_ prefix).
  • That will generate the file in the migrations dir.
  • I copy that file into both migrations/mysql and migrations/sqlite3.

If you create the files manually instead, just keep the same datetime prefix in the file names.

After adding the migration, it will be executed automatically the next time you run the application.
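
For this issue, for example, the two copies could end up looking like this (the timestamp and migration name are made up for illustration):

migrations/mysql/20240511000000_torrust_add_completed_to_torrent_tracker_stats.sql
migrations/sqlite3/20240511000000_torrust_add_completed_to_torrent_tracker_stats.sql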

Docs: https://docs.rs/torrust-index/3.0.0-alpha.2/torrust_index/#development

We are not using reversible migrations yet. I don't know why; the project was not using them when I started working on it. Maybe we can open a discussion to decide whether we should support them.

josecelano · May 13 '24 07:05