
The Oracle Guardian AI Open Source Project is a library of tools for assessing the fairness/bias and privacy of machine learning models and datasets.

7 guardian-ai issues

Fixes a typo: 2023 -> 2024 (reported by a user).

OCA Verified

This addition to GuardianAI enables the use of membership inference attacks for recommender systems.
- Additional libraries: torch==2.2.1
- For testing purposes, unit tests have been included in tests/unitary/test_privacy_attacks_recommender.py
- ...

enhancement
OCA Verified

### Willingness to contribute

Yes. I can contribute this feature independently.

### Proposal Summary

This addition to the privacy estimation tool extends its capabilities to include membership inference attacks specifically...

Bumps [scikit-learn](https://github.com/scikit-learn/scikit-learn) from 1.3.2 to 1.5.0. Release notes, sourced from scikit-learn's releases: "Scikit-learn 1.5.0. We're happy to announce the 1.5.0 release. You can read the release highlights under https://scikit-learn.org/stable/auto_examples/release_highlights/plot_release_highlights_1_5_0.html" and...

dependencies
OCA Verified

This PR enables the framework to evaluate models trained outside Guardian AI. These changes affect only the **privacy estimation** component and do not impact other parts of...

OCA Verified

Added a guard in run_attack to handle cases where evaluate_attack returns None or an empty list. This prevents crashes when building the result string and instead returns a NO_METRICS marker.
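The guard described above can be sketched as follows. This is a minimal illustration, not the actual Guardian AI implementation: the helper name `format_attack_result`, the metric format, and the `NO_METRICS` string are assumptions based on the PR summary.

```python
NO_METRICS = "NO_METRICS"

def format_attack_result(metrics):
    """Build a result string from the metrics returned by an attack evaluation.

    `metrics` is assumed to be a list of (name, value) pairs; the evaluation
    step may return None or an empty list when no metrics were produced.
    """
    # Guard: bail out with a marker instead of crashing while formatting.
    if not metrics:
        return NO_METRICS
    return ", ".join(f"{name}={value:.3f}" for name, value in metrics)
```

With this guard in place, callers receive the `NO_METRICS` marker rather than an exception when the evaluation produces nothing.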

OCA Verified

This PR introduces support for running privacy evaluation on models that were trained externally (outside Guardian AI). This allows users to assess privacy risks, such as potential membership inference...

OCA Verified