
IMPLEMENTING QUANTUM FEDERATED LEARNING IN CLASSIQ

Open Yuvan010 opened this issue 10 months ago • 9 comments

Abstract

Federated Learning (FL) is a decentralized machine learning paradigm where multiple clients train a shared model without centralizing data. Quantum Federated Learning (QFL) is an emerging field that leverages quantum computing for distributed learning tasks, providing potential advantages in security, privacy, and computational efficiency.

Currently, the Classiq Library lacks an implementation for Quantum Federated Learning, making it an ideal feature addition. This implementation will serve as a valuable resource for researchers and developers interested in quantum-enhanced federated learning strategies.

Motivation

Quantum Machine Learning (QML) has shown promise in various fields, but real-world applications require scalable and privacy-preserving training methods. Federated Learning enables training models across distributed data sources while maintaining data privacy. Given that Classiq already includes structured problem-solving examples (e.g., hybrid variational circuits), adding QFL will enhance the library’s collection of quantum AI implementations.

By introducing Quantum Federated Learning (QFL) on Classiq, this feature will:

  • Enable distributed quantum machine learning with secure data privacy.
  • Benchmark quantum models trained across multiple nodes.
  • Compare performance with classical FL architectures.

Proposed Solution

We propose implementing Quantum Federated Learning (QFL) in Classiq by leveraging:

  • Quantum Variational Circuits for local model training on each client.
  • Quantum Gradient Updates using distributed quantum nodes.
  • Secure Quantum Aggregation using entanglement-based communication.
  • Classiq’s circuit optimization tools to enhance computational efficiency.

The implementation will include:

  • A QFL problem definition using Classiq’s quantum circuit design tools.
  • A quantum-enhanced model training loop with variational circuits.
  • Execution on simulators and quantum devices to compare accuracy and performance.
  • Example use cases in privacy-preserving machine learning, such as medical AI and finance.

Technical Details

Quantum Federated Learning Process

  • Client-side Quantum Model Training

Each client trains a local quantum model using Variational Quantum Circuits (VQC). Qubits encode local datasets for training updates.
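As a minimal classical sketch of one client-side training step: assuming a one-qubit circuit RY(θ)|0⟩ whose ⟨Z⟩ expectation is cos θ, a closed-form function stands in for the actual quantum execution, and the parameter-shift rule yields exact gradients from two circuit evaluations. The helper names here are illustrative, not Classiq APIs:

```python
import math

def expectation(theta: float) -> float:
    # <Z> after RY(theta)|0> equals cos(theta); this closed form stands in
    # for executing the client's variational circuit on hardware/simulator.
    return math.cos(theta)

def parameter_shift_grad(theta: float) -> float:
    # Parameter-shift rule: the exact gradient of the expectation value
    # from two shifted circuit evaluations.
    return 0.5 * (expectation(theta + math.pi / 2)
                  - expectation(theta - math.pi / 2))

def local_train(theta: float, steps: int = 50, lr: float = 0.4) -> float:
    # Plain gradient descent on the client's single circuit parameter.
    for _ in range(steps):
        theta -= lr * parameter_shift_grad(theta)
    return theta

theta_opt = local_train(0.3)  # converges toward theta = pi, where <Z> = -1
```

In a full implementation each client would run many such steps on its local data encoding before sharing updates.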

  • Quantum Gradient Aggregation

Clients send quantum-encoded gradient updates to the central server. The server applies quantum-safe aggregation methods.

  • Global Model Update & Synchronization

The global model is updated and redistributed among clients. Secure quantum channels facilitate communication.
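The three steps above can be sketched as a FedAvg-style global update: the server averages client parameter vectors element-wise and redistributes the result. `federated_average` is an illustrative helper under that assumption, not a Classiq API:

```python
def federated_average(client_params):
    """Global model update: element-wise mean over client parameter
    vectors, then redistributed to every client for the next round."""
    n = len(client_params)
    return [sum(vals) / n for vals in zip(*client_params)]

# Three clients, each holding a two-parameter local model.
clients = [[0.2, 1.0], [0.4, 0.8], [0.6, 1.2]]
global_params = federated_average(clients)   # approximately [0.4, 1.0]
next_round = [list(global_params) for _ in clients]  # redistribute
```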

Input Example

```python
qfl_model = {
    "clients": ["Node_1", "Node_2", "Node_3"],
    "local_models": ["Quantum NN", "Variational QClassifier"],
    "aggregation_method": "Quantum Secure Sum"
}
```
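One way such a config could drive a training round is a simple dispatch: the `aggregation_method` key selects a server-side combiner and the client list drives the loop. All names here (`run_round`, `quantum_secure_sum`) are hypothetical sketch code, with a plain sum standing in for the quantum protocol:

```python
def quantum_secure_sum(updates):
    # Placeholder for an entanglement-based secure sum; classically this
    # reduces to the plain sum of the client updates.
    return sum(updates)

AGGREGATORS = {"Quantum Secure Sum": quantum_secure_sum}

def run_round(model_cfg, local_update):
    # Collect one update per client, then aggregate with the configured method.
    updates = [local_update(client) for client in model_cfg["clients"]]
    return AGGREGATORS[model_cfg["aggregation_method"]](updates)

qfl_model = {
    "clients": ["Node_1", "Node_2", "Node_3"],
    "local_models": ["Quantum NN", "Variational QClassifier"],
    "aggregation_method": "Quantum Secure Sum",
}
result = run_round(qfl_model, lambda client: 0.1)  # sum of three 0.1 updates
```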

Team Details

@Yuvan010 @sriram03psr @ManjulaGandhi @sgayathridevi

Abstract PDF

Quantum_Federated_Learning_Classiq_Updated.pdf

Yuvan010 avatar Feb 28 '25 14:02 Yuvan010

Hello @Yuvan010!

Thank you for proposing to implement Quantum Federated Learning (QFL) using Classiq.

That sounds like a cool idea! However, I do have a concern—while Classiq supports integration with PyTorch for QML, part of the process (synthesis of the quantum model to quantum program) runs on our cloud. Would this be a barrier for your project?

Feel free to reach out to the community if you have any questions.

Thanks!

NadavClassiq avatar Mar 02 '25 12:03 NadavClassiq

Thanks for the reply, @NadavClassiq!

We understand that Classiq’s quantum model synthesis runs on the cloud, and we don’t see this as a barrier for our project. Our approach to Quantum Federated Learning (QFL) primarily focuses on:

1) Using Classiq’s PyTorch integration for quantum machine learning components.

2) Generating parameterized quantum circuits (PQCs) on Classiq, ensuring compatibility with its cloud-based synthesis process.

3) Leveraging hybrid classical-quantum training, where classical optimization runs locally while quantum model execution happens on Classiq’s cloud.

Would you recommend any specific best practices for handling the synthesis step efficiently within Classiq? We’re happy to explore workarounds if needed.

Looking forward to your thoughts!

Yuvan010 avatar Mar 02 '25 13:03 Yuvan010

Sounds great. The optimization is indeed local, so there is no issue with that. Points (1) and (2) are also perfectly suitable.

Good luck!

NadavClassiq avatar Mar 02 '25 13:03 NadavClassiq

Hi @Yuvan010, what is the status of this? Are you still working on the implementation?

TaliCohn avatar Apr 01 '25 10:04 TaliCohn

Thank you for following up. We are still working on the implementation, but we have hit some minor errors that need to be resolved. We are actively addressing them and just need some more time to fully complete the implementation.

Regards Yuvan


Yuvan010 avatar Apr 01 '25 22:04 Yuvan010

Sure, how much time do you need? For any questions or difficulties, please post on the Slack community. Our team and community members will help you out.

TaliCohn avatar Apr 02 '25 07:04 TaliCohn

We expect to need 1-2 weeks to completely finish the implementation. Is that timeframe okay with you?


Yuvan010 avatar Apr 03 '25 03:04 Yuvan010

Hi @Yuvan010, any update on this?

TaliCohn avatar Apr 22 '25 10:04 TaliCohn

@Yuvan010

TaliCohn avatar May 08 '25 10:05 TaliCohn