
Enhance Dependency Management and Documentation for Ianvs

FuryMartin opened this issue 11 months ago • 9 comments

Background

Ianvs currently faces significant dependency management challenges: it lacks a robust system for handling updates and ensuring compatibility. As Python versions, dependency libraries, and Ianvs features continuously evolve, many existing examples fail to run, resulting in a surge of inquiries in the Issues section. Moreover, new PRs are often merged without being tested against historical examples, making it difficult to guarantee the functionality of past features through manual code review alone. There is an urgent need for a more comprehensive CI testing framework to keep Ianvs features usable as the project progresses. Additionally, the online documentation is outdated, which can be quite confusing for new users.

Expected Outcome

Basic Tasks:

  • Update the Contributing Guide
    • Clearly document the dependency relationship between ianvs and sedna in the Contributing Guide: explain how to resolve this dependency and the rationale behind it (a hedged compatibility-check sketch follows this list).
  • Develop a New Quick Start Example with Comprehensive Documentation
    • Refine the existing example examples/cloud-edge-collaborative-inference-for-llm to fully demonstrate the usage of ianvs. This example should illustrate the setup of each module, highlight potential pitfalls, and serve as a blueprint for others to follow.
    • The example must be accompanied by comprehensive documentation.
  • Update Documentation for Other Paradigms' Usage
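
To make the dependency between ianvs and sedna concrete, the Contributing Guide could pair the installation steps with a small compatibility check. The snippet below is a hypothetical sketch: the pinned version string is an assumption for illustration, not the actual ianvs requirement, and the wheel location mentioned in the comment may differ in the real repository.

```python
# Hypothetical check for the Contributing Guide: verify that the installed
# sedna matches the version ianvs was tested against. EXPECTED_SEDNA is an
# assumed pin for illustration; replace it with the version ianvs actually
# ships (e.g. a wheel bundled under the repository's third-party resources).
import importlib.metadata

EXPECTED_SEDNA = "0.4.1"  # assumption, not the confirmed requirement

try:
    installed = importlib.metadata.version("sedna")
except importlib.metadata.PackageNotFoundError:
    raise SystemExit("sedna is not installed; install the bundled wheel first.")

if installed != EXPECTED_SEDNA:
    raise SystemExit(f"ianvs expects sedna=={EXPECTED_SEDNA}, found {installed}.")

print(f"sedna {installed} matches the expected version.")
```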

Advanced Tasks:

  • Ensure that other, older examples remain runnable.
  • Implement CI/CD that automatically verifies whether existing examples are runnable (a minimal runner sketch follows this list).
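
One possible shape for that CI task is a driver script that walks the examples tree and runs each benchmarking config through the ianvs CLI, failing the build if any example breaks. The sketch below is a minimal illustration, assuming each example ships a benchmarkingjob.yaml and that `ianvs -f <config>` is the entry point; the layout, naming, and timeout are assumptions to adjust against the real repository.

```python
#!/usr/bin/env python3
"""Smoke-test every example (sketch for the CI task above).

Assumptions: examples live under examples/, each ships a benchmarkingjob.yaml,
and `ianvs -f <config>` runs it. Adjust these to match the real repository.
"""
import pathlib
import subprocess
import sys

EXAMPLES_ROOT = pathlib.Path("examples")  # assumed layout
TIMEOUT_SECONDS = 1800                    # assumed per-example budget


def find_jobs(root: pathlib.Path):
    """Yield every benchmarking config found under the examples tree."""
    yield from sorted(root.rglob("benchmarkingjob.yaml"))


def run_job(config: pathlib.Path) -> bool:
    """Run one example through the ianvs CLI; return True on success."""
    try:
        result = subprocess.run(
            ["ianvs", "-f", str(config)],
            timeout=TIMEOUT_SECONDS,
            capture_output=True,
            text=True,
        )
    except subprocess.TimeoutExpired:
        print(f"TIMEOUT  {config}")
        return False
    if result.returncode != 0:
        print(f"FAILED   {config}\n{result.stderr[-2000:]}")
        return False
    print(f"OK       {config}")
    return True


if __name__ == "__main__":
    failures = [c for c in find_jobs(EXAMPLES_ROOT) if not run_job(c)]
    sys.exit(1 if failures else 0)
```

A CI workflow could invoke this script on every PR; examples that need large datasets or GPUs might instead run on a nightly schedule with cached fixtures.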

Suggested Skills

  • Dependency Management: Proficiency in Python and CI/CD pipelines.
  • Documentation: Experience in writing clear and comprehensive technical documentation.
  • Testing: Knowledge of automated testing frameworks and practices to ensure backward compatibility.
  • Collaboration: Ability to coordinate changes across multiple repositories (Ianvs and Sedna) and work with the community to address issues and integrate feedback.

Pre-Test

For those who wish to apply for the LFX mentorship for this project, please complete this pre-test.

FuryMartin · Jan 16 '25 07:01

Hello @FuryMartin,

I am interested in this project. My previous experience includes:

  • A successful GSoC project with BeagleBoard (reference).
  • Familiarity with AI/ML (reference).
  • Python experience (reference 1, reference 2).
  • Documentation experience with Sphinx (reference).
  • A few documentation PRs in ianvs, currently unmerged: #174, #180, and #181.
  • Hands-on experience with version conflicts in ianvs, including PRs resolving these issues in the pcb-aoi example (reference).
  • Familiarity with CI/CD pipelines (reference).

Next Steps:

  • To address the version conflicts present in the pcb-aoi example, I have submitted a few PRs. I will work on verifying other examples as well. If they are not runnable, I will collaborate with the community to resolve the errors and submit corrections through additional PRs.
  • I will also familiarize myself with the cloud-edge-collaborative-inference-for-LLM example.

Please let me know if I am on the right track. Looking forward to discussing this project further in future meetings.

Thanks and Regards, Aryan Nanda

AryanNanda17 · Jan 25 '25 21:01

Hi @FuryMartin, could you tell me the application deadline for this mentorship? Could you also let me know the selection criteria and the selection date?

ggold7046 · Feb 06 '25 10:02

Hello @MooreZheng, the KubeEdge Bot closed this issue automatically, but it should still be open. #182 solves the dependency conflicts in the singletask_learning/pcb-aoi example, but there are other parts of this project that are equally important. Thanks

AryanNanda17 · Feb 07 '25 06:02

Sure. The issue is now reopened.

MooreZheng · Feb 07 '25 08:02

Pre-test

For those who wish to apply for the LFX mentorship for this project, please complete this pre-test.

Description

Due to compatibility issues with the existing PCB-AoI example, and to keep up with the latest developments in the LLM field, this mentorship project plans to transform examples/cloud-edge-collaborative-inference-for-llm into a brand-new Quick Start Example (i.e., the second part of the Expected Outcome above).

Therefore, we expect candidates to have a thorough understanding of this example. We would like you to complete the following three tasks:

  • Based on the example's README.md, run through the testing process for the MMLU-5-shot dataset (both the test dataset and the model response cache are already provided in the example).

  • Construct a custom dataset containing 50 samples according to the dataset format specified in the example, and run a custom benchmark test with it (you can extract samples from other benchmarks; the benchmark does not need to be meaningful, as long as it runs successfully; see the dataset sketch after this list).

  • Based on your hands-on experience and a review of the source code, discuss where you believe this example falls short of serving as the quick start guide of an influential open-source project (whether in dataset loading, cloud-edge collaboration code design, documentation, or any other aspect).
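
For Task 2, the snippet below sketches one way to assemble the 50-sample dataset. It is hypothetical: the JSONL layout and the query/response field names are assumptions made for illustration only, and the authoritative format is whatever the example's README specifies.

```python
"""Build a 50-sample custom benchmark dataset (hypothetical Task 2 sketch).

The JSONL layout and field names below are assumptions for illustration;
follow the dataset format specified in the example's README instead.
"""
import json
import pathlib

# Samples extracted from any public benchmark are fine for the pre-test;
# the benchmark only needs to run, not to be statistically meaningful.
samples = [
    {"query": f"Placeholder question {i}?", "response": f"Placeholder answer {i}"}
    for i in range(50)
]

out_dir = pathlib.Path("my_benchmark")
out_dir.mkdir(exist_ok=True)
with (out_dir / "test_data.jsonl").open("w", encoding="utf-8") as f:
    for sample in samples:
        f.write(json.dumps(sample, ensure_ascii=False) + "\n")

print(f"Wrote {len(samples)} samples to {out_dir / 'test_data.jsonl'}")
```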

After completing these tasks, please write an experimental report using Google Docs to describe your work.

Submission Method

To avoid plagiarism, please send the link to your report via email to [email protected] and CC [email protected].

We will publish all received report links under this issue after the submission deadline.

Timeline

According to the official schedule of the LFX Mentorship, candidates need to complete registration and project applications between February 5 and February 18. We will confirm candidates between February 19 and February 25. To allow for sufficient time, please complete this pre-test by February 23, 5 PM (PST).

Evaluation Criteria

Item     Score
Task 1   50
Task 2   30
Task 3   20

FuryMartin · Feb 15 '25 01:02

Hello @FuryMartin, @hsj576, and @MooreZheng, I have shared the experimental report with you via email. I hope it has reached you.

Thanks and regards, Aryan Nanda

AryanNanda17 · Feb 21 '25 03:02

Reports Received

As of 5:00 PM PST on February 23, 2025, we have received a total of one Pre-test Report.

Scores

Candidates      Task 1   Task 2   Task 3   Total
@AryanNanda17   50       25       20       95

We will announce the final decision after it is approved by the community.

FuryMartin · Feb 24 '25 06:02

We have confirmed that @AryanNanda17 will take on this project.

Congratulations!

FuryMartin · Mar 04 '25 02:03

Thank you @FuryMartin! I will make sure to complete this project on time.

AryanNanda17 · Mar 04 '25 11:03