LFX Mentorship 2023 01-Mar-May Challenge - for #48
Introduction
For those who want to apply for the LFX Mentorship for #48, this is a selection test for the application. This LFX Mentorship aims to build lifelong learning benchmarking on KubeEdge-Ianvs, a distributed synergy AI benchmarking platform. Based on Ianvs, we designed this challenge to evaluate candidates.
Requirements
Each applicant for the LFX Mentorship can attempt the following two tasks and will receive a total score accumulated according to completeness. In the end, we will publish the top five applicants and their total scores, and the applicant with the highest score will become the mentee of this LFX Mentorship project. All titles of task outputs such as pull requests (PRs) should be prefixed with LFX Mentorship.
Task 1
Content
- Build a public dataset benchmarking website to present the example dataset cityscapes.
- The applicant might want to design the website with a style similar to the example website of coda.
- In this task, to reduce the applicant's burden, we provide a clean and re-organized dataset based on the existing public CITYSCAPES, solely for selection purposes. Note that another, much more complicated new dataset will be provided to the mentee after the mentorship starts.
- This benchmarking website should exhibit the contents listed in Table 1.
- Submit a PR that includes a public link to the dataset benchmarking website and the corresponding dataset introduction.
- We suggest that the domain name of the website be named after a personal account (e.g., jack123-LFX.github.io for applicant Jack123).
| Page | Content |
|---|---|
| Home page | Dataset overview |
| | Lifelong learning algorithm overview |
| | Data sample display |
| Documentation page | Dataset partition description |
| | Data statistics |
| | Data format |
| | Data annotation |
| Download page | Instructions and links |
| Benchmark page | Various algorithm and metric results |

Table 1. Task 1 overview
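For the Data statistics item on the documentation page, per-class pixel frequencies can be tabulated directly from the label maps. Below is a minimal sketch in Python, assuming Cityscapes-style `*_labelIds.png` annotations; the directory path and file-name pattern are assumptions and should be adapted to the re-organized dataset we provide.

```python
# Minimal sketch: per-class pixel statistics for the "Data statistics" page.
# Assumes Cityscapes-style label maps named *_labelIds.png; adapt the path
# and pattern to the re-organized dataset provided for this challenge.
from collections import Counter
from pathlib import Path

import numpy as np
from PIL import Image


def class_pixel_stats(label_dir: str) -> Counter:
    """Count how many pixels each class id occupies across all label maps."""
    counts = Counter()
    for png in Path(label_dir).rglob("*_labelIds.png"):
        ids, freq = np.unique(np.array(Image.open(png)), return_counts=True)
        counts.update(dict(zip(ids.tolist(), freq.tolist())))
    return counts


if __name__ == "__main__":
    stats = class_pixel_stats("cityscapes/gtFine/train")  # hypothetical path
    total = sum(stats.values())
    for class_id, n in sorted(stats.items()):
        print(f"class {class_id}: {100 * n / total:.2f}% of pixels")
```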
Resources
Task 2
Content
- Create a new example on KubeEdge Ianvs based on the semantic segmentation dataset cityscapes, for single task learning, incremental learning, or lifelong learning.
- The example mainly includes a new baseline algorithm (one that does not already exist on Ianvs) which can run on Ianvs with cityscapes.
- The baseline algorithm can be a new unseen task detection algorithm, processing algorithm, or base model.
- Reviewers will use the Mean Intersection over Union (mIoU) to evaluate algorithm performance; a minimal computation sketch follows this list.
- Note that if the applicant wants to use single task learning or incremental learning, s/he needs to replace the original object-detection dataset pcb-aoi with the targeted semantic-segmentation dataset cityscapes, and replace the original object-detection model FPN with a semantic-segmentation model, e.g., RFNet. An applicant who tackles lifelong learning does not need to do so, because the dataset and base model are both already prepared.
- For each algorithm paradigm, submit an experiment report as a PR that includes the algorithm design, experiment results, and a README document.
- The README document should give reviewers instructions for testing and verifying the submitted example. An example is available at the unseen task recognition readme document.
- An example of the algorithm design is available at unseen task recognition proposal.
- An example of the experiment results is available at the leaderboard of single task learning.
- Submit a PR with the code of this new example.
- The organization of the code can follow that of pcb-aoi.
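As noted above, reviewers will rank submissions by mIoU. The standard computation accumulates a confusion matrix between predicted and ground-truth class maps and averages per-class IoU. Here is a minimal sketch; this is not the exact metric code used by Ianvs, and the function names are illustrative.

```python
# Minimal sketch of the standard mIoU computation for semantic segmentation.
# Not the exact metric implementation used by Ianvs; names are illustrative.
import numpy as np


def confusion_matrix(pred, gt, num_classes, ignore_index=255):
    """Accumulate a num_classes x num_classes confusion matrix
    from one pair of prediction and ground-truth label maps."""
    mask = (gt != ignore_index) & (gt < num_classes) & (pred < num_classes)
    return np.bincount(
        num_classes * gt[mask].astype(int) + pred[mask].astype(int),
        minlength=num_classes ** 2,
    ).reshape(num_classes, num_classes)


def mean_iou(hist):
    """Average IoU over classes that actually appear (union > 0)."""
    intersection = np.diag(hist)
    union = hist.sum(axis=0) + hist.sum(axis=1) - intersection
    valid = union > 0
    return (intersection[valid] / union[valid]).mean()


# Usage: sum confusion matrices over the whole test set, then average.
# Cityscapes evaluation conventionally uses 19 classes.
# hist = sum(confusion_matrix(p, g, 19) for p, g in zip(preds, gts))
# print(f"mIoU = {mean_iou(hist):.4f}")
```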
Resources
Rating
Task 1
All the items to be completed in task 1 are listed in Table 2; item scores are accumulated into the total score of this task.
| Item | | Score |
|---|---|---|
| Set up a basic frontend framework | | 10 |
| The frontend pages can be accessed publicly | | 10 |
| Home page | Dataset overview | 5 |
| | Lifelong learning algorithm overview | 5 |
| | Data sample display | 5 |
| Documentation page | Dataset partition description | 5 |
| | Data statistics | 5 |
| | Data format | 5 |
| | Data annotation | 5 |
| Download page | Instructions and links | 5 |
| Benchmark page | Various algorithm and metric results | 20 |

Table 2. Task 1 scoring rules
Task 2
- Completion of different algorithm paradigms has different scores as shown in Table 3.
- For examples under the same algorithm paradigm, an applicant will obtain 20 extra points only if his/her example ranks best. When ranking, reviewers will use the Mean Intersection over Union (mIoU) to evaluate algorithm performance.
- That is, only the top-ranked applicant gets the extra points. Good luck!
- Each applicant can try to implement multiple examples with different algorithm paradigms. But only the algorithm paradigm with the highest score will be counted.
- For examples that cannot be run successfully directly from the submitted code and the README instructions, the total score for task 2 will be 0. So be careful with the code and docs!
| Item | Score |
|---|---|
| Lifelong learning | 50 |
| Incremental learning | 30 |
| Single task learning | 10 |
| Highest metric result | 20 |

Table 3. Task 2 scoring rules
Deadline
According to the timeline of LFX Mentorship 2023 01-Mar-May, the admission decision deadline is March 7th. Since we need time for internal review and decision, the final date for PR submissions for this pretest is March 5th, 8:00 AM PDT.
Hello, I am Aryan here. I have a doubt in task 1. What specific algorithms should be used for benchmarking the dataset? Do we need to use Ianvs for benchmarking?
Ok sir
Good day. Please, am I allowed to use the dataset information provided at the CITYSCAPES website on my benchmark site too?
> Hello, I am Aryan here. I have a doubt in task 1. What specific algorithms should be used for benchmarking the dataset? Do we need to use Ianvs for benchmarking?

First, algorithms for benchmarking are entirely designed and developed by applicants. Second, in task 1, we recommend that applicants use Ianvs for benchmarking, but it is not mandatory.
> Good day. Please, am I allowed to use the dataset information provided at the CITYSCAPES website on my benchmark site too?

Sure, if it is helpful for you.
Also, since the CITYSCAPES dataset is available in TensorFlow Datasets, is it okay if we load the dataset from there?
> Also, since the CITYSCAPES dataset is available in TensorFlow Datasets, is it okay if we load the dataset from there?

I suggest that you use the cityscapes dataset we provide. Based on the same dataset, we can compare the mIoU across all the submitted algorithms.
@luosiqi for task 1, do we have to benchmark the dataset on the model we trained in task 2?
@luosiqi also in task 2, do we have to implement only one type of learning?
> @luosiqi for task 1, do we have to benchmark the dataset on the model we trained in task 2?

Yes. You can use the model from task 2 for task 1's benchmarking.
> @luosiqi also in task 2, do we have to implement only one type of learning?

In task 2, just do as much as you can. In the end, we calculate the total score of the two tasks as the final score for each applicant.