
Pareto-front toolbox has been removed from File Exchange. What to do?

Open sara-eb opened this issue 6 years ago • 4 comments

Hi,

Thanks for sharing the code, I am trying to run your code on my dataset. I have 145 features extracted from 3D medical images. I have 2 questions:

  1. Yi Cao's Pareto-front toolbox is not available on MathWorks anymore; it has been removed from the File Exchange. Could you please help with this?

  2. The data I have chosen for feature selection comes from the images of 2 patients and has almost 2 million instances. Do you think this method of feature selection can be useful? I am going to run it on my data and see the output.

Thanks

sara-eb avatar May 05 '19 04:05 sara-eb

Hi,

Please find attached the toolbox.

The ratio between the number of instances and the number of features is quite good, so I believe the algorithm should work.

Best, Stefano

stefano-galelli avatar May 05 '19 04:05 stefano-galelli

@stefano-galelli Thanks a lot for your prompt response. My apologies for the basic questions.

Could I ask where the attached file is?

  • Since I still do not have the paretofront toolbox, I just ran the example script script_example_NSGAII.m up to this line. Whenever I try to run script_example_BORG.m, it shows an error, Undefined function or variable 'borg', at line 91.

  • In which workspace variable are the final selected features saved? Since I am going to run the algorithm on a cluster, I need to know which variables should be saved for further analysis.

  • The classes in my data are imbalanced; can this algorithm also handle feature selection on imbalanced data?

  • In my draft, should I refer to the paper "Identifying (Quasi) Equally Informative Subsets in Feature Selection Problems for Classification: A Max-Relevance Min-Redundancy Approach"?

Many thanks in advance. Sara

sara-eb avatar May 05 '19 07:05 sara-eb

Dear Sara,

Can you please write to my email address ([email protected])? I think I cannot send attachments through the GitHub email system.

As for your questions:

You won't be able to run the algorithm without that toolbox. That said, our algorithm can be used with two optimisation algorithms: NSGA-II and Borg. My suggestion is to use NSGA-II and the corresponding script. Borg is probably a better optimiser, but you need to get it from the developers (http://borgmoea.org).
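
Before launching a long run on the cluster, it may also help to check which optimiser code is actually on your MATLAB path. This is only a minimal sketch; the function names paretofront (from Yi Cao's toolbox) and borg (from the Borg MOEA wrapper) are assumptions about your installation:

```matlab
% Sketch (assumed function names): confirm the required optimiser code
% is on the MATLAB path before starting a long feature-selection run.
assert(exist('paretofront', 'file') > 0, ...
    'Yi Cao''s paretofront toolbox is not on the path.');
if exist('borg', 'file') == 0
    warning('Borg MOEA not found; use the NSGA-II script instead.');
end
```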

  • In which workspace variable are the final selected features saved? Since I am going to run the algorithm on a cluster, I need to know which variables should be saved for further analysis.

My suggestion is to first run it locally (maybe on a smaller sample), so that you can decide which variables you want to save.
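
For instance, a quick local test on a random subsample could look like the sketch below (X and y are placeholder names for your feature matrix and class labels, not variables used by the toolbox):

```matlab
% Sketch: draw a random subsample for a quick local test run
% (X = instances-by-features matrix, y = class labels; names assumed).
rng(1);                                 % reproducible subsample
n_sub = 50000;                          % e.g. 50k of the ~2 million instances
idx   = randperm(size(X, 1), n_sub);
X_sub = X(idx, :);
y_sub = y(idx);
```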

  • The classes in my data are imbalanced; can this algorithm also handle feature selection on imbalanced data?

We did not do extensive testing with imbalanced data, but the algorithm should be able to handle it.
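
If the imbalance does become a problem, one option (not part of the toolbox) is to undersample the majority class before running the feature selection, continuing from the subsample sketch above:

```matlab
% Sketch: undersample so every class contributes the same number of
% instances (integer class labels in y_sub assumed).
classes = unique(y_sub);
n_min   = min(arrayfun(@(c) sum(y_sub == c), classes));  % smallest class size
idx_bal = [];
for c = classes(:)'
    idx_c   = find(y_sub == c);
    idx_bal = [idx_bal; idx_c(randperm(numel(idx_c), n_min))]; %#ok<AGROW>
end
X_bal = X_sub(idx_bal, :);
y_bal = y_sub(idx_bal);
```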

  • In my draft, should I refer to the paper "Identifying (Quasi) Equally Informative Subsets in Feature Selection Problems for Classification: A Max-Relevance Min-Redundancy Approach"?

Yes, thank you!

Best, Stefano

stefano-galelli avatar May 06 '19 02:05 stefano-galelli

Thanks a lot, I sent the request via email.

sara-eb avatar May 06 '19 02:05 sara-eb