
3 objectives sub problems weights definition

jbuisine opened this issue on Aug 19, 2020 · 2 comments

Hi,

In your MOEA/D implementation, you compute the weight generation for 3 objectives (see code).

After testing this part of the code, I'm not sure that trimming the weights (to the number of sub-problems mu) in this way is correct. As an example, with mu set to 20, we obtain:

[[0.0, 0.0, 1.0], [0.0, 0.05, 0.95], [0.0, 0.1, 0.9], [0.0, 0.15, 0.85], [0.0, 0.2, 0.8], [0.0, 0.25, 0.75], [0.0, 0.3, 0.7], [0.0, 0.35, 0.65], [0.0, 0.4, 0.6], [0.0, 0.45, 0.55], [0.0, 0.5, 0.5], [0.0, 0.55, 0.45], [0.0, 0.6, 0.4], [0.0, 0.65, 0.35], [0.0, 0.7, 0.3], [0.0, 0.75, 0.25], [0.0, 0.8, 0.2], [0.0, 0.85, 0.15], [0.0, 0.9, 0.1], [0.0, 0.95, 0.05]]

This is because we kept only the first 20 computed weights.
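For reference, this behaviour can be reproduced with a small sketch (the function name and enumeration order here are illustrative, not the actual moead-py code): enumerating the 3-objective simplex lattice with step 1/mu and keeping only the first mu vectors yields exactly the list above, all with a first component of 0.0.

```python
def trimmed_lattice_weights(mu):
    """Enumerate all 3-objective weight vectors on a simplex lattice with
    step 1/mu, then keep only the first mu of them -- mimicking the
    trimming described above (illustrative sketch, not moead-py code)."""
    weights = []
    for i in range(mu + 1):
        for j in range(mu + 1 - i):
            k = mu - i - j  # third weight is fixed by the first two
            weights.append([i / mu, j / mu, k / mu])
    return weights[:mu]  # trimming: only the leading slice survives

ws = trimmed_lattice_weights(20)
print(ws[:2])                        # [[0.0, 0.0, 1.0], [0.0, 0.05, 0.95]]
print(all(w[0] == 0.0 for w in ws))  # True: objective 1 is never weighted
```

With mu = 20 the full lattice has 231 points, but the outer loop's i = 0 block alone produces 21 vectors, so the first 20 retained all have weight 0.0 on the first objective.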

Isn't it a problem that, after sorting all the weights, the first objective's weight is always 0.0? That means we never take the first objective into account in the weighted-sum or Tchebycheff mono-objective conversion.
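To make the concern concrete, here is a minimal Tchebycheff scalarisation (the standard textbook definition, not code from this repository): with a zero weight on the first objective, its value cannot influence the scalarised fitness at all.

```python
def tchebycheff(f, weights, z):
    """Tchebycheff scalarisation: g(x) = max_i w_i * |f_i(x) - z_i|,
    where z is the reference (ideal) point. Standard definition,
    shown here only to illustrate the zero-weight concern above."""
    return max(w * abs(fi - zi) for w, fi, zi in zip(weights, f, z))

# With w_0 == 0, even a huge change in the first objective has no effect:
print(tchebycheff([100.0, 2.0, 3.0], [0.0, 0.5, 0.5], [0.0, 0.0, 0.0]))  # 1.5
print(tchebycheff([0.0, 2.0, 3.0], [0.0, 0.5, 0.5], [0.0, 0.0, 0.0]))    # 1.5
```

The same holds for the weighted sum: a zero weight multiplies the objective out of the aggregation entirely.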

Please tell me if I'm wrong or have misunderstood something!

— jbuisine, Aug 19 '20

Hi @jbuisine, sorry for the late reply. I started looking into this and got blocked when I couldn't find the original code.

The overall MOEA/D algorithm was ported from JMetal, but I couldn't find any implementation that generated 3-objective weights out of the box; most expected an input file with the weights for 3 objectives. I don't think there was an implementation in the paper either (let me know if you found one, I'd be curious to see it). The original source doesn't look like it's available anymore (it used to be at http://dces.essex.ac.uk/staff/qzhang/moead/moead-java-source.zip).

My interpretation of this code was that it gives a reproducible way to generate somewhat "uniformly" distributed weights. If you run larger population sizes (in the 100s or 1000s) you won't see only 0.0 for the first objective, but that's not really a great solution. It doesn't look like the weights are guaranteed to be "uniform" either.

From memory and guesswork, I think the algorithm relies on the idea that i, j, and m will be equal to m and 0 at some point in the iteration, so by sorting by the sum you would retain those over any smaller fractional weights (which could have been floored explicitly if that was the intent). Java has a default rounding mode that you can set, so I'm thinking it might have played a role here, whereas Python might round things differently (mostly speculating, since I don't have the Java code anymore).

I think a better way might be to run just one iteration of the algorithm but permute the three weights for m/3. For very large populations, you could randomly sample with a seed, and it would hopefully give you a representative sample of weights. I'm not clear on the effect of having different but similarly distributed weight vectors, but I think it should work.
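The seeded random-sampling idea could look roughly like this (a sketch under my own assumptions, not moead-py's API; it draws uniformly on the simplex via sorted uniform "spacings"):

```python
import random

def random_simplex_weights(mu, m=3, seed=42):
    """Sample mu weight vectors uniformly at random on the (m-1)-simplex.
    The fixed seed makes the run reproducible, as suggested above.
    Illustrative sketch only -- not the moead-py implementation."""
    rng = random.Random(seed)
    weights = []
    for _ in range(mu):
        # Sort m-1 uniform cut points in [0, 1]; the gaps between
        # consecutive points are a uniform sample from the simplex.
        cuts = sorted(rng.random() for _ in range(m - 1))
        points = [0.0] + cuts + [1.0]
        weights.append([points[i + 1] - points[i] for i in range(m)])
    return weights

ws = random_simplex_weights(1000)
# Each vector sums to 1, and (unlike the trimmed lattice) the first
# objective receives a nonzero weight with probability 1.
```

Two calls with the same seed return identical weight sets, which keeps experiments reproducible.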

Let me know if you tried anything that worked better, or if you found an implementation of the 3-weight MOEA/D uniform weight distribution.

— mbelmadani, Oct 19 '20

Hello,

I think this paper could give you a view of a possible implementation for k >= 3 (with k the number of objectives).

There are many other ways to select mu sub-problems from all the available weighted sub-problems. A simple one is to select the sub-problems randomly. I think it's an open research problem, and it really depends on the search-space landscape of the problem.

In my own implementation of MOEA/D, I just use random selection. A better approach would probably be to first learn the problem's landscape and then use that knowledge to drive the selection strategy.
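As a rough sketch of that random-selection strategy (the names and granularity parameter here are illustrative, not from either implementation): generate the full lattice of weighted sub-problems, then draw mu of them without replacement using a seeded RNG.

```python
import random

def select_random_subproblems(mu, H, seed=0):
    """Build the full 3-objective simplex lattice with granularity H
    (all nonnegative integer triples summing to H, scaled by 1/H), then
    randomly select mu sub-problems from it. Illustrative sketch of the
    random-selection strategy described above."""
    lattice = [[i / H, j / H, (H - i - j) / H]
               for i in range(H + 1) for j in range(H + 1 - i)]
    rng = random.Random(seed)
    return rng.sample(lattice, mu)  # mu distinct weight vectors

ws = select_random_subproblems(20, H=20)  # picks 20 of the 231 lattice points
```

Unlike the trimming discussed earlier, this draws from the whole lattice, so the first objective is no longer systematically excluded (though any single random draw is of course not guaranteed to cover the simplex evenly).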

— jbuisine, Oct 21 '20