
Convergence Issues with small step-size in Fine Search Grids

Open · SimonBlanke opened this issue 3 months ago · 0 comments

Following up on issue #84:

Decreasing the step_size parameter does not lead to the expected improvement in convergence precision for local optimization algorithms.

Expected Behavior
When decreasing step_size, the optimizer should achieve finer precision and converge closer to the global optimum.

Actual Behavior
Reducing step_size does not provide the expected improvement in final solution quality, even on simple convex objective functions.

Test Case

import numpy as np
from gradient_free_optimizers import HillClimbingOptimizer

search_space = {
    "x": np.arange(-10, 10, 0.01), 
    "y": np.arange(-10, 10, 0.01),
    "z": np.arange(-10, 10, 0.01),
}

def sphere_function(para):
    # Convex sphere function; negated because the optimizer maximizes the score
    x, y, z = para["x"], para["y"], para["z"]
    return -(x**2 + y**2 + z**2)

# Test with different step sizes
for step_size in [1, 0.5, 0.1, 0.01]:
    opt = HillClimbingOptimizer(search_space, step_size=step_size)
    opt.search(sphere_function, n_iter=5000)
    print(f"Step size {step_size}: Best score = {opt.best_score}")

SimonBlanke · Aug 28 '25 13:08