Gradient-Free-Optimizers



Lightweight optimization with local, global, population-based and sequential techniques across mixed search spaces



[Plot: Bayesian optimization on the Ackley function]

Gradient-Free-Optimizers is a Python library for gradient-free optimization of black-box functions. It provides a unified interface to 23 optimization algorithms, from simple hill climbing to Bayesian optimization, all operating on mixed search spaces that combine continuous ranges, discrete grids, categorical choices, and SciPy distribution-backed dimensions.

Designed for hyperparameter tuning, simulation optimization, feature selection, engineering design, and any scenario where gradients are unavailable or impractical. The library prioritizes simplicity: define your objective function, specify the search space, and run. All algorithms share one consistent API, so switching from hill climbing to Bayesian optimization is a one-line change. SciPy is optional; GFO requires only pandas, making it suitable as an optimization backend and for minimal environments such as containers and embedded systems.



Installation

pip install gradient-free-optimizers


Optional dependencies
pip install gradient-free-optimizers[progress]  # Progress bar with tqdm
pip install gradient-free-optimizers[sklearn]   # scikit-learn for surrogate models
pip install gradient-free-optimizers[full]      # All optional dependencies

Key Features

23 Optimization Algorithms
Local, global, population-based, and sequential model-based optimizers. Switch algorithms with one line of code, as shown in the sketch after this list.
Zero Configuration
Sensible defaults for all parameters. Start optimizing immediately without tuning the optimizer itself.
Memory System
Built-in caching prevents redundant evaluations. Critical for expensive objective functions like ML models.
Mixed Search Spaces
Combine continuous ranges, discrete grids, categorical choices, and SciPy distributions in a single search space.
Constraints Support
Define constraint functions to restrict the search space. Invalid regions are automatically avoided.
Minimal Dependencies
Only pandas required. Optional integrations for progress bars (tqdm) and surrogate models (scikit-learn).
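
Because every optimizer shares the same constructor and search() call, swapping strategies is literally a one-line change. A minimal sketch of the swap (the toy objective and search space here are illustrative, not taken from the examples below):

import numpy as np
from gradient_free_optimizers import BayesianOptimizer, HillClimbingOptimizer

def objective(params):
    return -(params["x"] ** 2)

search_space = {"x": np.arange(-5, 5, 0.1)}

# Only the class name changes between these two runs
for Optimizer in (HillClimbingOptimizer, BayesianOptimizer):
    opt = Optimizer(search_space)
    opt.search(objective, n_iter=30)
    print(Optimizer.__name__, opt.best_score)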

Quick Start

import numpy as np
from gradient_free_optimizers import HillClimbingOptimizer

def objective(params):
    x, y = params["x"], params["y"]
    return -(x**2 + y**2)

search_space = {
    "x": (-5.0, 5.0),            # continuous range
    "y": np.arange(-5, 5, 0.1),  # discrete grid
}

opt = HillClimbingOptimizer(search_space)
opt.search(objective, n_iter=1000)

print(f"Best score: {opt.best_score}")
print(f"Best params: {opt.best_para}")

Core Concepts

flowchart LR
    O["Optimizer
    ━━━━━━━━━━
    23 algorithms"]

    S["Search Space
    ━━━━━━━━━━━━
    mixed dimensions"]

    F["Objective
    ━━━━━━━━━━
    f(params) → score"]

    D[("Search Data
    ━━━━━━━━━━━
    history")]

    O -->|propose| S
    S -->|params| F
    F -->|score| O

    O -.-> D
    D -.->|warm start| O

Optimizer: Implements the search strategy. Choose from 23 algorithms across four categories: local search, global search, population-based, and sequential model-based.

Search Space: Defines valid parameter ranges and choices. Each key is a parameter name, each value is a tuple (min, max) for continuous, a NumPy array for discrete, a list for categorical, or a SciPy distribution.

Objective Function: Your function to maximize. Takes a dictionary of parameters, returns a score. Use negation to minimize.

Search Data: Complete history of all evaluations accessible via opt.search_data for analysis and warm-starting future searches.
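
Since the history is a pandas DataFrame, inspecting it is plain pandas. A small sketch, assuming the usual layout of one column per parameter plus a score column:

import numpy as np
from gradient_free_optimizers import RandomSearchOptimizer

def objective(params):
    return -(params["x"] ** 2)

search_space = {"x": np.arange(-5, 5, 0.1)}

opt = RandomSearchOptimizer(search_space)
opt.search(objective, n_iter=50)

# Full evaluation history: one row per evaluation
df = opt.search_data
print(df.sort_values("score", ascending=False).head())  # best evaluations first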


Examples

Hyperparameter Optimization
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.datasets import load_wine
import numpy as np

from gradient_free_optimizers import BayesianOptimizer

X, y = load_wine(return_X_y=True)

def objective(params):
    model = GradientBoostingClassifier(
        n_estimators=params["n_estimators"],
        max_depth=params["max_depth"],
        learning_rate=params["learning_rate"],
    )
    return cross_val_score(model, X, y, cv=5).mean()

search_space = {
    "n_estimators": np.arange(50, 300, 10),
    "max_depth": np.arange(2, 10),
    "learning_rate": np.logspace(-3, 0, 20),
}

opt = BayesianOptimizer(search_space)
opt.search(objective, n_iter=50)
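
To use the result, read the best configuration back and fit a final model. A short continuation of the example above (the int/float casts guard against NumPy scalar types and are a precaution, not a GFO requirement):

best = opt.best_para
final_model = GradientBoostingClassifier(
    n_estimators=int(best["n_estimators"]),
    max_depth=int(best["max_depth"]),
    learning_rate=float(best["learning_rate"]),
).fit(X, y)
print(f"Best CV accuracy: {opt.best_score:.3f}")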
Bayesian Optimization
import numpy as np
from gradient_free_optimizers import BayesianOptimizer

def ackley(params):
    x, y = params["x"], params["y"]
    return -(
        -20 * np.exp(-0.2 * np.sqrt(0.5 * (x**2 + y**2)))
        - np.exp(0.5 * (np.cos(2 * np.pi * x) + np.cos(2 * np.pi * y)))
        + np.e + 20
    )

search_space = {
    "x": np.arange(-5, 5, 0.01),
    "y": np.arange(-5, 5, 0.01),
}

opt = BayesianOptimizer(search_space)
opt.search(ackley, n_iter=100)
Particle Swarm Optimization
import numpy as np
from gradient_free_optimizers import ParticleSwarmOptimizer

def rastrigin(params):
    A = 10
    values = [params[f"x{i}"] for i in range(5)]
    return -sum(v**2 - A * np.cos(2 * np.pi * v) + A for v in values)

search_space = {f"x{i}": np.arange(-5.12, 5.12, 0.1) for i in range(5)}

opt = ParticleSwarmOptimizer(search_space, population=20)
opt.search(rastrigin, n_iter=500)
Simulated Annealing
import numpy as np
from gradient_free_optimizers import SimulatedAnnealingOptimizer

def sphere(params):
    return -(params["x"]**2 + params["y"]**2)

search_space = {
    "x": np.arange(-10, 10, 0.1),
    "y": np.arange(-10, 10, 0.1),
}

opt = SimulatedAnnealingOptimizer(
    search_space,
    start_temp=1.2,
    annealing_rate=0.99,
)
opt.search(sphere, n_iter=1000)
Constrained Optimization
import numpy as np
from gradient_free_optimizers import RandomSearchOptimizer

def objective(params):
    return params["x"] + params["y"]

def constraint(params):
    # Only positions where x + y < 5 are valid
    return params["x"] + params["y"] < 5

search_space = {
    "x": np.arange(0, 10, 0.1),
    "y": np.arange(0, 10, 0.1),
}

opt = RandomSearchOptimizer(search_space, constraints=[constraint])
opt.search(objective, n_iter=1000)
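
Because every evaluated position is recorded, the constraint can be checked after the fact. A quick sanity check, again assuming the usual search_data layout of one column per parameter:

# All evaluated points should satisfy x + y < 5
df = opt.search_data
print(((df["x"] + df["y"]) < 5).all())  # expected: True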
Mixed Search Space
import numpy as np
from scipy import stats
from gradient_free_optimizers import BayesianOptimizer

def objective(params):
    x = params["x"]
    n_layers = params["n_layers"]
    lr = params["learning_rate"]
    activation_scores = {"relu": 0.0, "tanh": 0.1, "gelu": 0.3}
    return -(x**2) - 0.1 * n_layers + activation_scores[params["activation"]] - abs(lr - 0.001)

search_space = {
    "x": (-5.0, 5.0),                          # continuous
    "n_layers": np.arange(1, 6),                # discrete
    "activation": ["relu", "tanh", "gelu"],     # categorical
    "learning_rate": stats.loguniform(1e-5, 1),  # distribution
}

opt = BayesianOptimizer(search_space)
opt.search(objective, n_iter=100)

Memory and Warm Starting
import numpy as np
from gradient_free_optimizers import HillClimbingOptimizer

def expensive_function(params):
    # Simulating an expensive computation
    return -(params["x"]**2 + params["y"]**2)

search_space = {
    "x": np.arange(-10, 10, 0.1),
    "y": np.arange(-10, 10, 0.1),
}

# First search
opt1 = HillClimbingOptimizer(search_space)
opt1.search(expensive_function, n_iter=100, memory=True)

# Continue with warm start using previous search data
opt2 = HillClimbingOptimizer(search_space)
opt2.search(expensive_function, n_iter=100, memory_warm_start=opt1.search_data)
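
Since search_data is a plain DataFrame, it can also be persisted and reloaded across sessions. A sketch continuing the example above (the file name is illustrative):

import pandas as pd

# Save the history after a run ...
opt1.search_data.to_csv("search_data.csv", index=False)

# ... and reload it later to warm-start a fresh optimizer
previous = pd.read_csv("search_data.csv")
opt3 = HillClimbingOptimizer(search_space)
opt3.search(expensive_function, n_iter=100, memory_warm_start=previous)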
Ask/Tell Interface
import numpy as np
from gradient_free_optimizers import BayesianOptimizer

def objective(params):
    return -(params["x"]**2 + params["y"]**2)

search_space = {
    "x": np.arange(-10, 10, 0.1),
    "y": np.arange(-10, 10, 0.1),
}

# Manual control over the optimization loop
opt = BayesianOptimizer(search_space)
opt.setup_search(objective, n_iter=100)

for _ in range(100):
    params = opt.ask()           # Get next parameters to evaluate
    score = objective(params)
    opt.tell(params, score)      # Report result back
Early Stopping
import numpy as np
from gradient_free_optimizers import BayesianOptimizer

def objective(params):
    return -(params["x"]**2 + params["y"]**2)

search_space = {
    "x": np.arange(-10, 10, 0.1),
    "y": np.arange(-10, 10, 0.1),
}

opt = BayesianOptimizer(search_space)
opt.search(
    objective,
    n_iter=1000,
    max_time=60,           # Stop after 60 seconds
    max_score=-0.01,       # Stop when score reaches -0.01
    early_stopping={       # Stop if no improvement for 50 iterations
        "n_iter_no_change": 50,
    },
)

Ecosystem

GFO is used as the optimization engine in other packages and integrates with the broader Python optimization ecosystem.

Hyperactive: High-level hyperparameter optimization framework that uses GFO as its optimization backend
Surfaces: Test functions and benchmark surfaces for optimization algorithm evaluation

Documentation

User Guide: Comprehensive tutorials and explanations
API Reference: Complete API documentation
Optimizers: Detailed description of all 23 algorithms
Examples: Code examples for various use cases

Contributing

Contributions welcome! See CONTRIBUTING.md for guidelines.


Citation

If you use this software in your research, please cite:

@software{gradient_free_optimizers,
  author = {Simon Blanke},
  title = {Gradient-Free-Optimizers: Simple and reliable optimization with local, global, population-based and sequential techniques in mixed search spaces},
  year = {2020},
  url = {https://github.com/SimonBlanke/Gradient-Free-Optimizers},
}

License

MIT License - Free for commercial and academic use.