🧬 OpenGeneticAlgorithm.NET

January 19, 2026 · View on GitHub




The most intuitive, powerful, and extensible genetic algorithm framework for .NET

OpenGA.Net is a high-performance, type-safe genetic algorithm library that makes evolutionary computation accessible to everyone. Whether you're solving complex optimization problems, researching machine learning, or just getting started with genetic algorithms, OpenGA.Net provides the tools you need with an elegant, fluent API.


🚀 Quick Start

Installation

dotnet add package OpenGA.Net

Your First Genetic Algorithm

A Chromosome in OpenGA.Net represents a potential solution to your optimization problem. It contains genes (the solution components) and defines how to evaluate, modify, and repair solutions.

Step 1: Create your chromosome by inheriting from the Chromosome<T> abstract class. The example below shows what a chromosome representing a candidate solution to the traveling salesman problem might look like:

using OpenGA.Net;

// A chromosome for the Traveling Salesman Problem. Each chromosome represents
// a route through cities (genes = city sequence)
public class TspChromosome(IList<int> cities, double[,] distanceMatrix) : Chromosome<int>(cities)
{
    // Calculate how "good" this route is (shorter distance = higher fitness)
    public override Task<double> CalculateFitnessAsync()
    {
        double totalDistance = CalculateTotalDistance();

        return Task.FromResult(1.0 / (1.0 + totalDistance));
    }

    // Randomly swap two cities in the route
    public override Task MutateAsync(Random random)
    {
        int index1 = random.Next(Genes.Count);
        int index2 = random.Next(Genes.Count);
        (Genes[index1], Genes[index2]) = (Genes[index2], Genes[index1]);

        return Task.CompletedTask;
    }

    public override Task<Chromosome<int>> DeepCopyAsync()
    {
        return Task.FromResult<Chromosome<int>>(new TspChromosome(new List<int>(Genes), distanceMatrix));
    }

    public override Task GeneticRepairAsync()
    {
        // Ensure each city appears exactly once (fix any duplicates)
        return Task.CompletedTask;
    }

    // Sum the distances along the route, returning to the starting city
    private double CalculateTotalDistance()
    {
        double total = 0;

        for (int i = 0; i < Genes.Count; i++)
        {
            total += distanceMatrix[Genes[i], Genes[(i + 1) % Genes.Count]];
        }

        return total;
    }
}

Step 2: Create your initial population:

// Generate the initial population of random routes. The initial population
// represents a set of random candidate solutions to the optimization problem.
// numberOfCities and distanceMatrix come from your problem definition.
var random = new Random();
var initialPopulation = new TspChromosome[100];

for (int i = 0; i < 100; i++)
{
    var cities = Enumerable.Range(0, numberOfCities).OrderBy(_ => random.Next()).ToList();
    initialPopulation[i] = new TspChromosome(cities, distanceMatrix);
}

Step 3: Configure and run your genetic algorithm:

// Configure and run your genetic algorithm
var bestSolution = await OpenGARunner<int>
    .Initialize(initialPopulation)
    .RunToCompletionAsync();

// Get your optimized result
Console.WriteLine($"Best route: {string.Join(" → ", bestSolution.Genes)}");

That's it! 🎉 You just solved an optimization problem with a few lines of code.


๐Ÿ—๏ธ Architecture Overview

OpenGA.Net is built around five core concepts that work together seamlessly:

🧬 Chromosomes

Your solution candidates. Strongly typed and extensible. Each Chromosome<T> subclass defines how a solution is represented and evaluated. Implement:

  • CalculateFitnessAsync() — return a higher-is-better score for the current Genes
  • MutateAsync() — randomly perturb Genes to explore the search space
  • DeepCopyAsync() — produce a full copy used during crossover
  • GeneticRepairAsync() (optional) — fix invalid states after crossover/mutation

Tip: See the TSP chromosome in the Quick Start above for a concrete example. The pattern is the same for any problem: choose a gene representation, define a fitness function, add a simple mutation, and optionally repair to enforce constraints.

🎯 Parent Selection Strategies

Choose the chromosomes that will participate in mating/crossover:

| Strategy | When to Use | Problem Characteristics | Population Size | Fitness Landscape |
| --- | --- | --- | --- | --- |
| Tournament (Default) | Default choice for most problems | Balanced exploration/exploitation needed | Any size | Noisy or multimodal landscapes |
| Elitist | Need guaranteed convergence | High-quality solutions must be preserved | Medium to large (50+) | Clear fitness hierarchy |
| Roulette Wheel | Fitness-proportionate diversity | Wide fitness range, avoid premature convergence | Large (100+) | Smooth, unimodal landscapes |
| Boltzmann | Dynamic selection pressure | Need cooling schedule control | Medium to large | Complex, deceptive landscapes |
| Rank Selection | Prevent fitness scaling issues | Similar fitness values across population | Any size | Flat or highly scaled fitness |
| Random | Maximum diversity exploration | Early stages or highly exploratory search | Any size | Unknown or chaotic landscapes |
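To make the fitness-proportionate row concrete, here is a minimal, library-independent sketch of roulette-wheel selection; the helper name and setup are illustrative, not OpenGA.Net API:

```csharp
using System;
using System.Linq;

// Roulette-wheel selection: each chromosome is picked with
// probability fitness_i / sum(all fitnesses).
static int SpinRouletteWheel(double[] fitnesses, Random random)
{
    double spin = random.NextDouble() * fitnesses.Sum();
    double cumulative = 0;

    for (int i = 0; i < fitnesses.Length; i++)
    {
        cumulative += fitnesses[i];
        if (spin <= cumulative)
        {
            return i;
        }
    }

    return fitnesses.Length - 1; // guard against floating-point round-off
}

var fitnesses = new[] { 1.0, 3.0, 6.0 };
var random = new Random(42);
var counts = new int[fitnesses.Length];

for (int i = 0; i < 10_000; i++)
{
    counts[SpinRouletteWheel(fitnesses, random)]++;
}

// The fittest chromosome (index 2) wins roughly 60% of the spins.
Console.WriteLine(string.Join(", ", counts));
```

Tournament selection, by contrast, is insensitive to the absolute scale of fitness values, which is part of why it copes better with noisy landscapes.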

🧬 Crossover Strategies

Create offspring by combining parent chromosomes:

| Strategy | When to Use | Gene Representation | Problem Type | Preservation Needs |
| --- | --- | --- | --- | --- |
| One-Point (Default) | Fast, simple problems | Order doesn't matter critically | Optimization with independent variables | Preserve some gene clusters |
| K-Point | Moderate complexity balance | Mixed dependencies between genes | Multi-dimensional optimization | Control disruption level |
| Uniform | Maximum genetic diversity | Independent genes | Exploratory search, avoid local optima | No specific gene clustering |
| Custom | Domain-specific requirements | Complex constraints or structures | Specialized problem domains | Domain-specific validation |
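As a sketch of what the One-Point operator does to a pair of gene arrays (illustrative code, not the library's internal implementation):

```csharp
using System;

// One-point crossover: choose a cut index, then swap the tails
// of the two parents to produce two children.
static (int[], int[]) OnePointCrossover(int[] parent1, int[] parent2, Random random)
{
    int cut = random.Next(1, parent1.Length); // cut falls strictly inside the array
    var child1 = new int[parent1.Length];
    var child2 = new int[parent2.Length];

    for (int i = 0; i < parent1.Length; i++)
    {
        child1[i] = i < cut ? parent1[i] : parent2[i];
        child2[i] = i < cut ? parent2[i] : parent1[i];
    }

    return (child1, child2);
}

var (childA, childB) = OnePointCrossover(new[] { 1, 1, 1, 1 }, new[] { 2, 2, 2, 2 }, new Random(7));

// Each child starts with one parent's prefix and ends with the other's suffix.
Console.WriteLine($"{string.Join("", childA)} / {string.Join("", childB)}");
```

Note that for permutation genes such as TSP routes, raw crossover like this can duplicate cities — exactly the invalid state GeneticRepairAsync exists to fix.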

🔄 Survivor Selection Strategies

Manage population evolution over generations:

| Strategy | When to Use | Convergence Speed | Population Diversity | Resource Constraints |
| --- | --- | --- | --- | --- |
| Elitist (Default) | Most optimization problems | Fast convergence | Moderate diversity loss | Low computational overhead |
| Generational | Exploration-heavy search | Slower, thorough exploration | High diversity maintained | Higher memory usage |
| Tournament | Balanced performance | Moderate convergence | Good diversity balance | Moderate computational cost |
| Age-based | Long-running evolutionary systems | Very slow, stable | Excellent long-term diversity | Requires age tracking |
| Boltzmann | Temperature-controlled evolution | Adaptive convergence | Dynamic diversity control | Higher computational complexity |
| Random Elimination | Maintain population diversity | Slowest convergence | Maximum diversity | Minimal computational overhead |
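A minimal sketch of the default Elitist policy, reduced to bare fitness values for clarity (illustrative, not the library's implementation):

```csharp
using System;
using System.Linq;

// Elitist survivor selection: pool parents and offspring,
// then keep only the fittest `populationSize` individuals.
static double[] SelectSurvivors(double[] parentFitnesses, double[] offspringFitnesses, int populationSize)
{
    return parentFitnesses
        .Concat(offspringFitnesses)
        .OrderByDescending(fitness => fitness)
        .Take(populationSize)
        .ToArray();
}

var survivors = SelectSurvivors(
    parentFitnesses: new[] { 0.9, 0.4, 0.2 },
    offspringFitnesses: new[] { 0.7, 0.1 },
    populationSize: 3);

// Keeps 0.9, 0.7 and 0.4; the weakest parent and offspring are eliminated.
Console.WriteLine(string.Join(" ", survivors));
```

This is also why elitism converges quickly but erodes diversity: a weak-but-novel offspring (the 0.1 here) never survives the cut.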

๐Ÿ Termination Strategies

Control when the genetic algorithm stops evolving:

| Strategy | When to Use | Stopping Condition | Best For | Predictability |
| --- | --- | --- | --- | --- |
| Maximum Epochs (Default) | Known iteration limits | Fixed number of generations | Time-constrained scenarios | Highly predictable runtime |
| Maximum Duration | Real-time applications | Maximum execution duration | Production systems | Predictable time bounds |
| Target Standard Deviation | Diversity monitoring | Low population diversity | Avoiding premature convergence | Adaptive stopping |
| Target Fitness | Goal-oriented optimization | Specific fitness threshold reached | Known optimal solution value | Adaptive based on performance |
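The Target Standard Deviation condition boils down to a one-line diversity check; a library-independent sketch:

```csharp
using System;
using System.Linq;

// Terminate once the standard deviation of population fitness
// falls below a threshold, i.e. the population has converged.
static bool HasConverged(double[] fitnesses, double targetStdDev)
{
    double mean = fitnesses.Average();
    double variance = fitnesses.Sum(f => (f - mean) * (f - mean)) / fitnesses.Length;

    return Math.Sqrt(variance) < targetStdDev;
}

Console.WriteLine(HasConverged(new[] { 0.50, 0.90, 0.10 }, targetStdDev: 0.001)); // False: still diverse
Console.WriteLine(HasConverged(new[] { 0.80, 0.80, 0.80 }, targetStdDev: 0.001)); // True: converged
```

In practice you would pair a diversity check like this with a hard cap such as Maximum Epochs, so a stubbornly diverse population still terminates.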

💡 Strategy Selection Examples

// High-performance optimization (fast convergence needed)
.ParentSelection(c => c.RegisterSingle(s => s.Tournament()))
.Crossover(c => c.RegisterSingle(s => s.OnePointCrossover()))
.SurvivorSelection(r => r.RegisterSingle(s => s.Elitist()))
.Termination(t => t.MaximumEpochs(100))

// Exploratory search (avoiding local optima)
.ParentSelection(c => c.RegisterSingle(s => s.Tournament()))
.Crossover(c => c.RegisterSingle(s => s.UniformCrossover()))
.SurvivorSelection(r => r.RegisterSingle(s => s.Generational()))
.Termination(t => t.TargetStandardDeviation(stdDev: 0.001))

// Production system (time-constrained)
.ParentSelection(c => c.RegisterSingle(s => s.Tournament()))
.Crossover(c => c.RegisterSingle(s => s.OnePointCrossover()))
.SurvivorSelection(r => r.RegisterSingle(s => s.Elitist()))
.Termination(t => t.MaximumDuration(TimeSpan.FromMinutes(5)))

// Quality-focused research (target fitness termination)
.ParentSelection(c => c.RegisterSingle(s => s.RouletteWheel()))
.Crossover(c => c.RegisterSingle(s => s.KPointCrossover(3)))
.SurvivorSelection(r => r.RegisterSingle(s => s.Elitist(0.2f)))
.Termination(t => t.TargetFitness(0.95).TargetStandardDeviation(stdDev: 0.001, window: 10))

🎯 Use Cases

๐Ÿญ Optimization Problems

  • Scheduling: Job shop, vehicle routing, resource allocation
  • Engineering Design: Antenna design, circuit optimization, structural engineering
  • Financial: Portfolio optimization, trading strategies, risk management

🧠 Machine Learning

  • Neural Architecture Search: Automatically design neural networks
  • Hyperparameter Tuning: Optimize ML model parameters
  • Feature Selection: Find optimal feature subsets

🎮 Game Development

  • AI Behavior: Evolve intelligent game agents
  • Level Generation: Create procedural game content
  • Balancing: Optimize game mechanics and difficulty curves

🔬 Research & Academia

  • Evolutionary Computation: Research platform for GA variants
  • Multi-objective Optimization: Pareto-optimal solutions
  • Algorithm Comparison: Benchmark different evolutionary strategies

โš™๏ธ Advanced Configuration

For sophisticated optimization scenarios, OpenGA.Net supports advanced multi-strategy configurations with Adaptive Pursuit - an intelligent operator selection policy that learns which strategies perform best over time and automatically adjusts selection probabilities.

🧠 Adaptive Pursuit Algorithm

Adaptive Pursuit continuously monitors the performance of each configured strategy and dynamically adapts their selection probabilities based on:

  • Fitness improvement from applied operators
  • Population diversity maintenance
  • Recent performance trends with temporal weighting
  • Exploration vs exploitation balance

This creates a self-optimizing genetic algorithm that automatically discovers the best operator combinations for your specific problem.
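The core of the update rule can be sketched in a few lines. This is a simplified, library-independent illustration of the pursuit idea (the real policy also folds in diversity maintenance and temporal weighting, as described above):

```csharp
using System;
using System.Linq;

// Adaptive Pursuit sketch: the operator with the best recent reward is
// "pursued" - its probability moves toward a ceiling while every other
// operator decays toward a shared probability floor.
static void PursueBestOperator(double[] probabilities, double[] recentRewards, double learningRate, double minimumProbability)
{
    int best = Array.IndexOf(recentRewards, recentRewards.Max());
    double maximumProbability = 1.0 - (probabilities.Length - 1) * minimumProbability;

    for (int i = 0; i < probabilities.Length; i++)
    {
        double target = i == best ? maximumProbability : minimumProbability;
        probabilities[i] += learningRate * (target - probabilities[i]);
    }
}

var probabilities = new[] { 0.25, 0.25, 0.25, 0.25 }; // four competing strategies
var recentRewards = new[] { 0.10, 0.60, 0.20, 0.30 }; // strategy 1 produced the best offspring lately

PursueBestOperator(probabilities, recentRewards, learningRate: 0.12, minimumProbability: 0.05);

// Strategy 1 gains probability mass; the rest decay toward the 0.05 floor,
// and the four probabilities still sum to 1.
Console.WriteLine(string.Join(", ", probabilities));
```

The probability floor is what keeps losing operators alive, so the policy can recover quickly if a different operator becomes effective later in the run.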

Multi-Strategy Configuration

var result = await OpenGARunner<int>
    .Initialize(initialPopulation, minPopulationPercentage: 0.4f, maxPopulationPercentage: 2.5f)
    .WithRandomSeed(42)
    .MutationRate(0.15f)

    // Multi-strategy parent selection with adaptive learning
    .ParentSelection(p => p.RegisterMulti(m => m
        .Tournament(stochasticTournament: true)
        .RouletteWheel()
        .Rank(customWeight: 0.2f)
        .Boltzmann(temperatureDecayRate: 0.05, initialTemperature: 1.0)
        .WithPolicy(policy => policy.AdaptivePursuit(
            learningRate: 0.12,
            minimumProbability: 0.05,
            rewardWindowSize: 15
        ))
    ))

    // Round Robin crossover for exploration
    .Crossover(c => c.RegisterMulti(s => s
        .KPointCrossover(3)
        .UniformCrossover()
        .WithPolicy(policy => policy.RoundRobin())
    ).WithCrossoverRate(0.85f))

    // Multi-strategy survivor selection for dynamic population management.
    // Each strategy is assigned a custom probability weight.
    .SurvivorSelection(s => s.RegisterMulti(m => m
        .Elitist(elitePercentage: 0.12f, customWeight: 0.6f)
        .Tournament(tournamentSize: 4, stochasticTournament: true, customWeight: 0.3f)
        .AgeBased(customWeight: 0.1f)
        .WithPolicy(policy => policy.CustomWeights())
    ).OverrideOffspringGenerationRate(1.3f))
    
    // Multi-condition termination
    .Termination(t => t
        .MaximumEpochs(750)
        .MaximumDuration(TimeSpan.FromMinutes(10))
        .TargetFitness(0.98)
        .TargetStandardDeviation(0.001, window: 20))
    
    .RunToCompletionAsync();

๐Ÿ› ๏ธ Extensibility

Create your own strategies by inheriting from base classes:

// Custom crossover strategy
public class MyCustomCrossover<T> : BaseCrossoverStrategy<T>
{
    protected internal override Task<IEnumerable<Chromosome<T>>> CrossoverAsync(
        Couple<T> couple, Random random)
    {
        // Your custom crossover logic
        throw new NotImplementedException();
    }
}

// Custom survivor selection strategy
public class MySurvivorSelectionStrategy<T> : BaseSurvivorSelectionStrategy<T>
{
    internal override float RecommendedOffspringGenerationRate => 0.5f;

    protected internal override Task<IEnumerable<Chromosome<T>>> SelectChromosomesForEliminationAsync(
        Chromosome<T>[] population, Chromosome<T>[] offspring, Random random, int currentEpoch = 0)
    {
        // Your custom survivor selection logic
        throw new NotImplementedException();
    }
}

// Custom parent (reproduction) selector
public class MyParentSelector<T> : BaseParentSelectorStrategy<T>
{
    protected internal override Task<IEnumerable<Couple<T>>> SelectMatingPairsAsync(
        Chromosome<T>[] population, Random random, int minimumNumberOfCouples)
    {
        // Your custom parent selection logic
        throw new NotImplementedException();
    }
}

// Custom termination strategy. The constructor argument is illustrative,
// e.g. an epoch threshold consulted inside Terminate.
public class MyTerminationStrategy<T>(int maximumEpochs) : BaseTerminationStrategy<T>
{
    public override bool Terminate(GeneticAlgorithmState state)
    {
        // Your custom logic to control when the GA should terminate
        throw new NotImplementedException();
    }
}

// Using your custom strategies
var result = await OpenGARunner<MyGeneType>
    .Initialize(initialPopulation)
    .ParentSelection(c => c.RegisterSingle(s => s.Custom(new MyParentSelector<MyGeneType>())))
    .Crossover(c => c.RegisterSingle(s => s.CustomCrossover(new MyCustomCrossover<MyGeneType>())))
    .SurvivorSelection(r => r.RegisterSingle(s => s.Custom(new MySurvivorSelectionStrategy<MyGeneType>())))
    .Termination(t => t.Custom(new MyTerminationStrategy<MyGeneType>(50)))
    .RunToCompletionAsync();

📊 Performance & Benchmarks

OpenGA.Net has been rigorously tested on classic optimization problems to demonstrate its performance and solution quality. Our comprehensive benchmark suite includes:

🔬 Benchmark Problems

  • ๐Ÿ—บ๏ธ Traveling Salesman Problem (TSP): 30 and 50 city instances
  • ๐ŸŽ’ Knapsack Problem: 50 and 100 item optimization
  • ๐Ÿ“ฆ Bin Packing Problem: 50 and 100 item optimization

⚡ Performance Highlights

  • Execution Speed: 601–2,321 ms for 500 generations on complex problems
  • Scalability: Linear scaling with population size and problem complexity
  • Solution Quality: Consistently achieves near-optimal results within 1-2% of known bounds

📈 Key Results

| Problem | Instance | Time (500 gen) | Best Result |
| --- | --- | --- | --- |
| TSP | 30 cities | 1,853 ms | 71.1% better than random tour |
| TSP | 50 cities | 1,893 ms | 69.7% better than random tour |
| Knapsack | 50 items | 1,863 ms | 1,027.8 value (99.94% efficiency) |
| Knapsack | 100 items | 601 ms | 2,286.2 value (99.90% efficiency) |
| Bin Packing | 50 items | 2,321 ms | 19 bins (vs 18 optimal) |
| Bin Packing | 100 items | 993 ms | 36 bins (optimal!) |

Result Interpretation:

  • TSP: percentage improvement over random tours (full results also report absolute distances), showing significant optimization capability
  • Knapsack: Total value achieved and efficiency vs upper bound (nearly optimal solutions)
  • Bin Packing: Bins used (achieving optimal or near-optimal solutions)

๐Ÿƒ Run Benchmarks Yourself

cd OpenGA.Net.Benchmarks

# Quick performance tests
dotnet run --simple

# Detailed solution quality analysis  
dotnet run --analysis

# Comprehensive BenchmarkDotNet suite
dotnet run

📋 View Complete Benchmark Results - Detailed methodology, configurations, and analysis


๐Ÿค Contributing

We welcome contributions! OpenGA.Net is built by the community, for the community.

๐Ÿ› Found a Bug?

Open an issue with:

  • Clear reproduction steps
  • Expected vs actual behavior
  • Environment details

🔧 Want to Code?

  1. Fork the repository
  2. Create a feature branch: git checkout -b feature/amazing-feature
  3. Make your changes with tests
  4. Submit a pull request

📄 License

OpenGA.Net is released under the MIT License. See LICENSE.md for details.

Copyright (c) 2024 Ahmed Sarnaout

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.