Univariate Benchmarks Results

This page shows the results obtained by applying a number of global optimization algorithms to the entire benchmark suite of 1-D optimization problems, together with some statistics on algorithm performance.

Note

The CRS2 and DIRECT algorithms from NLopt did not qualify for the univariate benchmark, as they consistently raise a ValueError every time I run them.

Note

The CMA-ES algorithm did not qualify for the univariate benchmark, as its Python implementation does not support optimization in 1-D.

Univariate (1D) Test Functions

The following table shows the overall success of all global optimization algorithms, considering 100 random starting points for every benchmark function.

So, for example, AMPGO was able to solve, on average, 96.2% of all the test functions across the 100 random starting points, using on average 182 function evaluations.

Optimization algorithm performances (1-dimensional)
Optimization Method Overall Success (%) Function Evaluations
AMPGO 96.222 182
ASA 58.944 318
BasinHopping 71.222 509
DE 88.889 483
Firefly 99.000 490
Galileo 1.556 54
MLSL 13.833 6161
PSWARM 83.611 737
SIMANN 5.167 1903
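As a quick illustration, the table above can be ranked programmatically. The sketch below simply copies the figures from the table into a dictionary (the variable names are for illustration only) and sorts the solvers by success rate:

```python
# Overall results copied from the table above:
# method -> (overall success %, average function evaluations)
results = {
    "AMPGO":        (96.222, 182),
    "ASA":          (58.944, 318),
    "BasinHopping": (71.222, 509),
    "DE":           (88.889, 483),
    "Firefly":      (99.000, 490),
    "Galileo":      (1.556, 54),
    "MLSL":         (13.833, 6161),
    "PSWARM":       (83.611, 737),
    "SIMANN":       (5.167, 1903),
}

# Rank by success rate (descending), breaking ties by fewer evaluations.
ranking = sorted(results, key=lambda m: (-results[m][0], results[m][1]))
for method in ranking:
    success, evals = results[method]
    print(f"{method:<13s} {success:7.3f} %  {evals:5d} evals")
```

Note that while Firefly edges out AMPGO on raw success rate here, it needs roughly 2.7 times as many function evaluations on average.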

These results are also depicted in the next figure, which clearly shows that AMPGO is one of the better-performing optimization algorithms as far as the current benchmark is concerned.

Figure: AMPGO optimization algorithm performances (1-dimensional)


The following table breaks down the first table by benchmark function, showing the percentage of successful optimizations per benchmark over 100 random starting points.

Percentage of successful optimizations per benchmark (1-dimensional)
Function Name AMPGO ASA BasinHopping DE Firefly Galileo MLSL PSWARM SIMANN
Problem02 100 100 100 100 99 1 0 100 0
Problem03 100 0 43 100 99 0 49 31 6
Problem04 100 100 100 100 100 3 0 100 1
Problem05 100 99 100 100 96 1 0 99 18
Problem06 100 100 32 100 100 0 0 100 0
Problem07 100 100 100 100 98 5 0 100 1
Problem08 100 0 100 100 100 0 0 21 9
Problem09 100 75 41 100 98 3 0 100 0
Problem10 88 0 36 0 100 0 0 100 17
Problem11 100 100 82 100 100 2 0 58 0
Problem12 100 0 81 100 100 1 100 44 0
Problem13 100 100 100 100 100 8 0 100 4
Problem14 100 100 84 100 99 0 0 100 1
Problem15 100 100 54 100 100 3 0 100 0
Problem18 100 87 100 100 100 0 100 100 0
Problem20 100 0 36 100 100 0 0 100 0
Problem21 43 0 24 100 100 0 0 100 32
Problem22 100 0 67 0 93 0 0 49 0
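As a sanity check on the first table, averaging AMPGO's per-problem success rates from the table above (a plain unweighted mean, which is an assumption about how the overall figure was computed) reproduces the roughly 96.2% quoted earlier:

```python
# AMPGO per-problem success percentages, copied from the table above
# (Problem02 through Problem22, in row order).
ampgo_success = [100, 100, 100, 100, 100, 100, 100, 100, 88,
                 100, 100, 100, 100, 100, 100, 100, 43, 100]

overall = sum(ampgo_success) / len(ampgo_success)
print(f"AMPGO overall success: {overall:.1f} %")  # about 96.2 %
```

The small residual difference from the 96.222 in the first table comes from the per-problem percentages being rounded to integers here.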

The following table breaks down the first table by benchmark function, showing the average number of function evaluations for successful optimizations only, over 100 random starting points.

Average function evaluations for successful optimizations (1-dimensional)
Function Name AMPGO ASA BasinHopping DE Firefly Galileo MLSL PSWARM SIMANN
Problem02 38 296 671 235 509 49 7303 197 2001
Problem03 254 -- 400 469 939 57 3909 2440 1283
Problem04 6 340 511 246 219 52 7341 172 2000
Problem05 116 323 732 92 477 58 7026 364 1945
Problem06 145 310 256 323 604 57 7294 319 2001
Problem07 47 322 640 279 397 51 7419 218 1997
Problem08 138 -- 766 283 915 54 5254 2699 1251
Problem09 66 316 443 224 541 55 7289 291 2001
Problem10 543 -- 533 2005 553 56 7554 213 1944
Problem11 17 316 507 905 482 57 6647 1349 2001
Problem12 120 -- 584 190 419 47 59 1776 2001
Problem13 7 334 722 169 150 55 7450 175 1996
Problem14 67 302 637 257 185 58 6977 290 2000
Problem15 51 325 411 286 218 56 7897 202 2001
Problem18 5 321 354 1730 420 57 191 178 2001
Problem20 89 -- 259 428 379 53 7233 269 2001
Problem21 1545 -- 272 356 722 53 7314 198 1843
Problem22 25 -- 472 223 708 59 6743 1924 2001

A double dash (--) marks entries for which no successful optimization was recorded.
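The same cross-check works for the evaluation counts: the unweighted mean of AMPGO's per-problem figures from the table above (again assuming the overall number is a plain mean across problems) recovers the 182 average evaluations reported in the first table:

```python
# AMPGO average function evaluations per problem, copied from the
# table above (Problem02 through Problem22, in row order).
ampgo_evals = [38, 254, 6, 116, 145, 47, 138, 66, 543,
               17, 120, 7, 67, 51, 5, 89, 1545, 25]

mean_evals = sum(ampgo_evals) / len(ampgo_evals)
print(f"AMPGO mean evaluations: {mean_evals:.0f}")  # about 182
```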
