Parallel metaheuristic

To this end, concepts and technologies from the field of parallelism in computer science are used to enhance, and even completely modify, the behavior of existing metaheuristics. Just as there is a long list of metaheuristics (evolutionary algorithms, particle swarm, ant colony optimization, simulated annealing, etc.), there is a correspondingly large set of parallel techniques based on them.

Trajectory-based metaheuristics usually allow a locally optimal solution to be found quickly, and so they are called exploitation-oriented methods, promoting intensification in the search space. Although their use significantly reduces the temporal complexity of the search process, that complexity remains high for real-world problems arising in both academic and industrial domains.

Population-based metaheuristics, in contrast, iteratively apply variation operators to selected individuals in a probabilistic way, guiding the population toward tentative solutions of higher quality. The search process is stopped when a given condition is satisfied (a maximum number of generations, a solution of target quality found, stagnation for a given time, etc.). For non-trivial problems, executing the reproductive cycle of a simple population-based method on long individuals and/or large populations usually requires high computational resources.

A common remedy is to structure the population into several subpopulations (islands) that evolve in parallel. Sparse exchanges of individuals are performed among these islands with the goal of introducing some diversity into the subpopulations, thus preventing the search from getting stuck in local optima. In general, the highest level of parallelization is a coarse-grained implementation, and each basic island may itself run a cellular method, a master-slave method, or even another distributed one.
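As a concrete illustration of the coarse-grained (island) scheme just described, the following sketch runs several islands of a simple evolutionary algorithm on a toy OneMax problem and performs sparse ring migration between them. This is a minimal sketch in Python: every name and parameter in it (evolve, migrate, the population sizes, the migration interval) is an illustrative assumption rather than part of any established library, and the islands are stepped sequentially here, whereas a genuine parallel implementation would place each island on its own processor or machine.

    # Minimal island-model sketch (illustrative names and parameters).
    import random

    POP_SIZE = 20          # individuals per island
    GENOME_LEN = 32        # bit-string length for the OneMax toy problem
    N_ISLANDS = 4          # number of coarse-grained subpopulations
    MIGRATION_EVERY = 10   # generations between sparse exchanges
    GENERATIONS = 100      # stopping condition: maximum number of generations

    def fitness(ind):
        return sum(ind)                      # OneMax: count the 1-bits

    def random_individual():
        return [random.randint(0, 1) for _ in range(GENOME_LEN)]

    def tournament(pop, k=3):
        return max(random.sample(pop, k), key=fitness)

    def crossover(a, b):
        cut = random.randrange(1, GENOME_LEN)
        return a[:cut] + b[cut:]             # one-point crossover

    def mutate(ind, rate=1.0 / GENOME_LEN):
        return [1 - g if random.random() < rate else g for g in ind]

    def evolve(pop):
        # One reproductive cycle: selection, variation, replacement.
        return [mutate(crossover(tournament(pop), tournament(pop)))
                for _ in range(len(pop))]

    def migrate(islands, n=1):
        # Ring topology: each island sends copies of its n best individuals
        # to the next island, where they replace the n worst.
        bests = [sorted(pop, key=fitness)[-n:] for pop in islands]
        for i, pop in enumerate(islands):
            pop.sort(key=fitness)
            pop[:n] = [list(ind) for ind in bests[(i - 1) % len(islands)]]

    islands = [[random_individual() for _ in range(POP_SIZE)]
               for _ in range(N_ISLANDS)]

    for gen in range(1, GENERATIONS + 1):
        islands = [evolve(pop) for pop in islands]  # independent: parallelizable
        if gen % MIGRATION_EVERY == 0:
            migrate(islands)                        # sparse exchange of individuals

    best = max((ind for pop in islands for ind in pop), key=fitness)
    print("best fitness:", fitness(best), "out of", GENOME_LEN)

The migration interval and the number of migrants govern the trade-off described above: frequent, heavy exchanges make the islands converge like one large population, while rare, sparse exchanges preserve diversity in each subpopulation.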
[Figure: An example of different implementations of the same PSO metaheuristic model.]