Push–relabel maximum flow algorithm

In comparison, the Ford–Fulkerson algorithm performs global augmentations that send flow along paths from the source all the way to the sink.

The variant based on the highest-label node selection rule has O(V²√E) time complexity and is generally regarded as the benchmark for maximum flow algorithms.[3][4] Subcubic O(VE log(V²/E)) time complexity can be achieved using dynamic trees,[2] although in practice it is less efficient.[4][6]

The concept of a preflow was originally designed by Alexander V. Karzanov and was published in 1974 in Soviet Mathematical Doklady 15.[2][8] As explained in,[9] Goldberg and Tarjan introduced distance labels by incorporating them into the parallel maximum flow algorithm of Yossi Shiloach and Uzi Vishkin.

The labeling function 𝓁 must satisfy the following conditions in order to be considered valid: 𝓁(s) = |V|, 𝓁(t) = 0, and 𝓁(u) ≤ 𝓁(v) + 1 for every arc (u, v) in the residual graph Gf. In the algorithm, the label values of s and t are fixed.

For a fixed flow f, a vertex v ∉ {s, t} is called active if it has positive excess with respect to f, i.e., xf(v) > 0.

The relabel operation applies to an active node u, neither the source nor the sink, that has no admissible out-arcs in Gf. To see that a relabel operation on node u preserves the validity of 𝓁(u), notice that this is trivially guaranteed by definition for the out-arcs of u in Gf.

The generic push–relabel algorithm is used as a proof of concept only and does not contain implementation details on how to select an active node for the push and relabel operations. That the algorithm maintains a valid labeling throughout its execution can be proven by examining the effects of the push and relabel operations on the label function 𝓁.[8] If a preflow f and a valid labeling 𝓁 for f exist, then there is no augmenting path from s to t in the residual graph Gf. This can be proven by contradiction, based on inequalities that arise in the labeling function when an augmenting path is assumed to exist.

In order to bound the time complexity of the algorithm, we must analyze the number of push and relabel operations which occur within the main
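The generic scheme can be sketched in Python as follows; this is a minimal illustration rather than a reference implementation, and the adjacency-matrix representation, function names, and node-selection order (scanning for any active node) are assumptions made for brevity:

```python
def push_relabel_max_flow(capacity, source, sink):
    """Generic push-relabel on an n-by-n capacity matrix (a minimal sketch)."""
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]   # antisymmetric flow matrix
    excess = [0] * n
    label = [0] * n
    label[source] = n                    # l(s) = |V| and l(t) = 0 are fixed

    def residual(u, v):
        return capacity[u][v] - flow[u][v]

    def push(u, v):
        # Push along the admissible arc (u, v), i.e. label[u] == label[v] + 1.
        delta = min(excess[u], residual(u, v))
        flow[u][v] += delta
        flow[v][u] -= delta
        excess[u] -= delta
        excess[v] += delta

    def relabel(u):
        # u is active with no admissible out-arc: lift it just above the
        # lowest-labeled neighbor reachable in the residual graph.
        label[u] = 1 + min(label[v] for v in range(n) if residual(u, v) > 0)

    # Initial preflow: saturate every out-arc of the source.
    for v in range(n):
        if capacity[source][v] > 0:
            flow[source][v] = capacity[source][v]
            flow[v][source] = -capacity[source][v]
            excess[v] += capacity[source][v]
            excess[source] -= capacity[source][v]

    # Main loop: while an active node exists, push if possible, else relabel.
    while True:
        u = next((i for i in range(n)
                  if i not in (source, sink) and excess[i] > 0), None)
        if u is None:
            break
        for v in range(n):
            if residual(u, v) > 0 and label[u] == label[v] + 1:
                push(u, v)
                break
        else:
            relabel(u)

    return excess[sink]
```

At termination the excess of the sink equals the maximum flow value; any active-node selection order yields a correct result, though the choice affects the running time.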
loop. However, the value of the potential function Φ must equal 0 at termination, since there cannot be any remaining active nodes at the end of the algorithm's execution.[1][8]

The following is a sample execution of the generic push–relabel algorithm, as defined above, on a simple flow network. In the example, the h and e values denote the label 𝓁 and excess xf, respectively, of each node during the execution of the algorithm.

The "current-arc" data structure is a mechanism for visiting the in- and out-neighbors of a node in the flow network in a static circular order. The algorithm scans the list from front to back and performs a discharge operation on the current node if it is active.[3]

Although in the description of the generic push–relabel algorithm above, 𝓁(u) is set to zero for each node u other than s and t at the beginning, it is preferable to perform a backward breadth-first search from t to compute exact labels. The global relabeling heuristic periodically performs a backward breadth-first search from t in Gf to compute the exact labels of the nodes. Both heuristics skip unhelpful relabel operations, which are a bottleneck of the algorithm and contribute to the ineffectiveness of dynamic trees.
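The global relabeling heuristic just described can be sketched as a backward breadth-first search from t over the residual graph; the `residual` callable and the function name here are assumptions of this illustration, not part of the original algorithm description:

```python
from collections import deque

def global_relabel(residual, n, sink, source):
    """Recompute exact labels: label[v] = residual distance from v to t.

    residual(u, v) must return the residual capacity of arc (u, v)
    in G_f; nodes that cannot reach t keep the high placeholder label.
    """
    label = [2 * n] * n          # 2|V| is an upper bound on any valid label
    label[sink] = 0
    queue = deque([sink])
    while queue:
        v = queue.popleft()
        for u in range(n):
            # Traverse residual arcs backwards: u has an arc into v in G_f.
            if residual(u, v) > 0 and label[u] == 2 * n:
                label[u] = label[v] + 1
                queue.append(u)
    label[source] = n            # l(s) = |V| stays fixed
    return label
```

Running this periodically restores exact distance labels, which is what allows the heuristic to skip the unhelpful relabel operations mentioned above.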