
The League Championship Algorithm

The **League Championship Algorithm (LCA)** is a recently proposed stochastic, population-based algorithm for continuous global optimization that mimics a championship environment in which artificial teams play in an artificial league for several weeks (iterations). Given the league schedule, in each week a number of individuals, acting as sport teams, play in pairs, and the outcome of each game is determined as a win, loss, or tie based on each team's playing strength (fitness value) together with the team formation/arrangement (solution) it has devised. Modeling an artificial match analysis, each team then works out the changes required in its formation (generation of a new solution) for the following week's contest, and the championship continues for a number of seasons (stopping condition). An add-on module that models the end-of-season transfer of players has also been developed to potentially speed up the algorithm's global convergence.
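The weekly cycle described above can be sketched as follows. This is a minimal illustration only: the random pairing, the win rule, and the "move toward the winner" update are simplified stand-ins for the published LCA operators, and `sphere` is just a test objective.

```python
import random

def lca_week(teams, fitness, step=0.1):
    """One artificial 'week' of a simplified LCA-style iteration.

    `teams` is a list of real-valued solution vectors; `fitness` is the
    objective (lower is better). Pairing, win rule, and the update are
    illustrative stand-ins for the published LCA operators.
    """
    schedule = list(range(len(teams)))
    random.shuffle(schedule)                      # a random league schedule
    new_teams = [t[:] for t in teams]
    for a, b in zip(schedule[::2], schedule[1::2]):
        winner, loser = (a, b) if fitness(teams[a]) <= fitness(teams[b]) else (b, a)
        # artificial match analysis: the loser revises its formation
        # (solution) partly toward the winner's formation
        new_teams[loser] = [x + step * random.random() * (w - x)
                            for x, w in zip(teams[loser], teams[winner])]
    return new_teams

sphere = lambda v: sum(x * x for x in v)          # test objective (assumed)
random.seed(1)
teams = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(8)]
for _ in range(50):                               # 50 artificial weeks
    teams = lca_week(teams, sphere)
```

A real implementation would add the tie outcome, the league-schedule generation, and the end-of-season transfer module mentioned above.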

Besides nature, culture, politics, etc., as the typical sources of inspiration for various algorithms, the metaphor of sporting competitions was used for the first time in LCA. Since its introduction in 2009, LCA has motivated several sport-inspired algorithms, such as Soccer League Optimization, the *Soccer League* Competition algorithm, Soccer Game Optimization, etc. (see more).

The Optics Inspired Optimization

The **Optics Inspired Optimization (OIO)** is a recently proposed stochastic, population-based, nature-inspired algorithm for continuous global optimization, based on the optical characteristics of concave and convex mirrors.

OIO treats the surface of the numerical function to be optimized as a reflecting surface, in which each peak is assumed to reflect as a convex mirror and each valley as a concave one. Each point in the joint search/solution and objective space, which maps to a solution in the search space, is treated as an artificial light point. The artificial ray emitted from such a light point is reflected back by the function surface, depending on whether the reflecting region is part of a peak or part of a valley, and an artificial image point (a new point in the joint search and objective space, mapping to a new solution in the search/solution space) is formed either upright (toward the light point's position in the search space) or inverted (away from it). This model enables the algorithm to carry out both exploration and exploitation during the search for the optimum (see more).
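A single OIO-style move can be sketched with the spherical-mirror equation 1/p + 1/q = 2/r. Everything below is a hedged illustration: the choice of artificial radius, object distance, and the magnification clamp are assumptions, not the exact published update.

```python
import random

def oio_new_point(light, mirror, f):
    """Sketch of an OIO-style move: reflect `light` about `mirror`.

    The mirror point acts as a concave mirror when it lies in a valley
    (better objective value) and as a convex mirror on a peak (worse value).
    Radius, object distance, and the clamp are illustrative assumptions.
    """
    concave = f(mirror) <= f(light)
    r = (abs(f(light) - f(mirror)) + 1e-9) * (1.0 if concave else -1.0)
    p = random.uniform(1e-6, abs(r))             # artificial object distance
    q = 1.0 / (2.0 / r - 1.0 / p)                # spherical-mirror equation: 1/p + 1/q = 2/r
    m = max(-2.0, min(2.0, -q / p))              # lateral magnification, clamped for safety
    # upright images (m > 0) move toward the light point; inverted (m < 0), away from it
    return [mv + m * (lv - mv) for lv, mv in zip(light, mirror)]

sphere = lambda v: sum(x * x for x in v)          # test objective (assumed)
random.seed(3)
new = oio_new_point([2.0, -1.0], [0.5, 0.5], sphere)
```

The sign of the magnification is what switches the move between exploitation (toward the light point) and exploration (away from it), mirroring the upright/inverted image distinction described above.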

The Find-Fix-Finish-Exploit-Analyze metaheuristic algorithm

The **Find-Fix-Finish-Exploit-Analyze (F3EA) metaheuristic algorithm** is a novel population-based evolutionary algorithm for optimization that mimics the F3EA targeting process, by which objects or installations are selected for destruction in warfare. The F3EA metaheuristic treats the surface of the numerical objective function as the battleground and performs the main Find-Fix-Finish-Exploit-Analyze steps in an iterative manner.

Each individual in the population corresponds to one of the enemy's artificial facilities, and its position on the function surface (in *R^{n}*) represents that facility's location.

The Grouping Evolution Strategies

Many combinatorial optimization problems, or their sub-problems, comprise a grouping (or assignment) phase in which the task is to partition a set of items into disjoint groups. Introduced in 1994, the grouping genetic algorithm (*GGA*) is an evolutionary algorithm that uses group-oriented encoding and operators suited to the structure of grouping problems. To our knowledge, all studies that have used group-oriented encoding and operators have relied on the *GA* as their evolutionary search engine. In other words, they have all applied *GGA*, and no effort has been made to develop grouping versions of other metaheuristics (e.g., simulated annealing (*SA*), tabu search (*TS*), evolution strategies (*ES*), etc.). Although it may seem easy to develop a grouping version of *SA* (*GSA*) or even *TS* (*GTS*), since the only difference between *GSA* (or *GTS*) and *SA* (or *TS*) lies in generating the neighboring solution, which can be handled by using *GGA* operators, this is not the case with *ES*. The difficulty in developing the grouping version of *ES*, namely the **Grouping Evolution Strategies (GES)**, is due to the following facts:

1) *ES* uses a Gaussian mutation to produce new real-valued solution vectors during the search. *GES* develops a comparable new mutation that works on the basis of groups, while preserving the major characteristics of the classic *ES* mutation.

2) *ES* was originally introduced for optimizing non-linear functions in continuous space, whereas grouping problems are inherently discrete.

GES uses a mutation operator analogous to the original one that works with groups instead of scalars, and applies it in a two-phase procedure to generate the new solution. The main characteristic of the new mutation mechanism is that it operates in continuous space while its consequences are realized in discrete space (see more).
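The "continuous mutation, discrete consequence" idea can be sketched as follows. The real-valued keying scheme below is an illustrative assumption standing in for the published two-phase procedure: Gaussian noise perturbs continuous keys, and the discrete effect is a re-assignment of items to groups.

```python
import random

def ges_mutate(groups, n_items, sigma=0.3):
    """Sketch of a GES-style mutation: work in continuous space, apply the
    result in discrete (grouping) space. Each group gets a real-valued key;
    Gaussian noise perturbs each item's key, and the item is re-assigned to
    the group whose key is nearest. The keying scheme is an assumption."""
    keys = {gi: random.random() for gi in range(len(groups))}      # one key per group
    member = {item: gi for gi, g in enumerate(groups) for item in g}
    new_groups = [[] for _ in groups]
    for item in range(n_items):
        noisy = keys[member[item]] + random.gauss(0.0, sigma)      # continuous Gaussian step
        nearest = min(keys, key=lambda gi: abs(keys[gi] - noisy))  # discrete consequence
        new_groups[nearest].append(item)
    return [g for g in new_groups if g]                            # drop emptied groups

random.seed(5)
mutated = ges_mutate([[0, 1], [2, 3], [4, 5]], n_items=6)
```

As in classic *ES*, the strength of the mutation is controlled by a continuous parameter (`sigma`), yet the outcome is a valid partition of the items into disjoint groups.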