### Solving function extrema with a genetic algorithm (Fortran)

# Preface

This article is for readers who need to pick up an optimization algorithm in a hurry; experienced readers can skip it. After all, I am just a traditional engineering student with little computer-science background.

I first met the genetic algorithm (GA) in a contest during my third year of college, when I needed to learn some optimization algorithms. As one of the oldest optimization algorithms, GA naturally became my first learning target.

Let me record my impressions of GA here. It belongs to the family of intelligent algorithms and can, in theory, achieve global optimization, but after a period of study I found that for complex functions, actually reaching the global optimum takes some luck. GA is the most classical of its kind; many similar algorithms appeared later, such as particle swarm optimization and ant colony optimization. I won't compare them here. In my view it is enough to learn one of them well, since they all share the same spirit.

One last thing: this time I used GA to optimize some fairly complex functions. Below are the key steps and lessons, recorded mainly so that I can review them later. (The full code appears later in the post.)

## A brief history of the genetic algorithm

The concept of the genetic algorithm was first proposed by J.D. Bagley in 1967. Later, starting in 1975, Professor J.H. Holland of the University of Michigan studied the mechanism of genetic algorithms systematically. The genetic algorithm is a simple simulation of Darwinian evolution: following the principle of "survival of the fittest", it simulates the evolution of an artificial population, and through the mechanisms of selection, crossover, and mutation the population reaches an optimal (or near-optimal) state after a number of generations.

Since the genetic algorithm was proposed, it has been widely applied, especially in function optimization, production scheduling, pattern recognition, neural networks, adaptive control, and similar fields, where it has played a significant role and greatly improved the efficiency of problem solving. The genetic algorithm is also an important research topic in the field of "soft computing".

In this article, the implementation of the genetic algorithm is worked through in detail in Fortran, and its application is then illustrated with a practical function-optimization example.

# Algorithm steps

Many people have already written up the basic steps of the genetic algorithm, so I won't repeat them here. The link below explains the general process well:

https://blog.csdn.net/qq_34374664/article/details/78874956

I can no longer find the blog that originally inspired me, but after reading the article above you should have a good grasp of the general flow of GA.

Come back and continue here once you have read it! Isn't it remarkable that the algorithm can be understood without a strong mathematical background?

# The main structure of the genetic algorithm

Let's start programming the genetic algorithm.

First, we need to sort out the workflow and think about which subroutines the whole computation requires:

1. Population initialization

2. Calculate population fitness

3. Rank the population by fitness

4. Selection operation

5. Crossover operation

6. Mutation operation

These are the subroutines we will need.

Second, we need some awareness of larger-scale program organization. Data used throughout the run can be declared as global (module) variables, so we should identify the frequently used quantities in advance:

- pop_size: population size
- chromo_size: chromosome length
- generation_size: number of generations (iterations)
- cross_rate: crossover probability
- mutate_rate: mutation probability
- elitism: whether to use elitist selection

(These variables are explained in more detail below.)

## Start solving

1. Population initialization:

```fortran
integer i, j, k
integer pop_size, chromo_size, pop_num
real x
call RANDOM_SEED()
do i=1,pop_size
  do j=1,pop_num
    do k=1,chromo_size
      call RANDOM_NUMBER(x)
      pop(i,j,k) = nint(x)   ! random bit: 0 or 1
    end do
  end do
end do
```

Explanation: for each of the pop_size individuals, each of its pop_num dimensions, and each of the chromo_size genes in a dimension, a random bit (0 or 1) is assigned.

RANDOM_SEED() is Fortran's intrinsic subroutine for seeding the random number generator; RANDOM_NUMBER(x) then returns a uniform random number in [0, 1).
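To make the indexing concrete, here is a minimal Python sketch of the same initialization: a pop_size × pop_num × chromo_size array of random bits. The names mirror the Fortran variables, but the function itself is my own illustration, not part of the Fortran code below.

```python
import random

def init_population(pop_size, pop_num, chromo_size, seed=None):
    """Random binary population: one random bit per gene, as in the Fortran loop."""
    rng = random.Random(seed)
    return [[[rng.randint(0, 1) for _ in range(chromo_size)]
             for _ in range(pop_num)]
            for _ in range(pop_size)]

pop = init_population(pop_size=4, pop_num=2, chromo_size=8, seed=1)
print(len(pop), len(pop[0]), len(pop[0][0]))   # 4 2 8
print(all(bit in (0, 1) for ind in pop for dim in ind for bit in dim))   # True
```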

2. Calculate the fitness of each individual (this part must be rewritten for a different optimization objective)

```fortran
integer i, j, k
integer pop_size, pop_num, chromo_size
real fitness_value1(pop_size,pop_num)

fitness_value1 = 0.            ! both accumulators must be zeroed
fitness_value  = 0.            ! every generation before summing

do i=1,pop_size
  do j=1,pop_num
    ! decode the chromosome of dimension j into an integer
    do k=1,chromo_size
      if (pop(i,j,k) == 1) then
        fitness_value1(i,j) = fitness_value1(i,j) + 2**(k-1)
      end if
    end do
    ! map the integer linearly onto the variable range [-500, 500]
    fitness_value1(i,j) = -500 + fitness_value1(i,j)*(500-(-500))/(2**chromo_size-1)
    ! objective function: change this line for a different problem
    ! (abs is needed, since sqrt of a negative argument is invalid)
    fitness_value1(i,j) = fitness_value1(i,j)*sin(sqrt(abs(fitness_value1(i,j))))
    fitness_value(i) = fitness_value1(i,j) + fitness_value(i)
  end do
end do
```
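The key trick in the fitness routine is the decode step: the chromosome bits form an integer that is mapped linearly onto [-500, 500], and each dimension contributes x·sin(√|x|) to the objective (the abs is needed because sqrt of a negative argument is invalid). A small Python sketch of the decode and of one objective term, for illustration only:

```python
import math

def decode(bits, lo=-500.0, hi=500.0):
    """Map a binary chromosome onto [lo, hi] (same linear map as the Fortran code)."""
    value = sum(2**k for k, bit in enumerate(bits) if bit == 1)
    return lo + value * (hi - lo) / (2**len(bits) - 1)

def objective(x):
    """One term of the 30-D objective: x * sin(sqrt(|x|))."""
    return x * math.sin(math.sqrt(abs(x)))

print(decode([0]*24))   # -500.0: the all-zero chromosome maps to the lower bound
print(decode([1]*24))   # 500.0: the all-one chromosome maps to the upper bound
```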

3. Population sorting

Sort the individuals by fitness and save the best one:

```fortran
integer pop_size, pop_num, chromo_size
integer i, j, k, m, min
real temp                          ! fitness values are real, so temp must be too
integer temp1(pop_num,chromo_size)

do i=1,pop_size
  fitness_table(i) = 0.
end do
min = 1
temp = 0.
temp1 = 0                          ! zero the whole scratch array

! selection sort: ascending order of fitness
do i=1,pop_size
  min = i
  do j=i+1,pop_size
    if (fitness_value(j) < fitness_value(min)) then
      min = j
    end if
  end do
  if (min /= i) then
    temp = fitness_value(i)
    fitness_value(i) = fitness_value(min)
    fitness_value(min) = temp
    do m=1,pop_num
      do k=1,chromo_size
        temp1(m,k) = pop(i,m,k)
        pop(i,m,k) = pop(min,m,k)
        pop(min,m,k) = temp1(m,k)
      end do
    end do
  end if
end do

! cumulative fitness table, used by the roulette selection in step 4
do i=1,pop_size
  if (i == 1) then
    fitness_table(i) = fitness_value(i)
  else
    fitness_table(i) = fitness_table(i-1) + fitness_value(i)
  end if
end do
fitness_avg(G) = fitness_table(pop_size)/pop_size

! after the ascending sort, the last individual is the fittest;
! save it only when it improves on the best seen so far
if (fitness_value(pop_size) > best_fitness) then
  best_fitness = fitness_value(pop_size)
  best_generation = G
  do i=1,pop_num
    do j=1,chromo_size
      best_individual(i,j) = pop(pop_size,i,j)
    end do
  end do
end if
```
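The cumulative (prefix-sum) fitness table built at the end of the sort is what the roulette wheel in the next step searches: each individual owns the interval between consecutive entries. A small Python illustration of that table (my own sketch; roulette selection assumes non-negative fitness values):

```python
def cumulative_table(fitness_values):
    """Prefix sums of the sorted fitness values, like fitness_table in the Fortran code."""
    table, running = [], 0.0
    for f in fitness_values:
        running += f
        table.append(running)
    return table

# four individuals with fitness 1, 2, 3, 4 after the ascending sort
print(cumulative_table([1.0, 2.0, 3.0, 4.0]))   # [1.0, 3.0, 6.0, 10.0]
```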

4. Roulette-wheel selection

For the individuals scattered over the "function mountain", we need to weed out the inferior ones. The rule: the fitness computed in step 3 determines each individual's share of the angle on a roulette wheel.

Then we spin the wheel and select the survivors. Clearly, the individuals with the lowest fitness are the least likely to be selected.

If bad luck would eliminate an excellent individual, we can "escort" the best individuals straight into the next generation (elitism). Note that this must be used in moderation, to guard against premature convergence to a local optimum.

```fortran
integer pop_size, chromo_size, elitism, pop_num
integer i, j, k, p, mid, first, last, idx
real x, r, w, q                    ! r must be real: it is a fitness threshold
call RANDOM_SEED()
do i=1,pop_size
  call RANDOM_NUMBER(x)            ! draw a fresh random number for every pick
  r = x * fitness_table(pop_size)
  first = 1
  last = pop_size
  w = (last+first)/2
  mid = nint(w)
  idx = -1
  ! binary search for the slot of the cumulative table that contains r
  do while ((first <= last) .and. (idx == -1))
    if (r > fitness_table(mid)) then
      first = mid
    else if (r < fitness_table(mid)) then
      last = mid
    else
      idx = mid
      exit
    end if
    q = (last+first)/2
    mid = nint(q)
    if ((last - first) == 1) then
      idx = last
      exit
    end if
  end do
  do k=1,pop_num
    do j=1,chromo_size
      pop_new(i,k,j) = pop(idx,k,j)
    end do
  end do
end do

!************************* elitism: keep the best individual *************************
if (elitism == 1) then
  p = pop_size - 1                 ! leave the last (best) slot untouched
else
  p = pop_size
end if
do i=1,p
  do k=1,pop_num
    do j=1,chromo_size
      pop(i,k,j) = pop_new(i,k,j)
    end do
  end do
end do
```
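The same roulette pick can be sketched in a few lines of Python, using the standard-library bisect module in place of the hand-written binary search; this is an illustration of the idea, not a translation of the Fortran:

```python
import bisect
import random

def roulette_pick(fitness_table, rng):
    """Pick an index with probability proportional to its fitness share."""
    r = rng.random() * fitness_table[-1]
    # index of the first cumulative value >= r
    return bisect.bisect_left(fitness_table, r)

rng = random.Random(0)
table = [1.0, 3.0, 6.0, 10.0]      # cumulative fitness of 4 individuals
picks = [roulette_pick(table, rng) for _ in range(10000)]
# individual 3 owns 40% of the wheel, so it should be picked most often
print(max(set(picks), key=picks.count))   # 3
```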

5. Single-point crossover

```fortran
implicit none
integer pop_size, chromo_size, pop_num
real cross_rate                    ! a probability, so it must be real, not integer
integer i, j, k, cross_pos, temp
real x, y
call RANDOM_SEED()
do i=1,pop_size,2                  ! pair up neighbouring individuals
  do k=1,pop_num
    call RANDOM_NUMBER(x)
    if (x < cross_rate) then
      call RANDOM_NUMBER(y)        ! fresh number for the cut point; reusing x
                                   ! would bias positions towards the low end
      cross_pos = nint(y * chromo_size)   ! crossover position
      if (cross_pos == 0 .or. cross_pos == 1) then
        cycle
      end if
      ! swap the two chromosomes from cross_pos to the end
      do j=cross_pos,chromo_size
        temp = pop(i,k,j)
        pop(i,k,j) = pop(i+1,k,j)
        pop(i+1,k,j) = temp
      end do
    end if
  end do
end do
```
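Single-point crossover simply swaps the tails of two chromosomes beyond a cut point. A minimal Python sketch (my own; the cut point is passed in explicitly to keep the example deterministic):

```python
def single_point_crossover(a, b, cross_pos):
    """Swap the genes of a and b from cross_pos (0-based) onward."""
    return a[:cross_pos] + b[cross_pos:], b[:cross_pos] + a[cross_pos:]

p1 = [0, 0, 0, 0, 0, 0]
p2 = [1, 1, 1, 1, 1, 1]
c1, c2 = single_point_crossover(p1, p2, 3)
print(c1)   # [0, 0, 0, 1, 1, 1]
print(c2)   # [1, 1, 1, 0, 0, 0]
```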

6. Mutation operation

```fortran
!pop_size:    population size
!chromo_size: chromosome length
!mutate_rate: mutation probability
subroutine mutation(pop_size, chromo_size, mutate_rate, pop_num)
use a10
implicit none
integer i, j, mutate_pos
real x, y
integer pop_size, chromo_size, pop_num
real mutate_rate                   ! a probability, so it must be real, not integer
call RANDOM_SEED()
do i=1,pop_size
  do j=1,pop_num
    call RANDOM_NUMBER(x)
    if (x < mutate_rate) then
      call RANDOM_NUMBER(y)        ! fresh number for the position: reusing x
                                   ! (always < mutate_rate) would give mutate_pos = 0
      mutate_pos = nint(y*chromo_size)
      if (mutate_pos == 0) then
        cycle
      end if
      pop(i,j,mutate_pos) = 1 - pop(i,j,mutate_pos)   ! flip the bit
    end if
  end do
end do
end subroutine mutation
```

Mutation is similar in form to crossover, but its effect is different. Crossover is like exchange between the individuals on the function mountain: it expands their territory and clears the blind spots between them, and it is relatively controllable. Mutation, in contrast, is a random jump: an individual may land somewhere better or somewhere worse. The reader needs to tune the parameters to get the best result.
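The bit flip itself is the 1 - bit trick from the Fortran code. A small Python sketch of mutating one chromosome (names are my own):

```python
import random

def mutate(chromosome, mutate_rate, rng):
    """With probability mutate_rate, flip one randomly chosen bit (1 - bit trick)."""
    out = list(chromosome)
    if rng.random() < mutate_rate:
        pos = rng.randrange(len(out))
        out[pos] = 1 - out[pos]   # 0 -> 1, 1 -> 0
    return out

rng = random.Random(2)
original = [0, 1, 0, 1]
mutated = mutate(original, mutate_rate=1.0, rng=rng)
# with mutate_rate = 1.0, exactly one bit differs from the original
print(sum(a != b for a, b in zip(original, mutated)))   # 1
```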

At this point the solution process is complete.

## Results

The objective here is the function on the "change function" line of the fitness calculation: a 30-dimensional function whose value is the sum over all dimensions of x·sin(√|x|), with each variable x in [-500, 500].

Here is the main program:

```fortran
program GA
use a10                        ! module a10 holds pop, fitness_value, G, etc.
implicit none
integer m(30,24), p, j, i
real n, q(30)
integer, save :: pop_size      ! population size
integer, save :: pop_num       ! number of dimensions per individual
integer, save :: chromo_size   ! chromosome length
integer, save :: elitism       ! whether to use elitist selection
real,    save :: cross_rate    ! crossover probability (must be real)
real,    save :: mutate_rate   ! mutation probability (must be real)

elitism = 1                    ! use elitist selection
pop_size = 1000                ! population size
pop_num = 30                   ! 30 dimensions
chromo_size = 24               ! chromosome length
generation_size = 200          ! number of generations
cross_rate = 0.5               ! crossover probability
mutate_rate = 0.01             ! mutation probability

fitness_avg = 0.
fitness_value = 0.             ! zero the whole array, not a single element
best_fitness = 0.
best_generation = 0

call initilize(pop_size, pop_num, chromo_size)               ! initialization
do G=1,generation_size
  call fitness(pop_size, pop_num, chromo_size)               ! fitness calculation
  call rank(pop_size, pop_num, chromo_size)                  ! sort by fitness
  call selection(pop_size, chromo_size, elitism, pop_num)    ! selection
  call crossover(pop_size, chromo_size, cross_rate, pop_num) ! crossover
  call mutation(pop_size, chromo_size, mutate_rate, pop_num) ! mutation
end do

m = best_individual            ! best individual
n = best_fitness               ! best fitness
p = best_generation            ! generation in which it appeared

! decode the best individual's variable values
! (rewrite this block for a different optimization objective)
q = 0.
do i=1,pop_num
  do j=1,chromo_size
    if (best_individual(i,j) == 1) then
      q(i) = q(i) + 2**(j-1)
    end if
  end do
  ! map back onto the variable range [-500, 500]
  q(i) = -500 + q(i)*(500-(-500))/(2**chromo_size-1)
end do

write(*,*) "Best individual"
write(*,*) m
write(*,*) "Best fitness"
write(*,*) n
write(*,*) "Best individual's variable values"
write(*,*) q
write(*,*) "Best generation"
write(*,*) p
end program GA
```

Because many of these routines were modified without saving the originals in time, there may be small mismatches between the snippets, but the overall idea is all here.

## Finally, a few points that need attention

- You need to understand the conversion between binary and decimal; here it is combined with a linear map to place the decoded integer onto the variable range.
- You need to understand how Fortran modules are used to share data in larger programs.

Finally, time is limited and I am lazy; I will add more when I have time, so that I can review it later.

I am a graduate student in hydraulic structures. If you have read this far, feel free to get in touch and exchange ideas.