Stability does not measure whether a sorting algorithm is good or bad; it merely describes a property of the algorithm: whether records with equal keys keep their original relative order after sorting.
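As a concrete illustration (a minimal sketch with a hypothetical `Rec` record type, not part of the notes above): sorting records that share a key shows the difference — insertion sort preserves the original order of equal keys, while simple selection sort may not.

```c
typedef struct { int key; char tag; } Rec;

// Direct insertion sort (stable): the strict > comparison means
// equal keys never pass each other, so their relative order survives.
void InsertSortRec(Rec A[], int n) {
    for (int i = 1; i < n; i++) {
        Rec t = A[i];
        int j = i - 1;
        while (j >= 0 && A[j].key > t.key) {
            A[j+1] = A[j];       // shift larger elements backward
            j--;
        }
        A[j+1] = t;
    }
}

// Simple selection sort (unstable): a long-range swap can jump
// one record over another with the same key.
void SelectSortRec(Rec A[], int n) {
    for (int i = 0; i < n - 1; i++) {
        int min = i;
        for (int j = i + 1; j < n; j++)
            if (A[j].key < A[min].key) min = j;
        if (min != i) { Rec t = A[i]; A[i] = A[min]; A[min] = t; }
    }
}
```

On input `{3a, 3b, 1c}` the insertion sort yields `{1c, 3a, 3b}` (order of the two 3s preserved), while selection sort's first swap moves `3a` past `3b`, yielding `{1c, 3b, 3a}`.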

### Insertion sort

If the sequence is already nearly ordered, insertion sort is simple and efficient.

##### Direct insertion sort

Each pass inserts one element into an already sorted subsequence.

```c
void InsertSort(ElemType A[], int n) {
    int i, j;
    for (i = 2; i <= n; i++) {
        if (A[i].key < A[i-1].key) {
            A[0] = A[i];                    // A[0] is the sentinel
            for (j = i - 1; A[0].key < A[j].key; --j) {
                A[j+1] = A[j];              // shift larger elements backward
            }
            A[j+1] = A[0];                  // insert at the found position
        }
    }
}
```

Space complexity: O(1)

Time complexity: O(n²)

Direct insertion sort is a stable sorting algorithm, applicable to linear lists with either sequential or linked storage.

##### Binary Insertion Sort

```c
void InsertSort(ElemType A[], int n) {
    int i, j, low, high, mid;
    for (i = 2; i <= n; i++) {
        A[0] = A[i];                    // temporarily store A[i]
        low = 1; high = i - 1;
        while (low <= high) {           // binary search for the insertion point
            mid = (low + high) / 2;
            if (A[mid].key > A[0].key) {
                high = mid - 1;
            } else {
                low = mid + 1;
            }
        }
        for (j = i - 1; j >= high + 1; --j) {
            A[j+1] = A[j];              // shift elements backward
        }
        A[high+1] = A[0];               // insert
    }
}
```

Binary insertion sort is a stable sorting algorithm; it reduces the number of key comparisons, though not the number of element moves.

Time complexity: O(n²)

##### Shell Sort

Also known as diminishing-increment sort.

Split the table into subtables of the form L[i, i+d, i+2d, …, i+kd] and apply direct insertion sort within each; then shrink the increment d and repeat. Once the whole table is "basically ordered", perform one final direct insertion sort over all records.

```c
void ShellSort(ElemType A[], int n) {
    int dk, i, j;
    for (dk = n/2; dk >= 1; dk = dk/2) {        // shrink the increment
        for (i = dk + 1; i <= n; ++i) {
            if (A[i].key < A[i-dk].key) {
                A[0] = A[i];                    // temporarily stored in A[0] (not a sentinel here)
                for (j = i - dk; j > 0 && A[0].key < A[j].key; j -= dk) {
                    A[j+dk] = A[j];             // shift backward to find the insertion point
                }
                A[j+dk] = A[0];
            }
        }
    }
}
```

Space complexity: O(1)

Time complexity: not fixed; it depends on the chosen increment sequence.

Shell sort is an unstable sorting method and is only suitable for linear lists with sequential storage.

### Exchange sort

##### Bubble sort

Compare adjacent elements from the back of the sequence to the front; whenever a pair is in reverse order, swap them. One full pass is called a "bubble". At most n-1 passes are needed.

```c
void BubbleSort(ElemType A[], int n) {
    int i, j;
    bool flag;
    for (i = 0; i < n - 1; i++) {
        flag = false;                   // whether any swap happened this pass
        for (j = n - 1; j > i; j--) {   // bubble the smallest element to the front
            if (A[j-1].key > A[j].key) {
                swap(A[j-1], A[j]);
                flag = true;
            }
        }
        if (flag == false) {
            return;                     // no swaps: already sorted, finish early
        }
    }
}
```

Space complexity: O(1)

Time complexity: O(n²)

Bubble sort is a stable sorting method. Each pass places one element in its final position.

##### Quick sort

Based on divide and conquer: pick any element of the sequence as the pivot; one pass of partitioning places all elements smaller than the pivot to its left and all elements greater than or equal to it to its right. Then recursively repeat the process on the two subtables until each part has at most one element.

```c
// One pass of partitioning
int Partition(ElemType A[], int low, int high) {
    ElemType pivot = A[low];            // take the first element as the pivot
    while (low < high) {
        while (low < high && A[high] >= pivot) --high;
        A[low] = A[high];               // move the smaller element to the left end
        while (low < high && A[low] <= pivot) ++low;
        A[high] = A[low];               // move the larger element to the right end
    }
    A[low] = pivot;                     // place the pivot in its final position
    return low;                         // return the pivot position
}

void QuickSort(ElemType A[], int low, int high) {
    if (low < high) {
        int pivotpos = Partition(A, low, high);
        QuickSort(A, low, pivotpos - 1);
        QuickSort(A, pivotpos + 1, high);
    }
}
```

Space complexity: O(log₂n) on average (recursion stack)

Average time complexity: O(nlog₂n)

Quick sort is an unstable sorting method. After each pass, the pivot element is placed in its final position.

(1) Using the idea of quick sort, place the smallest n/2 elements in A1 and the rest in A2.

Let the pivot's final position be i. If i = n/2, the partition is complete; if i < n/2, the pivot and all elements before it belong to A1, so continue partitioning the elements after i; if i > n/2, the pivot and all elements after it belong to A2, so continue partitioning the elements before i.

(2)

```c
int Partition(int A[], int n) {
    int pivot, low = 0, low0 = 0, high = n - 1, high0 = n - 1, flag = 1, k = n/2, i;
    int s1 = 0, s2 = 0;
    while (flag) {
        pivot = A[low];
        while (low < high) {
            while (low < high && A[high] >= pivot) --high;
            A[low] = A[high];
            while (low < high && A[low] <= pivot) ++low;
            A[high] = A[low];
        }
        A[low] = pivot;
        if (low == k - 1) {             // the smallest k elements are now in A[0..k-1]
            flag = 0;
        } else if (low < k - 1) {       // keep partitioning the elements after the pivot
            low0 = ++low;
            high = high0;
        } else {                        // keep partitioning the elements before the pivot
            high0 = --high;
            low = low0;
        }
    }
    for (i = 0; i < k; i++) s1 += A[i];
    for (i = k; i < n; i++) s2 += A[i];
    return s2 - s1;
}
```

(3) Time complexity O(n), space complexity O(1)

### Selection sort

##### Simple selection sorting

The i-th pass selects the element with the smallest key from L[i…n] and swaps it with L(i). Each pass fixes one element in its final position.

```c
void SelectSort(ElemType A[], int n) {
    int i, j, min;
    for (i = 0; i < n - 1; i++) {
        min = i;                        // index of the smallest element found so far
        for (j = i + 1; j < n; j++) {
            if (A[j] < A[min]) {
                min = j;
            }
        }
        if (min != i) {
            swap(A[i], A[min]);         // swap once, after scanning the whole suffix
        }
    }
}
```

Space complexity: O(1)

Time complexity: O(n²)

Simple selection sort is an unstable sort method

##### Heap sort

Heap sort is a tree-based selection sort. The key step is constructing the initial heap: for a complete binary tree with n nodes, sift down the subtrees rooted at nodes ⌊n/2⌋, ⌊n/2⌋-1, …, 1 in turn (for a max-heap, if the root's key is smaller than the larger of its two children, swap them and continue downward).

```c
// Build a max-heap
void BuildMaxHeap(ElemType A[], int len) {
    for (int i = len/2; i > 0; i--) {
        AdjustDown(A, i, len);
    }
}

void AdjustDown(ElemType A[], int k, int len) {
    int i;
    A[0] = A[k];                        // A[0] temporarily stores the subtree root
    for (i = 2*k; i <= len; i *= 2) {
        if (i < len && A[i] < A[i+1]) {
            i++;                        // index of the larger child
        }
        if (A[0] >= A[i]) {
            break;                      // heap property already holds
        } else {
            A[k] = A[i];                // move the larger child up
            k = i;                      // continue sifting down
        }
    }
    A[k] = A[0];
}
```

Building the heap takes O(n) time.

Heap sort algorithm:

```c
void HeapSort(ElemType A[], int len) {
    int i;
    BuildMaxHeap(A, len);
    for (i = len; i > 1; i--) {
        Swap(A[i], A[1]);               // move the heap top to the end of the unsorted part
        AdjustDown(A, 1, i - 1);        // restore the heap on the remaining elements
    }
}
```

Space complexity: O(1)

Time complexity: O(nlog₂n)

Heap sort is an unstable sort method

### Merge sort

Combine two or more ordered tables into a new ordered table

```c
ElemType *B = (ElemType *)malloc((n+1) * sizeof(ElemType));   // auxiliary array

void Merge(ElemType A[], int low, int mid, int high) {
    int i, j, k;
    for (k = low; k <= high; k++) {
        B[k] = A[k];                    // copy A[low..high] into B
    }
    for (i = low, j = mid + 1, k = i; i <= mid && j <= high; k++) {
        if (B[i] <= B[j]) {
            A[k] = B[i++];              // take the smaller value back into A
        } else {
            A[k] = B[j++];
        }
    }
    while (i <= mid)  A[k++] = B[i++];  // copy whatever remains of either half
    while (j <= high) A[k++] = B[j++];
}

void MergeSort(ElemType A[], int low, int high) {
    if (low < high) {
        int mid = (low + high) / 2;
        MergeSort(A, low, mid);
        MergeSort(A, mid + 1, high);
        Merge(A, low, mid, high);
    }
}
```

Space complexity: O(n)

Time complexity: O(nlog₂n)

2-way merge sort is a stable sort method

### Radix sort

Radix sort is based on the idea of multi-keyword sorting: records are distributed into buckets and collected back once per keyword (digit), without ever comparing keys directly.

Space complexity: O(r)

Time complexity: O(d(n+r)), where d is the number of keywords (digits) and r the radix; it is independent of the initial state of the sequence.
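A minimal LSD sketch of the distribute-and-collect idea, assuming non-negative integer keys, radix r = 10, and a small fixed buffer (the function name `RadixSort` and the size limit are illustrative, not from the notes):

```c
#include <string.h>

// LSD radix sort for non-negative ints with radix r = 10:
// one stable counting pass per decimal digit, d passes total -> O(d(n + r)).
void RadixSort(int A[], int n, int max_val) {
    int B[64];                                            // output buffer (sketch assumes n <= 64)
    for (int exp = 1; max_val / exp > 0; exp *= 10) {
        int count[10] = {0};
        for (int i = 0; i < n; i++)                       // distribute: count digit occurrences
            count[(A[i] / exp) % 10]++;
        for (int d = 1; d < 10; d++)                      // prefix sums give bucket end positions
            count[d] += count[d-1];
        for (int i = n - 1; i >= 0; i--)                  // collect back-to-front => stable
            B[--count[(A[i] / exp) % 10]] = A[i];
        memcpy(A, B, n * sizeof(int));
    }
}
```

Because each pass is stable, earlier (lower-order) digit orderings survive later passes, which is exactly the multi-keyword idea.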

### External sorting

External sorting deals with files too large to fit in memory at once, so the data resides on external storage, and records must be exchanged between memory and external storage many times during sorting.

External sorting usually adopts merge sort. Based on the size of the memory buffer, the file of n records on external storage is divided into subfiles of length h. Each subfile is read into memory in turn, sorted with an efficient internal sorting method, and written back to external storage as an ordered subfile (a merge segment, or run). These merge segments are then merged pass by pass until the whole file is ordered.

To improve the speed of external sorting, the number of disk I/O operations must be reduced.

##### Multiway balanced merging and the loser tree

Increasing the number of merge ways m reduces the number of merge passes S, and thus the number of disk I/O operations. However, a larger m also increases the time of each internal merge (selecting the minimum among m candidates).

To offset the cost of a larger m during internal merging, a loser tree is introduced. It can be viewed as a complete binary tree: the leaves hold the current record of each merge segment, and each internal node remembers the "loser" of the comparison between its two subtrees, while the winner moves up to be compared again until the overall winner reaches the root.
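A compact sketch of the loser tree for K-way merging of sorted integer runs (the names `Adjust`, `CreateLoserTree`, `KMerge` and the fixed K = 3 are illustrative assumptions; exhausted runs are marked with `INT_MAX`):

```c
#include <limits.h>

#define K 3

static int ls[K];      // internal nodes: indices of losing segments; ls[0] = overall winner
static int b[K + 1];   // current key of each segment; b[K] is a MINKEY sentinel

// Replay comparisons along the path from leaf s to the root: O(log K).
static void Adjust(int s) {
    for (int t = (s + K) / 2; t > 0; t /= 2) {
        if (b[s] > b[ls[t]]) {             // s loses here: record it, old loser moves up
            int tmp = s; s = ls[t]; ls[t] = tmp;
        }
    }
    ls[0] = s;                             // the surviving winner
}

static void CreateLoserTree(void) {
    b[K] = INT_MIN;                        // sentinel "wins" every initial comparison
    for (int i = 0; i < K; i++) ls[i] = K;
    for (int i = K - 1; i >= 0; i--) Adjust(i);
}

// Merge K sorted runs into out[]; runs[i] holds len[i] records.
void KMerge(const int *runs[], const int len[], int out[]) {
    int pos[K];
    for (int i = 0; i < K; i++) {
        pos[i] = 0;
        b[i] = len[i] > 0 ? runs[i][0] : INT_MAX;   // INT_MAX marks an exhausted run
    }
    CreateLoserTree();
    int k = 0;
    while (b[ls[0]] != INT_MAX) {
        int q = ls[0];
        out[k++] = b[q];                   // output the winner's record
        pos[q]++;
        b[q] = pos[q] < len[q] ? runs[q][pos[q]] : INT_MAX;
        Adjust(q);                         // only one root-to-leaf path is replayed
    }
}
```

The point of the structure: after outputting the winner, only ⌈log₂K⌉ comparisons are needed to find the next one, instead of K-1 for a naive scan.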

##### Replacement selection sort (generating initial merge segments)

Reducing the number of initial merge segments r can also reduce the number of merge passes S. Replacement selection uses a workspace of w records to produce runs longer than w (about 2w on average for random input), so fewer, longer segments are generated.
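A naive sketch of replacement selection (a linear scan of the workspace stands in for the heap a real implementation would use; the workspace size `W` and the function name are illustrative assumptions):

```c
#include <limits.h>

#define W 3   // workspace size; a real external sort uses as many records as fit in memory

// Writes the records of each run, in order, into out[] and returns the run count.
int ReplacementSelection(const int in[], int n, int out[]) {
    int ws[W];                        // workspace slots
    int used[W] = {0};                // is the slot occupied?
    int next = 0, filled = 0, k = 0, runs = 0;
    while (next < n && filled < W) {  // load the initial workspace
        ws[filled] = in[next++];
        used[filled++] = 1;
    }
    int remaining = filled;
    long last = LONG_MIN;             // key last written to the current run
    if (remaining > 0) runs = 1;
    while (remaining > 0) {
        int best = -1;                // smallest key that can still extend the current run
        for (int i = 0; i < W; i++)
            if (used[i] && ws[i] >= last && (best < 0 || ws[i] < ws[best]))
                best = i;
        if (best < 0) {               // nothing fits: close this run, start the next
            runs++;
            last = LONG_MIN;
            continue;
        }
        out[k++] = ws[best];
        last = ws[best];
        if (next < n) ws[best] = in[next++];     // refill the slot from the input
        else { used[best] = 0; remaining--; }    // input exhausted: free the slot
    }
    return runs;
}
```

With W = 3, the input {17, 21, 5, 44, 10, 12, 56, 32, 29} yields only two runs, the first of length 5 — longer than the workspace, which is the point of the technique.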

##### Best merging tree

After replacement selection, the resulting initial merge segments generally have different lengths.

An m-way merge sort can be described by an m-ary tree. Applying the idea of the Huffman tree — merging the initial segments with the fewest records first and those with the most records last — produces the best merge tree, which minimizes the total number of disk I/O operations.
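The Huffman idea can be sketched as follows (the function name `BestMergeCost`, merge order `M`, and the fixed-size pool are assumptions; the returned value counts total record moves, which is proportional to I/O):

```c
#define M 3   // merge order

// Minimum total record moves of an M-way best merge tree, obtained by
// repeatedly merging the M shortest segments (Huffman construction).
// Zero-length virtual segments are padded in so every merge has exactly M inputs.
long BestMergeCost(const int len[], int n) {
    long cost = 0;
    long w[64];                                        // sketch assumes n + padding <= 64
    int cnt = 0;
    int pad = (M - 1 - (n - 1) % (M - 1)) % (M - 1);   // virtual segments needed
    for (int i = 0; i < n; i++)   w[cnt++] = len[i];
    for (int i = 0; i < pad; i++) w[cnt++] = 0;
    while (cnt > 1) {
        long sum = 0;
        for (int j = 0; j < M; j++) {                  // extract the M smallest weights
            int min_i = 0;
            for (int i = 1; i < cnt; i++)
                if (w[i] < w[min_i]) min_i = i;
            sum += w[min_i];
            w[min_i] = w[--cnt];                       // remove by swapping with the last
        }
        cost += sum;                                   // merging these runs moves `sum` records
        w[cnt++] = sum;                                // the merged run re-enters the pool
    }
    return cost;
}
```

For example, with segment lengths {1, 2, 3, 4, 5} and M = 3, the shortest three merge first (cost 6), then {4, 5, 6} merge (cost 15), for a total of 21 — each record is moved once per level of the tree it sits below.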

### Summary

Time complexity:

- O(n²): simple selection sort, direct insertion sort, bubble sort. All are simple to implement. In the best case (an already ordered sequence), direct insertion sort and bubble sort run in O(n); simple selection sort always takes O(n²), independent of the initial state of the sequence
- O(nlog₂n): heap sort, quick sort, merge sort. In the worst case, quick sort degrades to O(n²)
- Shell sort can be efficient on large inputs, but its exact asymptotic time complexity has not been determined

Spatial complexity:

- O(1): simple selection sort, insertion sort, bubble sort, Shell sort, heap sort
- O(log₂n): quick sort (recursion stack); O(n) in the worst case
- O(n): 2-way merge sort

Stability:

- Insertion sort, bubble sort, merge sort, and radix sort are stable sorting methods
- Simple selection sort, quick sort, Shell sort, and heap sort are unstable sorting methods