# Ten classic sorting algorithms, explained in one article

Posted Jun 26, 2020 • 16 min read

WeChat public account: Xiaochao Says

If you find this article helpful, please share it

I think everyone's journey of learning algorithms begins with the various sorting algorithms. Indeed, their wide applicability and simple, concise foundations make sorting algorithms the best starting point for beginners. So today I will take you through the classic sorting algorithms. I hope it helps you!

Our convention: all sorting algorithms in this article operate on integer arrays and sort in ascending order

### 1. Bubble sort

**Bubble sort**, as the name suggests, lets data rise step by step like bubbles. The general idea: we perform n rounds of bubbling on the given array. Each round compares adjacent pairs of items; if the item on the left is greater than the item on the right, we exchange them. You can picture the result: after one full round of comparisons (with exchanges where needed), the **largest item** has necessarily moved to the far right of the array. We then repeat this process, and after n rounds of bubbling all the data is sorted. That is the idea of bubble sort.

We can also add an optimization. For example, define a variable named `flag` to record whether any elements were exchanged during a round of bubbling. If a round performs `0` **element exchanges**, all elements are already in order, and we can terminate the sort early.

### The specific code is as follows:

```
//Bubble sort, a is an array, n is the length of the array
public static void bubbleSort(int a[], int n){
    if(n<=1) return;
    for(int i=0;i<n;++i){
        //Flag to exit bubble sort early
        boolean flag=false;
        for(int j=1;j<n-i;++j){
            if(a[j]<a[j-1]){
                int tmp=a[j];a[j]=a[j-1];a[j-1]=tmp;//Exchange elements
                flag=true;
            }
        }
        if(flag==false) break;//No exchanges in this round: already in order, stop
    }
}
```

### Features of Bubble Sort

- The best-case time complexity is O(n): the array is already ordered and only one bubbling round is needed. The worst-case time complexity is O(n^2): n bubbling rounds are required to complete the sort. The space complexity is O(1), so bubble sort is an in-place sorting algorithm
- Bubble sort is a stable sorting algorithm, because we can ensure that **two equal elements are never exchanged**, which guarantees stability. (Stability here means the sort does not change the relative positions of equal elements in the array: if `a==b` and a appears before b in the array, then a is still before b after sorting.)
- The number of exchange operations bubble sort performs equals the number of inversions in the array (its degree of disorder)
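The inversion claim above can be checked directly. Here is a small self-contained sketch (the counting helpers are illustrative additions of mine, not code from the article): one method counts the swaps bubble sort actually performs, the other counts inversions by brute force, and the two counts agree.

```java
public class BubbleInversions {
    // Bubble sort instrumented to count how many swaps it performs
    static long bubbleSortCountingSwaps(int[] a) {
        long swaps = 0;
        for (int i = 0; i < a.length; ++i) {
            boolean flag = false;
            for (int j = 1; j < a.length - i; ++j) {
                if (a[j] < a[j - 1]) {
                    int tmp = a[j]; a[j] = a[j - 1]; a[j - 1] = tmp;
                    ++swaps;
                    flag = true;
                }
            }
            if (!flag) break;
        }
        return swaps;
    }

    // Count inversions directly: pairs (i, j) with i < j and a[i] > a[j]
    static long countInversions(int[] a) {
        long inv = 0;
        for (int i = 0; i < a.length; ++i)
            for (int j = i + 1; j < a.length; ++j)
                if (a[i] > a[j]) ++inv;
        return inv;
    }

    public static void main(String[] args) {
        int[] data = {5, 1, 4, 2, 8};
        long inv = countInversions(data);          // counted before sorting
        long swaps = bubbleSortCountingSwaps(data);
        System.out.println(inv + " " + swaps);     // prints "4 4"
    }
}
```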

### 2. Selection sort

The idea of selection sort is very simple. The process is as follows:

- First scan the array to find the smallest element, then exchange it with the first element (if the first element is already the smallest, it is exchanged with itself)
- Find the smallest element in the remaining array, then exchange it with the second element
- Repeat n-1 times and the array is sorted

### The specific code is as follows:

```
public static void selectSort(int[] a, int n){
    for(int i=0;i<n;++i){
        int min=i;//min is the index of the smallest element
        //Find the smallest element in a[i..n-1]
        for(int j=i;j<n;++j){
            if(a[j]<a[min]) min=j;
        }
        //Exchange positions
        int tmp=a[min];a[min]=a[i];a[i]=tmp;
    }
}
```

### Features of selection sort

- The best, worst, and average time complexity are all O(n^2); the space complexity is O(1), so it is an in-place sorting algorithm
- Selection sort is not a stable sorting algorithm, because each exchange can change the relative positions of equal elements
- The running time is independent of the input, because each scan provides no useful information for the next scan
- Data movement is minimal: the number of exchanges is at most n, i.e., linear in the length of the array. Few other sorting methods share this linear property
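The linear exchange count can be observed by instrumenting the sort. A small illustrative sketch (the counter is my addition, not from the article); the implementation above performs exactly one exchange per round, so n rounds give exactly n exchanges (some of which may be self-exchanges):

```java
public class SelectSortSwaps {
    // Selection sort instrumented to count exchanges (illustrative helper)
    static int selectSortCountingSwaps(int[] a) {
        int swaps = 0;
        for (int i = 0; i < a.length; ++i) {
            int min = i;
            for (int j = i; j < a.length; ++j)
                if (a[j] < a[min]) min = j;
            int tmp = a[min]; a[min] = a[i]; a[i] = tmp; // one exchange per round
            ++swaps;
        }
        return swaps;
    }

    public static void main(String[] args) {
        int[] data = {9, 3, 7, 1, 6, 2};
        int swaps = selectSortCountingSwaps(data);
        System.out.println(swaps); // prints "6": exactly data.length exchanges
    }
}
```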

### 3. Insertion sort

Now recall how you play poker. Each time you take a card from the table, you insert it into the proper position in your hand; after repeating this, the cards in your hand end up in order. That is exactly the idea of insertion sort. Imagine dividing the array into a **sorted interval** in front and an **unsorted interval** behind. We traverse the n elements in sequence, inserting one element at a time into its **appropriate** position in the sorted interval. Repeating this eventually sorts the whole array.

In a computer, the situation is a bit more complicated. As everyone knows, inserting an element into an array requires **moving data**, and that is the core of insertion sort: `determine the position of the insertion point and move the data`. There are two ways to implement this:

- Repeatedly exchange adjacent elements until the new element reaches its position
- Determine the position first, then shift the data over, and finally insert

### The specific code is as follows:

```
//The first approach uses exchanges to simplify the sorting process
public static void insertionSort1(int[] a, int n){
    for(int i=0;i<n;++i){
        for(int j=i;j>0&&a[j]<a[j-1];--j){
            int tmp=a[j];a[j]=a[j-1];a[j-1]=tmp;//Exchange elements
        }
    }
}
//The second approach moves the data first, then inserts
public static void insertionSort2(int[] a, int n){
    for(int i=0;i<n;++i){
        int c=a[i],j;
        //Determine where c should be inserted; stopping at the first a[j]<=c keeps the sort stable
        for(j=i-1;j>=0&&a[j]>c;--j) ;
        for(int k=i;k>=j+2;--k)//Move the data one step right
            a[k]=a[k-1];
        a[j+1]=c;//Insert the saved a[i]
    }
}
```

### Features of Insert Sort

- Time complexity: the best case is O(n), when the data is already ordered and we only need to **traverse the ordered data from beginning to end**. The worst case is O(n^2), when the array is in reverse order and every insertion lands at the first position of the array. Average case: **each insert operation amounts to inserting one item into an array**, and since **the average cost of inserting into an array is O(n)**, performing n insert operations gives an average time complexity of O(n^2)
- The space complexity is O(1); insertion sort is an in-place sorting algorithm
- Insertion sort is a stable sorting algorithm: for elements with equal values, we insert the later-appearing element after the earlier one, preserving their original order
- The time insertion sort needs depends on the initial order of the input. If the initial sequence is mostly ordered, the amount of **exchanging** or **data movement** shrinks and the running time shortens; the reverse makes it longer
- The number of exchange operations insertion sort needs equals the number of inversions in the array
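As a variation on "determine the location first, then move the data", the insertion point can also be found with binary search over the already-sorted prefix. This is a sketch of that idea (binary insertion sort), my illustration rather than the article's code; it reduces comparisons, although the data movement per insert is still O(n):

```java
import java.util.Arrays;

public class BinaryInsertionSort {
    public static void insertionSort(int[] a) {
        for (int i = 1; i < a.length; ++i) {
            int c = a[i];
            // Binary search in the sorted prefix a[0..i-1] for the first
            // position holding an element greater than c (keeps the sort stable)
            int lo = 0, hi = i;
            while (lo < hi) {
                int mid = (lo + hi) / 2;
                if (a[mid] <= c) lo = mid + 1;
                else hi = mid;
            }
            // Shift a[lo..i-1] one step right, then drop c into place
            for (int k = i; k > lo; --k) a[k] = a[k - 1];
            a[lo] = c;
        }
    }

    public static void main(String[] args) {
        int[] data = {4, 2, 5, 1, 3};
        insertionSort(data);
        System.out.println(Arrays.toString(data)); // prints "[1, 2, 3, 4, 5]"
    }
}
```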

### 4. Shell sort

We can understand Shell sort as a `heavily modified` insertion sort. Although the two go by different names (what does Shell have to do with insertion?!), they are inextricably linked. Let's take it slowly:

I don't know whether you have noticed a fatal shortcoming of insertion sort: **it can only exchange adjacent elements, one step at a time!** Imagine a scenario: if the smallest element of the array sits at the far right, it takes **n exchanges** to walk it home! That is unbearable, especially for larger arrays. So people improved on insertion sort so that it **can exchange two elements that are far apart**. This feature greatly improves efficiency, and the result was given a new name: `Shell sort`.

After the narrative above, Shell sort is probably no stranger to you. In its implementation we work with an `h-ordered array`, that is, an array in which every subsequence of elements h apart is ordered. We control the value of h so that it gradually decreases, **the ordered spacing shrinks step by step**, and we finish with h=1 (plain insertion sort), which **guarantees that the final result is ordered**. The specific code is shown below:

### The specific code is as follows:

```
//Shell sort
public static void shellSort(int[] a, int n){
    int h=1;
    while(h<=n/3) h=3*h+1;//Initialize the gap h with the sequence 1, 4, 13, 40, ...
    while(h>=1){
        for(int i=h;i<n;++i){
            for(int j=i;j>=h&&a[j]<a[j-h];j=j-h){
                int tmp=a[j];a[j]=a[j-h];a[j-h]=tmp;//Exchange elements
            }
        }
        h=h/3;//Shrink the gap
    }
}
```

### Features of Shell sort

- The time complexity of Shell sort depends on the gap sequence `h` we choose; the code above uses the 3h+1 sequence. One thing is certain, though: `Shell sort's time complexity is below the quadratic level`. This is a useful conclusion that shows the advantage of Shell sort; with the 3h+1 gap sequence used here, the worst-case time complexity is `O(n^(3/2))`. A small change to insertion sort thus breaks through the quadratic running time
- Shell sort is an in-place sorting algorithm with a space complexity of O(1)
- Shell sort is not a stable sorting algorithm: exchanging elements that are far apart can change the relative order of equal elements
- Shell sort performs better on larger arrays. This is not surprising; remember our initial trouble? **Insertion sort can only exchange adjacent elements**, and Shell sort solves exactly that problem

### 5. Merge sort

Merging, simply put, means **combining while maintaining order**. To sort an array a, we can follow these steps:

- Split array a into arrays b and c
- Sort array b and array c separately
- **Merge** the two ordered sub-arrays b and c into one large ordered array

In this way, the array a naturally becomes ordered. Using recursive thinking, we can easily think of the following sorting ideas:

```
//Merge sort
private static void sort(int[] a, int lo, int hi){
    if(lo>=hi) return;
    int mid=(lo+hi)/2;
    sort(a,lo,mid);//Sort the left half
    sort(a,mid+1,hi);//Sort the right half
    merge(a,lo,mid,hi);//Merge the results
}
```

The code above cleverly uses recursion to sort array a. We only need to focus on how the merge step is implemented: **given that the two sub-arrays are each sorted in ascending order, combine them into one ordered array a**. We use an `auxiliary array aux[]` to implement the `merge algorithm`; the specific code is as follows:

```
//Merge algorithm: merge the sorted a[lo..mid] with the sorted a[mid+1..hi]
private static void merge(int[] a, int lo, int mid, int hi){
    //Copy the elements of a into the auxiliary array aux
    for(int i=lo;i<=hi;++i)
        aux[i]=a[i];
    //Merge back into a
    int i=lo,j=mid+1;
    for(int k=lo;k<=hi;++k){
        if(i>mid) a[k]=aux[j++];
        else if(j>hi) a[k]=aux[i++];
        else if(aux[j]<aux[i]) a[k]=aux[j++];
        else a[k]=aux[i++];
    }
}
```

Finally, our merge sort code is as follows:

```
//The final code of merge sort
private static int[] aux;
public static void mergeSort(int[] a, int n){
    aux=new int[n];//Allocate the array used by the merge algorithm
    sort(a,0,n-1);
}
```

To sum up the idea of merge sort: the `merge function` is implemented with the auxiliary array; the `sort function` embeds the `merge function`, and sorting is achieved by calling the `sort function` recursively.

The merge sort implemented above is called **top-down merge sort**: we first split the large array, then sort the small arrays. There is another version, **bottom-up merge sort**: we start with the smallest sub-arrays and merge them one by one, then two by two, until finally all elements are merged. The specific code is as follows:

```
public static void mergeSort2(int[] a, int n){
    //Perform log n passes of pairwise merges
    aux=new int[n];
    //size is the length of a sub-array, lo is the index of its first element
    for(int size=1;size<n;size=size*2){
        for(int lo=0;lo<n-size;lo+=size+size){
            merge(a,lo,lo+size-1,Math.min(lo+size+size-1,n-1));
        }
    }
}
```

The idea is to merge from the bottom up: after each pass over the whole array the sub-array size doubles, and the last merge covers the entire array.

### Features of merge sort

Let's first analyze the time complexity. Suppose merge-sorting n elements takes time T(n), so sorting each of the two sub-arrays takes T(n/2). The merge() function merges two ordered sub-arrays in O(n) time. The recurrence for merge sort's running time is therefore:

`T(1) = C (when n=1, only constant-level execution time is needed); T(n) = 2*T(n/2) + n (for n>1)`

Then, we can get the conclusion through mathematical derivation:
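As a sketch of that derivation (assuming n is a power of 2), expand the recurrence level by level:

```
T(n) = 2*T(n/2) + n
     = 4*T(n/4) + 2n
     = 8*T(n/8) + 3n
     ...
     = 2^k * T(n/2^k) + k*n

Stop when n/2^k = 1, i.e. k = log2(n):

T(n) = n*C + n*log2(n)  =>  T(n) = O(n log n)
```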

**The time complexity of merge sort is O(nlogn)**. Moreover, the execution efficiency of merge sort is independent of the initial order of the array to be sorted, so its time complexity is **very stable**: best case, worst case, and average case are all O(nlogn). Because the merge step uses the auxiliary array aux of length n, the space complexity is O(n), so merge sort is not an in-place sorting algorithm.

Merge sort is a stable sorting algorithm, because during the merge we can ensure that **when two elements are equal, the one from the front sub-array is placed first**.

### 6. Quick sort

Quick sort is a very widely used sorting algorithm. Let's analyze its idea:

- Find a pivot element **K** and partition the original array a into two parts b and c, such that every element of b is less than **K** and every element of c is greater than **K**; that is, `a[0..n-1] ---> b + K + c`
- Sort array b and array c separately

The specific code is as follows:

```
//Implementation of quick sort
public static void quickSort(int[] a, int n){
    sort(a,0,n-1);
}
//Sort method
private static void sort(int[] a, int lo, int hi){
    if(lo>=hi) return;
    int i=partition(a,lo,hi);//Partition; a[i] is now in its final position
    sort(a,lo,i-1);//Sort the left half
    sort(a,i+1,hi);//Sort the right half
}
```

The main problem in quick sort is implementing the **partition method**. Our approach:

- Choose the leftmost element of the range to be partitioned as the pivot
- Partition by scanning with a left pointer and a right pointer

We scan from the left end of the array to the right until we find an element greater than or equal to the pivot, then scan from the right end to the left until we find an element less than or equal to it, and exchange the two. This guarantees that the elements to the left of the left pointer `i` are never greater than the pivot, and the elements to the right of the right pointer `j` are never less than it. When the two pointers meet, we only need to exchange the pivot `a[lo]` with the rightmost element of the left sub-array, `a[j]`, and return the partition position `j`.

The specific code is as follows:

```
private static int partition(int[] a, int lo, int hi){
    int c=a[lo];//Pivot
    int i=lo,j=hi+1;//Left and right scan pointers
    while(true){
        while(a[--j]>=c) if(j==lo) break;
        while(a[++i]<=c) if(i==hi) break;
        if(i>=j) break;
        int tmp=a[j];a[j]=a[i];a[i]=tmp;//Exchange elements
    }
    a[lo]=a[j];a[j]=c;//Put the pivot c into its final position
    return j;//Return the position of the pivot
}
```

### Features of Quick Sort

The time complexity of quick sort depends heavily on where the partition point lands. Think about it: if the array to be sorted is completely reversed, say 4, 3, 2, 1, we need about n partition operations to finish the whole sort, and each partition scans about n/2 elements on average. In this case the time complexity of quick sort is O(n^2).

In general, if the partition point lands near the middle, quick sort obeys the same recurrence as merge sort:

`T(1) = C (when n=1, only constant-level execution time is needed); T(n) = 2*T(n/2) + n (for n>1)`

so the time complexity is O(nlogn). In most cases T(n) is indeed O(nlogn); only in extreme cases does it degrade to O(n^2). We can also randomize the array in advance to avoid such extreme situations.

Quick sort sorts within the original array, using only the recursion stack as extra space, so it is an in-place sorting algorithm.

Quick sort is not a stable sorting algorithm.
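The pre-randomization mentioned above can be done with a Fisher-Yates shuffle before sorting. A self-contained illustrative sketch (the shuffle helper is my addition; the partition here follows the standard two-pointer scheme rather than the article's exact code):

```java
import java.util.Arrays;
import java.util.Random;

public class RandomizedQuickSort {
    // Fisher-Yates shuffle: destroys any adverse input order before sorting
    static void shuffle(int[] a) {
        Random rnd = new Random();
        for (int i = a.length - 1; i > 0; --i) {
            int r = rnd.nextInt(i + 1);
            int tmp = a[i]; a[i] = a[r]; a[r] = tmp;
        }
    }

    public static void quickSort(int[] a) {
        shuffle(a); // makes the O(n^2) worst case very unlikely
        sort(a, 0, a.length - 1);
    }

    private static void sort(int[] a, int lo, int hi) {
        if (lo >= hi) return;
        int j = partition(a, lo, hi);
        sort(a, lo, j - 1);
        sort(a, j + 1, hi);
    }

    private static int partition(int[] a, int lo, int hi) {
        int c = a[lo];
        int i = lo, j = hi + 1;
        while (true) {
            while (a[++i] < c) if (i == hi) break;
            while (a[--j] > c) if (j == lo) break;
            if (i >= j) break;
            int tmp = a[i]; a[i] = a[j]; a[j] = tmp;
        }
        a[lo] = a[j]; a[j] = c;
        return j;
    }

    public static void main(String[] args) {
        int[] data = {4, 3, 2, 1}; // fully reversed: the bad case for a fixed pivot
        quickSort(data);
        System.out.println(Arrays.toString(data)); // prints "[1, 2, 3, 4]"
    }
}
```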

### 7. Heap sort

Because heap sort is built on the **heap** data structure, I plan to explain it in detail when introducing heaps, so I will skip the full treatment for now.
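Since the detailed treatment is deferred, here is only a minimal self-contained sketch of the idea, assuming a max-heap stored in the array itself (the helper names are illustrative): first heapify the array, then repeatedly move the maximum to the end and shrink the heap.

```java
import java.util.Arrays;

public class HeapSort {
    public static void heapSort(int[] a) {
        int n = a.length;
        // Build a max-heap: sift down every non-leaf node, from the last one up
        for (int i = n / 2 - 1; i >= 0; --i) siftDown(a, i, n);
        // Repeatedly move the current maximum a[0] to the end, then restore the heap
        for (int end = n - 1; end > 0; --end) {
            int tmp = a[0]; a[0] = a[end]; a[end] = tmp;
            siftDown(a, 0, end);
        }
    }

    // Restore the heap property for the subtree rooted at i, within a[0..size-1]
    private static void siftDown(int[] a, int i, int size) {
        while (2 * i + 1 < size) {
            int child = 2 * i + 1;
            if (child + 1 < size && a[child + 1] > a[child]) child++;
            if (a[i] >= a[child]) break;
            int tmp = a[i]; a[i] = a[child]; a[child] = tmp;
            i = child;
        }
    }

    public static void main(String[] args) {
        int[] data = {5, 9, 1, 7, 3};
        heapSort(data);
        System.out.println(Arrays.toString(data)); // prints "[1, 3, 5, 7, 9]"
    }
}
```

Heap sort runs in O(nlogn) in every case and is in-place, but it is not stable.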

### 8. Bucket sort

For example, suppose our task is to sort 40 numbers in the range 0 to 100. We can do this:

- Put the numbers from 0 to 20 into the first bucket
- Put the numbers from 21 to 40 into the second bucket
- Repeat until the 40 numbers are distributed into 5 buckets
- Sort the elements inside each bucket with some sorting method (this article uses quick sort)
- Take the numbers out of the buckets in order and put them back into the original array

After the above operations, has the original array become ordered?

We can compare bucket sort with quick sort. Quick sort splits the collection into two value ranges, which you can call two buckets, then sorts the two buckets separately, completing the sort. Bucket sort splits the collection into multiple buckets and sorts each bucket to complete the sort. The difference: quick sort sorts within the collection itself, i.e., in place, and the per-bucket sorting method is again quick sort; bucket sort provides extra operating space and sorts the buckets there, avoiding comparison and exchange operations while the buckets are being formed, and it can choose a suitable sorting algorithm for the contents of each bucket.

### The specific code is as follows:

The steps: build the buckets; distribute the data; sort the elements within each bucket; take the data out of the buckets in order.

```
public class BucketSort {
    /**
     * Bucket sort
     * @param arr array
     * @param bucketSize bucket capacity
     */
    public static void bucketSort(int[] arr, int bucketSize) {
        if (arr.length <= 1) {
            return;
        }
        //Minimum and maximum values of the array
        int minValue = arr[0];
        int maxValue = arr[0];
        for (int i = 0; i < arr.length; i++) {
            if (arr[i] < minValue) {
                minValue = arr[i];
            } else if (arr[i] > maxValue) {
                maxValue = arr[i];
            }
        }
        //Number of buckets
        int bucketCount = (maxValue - minValue) / bucketSize + 1;
        //Buckets (this simple layout assumes no bucket receives more than bucketSize elements)
        int[][] buckets = new int[bucketCount][bucketSize];
        //Number of elements in each bucket
        int[] indexArr = new int[bucketCount];
        //Distribute the values of the array into the buckets
        for (int i = 0; i < arr.length; i++) {
            //Work out which bucket the element arr[i] belongs in
            int bucketIndex = (arr[i] - minValue) / bucketSize;
            buckets[bucketIndex][indexArr[bucketIndex]++] = arr[i];
        }
        //Sort each bucket and write the results back; quick sort is used here
        int k = 0;
        for (int i = 0; i < buckets.length; i++) {
            if (indexArr[i] == 0) {
                continue;
            }
            //Sort the first indexArr[i] elements of this bucket with the quick sort defined earlier
            quickSort(buckets[i], indexArr[i]);
            for (int j = 0; j < indexArr[i]; j++) {
                arr[k++] = buckets[i][j];
            }
        }
    }
}
```

### Features of bucket sort

Bucket sort is very demanding of the data to be sorted. It requires a natural size relationship between the buckets themselves: only then, once the data inside each bucket is sorted, does the data across buckets need no further sorting. In addition, how evenly the data spreads over the buckets is an important factor for bucket sort: if the distribution is extremely uneven, bucket sort degrades from `O(n)` toward `O(nlogn)`.

- Best case: T(n) = O(n), when the input data can be evenly distributed across the buckets
- Worst case: T(n) = O(nlogn), when all the input data is allocated to the same bucket
- Average case: T(n) = O(n)

### 9. Counting sort

Counting sort can be seen as a special case of bucket sort in which the bucket size becomes 1. If our task is to sort 40 numbers in the range 0 to 20, we can allocate an array `book[21]`, traverse the input and store in `book[i]` the number of occurrences of the value `i`, and finally write the data back into the original array in order. Below are two different versions. The first is easier to understand but unstable; the second is a stable sorting algorithm, but harder to follow.

### Unstable counting sort:

The steps: find the data range and allocate the count array; traverse all elements of the array to fill in `book[i]`; write the result back into the array a.

```
public class CountingSort {
    //Counting sort: a is the array, n its size. Assume the array holds only non-negative integers.
    public static void countingSort(int[] a, int n) {
        if (n <= 1) return;
        //Find the range of the data in the array
        int max = a[0];
        for (int i = 1; i < n; ++i) {
            if (max < a[i]) {
                max = a[i];
            }
        }
        //Allocate a count array book with index range [0, max]
        int[] book = new int[max + 1];
        //Count the occurrences of each value
        for (int i = 0; i < n; ++i) {
            book[a[i]]++;
        }
        //Write the values back into a in ascending order
        int index = 0;
        for (int i = 0; i <= max; ++i) {
            for (int j = 0; j < book[i]; ++j) {
                a[index++] = i;
            }
        }
    }
}
```

### Stable counting sort

The steps: find the data range and allocate the count array; traverse all elements of the array to fill in `book[i]`; turn the counts into prefix sums; place the elements into a temporary array from back to front (the key step); copy the result from the temporary array back into a.

```
public class CountingSort {
    //Counting sort: a is the array, n its size. Assume the array holds only non-negative integers.
    public static void countingSort(int[] a, int n) {
        if (n <= 1) return;
        //Find the range of the data in the array
        int max = a[0];
        for (int i = 1; i < n; ++i) {
            if (max < a[i]) {
                max = a[i];
            }
        }
        //Allocate a count array book with index range [0, max]
        int[] book = new int[max + 1];
        //Count the occurrences of each value
        for (int i = 0; i < n; ++i) {
            book[a[i]]++;
        }
        //Turn the counts into prefix sums: book[i] is now the number of elements <= i
        for (int i = 1; i < max + 1; ++i) {
            book[i] = book[i - 1] + book[i];
        }
        //Temporary array r to hold the sorted result
        int[] r = new int[n];
        //The key step: scanning from the back keeps equal elements in their original order
        for (int i = n - 1; i >= 0; --i) {
            int index = book[a[i]] - 1;
            r[index] = a[i];
            book[a[i]]--;
        }
        //Copy the result back into a
        for (int i = 0; i < n; ++i) {
            a[i] = r[i];
        }
    }
}
```

### Features of counting sort

- Counting sort is only usable when the data range is not large. If the range k of the data is much larger than the number n of items to sort, counting sort is not appropriate
- Counting sort can only sort non-negative integers. Data of other kinds must first be converted to non-negative integers without changing relative order (for example, by adding the same number to every element)
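The "add a number" trick for negative values can be sketched like this: shift every element by the minimum so all values become non-negative, count-sort, then shift back when writing out. A minimal illustrative sketch of mine, not code from the article:

```java
import java.util.Arrays;

public class CountingSortWithNegatives {
    public static void countingSort(int[] a) {
        if (a.length <= 1) return;
        int min = a[0], max = a[0];
        for (int v : a) {
            if (v < min) min = v;
            if (v > max) max = v;
        }
        // Shift by -min so count indices fall in [0, max - min]
        int[] book = new int[max - min + 1];
        for (int v : a) book[v - min]++;
        int index = 0;
        for (int i = 0; i < book.length; ++i)
            for (int j = 0; j < book[i]; ++j)
                a[index++] = i + min; // shift back when writing out
    }

    public static void main(String[] args) {
        int[] data = {3, -2, 0, -5, 3, 1};
        countingSort(data);
        System.out.println(Arrays.toString(data)); // prints "[-5, -2, 0, 1, 3, 3]"
    }
}
```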

### 10. Radix sort

Radix sort is a non-comparative integer sorting algorithm. Its principle is to cut the integers into separate digits, then compare digit by digit.

In layman's terms: suppose you must compare several multi-digit numbers. What would you do? Surely start from the highest digit; if a's highest digit is the largest, then a is definitely the largest overall. That is roughly what radix sort does, though some details differ.

### Conditions of use

- The data can be divided into independent `digits` for comparison
- There is a priority relationship between the digits: if the higher digit of a is larger than that of b, the remaining digits need not be compared
- The range of each digit must not be too large, so that it can be sorted with a linear method; otherwise radix sort's time complexity cannot reach O(n)

### Schemes

There are two implementation schemes, depending on whether the high or low digits take priority:

- MSD: start from the high-order digit. First sort and group by k1; within a group the key k1 is equal. Then sort each group into subgroups by k2, and keep sorting and grouping by the following keys until the subgroups have been sorted by the least significant key kd; finally concatenate the groups to obtain an ordered sequence. MSD suits sequences with many digits.
- LSD: start from the low-order digit. First sort by kd, then by kd-1, repeating in sequence until sorting by k1 yields an ordered sequence. LSD suits sequences with few digits.

### Specific code implementation:

The steps: find the maximum value in the array; then, starting from the lowest digit, counting-sort by each digit.

```
public class RadixSort {
    /**
     * Radix sort
     * @param arr
     */
    public static void radixSort(int[] arr) {
        int max = arr[0];
        for (int i = 0; i < arr.length; i++) {
            if (arr[i] > max) {
                max = arr[i];
            }
        }
        //Starting from the one's place, sort the array arr digit by digit ("exponent" by "exponent")
        for (int exp = 1; max / exp > 0; exp *= 10) {
            countingSort(arr, exp);
        }
    }

    /**
     * Counting sort on a single digit of the numbers
     * @param arr
     * @param exp exponent selecting the digit
     */
    public static void countingSort(int[] arr, int exp) {
        if (arr.length <= 1) {
            return;
        }
        //Count the occurrences of each digit value
        int[] book = new int[10];
        for (int i = 0; i < arr.length; i++) {
            book[(arr[i] / exp) % 10]++;
        }
        //Turn the counts into positions
        for (int i = 1; i < book.length; i++) {
            book[i] += book[i - 1];
        }
        //Temporary array r to hold the sorted result; scan from the back for stability
        int[] r = new int[arr.length];
        for (int i = arr.length - 1; i >= 0; i--) {
            r[book[(arr[i] / exp) % 10] - 1] = arr[i];
            book[(arr[i] / exp) % 10]--;
        }
        for (int i = 0; i < arr.length; i++) {
            arr[i] = r[i];
        }
    }
}
```

### Features of radix sort

The space complexity of radix sort is O(n + k), so it is not an in-place sorting algorithm.

Radix sort does not change the relative order of equal elements, so it is a stable sorting algorithm.

For the per-digit sorting we can use the bucket sort or counting sort just described, whose time complexity is O(n). If the data to be sorted has k digits, we need k rounds of bucket or counting sort, for a total time complexity of O(k*n). When k is not large — for example when sorting phone numbers, where k is at most 11 — the time complexity of radix sort is approximately O(n).

- Best case: T(n) = O(n * k)
- Worst case: T(n) = O(n * k)
- Average case: T(n) = O(n * k)

Here k is the number of digits of the largest value in the sequence to be sorted.

### At last

Today's summary of sorting algorithms ends here; I hope everyone gained something from it. Writing is not easy, so please forward and share~

Off topic: for algorithm beginners I recommend a very nice book, **"Algorithms, 4th Edition"**, whose many diagrams are very detailed. If you need an electronic copy, reply **Algorithm 4** in the background to get the download link. Reply **Algorithm 01** in the background and I'll send you a mind map of algorithms and data structures. Finally, I hope we can make progress and grow together!