
30 Days of Algorithms – Day 3: Merge Sort

Merge sort is an efficient, general-purpose sorting algorithm. It is a Divide and Conquer algorithm introduced by John von Neumann in 1945, and it remains one of the most widely used sorting algorithms. Several programming languages, such as Java and Perl, use it in their built-in sorting functions.

This is the third day of the 30 Days of Algorithms series. On day 2, we discussed insertion sort. You can check that out too!

How Merge Sort Works

Merge sort follows the Divide and Conquer approach. First, we repeatedly divide the unsorted list into sublists until we get single-element lists. Then we sort the sublists and combine them, and finally we obtain the sorted list by combining all the sublists. Suppose we have the following list to sort in ascending order; using it, we'll get an overview of how merge sort works.

Let the initial index of the list be \(i\) and the last index be \(l\). Then the middle point of the list is \(m = \left\lfloor \frac{i + l}{2} \right\rfloor\). If \(i < l\), we divide the list into two parts: one from \(i\) to \(m\) and another from \(m+1\) to \(l\). This is the Divide phase. It continues until we reach the base case, i.e. until every sublist contains only one element.

In the Conquer phase, we sort the sublists. While merging two sorted sublists, we first compare their first elements and take the smaller one. We then move to the next element of the list from which the smaller one was taken, compare again, and take the smaller one. This continues until all the elements have been considered and a single sorted list is created. This step is called combining. The entire process is illustrated in the following figure:

How the sorting is done and how the sublists are combined is shown in the following illustration:
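For instance, sorting the list [38, 27, 43, 10] (a made-up example) would proceed like this:

Divide:   [38, 27, 43, 10]
          [38, 27]      [43, 10]
          [38]  [27]    [43]  [10]
Combine:  [27, 38]      [10, 43]
          [10, 27, 38, 43]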

The Merge Sort Algorithm

We divide the algorithm into two parts: one recursively divides the list, and the other sorts the sublists and combines them. Pseudocode for dividing the list can be written as:

mergeSort(A, i, l):
  if i >= l:
    return
  end if
  m := floor((i + l) / 2)
  mergeSort(A, i, m)
  mergeSort(A, m+1, l)
  merge(A, i, l, m)
end mergeSort

Now the next part, merge(), will sort the sublists and combine them into a single list. The pseudocode of merge() is as follows:

merge(A, i, l, m):
  sizeOfList1 := m - i + 1
  sizeOfList2 := l - m
  
  list1 := A[i to m inclusive]
  list2 := A[m+1 to l inclusive]

  k := 0, j := 0, index := i
  while k < sizeOfList1 and j < sizeOfList2 do:
    if list1[k] <= list2[j]:
      A[index] := list1[k]
      k := k + 1
    else:
      A[index] := list2[j]
      j := j + 1
    end if
    index := index + 1
  end while

  while k < sizeOfList1:
    A[index] := list1[k]
    k := k + 1
    index := index + 1
  end while

  while j < sizeOfList2:
    A[index] := list2[j]
    j := j + 1
    index := index + 1
  end while
end merge

Implementation

A Python implementation following the pseudocode above is sketched below (the names mirror the pseudocode):
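def merge_sort(A, i, l):
    """Sort A[i..l] (both indices inclusive) in place."""
    if i >= l:
        return
    m = (i + l) // 2           # floor of the midpoint
    merge_sort(A, i, m)        # recursively sort the left half
    merge_sort(A, m + 1, l)    # recursively sort the right half
    merge(A, i, l, m)          # combine the two sorted halves


def merge(A, i, l, m):
    """Merge the sorted sublists A[i..m] and A[m+1..l]."""
    list1 = A[i:m + 1]         # copy of the left sublist
    list2 = A[m + 1:l + 1]     # copy of the right sublist

    k = j = 0
    index = i
    while k < len(list1) and j < len(list2):
        if list1[k] <= list2[j]:   # <= keeps the sort stable
            A[index] = list1[k]
            k += 1
        else:
            A[index] = list2[j]
            j += 1
        index += 1

    while k < len(list1):      # copy any leftovers from list1
        A[index] = list1[k]
        k += 1
        index += 1

    while j < len(list2):      # copy any leftovers from list2
        A[index] = list2[j]
        j += 1
        index += 1


numbers = [38, 27, 43, 10, 9, 82, 3]
merge_sort(numbers, 0, len(numbers) - 1)
print(numbers)  # [3, 9, 10, 27, 38, 43, 82]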

Complexity Analysis of Merge Sort

Merge sort completes the whole process in three steps or phases: first it divides the list into sublists, then it conquers (sorts) the sublists, and lastly it combines the sublists.

The dividing phase takes constant time, because it only calculates the midpoint \(m\) of \(i\) and \(l\). So the time complexity of this step is \(O(1)\).

Now let's come to the last phase, combining the sublists. This step takes \(O(n)\) time to merge a total of \(n\) elements.

Now let’s think about the dividing and combining together. The dividing step takes \(O(1)\) time and the combining step takes \(O(n)\) time; the \(O(1)\) part is the lower-order term, so together these steps take \(O(n)\) time. Let’s say the dividing and combining steps together take \(c_1n\) time for some constant \(c_1\).

Now come to the remaining part: conquering, i.e. recursively sorting and merging. For simplicity, let’s assume \(n\) is always even whenever \(n > 1\), so that \(n/2\) is an integer. Then the running time on a list of size \(n\) is twice the running time on an \(n/2\)-sized sublist, plus \(cn\) for the dividing and combining phases. The total merging time for the two \(n/2\)-sized sublists is \(2 \cdot \frac{cn}{2} = cn\).
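Written as a recurrence, with \(T(n)\) denoting the running time on a list of size \(n\) (the symbol \(T\) is introduced here for convenience), this says:

\(
\begin{equation}
T(n) = 2T\left(\frac{n}{2}\right) + cn
\end{equation}
\)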

Next, two sublists of size \(n/4\) are generated from each sublist of size \(n/2\). So the running time on a sublist of size \(n/2\) is twice the running time on an \(n/4\)-sized sublist, plus \(c\frac{n}{2}\) for its dividing and combining phases. The total merging time at this level is \(4 \cdot \frac{cn}{4} = cn\), as there are four sublists of size \(n/4\).

This continues until we reach a level where every sublist contains only one element. This can be illustrated by the following figure, taken from Khan Academy's website.

Here we can see that the total merging time required at each level is \(cn\). Previously we saw that \(c_1n\) time is required for dividing and combining. So the total time taken at each level is \(cn + c_1n = c_kn\), where \(c + c_1 = c_k\).

Now, if there are \(l\) levels, the total required time will be \(l \cdot c_kn\). If there are \(n\) elements in the list, there are \(\log_2n + 1\) levels in the Divide and Conquer tree, so here \(l = \log_2n + 1\). The total time can then be written as:

\(
\begin{equation}
\begin{split}
l \cdot c_kn &= c_kn(\log_2n + 1)\\
&= c_kn\log_2n + c_kn
\end{split}
\end{equation}
\)

So, ignoring lower-order terms and constant coefficients, we can say the time complexity is \(O(n\log_2n)\).

Time Complexities

Merge sort always divides the list in half and takes linear time to merge, so its running time is the same in every case:

Best case: \(O(n\log_2n)\)
Average case: \(O(n\log_2n)\)
Worst case: \(O(n\log_2n)\)

Space Complexity

The space complexity is \(O(n)\), because the merge step copies the sublists into temporary arrays: each of the two copies holds at most about \(n/2\) elements, and at the top-level merge they together hold all \(n\) elements.

Applications of Merge Sort

Merge sort is used in the inversion count problem. The inversion count of an array indicates how far the array is from being sorted.
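A minimal sketch of counting inversions with a modified merge step is shown below (the function name count_inversions and its return convention are mine, not from the post):

def count_inversions(A):
    """Return (sorted copy of A, number of inversions) in O(n log n) time."""
    if len(A) <= 1:
        return list(A), 0
    mid = len(A) // 2
    left, inv_left = count_inversions(A[:mid])
    right, inv_right = count_inversions(A[mid:])
    merged = []
    inversions = inv_left + inv_right
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            # every element still waiting in `left` is greater than right[j],
            # so each of them forms an inversion with right[j]
            inversions += len(left) - i
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged, inversions

print(count_inversions([2, 4, 1, 3, 5])[1])  # 3 inversions: (2,1), (4,1), (4,3)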

It is also used in external sorting, which handles amounts of data too large to fit in main memory (RAM); slower external memory, such as disk, is used instead.

It is also used in the built-in sorting functions of some programming languages, such as Java and Perl.

Merge sort is also helpful for sorting linked lists. There, it is possible to implement it so that it takes only \(O(1)\) extra space.
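For instance, the combine step on a linked list can relink existing nodes instead of copying elements into extra arrays. A minimal sketch of merging two already-sorted linked lists this way is shown below (the Node class and merge_linked name are illustrative assumptions, not from the post):

class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def merge_linked(a, b):
    """Merge two sorted linked lists by relinking nodes: O(1) extra space."""
    dummy = Node(None)          # temporary head to simplify the loop
    tail = dummy
    while a and b:
        if a.value <= b.value:
            tail.next, a = a, a.next
        else:
            tail.next, b = b, b.next
        tail = tail.next
    tail.next = a or b          # attach whatever remains
    return dummy.next

# 1 -> 3 -> 5 merged with 2 -> 4 gives 1 -> 2 -> 3 -> 4 -> 5
head = merge_linked(Node(1, Node(3, Node(5))), Node(2, Node(4)))
while head:
    print(head.value, end=" ")
    head = head.next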
