1. The order of growth of the running time of an algorithm gives a simple characterization of the algorithm’s efficiency and also allows us to compare the relative performance of alternative algorithms.

  2. When input sizes are large enough that only the order of growth of the running time matters, we are studying the asymptotic efficiency of algorithms.

  3. Usually, an algorithm that is asymptotically more efficient will be the best choice for all but very small inputs.

  4. We use the notation Θ(1) to mean either a constant or a constant function with respect to some variable.





  1. For a given function g(n), we denote by Θ(g(n)) the set of functions

  2. Θ(g(n)) = { f(n) : there exist positive constants c1, c2 and n0 such that 0 ≤ c1g(n) ≤ f(n) ≤ c2g(n) for all n ≥ n0 }

  3. A function f(n) belongs to the set Θ(g(n)) if there exist positive constants c1 and c2 such that it can be “sandwiched” between c1g(n) and c2g(n) for sufficiently large n.

  4. So for all n ≥ n0 the function f(n) is equal to g(n) to within a constant factor. We say that g(n) is an asymptotically tight bound for f(n).





Q. To prove: \( \frac{n^2}{2} - 3n = \Theta(n^2) \)



  1. To do so, we must determine positive constants n0, c1, c2 such that

  2. \( c_1n^2 \le \frac{n^2}{2} - 3n \le c_2n^2 \) for all n ≥ n0. Dividing through by \( n^2 \) yields

  3. \( c_1 \le \frac{1}{2} - \frac{3}{n} \le c_2 \)

  4. Choosing \( c_1 = \frac{1}{14}, c_2 = \frac{1}{2}, n_0 = 7 \) makes both inequalities hold (at n = 7 the middle term equals \( \frac{1}{2} - \frac{3}{7} = \frac{1}{14} \), and it increases toward \( \frac{1}{2} \) as n grows), which proves the claim.
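As a quick numerical sanity check (a minimal C sketch, not part of the original proof; the constants come from the argument above), the following program verifies the sandwich inequality over a sample range:

  #include <stdio.h>

  /* Check c1*n^2 <= n^2/2 - 3n <= c2*n^2 for 7 <= n <= 1000,
     with c1 = 1/14 and c2 = 1/2 as chosen in the proof. */
  int main(void)
  {
      const double c1 = 1.0 / 14.0, c2 = 0.5;
      for (int n = 7; n <= 1000; n++) {
          double f = n * n / 2.0 - 3.0 * n;
          if (!(c1 * n * n <= f && f <= c2 * n * n)) {
              printf("bound fails at n = %d\n", n);
              return 1;
          }
      }
      printf("sandwich bounds hold for 7 <= n <= 1000\n");
      return 0;
  }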





  1. O(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 ≤ f(n) ≤ cg(n) for all n ≥ n0 }

  2. f(n) = Θ(g(n)) implies f(n) = O(g(n)), since Θ-notation is a stronger notion than O-notation.

  3. Thus, the O(n²) bound on the worst-case running time of insertion sort also applies to its running time on every possible input. The Θ(n²) bound on the worst-case running time, however, does not imply a Θ(n²) bound on the running time of insertion sort on every input.
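For concreteness, here is a minimal C sketch of insertion sort (an assumed textbook implementation, not taken from this text). The inner loop runs at most i times, which gives the O(n²) worst case on every input; on an already-sorted input the inner loop exits immediately, which is where the Ω(n) best case comes from:

  #include <stddef.h>

  /* Sort a[0..n-1] in place. Worst case O(n^2) (e.g. reverse-sorted input),
     best case Θ(n) (already-sorted input). */
  void insertion_sort(int a[], size_t n)
  {
      for (size_t i = 1; i < n; i++) {
          int key = a[i];
          size_t j = i;
          while (j > 0 && a[j - 1] > key) {  /* shift larger elements right */
              a[j] = a[j - 1];
              j--;
          }
          a[j] = key;
      }
  }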





  1. Ω(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 ≤ cg(n) ≤ f(n) for all n ≥ n0 }

  2. For any two functions f(n) and g(n), we have f(n) = Θ(g(n)) if and only if f(n) = O(g(n)) and f(n) = Ω(g(n)).

  3. Saying that the running time (with no modifier) of an algorithm is Ω(g(n)) means that no matter what particular input of size n is chosen for each value of n, the running time on that input is at least a constant times g(n), for sufficiently large n.

  4. Equivalently, we are giving a lower bound on the best-case running time of an algorithm. For example, the best-case running time of insertion sort is Ω(n), which implies that the running time of insertion sort is Ω(n). The running time of insertion sort therefore belongs to both Ω(n) and O(n²).







  1. o(g(n)) = { f(n) : for any positive constant c > 0 there exists a constant n0 > 0 such that 0 ≤ f(n) < cg(n) for all n ≥ n0 }

  2. For example, 2n = o(n²) but 2n² ≠ o(n²)

  3. The main difference between O-notation and o-notation is that in f(n) = O(g(n)), the bound 0 ≤ f(n) ≤ cg(n) holds for some constant c > 0, but in f(n) = o(g(n)), the bound 0 ≤ f(n) < cg(n) holds for all constants c > 0. Intuitively, in o-notation the function f(n) becomes insignificant relative to g(n) as n approaches infinity; that is, \( \lim_{n \rightarrow \infty} \frac{f(n)}{g(n)} = 0 \)
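To illustrate the limit characterization on the example above:

\[ \lim_{n \rightarrow \infty} \frac{2n}{n^2} = \lim_{n \rightarrow \infty} \frac{2}{n} = 0, \text{ so } 2n = o(n^2); \qquad \lim_{n \rightarrow \infty} \frac{2n^2}{n^2} = 2 \ne 0, \text{ so } 2n^2 \ne o(n^2). \]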





  1. We use ω-notation to denote a lower bound that is not asymptotically tight.

  2. ω(g(n)) = { f(n) : for any positive constant c > 0 there exists a constant n0 > 0 such that 0 ≤ cg(n) < f(n) for all n ≥ n0 }

  3. Thus, \( \frac{n^2}{2} = \omega(n) \), but \( \frac{n^2}{2} \ne \omega(n^2) \)

  4. The relation f(n) = ω(g(n)) implies \( \lim_{n \rightarrow \infty} \frac{f(n)}{g(n)} = \infty \), if the limit exists. That is, f(n) becomes arbitrarily large relative to g(n) as n approaches infinity.








Transitivity:



  1. If f(n) = Θ(g(n)) and g(n) = Θ(h(n)), then f(n) = Θ(h(n))


  2. If f(n) = O(g(n)) and g(n) = O(h(n)), then f(n) = O(h(n))


  3. If f(n) = Ω(g(n)) and g(n) = Ω(h(n)), then f(n) = Ω(h(n))


  4. If f(n) = o(g(n)) and g(n) = o(h(n)), then f(n) = o(h(n))


  5. If f(n) = ω(g(n)) and g(n) = ω(h(n)), then f(n) = ω(h(n))



Reflexivity:



  1. f(n) = Θ(f(n))


  2. f(n) = O(f(n))


  3. f(n) = Ω(f(n))




Symmetry:



  1. f(n) = Θ(g(n)) if and only if g(n) = Θ(f(n))



Transpose Symmetry:



  1. f(n) = O(g(n)) if and only if g(n) = Ω(f(n))


  2. f(n) = o(g(n)) if and only if g(n) = ω(f(n))


  3. f(n) = O(g(n)) is like a ≤ b


  4. f(n) = Ω(g(n)) is like a ≥ b


  5. f(n) = Θ(g(n)) is like a = b


  6. f(n) = o(g(n)) is like a < b


  7. f(n) = ω(g(n)) is like a > b


We say that f(n) is asymptotically smaller than g(n) if f(n) = o(g(n)), and f(n) is asymptotically larger than g(n) if f(n) = ω(g(n)). It is also possible that for two functions f(n) and g(n) we have neither f(n) = O(g(n)) nor f(n) = Ω(g(n)); for example, n and \( n^{1 + \sin n} \) are asymptotically incomparable, since the exponent 1 + sin n oscillates between 0 and 2.





Q. Let f(n) and g(n) be asymptotically non-negative functions. Using the basic definition of Θ-notation, prove that max(f(n), g(n)) = Θ(f(n) + g(n)).


  1. Since we require both f and g to be asymptotically non-negative, suppose we are past some n0 beyond which both are non-negative (take the larger of the two thresholds for f and g). Let c1 = 0.5 and c2 = 1. Then for all n ≥ n0:

  2. 0 ≤ 0.5(f(n) + g(n)) ≤ 0.5(max(f(n), g(n)) + max(f(n), g(n))) = max(f(n), g(n))

  3. max(f(n), g(n)) ≤ max(f(n), g(n)) + min(f(n), g(n)) = f(n) + g(n)





Q. Show that for any real constants a and b, where b > 0, we have \( (n + a)^b = \Theta(n^b) \)


  1. Let \( c = 2^b \) and \( n_0 = 2|a| \). Then for all n ≥ n0 we have 0 ≤ n + a ≤ 2n, so \( (n + a)^b \le 2^b n^b \); hence \( (n + a)^b = O(n^b) \).

  2. Now let \( n_0 \ge \frac{-a}{1 - (1/2)^{1/b}} \) (taking n0 ≥ 1 as well) and \( c = \frac{1}{2} \).

  3. Then \( n \ge n_0 \ge \frac{-a}{1 - (1/2)^{1/b}} \) implies \( n - \frac{n}{2^{1/b}} \ge -a \), i.e. \( n + a \ge \left(\frac{1}{2}\right)^{1/b} n \),

  4. which holds if and only if \( (n + a)^b \ge \frac{1}{2} n^b = cn^b \). Therefore \( (n + a)^b = \Omega(n^b) \), and combining the two bounds, \( (n + a)^b = \Theta(n^b) \).



Q. Explain why the statement, “The running time of algorithm A is at least O(n²),” is meaningless.


  1. Many different functions have growth rate at most n²; in particular, O(n²) contains functions that are constant or that shrink to zero arbitrarily fast. Saying that a running time grows at least as quickly as some function that may shrink to zero says nothing at all.



Q. Is \( 2^{n+1} = O(2^n) \)? Is \( 2^{2n} = O(2^n) \)?


  1. \( 2^{n+1} = 2 \cdot 2^n \) for all n ≥ 0, so taking c = 2 gives \( 2^{n+1} \le c \cdot 2^n \); hence \( 2^{n+1} = O(2^n) \)

  2. However, \( 2^{2n} \ne O(2^n) \)

  3. If it were, there would exist n0 and c such that n ≥ n0 implies \( 2^n \cdot 2^n = 2^{2n} \le c \cdot 2^n \), i.e. \( 2^n \le c \) for all n ≥ n0, which is impossible since c is a constant and \( 2^n \) is unbounded.





Q. Prove that: For any two functions f(n) and g(n), we have f(n) = Θ(g(n)) if and only if f(n) = O(g(n)) and f(n) = Ω(g(n)).


  1. Suppose that we have f(n) ∈ Θ(g(n)). Then ∃ c1, c2, n0 such that ∀ n ≥ n0

  2. 0 ≤ c1g(n) ≤ f(n) ≤ c2g(n)

  3. Reading the two halves separately: c1g(n) ≤ f(n) gives f(n) ∈ Ω(g(n)), and f(n) ≤ c2g(n) gives f(n) ∈ O(g(n)).

  4. Conversely, suppose ∃ n1, c1 such that ∀ n ≥ n1 we have c1g(n) ≤ f(n), and ∃ n2, c2 such that ∀ n ≥ n2 we have f(n) ≤ c2g(n).

  5. Putting these together with n0 = max(n1, n2), we have ∀ n ≥ n0, c1g(n) ≤ f(n) ≤ c2g(n), i.e. f(n) ∈ Θ(g(n)).



Q. Prove that the running time of an algorithm is Θ(g(n)) if and only if its worst-case running time is O(g(n)) and its best-case running time is Ω(g(n)).


  1. Suppose the running time is Θ(g(n)). Then the running time is O(g(n)), which implies that for any input of size n ≥ n0 the running time is bounded above by c2g(n) for some c2. This includes the running time on the worst-case input.

  2. It also implies the running time is Ω(g(n)), so for any input of size n ≥ n0 the running time is bounded below by c1g(n) for some c1. This includes the running time on the best-case input.

  3. Conversely, the running time on any input is bounded above by the worst-case running time and below by the best-case running time. If the worst-case and best-case running times are O(g(n)) and Ω(g(n)) respectively, then the running time on any input of size n is both O(g(n)) and Ω(g(n)), and hence, by the theorem below, the running time is Θ(g(n)).

  4. The theorem used above: For any two functions f(n) and g(n), we have f(n) = Θ(g(n)) if and only if f(n) = O(g(n)) and f(n) = Ω(g(n)).



Q. Prove that o(g(n)) ∩ ω(g(n)) is the empty set.


  1. Suppose we had some f(n) ∈ o(g(n)) ∩ ω(g(n)). Then, we have

  2. \( 0 = \lim_{n \rightarrow \infty} \frac{f(n)}{g(n)} = \infty \)

  3. which is a contradiction.





Q. We can extend our notation to the case of two parameters n and m that can go to infinity independently at different rates. For a given function g(n,m), we denote by O(g(n,m)) the set of functions.

  1. O(g(n,m)) = { f(n,m) : there exist positive constants c, n0, m0 such that 0 ≤ f(n,m) ≤ cg(n,m) for all n ≥ n0 or m ≥ m0 }

  2. Give corresponding definitions for Ω(g(n,m)) and Θ(g(n,m)).


  1. Ω(g(n,m)) = { f(n,m) : there exist positive constants c, n0, m0 such that 0 ≤ cg(n,m) ≤ f(n,m) for all n ≥ n0 or m ≥ m0 }

  2. Θ(g(n,m)) = { f(n,m) : there exist positive constants c1, c2, n0, m0 such that 0 ≤ c1g(n,m) ≤ f(n,m) ≤ c2g(n,m) for all n ≥ n0 or m ≥ m0 }





  1. A function f(n) is monotonically increasing if m ≤ n implies f(m) ≤ f(n). Similarly, it is monotonically decreasing if m ≤ n implies f(m) ≥ f(n). A function f(n) is strictly increasing if m < n implies f(m) < f(n), and strictly decreasing if m < n implies f(m) > f(n).







Q. Which one of the following is the tightest upper bound that represents the number of swaps required to sort n numbers using selection sort? (GATE 2013 SET D)


  1. \(O(\log n)\)


  2. \(O(n)\)


  3. \(O(n \log n)\)


  4. \(O(n^{2})\)



Ans . 2


  1. In selection sort, we find the minimum element in the unsorted part of the array and swap it with the value at the index where the unsorted part starts.

  2. Each iteration thus places one element in its sorted position using at most one swap.

  3. There are at most n such iterations, since there are at most n positions to fill.

  4. Hence, there are \(O(n)\) swaps, as the sketch below illustrates.
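A minimal C sketch of selection sort (an assumed standard implementation) makes the swap count concrete: at most one swap per outer pass, hence at most n swaps in total:

  #include <stddef.h>

  /* Selection sort: at most one swap per outer pass => O(n) swaps,
     even though the number of comparisons is Θ(n^2). */
  void selection_sort(int a[], size_t n)
  {
      for (size_t i = 0; i + 1 < n; i++) {
          size_t min = i;
          for (size_t j = i + 1; j < n; j++)   /* find minimum of unsorted part */
              if (a[j] < a[min])
                  min = j;
          if (min != i) {                      /* the single swap for this pass */
              int tmp = a[i];
              a[i] = a[min];
              a[min] = tmp;
          }
      }
  }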



Q.3 The recurrence relation capturing the optimal execution time of the Towers of Hanoi problem with n discs is (GATE 2012)


(A) T(n) = 2T(n - 2) + 2
(B) T(n) = 2T(n - 1) + n
(C) T(n) = 2T(n/2) + 1
(D) T(n) = 2T(n - 1) + 1


Ans . D


Let rod 1 = 'A', rod 2 = 'B', rod 3 = 'C'. For two discs:
Step 1: Shift the first disc from 'A' to 'B'.
Step 2: Shift the second disc from 'A' to 'C'.
Step 3: Shift the first disc from 'B' to 'C'.
The general pattern is: shift n-1 discs from 'A' to 'B',
shift the last disc from 'A' to 'C',
then shift the n-1 discs from 'B' to 'C'.
The recurrence function T(n) for the time complexity of this recursive solution (coded below) can be written as:
T(n) = 2T(n-1) + 1
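The recursive pattern translates directly into C (a standard textbook sketch; the peg names are illustrative). The number of moves it prints satisfies T(n) = 2T(n-1) + 1, i.e. 2^n - 1:

  #include <stdio.h>

  /* Move n discs from peg 'from' to peg 'to' using peg 'via'. */
  void hanoi(int n, char from, char to, char via)
  {
      if (n == 0)
          return;
      hanoi(n - 1, from, via, to);   /* shift n-1 discs out of the way */
      printf("move disc %d: %c -> %c\n", n, from, to);
      hanoi(n - 1, via, to, from);   /* bring them back on top of disc n */
  }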



Q.4 Let W(n) and A(n) denote respectively the worst-case and average-case running time of an algorithm executed on an input of size n. Which of the following is ALWAYS TRUE? (GATE 2012)


(A) A(n) = Ω(W(n))
(B) A(n) = θ(W(n))
(C) A(n) = O(W(n))
(D) A(n) = o(W(n))


Ans . C


The worst-case time complexity is always greater than or equal to the average-case time complexity.
The term inside Big-O notation is always asymptotically the same as or greater than the term on the other side:
low = O(high), hence A(n) = O(W(n)).



Q. The Big-O estimate for f(x) = (x + 1) log(x² + 1) + 3x² is given as (UGC-NET 2013)


  1. O(x log x)


  2. O(x²)


  3. O(x³)


  4. O(x² log x)


Ans . 2

Explanation: (x + 1) log(x² + 1) is O(x log x), which is dominated by the 3x² term, so the Big-O estimate for f(x) is O(x²).



Que 1: Let W(n) and A(n) denote respectively the worst-case and average-case running time of an algorithm executed on an input of size n. Which of the following is ALWAYS TRUE? (GATE 2016 SET B)

  1. A(n) = Ω(W(n))

  2. A(n) = θ(W(n))

  3. A(n) = O(W(n))

  4. A(n) = o(W(n))

Ans . C

The average-case time can be less than, or even equal to, the worst case. So A(n) is upper-bounded by W(n), and the bound is not strict since the two can coincide (e.g. in bubble sort and merge sort).

∴ A(n) = O(W(n))



Que 2: Let G be a simple undirected planar graph on 10 vertices with 15 edges. If G is a connected graph, then the number of bounded faces in any embedding of G on the plane is equal to (GATE 2016 SET B)


  1. 3

  2. 4

  3. 5

  4. 6


Ans . D


By Euler's formula V − E + F = 2, we get 10 − 15 + F = 2, so the total number of faces F = 7. One of the 7 faces is the unbounded face, leaving 6 bounded faces.



Que 3:The recurrence relation capturing the optimal execution time of the Towers of Hanoi problem with n discs is (GATE 2016 SET B)

  1. T(n) = 2T(n-2) + 2

  2. T(n) = 2T(n-1) + n

  3. T(n) = 2T(n/2) + 1

  4. T(n) = 2T(n-1) + 1


Ans . D


  1. Let the three pegs be A, B and C; the goal is to move n discs from A to C using peg B. The following sequence of steps is executed recursively:

  2. Move n-1 discs from A to B. This leaves disc n alone on peg A --- T(n-1)

  3. Move disc n from A to C---------1

  4. Move n-1 discs from B to C so they sit on disc n ----- T(n-1). So, T(n) = 2T(n-1) + 1



Q.1 The solution to the recurrence equation \[T(2^k) = 3T(2^{k-1})+1, \quad T(1) = 1\] is (GATE 2002)


  1. \[2^k\]


  2. \[\frac{(3^{k+1}-1)}{2}\]


  3. \[3^{\log_2 k}\]


  4. \[2^{\log_3 k}\]



Ans. 2


Explanation: We have

\[T\left(2^k\right) = 3T\left(2^{k-1}\right) + 1 \\= 3^2T\left(2^{k-2}\right) + 1 + 3 \\ \dots \\= 3^k T\left(2^{k-k}\right) + \left( 1 + 3 + 9 + \dots + 3^{k-1}\right) \\ \left(\text{recursion depth is } k\right)\\= 3^k + \frac{3^k - 1}{3 - 1}\\\left(\text{sum of } k \text{ terms of a GP with } a = 1 \text{ and } r = 3 \right) \\= 3^k + \frac{3^k - 1}{2}\\=\frac{3 \cdot 3^k - 1}{2} \\=\frac{3^{k+1} - 1}{2}\]

Hence, 2 is the correct choice.



Q.2 In the worst case, the number of comparisons needed to search a singly linked list of length n for a given element is (GATE 2002)

  1. \[\log n\]


  2. \[\frac{n}{2}\]


  3. \[\log_2 {n} - 1\]


  4. \[n\]



Ans . 4


Description:

Binary search works by halving the input size at each step until the search succeeds or the input is exhausted. The crucial requirement is that the input-reduction step take constant time. In an array this holds: the middle element is reached by index arithmetic in O(1) time.

A linked list, however, has no indexes: to perform any operation on a list item, you first have to reach it by traversing all items before it. Getting to the middle of the list takes O(n/2) time (half the items must be traversed), and only then can the O(1) comparison be made.

Total = O(n/2) + O(1) = O(n/2), so the input-reduction step does not take constant time; it depends on the list size.

This violates the essential requirement of binary search, so the list must be scanned linearly (see the sketch below), and in the worst case, when the element is last or absent, that takes n comparisons. Hence option 4.
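A minimal C sketch of the linked-list search (the node layout is assumed for illustration); in the worst case the key is at the tail or absent, so all n nodes are compared:

  #include <stddef.h>

  struct node { int value; struct node *next; };

  /* Linear search: up to n comparisons when the key is last or absent,
     which is the worst case for a singly linked list. */
  struct node *find(struct node *head, int key)
  {
      for (struct node *p = head; p != NULL; p = p->next)
          if (p->value == key)
              return p;
      return NULL;
  }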



Q.3 The recurrence relation capturing the optimal execution time of the Towers of Hanoi problem with n discs is (GATE 2012)


  1. T(n) = 2T(n − 2) + 2


  2. T(n) = 2T(n − 1) + n


  3. T(n) = 2T(n/2) + 1


  4. T(n) = 2T(n − 1) + 1



Ans . 4


    Following are the steps to solve the Tower of Hanoi problem recursively. Let the three pegs be A, B and C; the goal is to move n discs from A to C. To move n discs from peg A to peg C: move n-1 discs from A to B (this leaves disc n alone on peg A), move disc n from A to C, then move the n-1 discs from B to C so they sit on disc n. The recurrence function T(n) for the time complexity of this recursive solution can be written as T(n) = 2T(n-1) + 1.



Q.4 Consider the following algorithm for searching for a given number x in an unsorted array A[1.....n] having n distinct values:

1. Choose an i uniformly at random from 1.....n;

2. If A[i] = x then stop; else go to 1;

Assuming that x is present in A, what is the expected number of comparisons made by the algorithm before it terminates? (GATE 2002)

  1. n


  2. n-1


  3. 2n


  4. n/2



Ans . 1


Description:

Expected number of comparisons (E) = 1 × P(found on first comparison) + 2 × P(found on second comparison) + ... + i × P(found on ith comparison) + ...

\[= 1 \times \frac{1}{n} + 2 \times \frac{n-1}{n^2} + 3 \times \frac{ (n-1)^2}{n^3} + \dots\]

\[= \frac{1/n} {1 - \frac{n-1}{n} } + \frac{(n-1)/n^2}{\left( {1- \frac{n-1}{n} }\right)^2} \\ \left( \text{sum to infinity of an arithmetico-geometric series with }\\a = \frac{1}{n}, r = \frac{n-1}{n} \text{ and } d = \frac{1}{n} \right) \\= 1 + n - 1 = n\]

Hence option 1 (n) is the answer.
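As an empirical check (a hypothetical simulation, not part of the original answer), the following C program averages the comparison count over many runs; the printed estimate should be close to n:

  #include <stdio.h>
  #include <stdlib.h>

  /* Run the random-probe search repeatedly and average the number of
     comparisons; the estimate should approach n. */
  int main(void)
  {
      const int n = 50, trials = 100000;
      long long total = 0;
      srand(42);
      for (int t = 0; t < trials; t++) {
          int target = rand() % n;   /* position of x in A */
          int i;
          do {
              i = rand() % n;        /* step 1: pick i uniformly at random */
              total++;               /* step 2: one comparison A[i] == x */
          } while (i != target);
      }
      printf("average comparisons: %.2f (n = %d)\n", (double)total / trials, n);
      return 0;
  }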



Q.5 Let W(n) and A(n) denote respectively the worst-case and average-case running time of an algorithm executed on an input of size n. Which of the following is ALWAYS TRUE? (GATE 2012)


  1. \(A(n) = \Omega(W(n))\)


  2. \(A(n) = \Theta(W(n))\)


  3. \(A(n) = O(W(n))\)


  4. \(A(n) = o(W(n))\)



Ans . 3


    The worst-case time complexity is always greater than or equal to the average-case time complexity, and the term inside Big-O notation is always asymptotically the same as or greater than the term on the other side; hence A(n) = O(W(n)).

Q.6 A list of n strings, each of length n, is sorted into lexicographic order using the merge-sort algorithm. The worst case running time of this computation is (GATE 2012)


  1. \(O(\log n)\)


  2. \(O(n^{2}\log n)\)


  3. \(O(n^{2} + \log n)\)


  4. \(O(n^{2})\)



Ans . 2


    When sorting an array of n integers, the recurrence for the total number of comparisons is T(n) = 2T(n/2) + Θ(n), where Θ(n) is the number of comparisons needed to merge two sorted subarrays of size n/2; this gives Θ(n log n) comparisons. Here, instead of integers (whose comparison takes O(1) time), we are given n strings, and comparing two strings of length n takes O(n) in the worst case. The total running time is therefore O(n² log n): Θ(n log n) comparisons, each now costing O(n). In general, merge sort makes Θ(n log n) comparisons and runs in Θ(n log n) time only if each comparison can be done in O(1) time.





Q.7 An element in an array X is called a leader if it is greater than all elements to the right of it in X. The best algorithm to find all leaders in an array (GATE 2006)


  1. Solves it in linear time using a left to right pass of the array


  2. Solves it in linear time using a right to left pass of the array


  3. Solves it using divide and conquer in time \(\Theta\left(n^{2}\right)\)


  4. Solves it in time \(\Theta\left(n \log n\right)\)



Ans . (B) Solves it in linear time using a right to left pass of the array
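A minimal C sketch of the right-to-left pass (an assumed implementation): keep the running maximum of the suffix scanned so far; every element that exceeds it is a leader. One pass, Θ(n) time:

  #include <stdio.h>

  /* Print all leaders of a[0..n-1] (n >= 1), right to left: an element
     is a leader iff it exceeds the maximum of everything to its right. */
  void print_leaders(const int a[], int n)
  {
      int max_so_far = a[n - 1];
      printf("%d ", a[n - 1]);       /* the last element is always a leader */
      for (int i = n - 2; i >= 0; i--) {
          if (a[i] > max_so_far) {
              max_so_far = a[i];
              printf("%d ", a[i]);
          }
      }
      printf("\n");
  }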




Q.8 Two matrices M1 and M2 are to be stored in arrays A and B respectively. Each array can be stored either in row-major or column-major order in contiguous memory locations. The time complexity of an algorithm to compute M1 × M2 will be (GATE 2004)


  1. best if A is in row-major, and B is in column- major order


  2. best if both are in row-major order


  3. best if both are in column-major order


  4. independent of the storage scheme



Ans . (4)


  1. The asymptotic analysis assumes that every element access takes the same (uniform) time, so the time complexity is independent of the storage scheme. (In an actual implementation, locality of reference would favour option 1: A in row-major and B in column-major order.)



Q.9 The time complexity of the following C function is (assume n > 0) (GATE 2004)

  int recursive(int n)
  {
      if (n == 1)
          return 1;
      else
          /* two recursive calls on n-1, so T(n) = 2T(n-1) */
          return recursive(n - 1) + recursive(n - 1);
  }

  1. O(n)


  2. O(n log n)


  3. \(O \left(n^{2}\right)\)


  4. \(O \left(2^{n}\right)\)



Ans . (4)


\(T\left(n\right) = T\left(n-1\right) + T\left(n-1\right)\)
\(T\left(n\right) = 2T\left(n-1\right)\) ..........(1)
where the base condition is T(1) = 1.
\(T\left(n-1\right) = 2T\left(n-2\right)\) ..........(2)
Substituting (2) into (1):
\(T\left(n\right) = 2^{2}\,T\left(n-2\right)\)
\(T\left(n\right) = 2^{3}\,T\left(n-3\right)\)
.
.
.
\(T\left(n\right) = 2^{k}\,T\left(n-k\right)\)
Equating n - k with the base condition, we get k = n - 1.
Substituting the value of k:
\(T\left(n\right) = 2^{n-1}\,T\left(1\right)\)
\(T\left(n\right) = 2^{n-1}\)
\(T\left(n\right) = O\left(2^{n}\right)\)


Q.10 The recurrence equation
   T(1) = 1
   T(n) = 2T(n - 1) + n, n ≥ 2
evaluates to (GATE 2004)


  1. \(\left(2^{n + 1}\right)- n - 2\)


  2. \(\left(2^{n}\right) - n\)


  3. \(\left(2^{n + 1}\right) - 2n - 2\)


  4. \(\left(2^{n}\right) + n\)



Ans . (1)


  1. One way to solve this is to check the options (hit and trial).
      Given T(n) = 2T(n-1) + n and T(1) = 1,
      for n = 2: T(2) = 2T(2-1) + 2
          = 2T(1) + 2
          = 2·1 + 2 = 4.
      Putting n = 2 in all the options, only option 1, \(\left(2^{n+1}\right) - n - 2\), gives 4. A direct derivation follows below.
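Alternatively (a derivation sketch, not part of the original answer), the substitution S(n) = T(n) + n + 2 turns the recurrence into pure doubling:

\[S(n) = T(n) + n + 2 \\= 2T(n-1) + 2n + 2 \\= 2\left(T(n-1) + (n-1) + 2\right) \\= 2S(n-1)\]

With S(1) = 1 + 1 + 2 = 4 = 2², this gives S(n) = \(2^{n+1}\), and hence T(n) = \(2^{n+1} - n - 2\).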



Q.11 Let A[1, ..., n] be an array storing a bit (1 or 0) at each location, and let f(m) be a function whose time complexity is Θ(m). Consider the following program fragment written in a C-like language. Its time complexity is: (GATE 2004)

Code:
  counter = 0;
  for (i = 1; i <= n; i++)
  {
      if (A[i] == 1)
          counter++;
      else
      {
          f(counter);    /* costs Θ(counter) */
          counter = 0;
      }
  }


  1. Ω \(\left(n^{2}\right)\)


  2. Ω (nlog n) and O\(\left(n^{2}\right)\)


  3. Θ(n)


  4. O(n)



Ans . (3)


  1. Note that inside the else branch, f() is called first, and then counter is set to 0. Consider the following cases:

  2. a) All 1s in A[]: time taken is Θ(n), as only counter++ is executed n times.

  3. b) All 0s in A[]: time taken is Θ(n), as f(0) is called n times and each call takes constant time.

  4. c) Half 1s, then half 0s: time taken is Θ(n), as f(n/2) is called once at a cost of Θ(n/2), after n/2 increments.

  5. In general, the total cost of all calls to f() is proportional to the total number of counter increments, which is at most n; hence the fragment runs in Θ(n).