The order of growth of the running time of an algorithm gives a simple characterization of the algorithm’s efficiency and also allows us to compare the relative performance of alternative algorithms.

When input sizes are large enough to make only the order of growth of the running time relevant, we are studying the asymptotic efficiency of algorithms.

Usually, an algorithm that is asymptotically more efficient will be the best choice for all but very small inputs.

We use the notation Θ(1) to mean either a constant or a constant function with respect to some variable.

For a given function g(n), we denote by Θ(g(n)) the set of functions

\[ \Theta(g(n)) = \{ f(n) : \text{there exist positive constants } c_1, c_2, \text{ and } n_0 \text{ such that } 0 \le c_1 g(n) \le f(n) \le c_2 g(n) \text{ for all } n \ge n_0 \} \]

A function f(n) belongs to the set Θ(g(n)) if there exist positive constants \(c_1\) and \(c_2\) such that it can be “sandwiched” between \(c_1 g(n)\) and \(c_2 g(n)\) for sufficiently large n. So for all \(n \ge n_0\), the function f(n) is equal to g(n) to within a constant factor. We say that g(n) is an asymptotically tight bound for f(n).

**Q. To prove: \( \frac{n^2}{2} - 3n = \Theta(n^2) \)**

To do so, we must determine positive constants \(c_1\), \(c_2\), and \(n_0\) such that

\[ c_1 n^2 \le \frac{n^2}{2} - 3n \le c_2 n^2 \quad \text{for all } n \ge n_0. \]

Dividing by \(n^2\) yields

\[ c_1 \le \frac{1}{2} - \frac{3}{n} \le c_2. \]

By choosing \( c_1 = \frac{1}{14}, c_2 = \frac{1}{2}, n_0 = 7 \), both inequalities hold, which proves the claim.
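As a sanity check (an illustrative Python snippet, not part of the proof), we can verify the sandwich inequality numerically for the constants chosen above:

```python
# Check numerically that c1*n^2 <= n^2/2 - 3n <= c2*n^2 for n >= n0,
# with c1 = 1/14, c2 = 1/2, n0 = 7 (illustration only, not a proof).
def sandwich_holds(n, c1=1/14, c2=1/2):
    f = n * n / 2 - 3 * n
    return c1 * n * n <= f <= c2 * n * n
```

The inequality first holds at n = 7 (with equality on the left side) and keeps holding as n grows, matching the choice \(n_0 = 7\).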

\[ O(g(n)) = \{ f(n) : \text{there exist positive constants } c \text{ and } n_0 \text{ such that } 0 \le f(n) \le c g(n) \text{ for all } n \ge n_0 \} \]

f(n) = Θ(g(n)) implies f(n) = O(g(n)), since Θ-notation is a stronger notion than O-notation.

Thus, the \(O(n^2)\) bound on the worst-case running time of insertion sort also applies to its running time on every possible input. The \(\Theta(n^2)\) bound on the worst-case running time of insertion sort, however, does not imply a \(\Theta(n^2)\) bound on the running time of insertion sort on every input.

\[ \Omega(g(n)) = \{ f(n) : \text{there exist positive constants } c \text{ and } n_0 \text{ such that } 0 \le c g(n) \le f(n) \text{ for all } n \ge n_0 \} \]

For any two functions f(n) and g(n), we have f(n) = Θ(g(n)) if and only if f(n) = O(g(n)) and f(n) = Ω(g(n)).

When we say that the running time (no modifier) of an algorithm is Ω(g(n)), we mean that no matter what particular input of size n is chosen for each value of n, the running time on that input is at least a constant times g(n), for sufficiently large n.

Equivalently, we are giving a lower bound on the best-case running time of the algorithm. For example, the best-case running time of insertion sort is Ω(n), which implies that the running time of insertion sort is Ω(n). The running time of insertion sort therefore belongs to both Ω(n) and \(O(n^2)\).

\[ o(g(n)) = \{ f(n) : \text{for any positive constant } c > 0 \text{ there exists a constant } n_0 > 0 \text{ such that } 0 \le f(n) < c g(n) \text{ for all } n \ge n_0 \} \]

For example, \(2n = o(n^2)\), but \(2n^2 \ne o(n^2)\). The main difference between O-notation and o-notation is that in f(n) = O(g(n)), the bound 0 ≤ f(n) ≤ cg(n) holds for some constant c > 0, but in f(n) = o(g(n)), the bound 0 ≤ f(n) < cg(n) holds for all constants c > 0. Intuitively, in o-notation the function f(n) becomes insignificant relative to g(n) as n approaches infinity; that is,

\[ \lim_{n \rightarrow \infty} \frac{f(n)}{g(n)} = 0. \]

We use ω-notation to denote a lower bound that is not asymptotically tight.

\[ \omega(g(n)) = \{ f(n) : \text{for any positive constant } c > 0 \text{ there exists a constant } n_0 > 0 \text{ such that } 0 \le c g(n) < f(n) \text{ for all } n \ge n_0 \} \]

Thus, \( \frac{n^2}{2} = \omega(n) \), but \( \frac{n^2}{2} \ne \omega(n^2) \).

The relation f(n) = ω(g(n)) implies

\[ \lim_{n \rightarrow \infty} \frac{f(n)}{g(n)} = \infty \]

if the limit exists. That is, f(n) becomes arbitrarily large relative to g(n) as n approaches infinity.
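The limit characterizations of o-notation and ω-notation can be illustrated numerically with the examples above (a hypothetical Python sketch; the helper names are ours):

```python
# For 2n = o(n^2), the ratio f(n)/g(n) shrinks toward 0 as n grows;
# for n^2/2 = omega(n), the ratio grows without bound.
def ratio(f, g, n):
    return f(n) / g(n)

f_small = lambda n: 2 * n        # f(n) = 2n
g_small = lambda n: n * n        # g(n) = n^2
f_large = lambda n: n * n / 2    # f(n) = n^2 / 2
g_large = lambda n: n            # g(n) = n

# Ratios at n = 10, 100, ..., 10^6.
small_ratios = [ratio(f_small, g_small, 10 ** k) for k in range(1, 7)]
large_ratios = [ratio(f_large, g_large, 10 ** k) for k in range(1, 7)]
```

The first list strictly decreases toward 0 and the second strictly increases without bound, matching the two limits.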

__Transitivity:__

f(n) = Θ(g(n)) and g(n) = Θ(h(n)) then f(n) = Θ(h(n))

f(n) = O(g(n)) and g(n) = O(h(n)) then f(n) = O(h(n))

f(n) = Ω(g(n)) and g(n) = Ω(h(n)) then f(n) = Ω(h(n))

f(n) = o(g(n)) and g(n) = o(h(n)) then f(n) = o(h(n))

f(n) = ω(g(n)) and g(n) = ω(h(n)) then f(n) = ω(h(n))

__Reflexivity:__

f(n) = Θ(f(n))

f(n) = O(f(n))

f(n) = Ω(f(n))

__Symmetry:__

f(n) = Θ(g(n)) if and only if g(n) = Θ(f(n))

__Transpose Symmetry:__

f(n) = O(g(n)) if and only if g(n) = Ω(f(n))

f(n) = o(g(n)) if and only if g(n) = ω(f(n))

f(n) = O(g(n)) is like a ≤ b

f(n) = Ω(g(n)) is like a ≥ b

f(n) = Θ(g(n)) is like a = b

f(n) = o(g(n)) is like a < b

f(n) = ω(g(n)) is like a > b

We say that f(n) is asymptotically smaller than g(n) if f(n) = o(g(n)), and f(n) is asymptotically larger than g(n) if f(n) = ω(g(n)). It is also possible that for two functions f(n) and g(n) we have neither f(n) = O(g(n)) nor f(n) = Ω(g(n)).

**Q. Let f(n) and g(n) be asymptotically non-negative functions. Using the basic definition
of Θ-notation, prove that max(f(n), g(n)) = Θ(f(n) + g(n)).**

Since we are requiring both f and g to be asymptotically non-negative, suppose we are past some \(n_0\) where both are non-negative (take the larger of the two thresholds for f and g). Let \(c_1 = \frac{1}{2}\) and \(c_2 = 1\). Then for all \(n \ge n_0\),

\[ 0 \le \frac{1}{2}(f(n) + g(n)) \le \frac{1}{2}(\max(f(n), g(n)) + \max(f(n), g(n))) = \max(f(n), g(n)) \]

and

\[ \max(f(n), g(n)) \le \max(f(n), g(n)) + \min(f(n), g(n)) = f(n) + g(n), \]

so \(\max(f(n), g(n)) = \Theta(f(n) + g(n))\).

**Q. Show that for any real constants a and b, where b > 0, we have \((n + a)^{b} = \Theta(n^{b})\).**

Let \(c = 2^b\) and \(n_0 = 2|a|\). Then for all \(n \ge n_0\) we have \(n + a \le 2n\), so \((n + a)^b \le 2^b n^b\), i.e. \((n + a)^b = O(n^b)\).

Now let \(c = \frac{1}{2}\) and \(n_0 \ge \frac{-a}{1 - (1/2)^{1/b}}\). Then

\[ n \ge n_0 \ge \frac{-a}{1 - (1/2)^{1/b}} \iff n - \frac{n}{2^{1/b}} \ge -a \iff n + a \ge \left(\tfrac{1}{2}\right)^{1/b} n \iff (n + a)^b \ge c n^b. \]

Therefore \((n + a)^b = \Omega(n^b)\). Combining the two bounds, \((n + a)^b = \Theta(n^b)\).

**Q. Explain why the statement, “The running time of algorithm A is at least \(O(n^{2})\)” is
meaningless.**

There are many different functions whose growth rate is at most \(n^2\); in particular, functions that are constant or that shrink to zero arbitrarily fast. Saying that an algorithm grows at least as quickly as some function that may shrink to zero conveys no information.

**Q. Is \(2^{n+1} = O(2^{n})\)? Is \(2^{2n} = O(2^{n})\)?**

Since \(2^{n+1} = 2 \cdot 2^{n}\), taking c = 2 gives \(2^{n+1} \le c \cdot 2^{n}\) for all n ≥ 0, so \(2^{n+1} = O(2^{n})\).

However, \(2^{2n} \ne O(2^{n})\): if it were, there would exist \(n_0\) and c such that \(n \ge n_0\) implies \(2^{n} \cdot 2^{n} = 2^{2n} \le c\,2^{n}\), i.e. \(2^{n} \le c\) for all \(n \ge n_0\), which is not possible since c is a constant.

**Q. Prove that: For any two functions f(n) and g(n),we have f(n) = Θ(g(n)) if and only if f(n) = O(g(n)) and f(n) = Ω(g(n)).**

Suppose that we have f(n) ∈ Θ(g(n)). Then there exist \(c_1, c_2, n_0\) such that for all \(n \ge n_0\),

\[ 0 \le c_1 g(n) \le f(n) \le c_2 g(n). \]

Splitting this into its two halves, \(c_1 g(n) \le f(n)\) gives f(n) ∈ Ω(g(n)), and \(f(n) \le c_2 g(n)\) gives f(n) ∈ O(g(n)).

Conversely, suppose there exist \(n_1, c_1\) such that for all \(n \ge n_1\) we have \(c_1 g(n) \le f(n)\), and \(n_2, c_2\) such that for all \(n \ge n_2\) we have \(f(n) \le c_2 g(n)\). Putting these together with \(n_0 = \max(n_1, n_2)\), we have for all \(n \ge n_0\),

\[ c_1 g(n) \le f(n) \le c_2 g(n), \]

so f(n) ∈ Θ(g(n)).

**Q.** Prove that the running time of an algorithm is Θ(g(n)) if and only if its worst-case
running time is O(g(n)) and its best-case running time is Ω(g(n)).

Suppose the running time is Θ(g(n)). Then the running time is O(g(n)), which means that for any input of size \(n \ge n_0\) the running time is bounded above by \(c_2 g(n)\) for some \(c_2\); this includes the running time on the worst-case input. The running time is likewise Ω(g(n)), which means that for any input of size \(n \ge n_0\) the running time is bounded below by \(c_1 g(n)\) for some \(c_1\); this includes the running time on the best-case input.

On the other hand, the running time on any input is bounded above by the worst-case running time and bounded below by the best-case running time. If the worst-case running time is O(g(n)) and the best-case running time is Ω(g(n)), then the running time on any input of size n is both O(g(n)) and Ω(g(n)), which implies that the running time is Θ(g(n)).

The theorem used above is: for any two functions f(n) and g(n), we have f(n) = Θ(g(n)) if and only if f(n) = O(g(n)) and f(n) = Ω(g(n)).

**Q. Prove that o(g(n)) ∩ ω(g(n)) is the empty set.**

Suppose we had some f(n) ∈ o(g(n)) ∩ ω(g(n)). Then, we have

\[ 0 = \lim_{n \rightarrow \infty} \frac{f(n)}{g(n)} = \infty, \]

which is a contradiction.

**Q. We can extend our notation to the case of two parameters n and m that can go to
infinity independently at different rates. For a given function g(n,m), we denote
by O(g(n,m)) the set of functions. **

\[ O(g(n,m)) = \{ f(n,m) : \text{there exist positive constants } c, n_0, \text{ and } m_0 \text{ such that } 0 \le f(n,m) \le c g(n,m) \text{ for all } n \ge n_0 \text{ or } m \ge m_0 \} \]

Give corresponding definitions for Ω(g(n,m)) and Θ(g(n,m)).

\[ \Omega(g(n,m)) = \{ f(n,m) : \text{there exist positive constants } c, n_0, \text{ and } m_0 \text{ such that } 0 \le c g(n,m) \le f(n,m) \text{ for all } n \ge n_0 \text{ or } m \ge m_0 \} \]

\[ \Theta(g(n,m)) = \{ f(n,m) : \text{there exist positive constants } c_1, c_2, n_0, \text{ and } m_0 \text{ such that } 0 \le c_1 g(n,m) \le f(n,m) \le c_2 g(n,m) \text{ for all } n \ge n_0 \text{ or } m \ge m_0 \} \]

A function f(n) is monotonically increasing if m ≤ n implies f(m) ≤ f(n). Similarly, it is monotonically decreasing if m ≤ n implies f(m) ≥ f(n). A function f(n) is strictly increasing if m < n implies f(m) < f(n), and strictly decreasing if m < n implies f(m) > f(n).

**Q.** Which one of the following is the tightest upper bound that represents the number of swaps required to
sort n numbers using selection sort? **(GATE 2013 SET D)**

\(O(\log n)\)

\(O(n)\)

\(O(n \log n)\)

\(O(n^{2})\)

**Ans . ** 2

In selection sort, in the unsorted part of the array, we find the minimum element and swap it with the value placed at the index where the unsorted array starts.

Hence, for each element to put it in its sorted position, we will do some swaps. In each iteration, when we find the minimum and place it in sorted position, we do only one swap.

There are n such iterations, since there are at most n positions to place elements into.

Hence, there are \(O(n)\) swaps.
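A short Python sketch of selection sort (illustrative; the function name is ours) confirms that the swap count never exceeds n − 1, hence O(n):

```python
def selection_sort_swaps(items):
    """Selection sort; returns (sorted list, number of swaps performed)."""
    a = list(items)
    swaps = 0
    for i in range(len(a)):
        m = min(range(i, len(a)), key=a.__getitem__)  # index of minimum in a[i:]
        if m != i:
            a[i], a[m] = a[m], a[i]   # at most one swap per iteration
            swaps += 1
    return a, swaps
```

Each of the n iterations performs at most one swap, so the total is at most n − 1 swaps for any input.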

**Q.3** The recurrence relation capturing the optimal execution time of the Towers of Hanoi problem with n discs is? **(GATE 2012 )**

(A) T(n) = 2T(n - 2) + 2

(B) T(n) = 2T(n - 1) + n

(C) T(n) = 2T(n/2) + 1

(D) T(n) = 2T(n - 1) + 1

**Ans . ** D

Let rod 1 = 'A', rod 2 = 'B', rod 3 = 'C'.

Step 1 : Shift first disk from 'A' to 'B'.

Step 2 : Shift second disk from 'A' to 'C'.

Step 3 : Shift first disk from 'B' to 'C'.

The pattern here is :
Shift 'n-1' disks from 'A' to 'B'.

Shift last disk from 'A' to 'C'.

Shift 'n-1' disks from 'B' to 'C'.

The recurrence function T(n) for time complexity of the above recursive solution can be written as following.

T(n) = 2T(n-1) + 1
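We can check this recurrence against the known closed form \(2^n - 1\) with a small Python sketch (illustrative only):

```python
def hanoi_moves(n):
    """Moves made by the recursive solution: T(n) = 2*T(n-1) + 1, T(1) = 1."""
    if n == 1:
        return 1
    return 2 * hanoi_moves(n - 1) + 1
```

For example, three discs require 7 moves, and in general n discs require \(2^n - 1\) moves.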

**Q.4** Let W(n) and A(n) denote respectively, the worst case and average case running time of an algorithm executed on an input of size n. Which of the following is ALWAYS TRUE? **(GATE 2012 )**

(A) A(n) = Ω(W(n))

(B) A(n) = θ(W(n))

(C) A(n) = O(W(n))

(D) A(n) = o(W(n))

**Ans . ** C

The worst case time complexity is always greater than or the same as the average case time complexity.

The term inside Big-O notation is always asymptotically the same as or greater than the term on the other side:

low = O(high)

**Q.** The Big-O estimate for \(f(x) = (x + 1) \log(x^{2} + 1) + 3x^{2}\) is given as **(UGC-NET 2013)**

\(O(x \log x)\)

\(O(x^{2})\)

\(O(x^{3})\)

\(O(x^{2} \log x)\)

**Ans . ** 2

Since \((x + 1)\log(x^{2} + 1) = O(x \log x)\) and \(x \log x = O(x^{2})\), the \(3x^{2}\) term dominates, so \(f(x) = O(x^{2})\).

**Que 1:**Let W(n) and A(n) denote respectively, the worst case and average case running time of an algorithm executed on an input of size n. Which of the following is ALWAYS TRUE? **(GATE 2016 SET B)**

- A(n) = Ω(W(n))
- A(n) = θ(W(n))
- A(n) = O(W(n))
- A(n) = o(W(n))

**Ans . ** C

The average case time can be lesser than or even equal to the worst case. So A(n) would be upper bounded by W(n) and it will not be strict upper bound as it can even be same (e.g. Bubble Sort and merge sort).

∴ A(n) = O(W(n))

**Que 2:** Let G be a simple undirected planar graph on 10 vertices with 15 edges. If G is a connected
graph, then the number of bounded faces in any embedding of G on the plane is equal to
**(GATE 2016 SET B)**

- 3
- 4
- 5
- 6

**Ans . ** D

We have the relation V - E + F = 2, so F = 2 - V + E = 2 - 10 + 15 = 7. Out of the 7 faces one is the unbounded outer face, leaving 6 bounded faces.

**Que 3:**The recurrence relation capturing the optimal execution time of the Towers of Hanoi problem with n discs is ** (GATE 2016 SET B)**

- T(n) = 2T(n-2) + 2
- T(n) = 2T(n-1) + n
- T(n) = 2T(n/2) + 1
- T(n) = 2T(n-1) + 1

**Ans . **D

Let the three pegs be A, B and C; the goal is to move n discs from A to C using peg B. The following sequence of steps is executed recursively:

Move n-1 discs from A to B, leaving disc n alone on peg A: T(n-1)

Move disc n from A to C: 1

Move n-1 discs from B to C so they sit on disc n: T(n-1)

So, T(n) = 2T(n-1) + 1.

**Q.1** The solution to the recurrence equation
\[T(2^k) = 3T(2^{k-1}) + 1, \quad T(1) = 1\] is **(GATE 2002)**

\[2^k\]

\[\frac{(3^{k+1}-1)}{2}\]

\[3^{\log_2 k}\]

\[2^{\log_3 k}\]

**Ans.** 2

Explanation: We have

\[T\left(2^k\right) = 3T\left(2^{k-1}\right) + 1 \\= 3^2T\left(2^{k-2}\right) + 3 + 1 \\ \dots \\= 3^k T\left(2^{k-k}\right) + \left( 1 + 3 + 9 + \dots + 3^{k-1}\right) \\ \left(\text{recursion depth is }k\right)\\= 3^k + \frac{3^{k} - 1}{3-1}\\\left(\text{sum of } k \text{ terms of a GP with } a = 1 \text{ and } r = 3 \right) \\=3^k + \frac{3^k - 1}{2}\\=\frac{3 \cdot 3^k - 1}{2} \\=\frac{3^{k+1} - 1}{2}\]

Hence, 2 is the correct choice.
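To double-check the algebra, the recurrence can be evaluated directly and compared with the closed form (a small Python sketch, writing S(k) for \(T(2^k)\)):

```python
def S(k):
    """S(k) = T(2^k): S(k) = 3*S(k-1) + 1, with S(0) = T(1) = 1."""
    if k == 0:
        return 1
    return 3 * S(k - 1) + 1
```

Every value matches \(\frac{3^{k+1} - 1}{2}\).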

**Q.2** In the worst case, the number of comparisons needed to search
a singly linked list of length n for a given element is **(GATE 2002)**

\[\log n\]

\[\frac{n}{2}\]

\[\log_2 {n} - 1\]

\[n\]

**Ans . ** 4

Description:

Binary search algorithm is based on the logic of reducing your input size by half in every step until your search succeeds or input gets exhausted. Important point here is "the step to reduce input size should take constant time". In case of an array, it's always a simple comparison based on array indexes that takes O(1) time.

But in case of Linked list you don't have indexes to access items. To perform any operation on a list item, you first have to reach it by traversing all items before it. So to divide list by half you first have to reach middle of the list then perform a comparison. Getting to middle of list takes O(n/2)[you have to traverse half of the list items] and comparison takes O(1).

Total = O(n/2) + O(1) = O(n/2)

So the input reduction step does not take constant time. It depends on list size.

Hence the input-reduction step does not take constant time; it violates the essential requirement of binary search. That is why the answer is 4: in the worst case, all n elements must be compared.

**Q.3** The recurrence relation capturing the optimal execution time of the Towers of Hanoi problem with n discs is ?**(GATE 2012)**

T(n) = 2T(n − 2) + 2

T(n) = 2T(n − 1) + n

T(n) = 2T(n/2) + 1

T(n) = 2T(n − 1) + 1

**Ans . ** 4

Following are the steps to solve the Tower of Hanoi problem recursively. Let the three pegs be A, B and C; the goal is to move n discs from A to C. To move n discs from peg A to peg C: move n-1 discs from A to B (this leaves disc n alone on peg A), move disc n from A to C, then move n-1 discs from B to C so they sit on disc n. The recurrence function T(n) for the time complexity of this recursive solution can be written as T(n) = 2T(n-1) + 1.

**Q.4** Consider the following algorithm for searching for a given number x in an unsorted array A[1.....n] having n distinct values:

1. Choose an i uniformly at random from 1.....n;

2. If A[i]=x then Stop else Go to 1;

Assuming that x is present in A, what is the expected number of comparisons made by the algorithm before it terminates?**(GATE 2002)**

n

n-1

2n

n/2

**Ans . **1

Description:-

Expected number of comparisons (E) = 1 × (probability of finding x on the first comparison) + 2 × (probability of finding x on the second comparison) + ... + i × (probability of finding x on the i-th comparison) + ...

\[= 1 \times \frac{1}{n} + 2 \times \frac{n-1}{n^2} + 3 \times \frac{ (n-1)^2}{n^3} + \dots\]

\[= \frac{1/n} {1 - \frac{n-1}{n} } + \frac{(n-1)/n^2}{\left( {1- \frac{n-1}{n} }\right)^2} \\ \left( \text{Sum to infinity of arithmetico-geometric series with }\\a = \frac{1}{n}, r = \frac{n-1}{n} \text{ and } d = \frac{1}{n} \right) \\= 1 + n - 1 = n\]

Hence answer is 1
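The series can also be summed numerically; a Python sketch (illustrative; it truncates the infinite sum) recovers E ≈ n:

```python
def expected_comparisons(n, terms=10000):
    """Partial sum of E = sum_{i>=1} i * (1/n) * ((n-1)/n)^(i-1)."""
    p = 1 / n            # probability of success on any single probe
    q = (n - 1) / n      # probability of a miss
    return sum(i * p * q ** (i - 1) for i in range(1, terms + 1))
```

For any fixed n, the truncated sum converges to n, as derived above.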

**Q.5** Let W(n) and A(n) denote respectively, the worst case and average case running time of an algorithm executed on an input of size n. Which of the following is ALWAYS TRUE? **(GATE 2012)**

$$\Omega (W(n))$$

$$\theta (W(n))$$

$${\rm O}(W(n))$$

$$o(W(n))$$

**Ans . ** 3

The worst case time complexity is always greater than or the same as the average case time complexity. The term inside Big-O notation is always asymptotically the same as or greater than the term on the other side.

**Q.6** A list of n strings, each of length n, is sorted into lexicographic order using the merge-sort algorithm. The worst case running time of this computation is **(GATE 2012)**

$$O(\log n)$$

$$O({n^2}\log n)$$

$$O({n^2} + \log n)$$

$$O({n^2})$$

**Ans . ** 2

When we sort an array of n integers, the recurrence for the total number of comparisons is T(n) = 2T(n/2) + Θ(n), where Θ(n) is the number of comparisons needed to merge two sorted subarrays of size n/2; this gives Θ(n log n) comparisons. Here, instead of integers whose comparison takes O(1) time, we are given n strings, and comparing two strings of length n takes O(n) in the worst case. Merge sort still performs Θ(n log n) comparisons, but each comparison now costs O(n), so the total worst-case running time is O(n² log n).

**Q.7** An element in an array X is called a leader if it is greater than all elements to the right of it in X.
The best algorithm to find all leaders in an array **(GATE 2006)**

Solves it in linear time using a left to right pass of the array

Solves it in linear time using a right to left pass of the array

Solves it using divide and conquer in time \(\Theta\left(n^{2}\right)\)

Solves it in time \(\Theta\left(n \log n\right)\)

**Ans . **(B) Solves it in linear time using a right to left pass of the array
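The right-to-left pass can be sketched as follows (illustrative Python; the function name is ours). It keeps a running maximum of the elements already seen on the right, so each element is examined once:

```python
def leaders(xs):
    """Return all leaders in xs: elements greater than everything to
    their right. Single right-to-left pass, O(n) time."""
    result = []
    best = float("-inf")        # max of elements to the right so far
    for x in reversed(xs):
        if x > best:
            result.append(x)
            best = x
    return result[::-1]         # restore original left-to-right order
```

Note that the rightmost element is always a leader, since there is nothing to its right.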

**Q.8** Two matrices M1 and M2 are to be stored in arrays A and B respectively. Each array can be stored either in row-major or column-major order in contiguous memory locations. The time complexity of an algorithm to compute M1 × M2 will be **(GATE 2004)**

best if A is in row-major, and B is in column- major order

best if both are in row-major order

best if both are in column-major order

independent of the storage scheme

**Ans . ** (4)

The asymptotic analysis assumes that any element you refer to is available at a uniform cost, so the time complexity is independent of the storage scheme. (In practice, because of memory locality, the arrangement in option A would give the fastest implementation.)

**Q.9** The time complexity of the following C function is (assume n > 0)**(GATE 2004)**

int recursive (int n)

{

if (n == 1)

return (1);

else

return (recursive (n - 1) + recursive (n - 1));

}

O(n)

O(n log n)

\(O \left(n^{2}\right)\)

\(O \left(2^{n}\right)\)

**Ans . **(4)

\(T\left(n\right) = T\left(n-1\right) + T\left(n-1\right)\)

\(T\left(n\right) = 2T\left(n-1\right)\)..........1

where the base condition is T(1) = 1

\(T\left(n-1\right) = 2T\left(n-2\right)\)..........2

Substituting equation 2 into equation 1 repeatedly:

\(T\left(n\right) = 2^{2}\,T\left(n-2\right)\)

\(T\left(n\right) = 2^{3}\,T\left(n-3\right)\)

.

.

.

\(T\left(n\right) = 2^{k}\,T\left(n-k\right)\)

By equating n - k with the base condition we get k = n - 1.

Substituting the value of k we get

\(T\left(n\right) = 2^{n-1}\,T\left(1\right)\)

\(T\left(n\right) = 2^{n-1}\)

\(T\left(n\right) = O\left(2^{n}\right)\)
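As a cross-check, the function can be instrumented to count the actual number of calls (Python sketch; the counter is our addition):

```python
def recursive(n, counter):
    """The function from the question, with a call counter added."""
    counter[0] += 1
    if n == 1:
        return 1
    return recursive(n - 1, counter) + recursive(n - 1, counter)

def total_calls(n):
    c = [0]
    recursive(n, c)
    return c[0]
```

The call count satisfies C(n) = 2C(n-1) + 1 with C(1) = 1, i.e. \(2^n - 1\) calls, confirming the \(O(2^n)\) bound.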

**Q.10** The solution of the recurrence equation

T(1) = 1

T(n) = 2T(n - 1) + n, n >= 2, is **(GATE 2004)**

\(\left(2^{n + 1}\right)- n - 2\)

\(\left(2^{n}\right) - n\)

\(\left(2^{n + 1}\right) - 2n - 2\)

\(\left(2^{n}\right) - n\)

**Ans . ** (1)

One way to solve this is the hit-and-try method.

Given T(n) = 2T(n-1) + n and T(1) = 1:

For n = 2: T(2) = 2T(1) + 2

= 2.1 + 2 = 4

Now when you put n = 2 into all the options, only the 1st option, \(\left(2^{n+1}\right) - n - 2 = 8 - 2 - 2 = 4\), satisfies it.
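Rather than checking a single value, the closed form can be compared with the recurrence over a range (illustrative Python):

```python
def T(n):
    """T(1) = 1; T(n) = 2*T(n-1) + n for n >= 2."""
    return 1 if n == 1 else 2 * T(n - 1) + n
```

Every value matches \(2^{n+1} - n - 2\).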

**Q.11** Let A[1, ..., n] be an array storing a bit (1 or 0) at each location, and f(m) is a function whose time complexity is Θ(m). Consider the following program fragment written in a C like language. The complexity of this program fragment is **(GATE 2004)**

**Code.**
counter = 0;

for (i = 1; i < = n; i++)

{

if (A[i] == 1)

counter++;

else

{

f(counter);

counter = 0;

}

}

Ω \(\left(n^{2}\right)\)

Ω (nlog n) and O\(\left(n^{2}\right)\)

Θ(n)

O(n)

**Ans . ** (3)

Please note that inside the else branch, f() is called first, and then counter is reset to 0. Consider the following cases:

a) All 1s in A[] : time taken is Θ(n), as only counter++ is executed n times.

b) All 0s in A[] : time taken is Θ(n), as f(0) is called n times and counter is reset each time.

c) Half 1s, then half 0s : time taken is Θ(n), as f(n/2) is called only once.

In every case, the arguments passed to f() across the whole loop sum to at most n, so the fragment runs in Θ(n) time.
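The Θ(n) bound holds for every input, not just these three cases. A Python sketch (with f(m) modeled as costing exactly m steps, which is our assumption for Θ(m)) checks the bound over all bit patterns of small length:

```python
def fragment_cost(bits):
    """Total work done by the fragment, counting one unit per loop
    iteration plus m units for each call f(m) (model assumption)."""
    cost = 0
    counter = 0
    for b in bits:
        cost += 1              # the test / counter++ work
        if b == 1:
            counter += 1
        else:
            cost += counter    # f(counter) modeled as costing counter units
            counter = 0
    return cost
```

For every input of length n, the cost lies between n and 2n, i.e. it is Θ(n).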