Algorithms and Complexity



Algorithms



  1. An algorithm is any well-defined computational procedure that takes some value, or set of values, as input and produces some value, or set of values, as output.

  2. An algorithm is thus a sequence of computational steps that transform the input into the output.



Sorting Algorithms



  1. Input: A sequence of n numbers {$a_1,a_2,...,a_n$}

  2. Output: A permutation (reordering) {$a'_1, a'_2, \ldots, a'_n$} of the input sequence such that $a'_1 \le a'_2 \le \ldots \le a'_n$

  3. For example, given the input sequence ⟨31, 41, 59, 26, 41, 58⟩, a sorting algorithm returns as output the sequence ⟨26, 31, 41, 41, 58, 59⟩. Such an input sequence is called an instance of the sorting problem.

  4. An algorithm is said to be correct if, for every input instance, it halts (finishes execution) with the correct output.

Data Structure



A data structure is a way to store and organize data in order to facilitate access and modifications. No single data structure works well for all purposes, and so it is important to know the strengths and limitations of several of them.


NP-Completeness

One measure of an algorithm's efficiency is speed, i.e., how long the algorithm takes to produce its result. There are some problems, however, for which no efficient solution is known. An interesting subset of these problems is the class of NP-complete problems.


  1. No efficient algorithm for an NP-complete problem has ever been found.

  2. No one has proven that an efficient algorithm for one cannot exist. So no one knows whether or not efficient algorithms exist for NP-complete problems.

  3. The set of NP-complete problems has the remarkable property that if an efficient algorithm exists for any one of them, then efficient algorithms exist for all of them.

  4. If you can show that a problem is NP-complete, you can instead spend your time developing an efficient algorithm that gives a good, but not the best possible, solution.



Efficiency notes:




  1. Insertion sort takes time roughly equal to $c_1 n^2$ to sort n items, where $c_1$ is a constant that does not depend on n.

  2. So it takes time roughly proportional to $n^2$.

  3. Merge sort takes time roughly equal to $c_2 n \log_2 n$, where $c_2$ is another constant that also does not depend on n.

  4. Insertion sort has a smaller constant factor than merge sort, so $c_1 < c_2$.

  5. Insertion sort usually runs faster than merge sort for small input sizes, but once the input size n becomes large enough, merge sort's advantage of $\log_2 n$ vs. $n$ compensates for the difference in constant factors. No matter how much smaller $c_1$ is than $c_2$, there will always be a crossover point beyond which merge sort is faster. In general, as the problem size increases, so does the relative advantage of merge sort.



Q. Suppose we are comparing implementations of insertion sort and merge sort on the same machine. For inputs of size n, insertion sort runs in $8n^2$ steps, while merge sort runs in $64 n \lg n$ steps. For which values of n does insertion sort beat merge sort?


  1. We wish to determine for which values of n the inequality $8n^2 < 64 n \lg n$ holds. Dividing both sides by 8n, this is $n < 8 \lg n$, which holds for $2 \le n \le 43$. In other words, insertion sort runs faster when we're sorting at most 43 items; otherwise merge sort is faster.



Q. What is the smallest value of n such that an algorithm whose running time is $100n^2$ runs faster than an algorithm whose running time is $2^n$ on the same machine?


  1. We want the smallest n such that $100n^2 < 2^n$. Note that for n = 14 this becomes $100 \cdot 14^2 = 19600 > 2^{14} = 16384$, while for n = 15 it is $100 \cdot 15^2 = 22500 < 2^{15} = 32768$. So, the answer is n = 15.
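Both answers are easy to double-check numerically. Below is a minimal C sketch of that brute-force check (the check is my own, not part of the CLRS solution; compile with -lm):

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* Q1: largest n with 8n^2 < 64 n lg n, i.e. n < 8 lg n */
    int n = 2;
    while (8.0 * n * n < 64.0 * n * log2((double)n))
        n++;
    printf("insertion sort wins for 2 <= n <= %d\n", n - 1);   /* prints 43 */

    /* Q2: smallest n with 100n^2 < 2^n */
    n = 1;
    while (100.0 * n * n >= pow(2.0, (double)n))
        n++;
    printf("100n^2 beats 2^n starting at n = %d\n", n);        /* prints 15 */
    return 0;
}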



Q. For each function f(n) and time t in the following table, determine the largest size n of a problem that can be solved in time t, assuming that the algorithm to solve the problem takes f(n) microseconds.


    [Table of largest solvable problem sizes omitted; see the CLRS Chapter 1 solutions.]



Algorithm:



  1. Input: A sequence of n numbers A = { $a_1,a_2,a_3, ... , a_n$ }

  2. Output: A permutation (reordering) {$a'_1, a'_2, \ldots, a'_n$} of the input sequence such that $a'_1 \le a'_2 \le \ldots \le a'_n$



INSERTION-SORT(A):



  1. for j = 2 to A.length
  2.    key = A[j]
  3.    // Insert A[j] into the sorted sequence A[1 .. j-1]
  4.    i = j-1
  5.    while i > 0 and A[i] > key
  6.        A[i+1] = A[i]
  7.        i=i-1
  8.    A[i+1]=key
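For reference, the pseudocode translates almost line-for-line into C (0-indexed arrays, so the outer loop starts at j = 1). This is only a sketch of the same procedure; flipping the comparison A[i] > key to A[i] < key gives the non-increasing variant asked for further below.

#include <stdio.h>

void insertion_sort(int A[], int n)
{
    for (int j = 1; j < n; j++) {        /* pseudocode: for j = 2 to A.length */
        int key = A[j];
        int i = j - 1;
        /* shift elements of the sorted prefix A[0..j-1] that exceed key */
        while (i >= 0 && A[i] > key) {
            A[i + 1] = A[i];
            i = i - 1;
        }
        A[i + 1] = key;
    }
}

int main(void)
{
    int A[] = {31, 41, 59, 26, 41, 58};
    insertion_sort(A, 6);
    for (int k = 0; k < 6; k++)
        printf("%d ", A[k]);             /* prints 26 31 41 41 58 59 */
    printf("\n");
    return 0;
}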




[Figure: step-by-step insertion sort example omitted.]



Insertion Sort Complexity

The running time of the algorithm is the sum of running times for each statement executed; a statement that takes $c_i$ steps to execute and executes n times will contribute $c_i n$ to the total running time. To compute T(n), the running time of INSERTION-SORT on an input of n values, we sum the products of the cost and times columns, obtaining



\( T(n) = c_1*n + c_2*(n-1) + c_4*(n-1) + c_5*\sum_{j=2}^{n} t_j + c_6 * \sum_{j=2}^{n} (t_j-1) + c_7 * \sum_{j=2}^{n} (t_j-1) + c_8*(n-1) \)



In INSERTION-SORT, the best case occurs when the array is already sorted: T(n) becomes a linear function of n. The worst case occurs when the array is in reverse sorted order: T(n) becomes a quadratic function of n.



Q. Illustrate the operation of INSERTION-SORT on the array A = ⟨31, 41, 59, 26, 41, 58⟩.


After each iteration of the outer loop the array is: j = 2: ⟨31, 41, 59, 26, 41, 58⟩; j = 3: ⟨31, 41, 59, 26, 41, 58⟩; j = 4: ⟨26, 31, 41, 59, 41, 58⟩; j = 5: ⟨26, 31, 41, 41, 59, 58⟩; j = 6: ⟨26, 31, 41, 41, 58, 59⟩.

Q. Rewrite the INSERTION-SORT procedure to sort into non-increasing instead of non-decreasing order.


  1. for j = 2 to A.length do
  2.      key = A[j]
  3.      // Insert A[j] into the sorted sequence A[1..j - 1].
  4.      i = j - 1
  5.      while i > 0 and A[i] < key do
  6.           A[i + 1] = A[i]
  7.           i = i - 1
  8.      end while
  9.      A[i + 1] = key
  10. end for




Q. Consider the searching problem:


Input: A sequence of n numbers A = { $a_1, a_2, ... , a_n$ } and a value 'v'.


Output: An index i such that v = A[i] or the special value NIL if 'v' does not appear in A.


Write pseudo-code for linear search, which scans through the sequence, looking for 'v'. Using a loop invariant, prove that your algorithm is correct. Make sure that your loop invariant fulfills the three necessary properties.


  1. On each iteration of the loop body, the invariant upon entering is that there is no index k < j such that A[k] = v. In order to proceed to the next iteration of the loop, we need that for the current value of j, we do not have A[j] = v. If the loop is exited by the return on line 6, then we have just placed an acceptable value in i on the previous line. If the loop is exited by exhausting all possible values of j, then we know that there is no index whose element has value v, and so returning NIL is correct.



  2. i = NIL
  3. for j = 1 to A.length do
  4.    if A[j] = v then
  5.         i = j
  6.         return i
  7.    end if
  8. end for
  9. return i


Q. Consider the problem of adding two n-bit binary integers, stored in two n-element arrays A and B. The sum of the two integers should be stored in binary form in an (n + 1)-element array C. State the problem formally and write pseudo-code for adding the two integers.


  1. Input: two n-element arrays A and B containing the binary digits of two numbers a and b.

  2. Output: an (n + 1)-element array C containing the binary digits of a + b.



  3. carry = 0
  4. for i=1 to n do
  5.    C[i] = (A[i] + B[i] + carry) (mod 2)
  6.    if A[i] + B[i] + carry ≥ 2 then
  7.       carry = 1
  8.    else
  9.       carry = 0
  10.    end if
  11. end for
  12. C[n+1] = carry
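A direct C sketch of this pseudocode (0-indexed, least significant bit first; the sample values are illustrative):

#include <stdio.h>

/* C must have room for n + 1 bits; A, B and C store the LSB at index 0. */
void add_binary(const int A[], const int B[], int C[], int n)
{
    int carry = 0;
    for (int i = 0; i < n; i++) {
        int sum = A[i] + B[i] + carry;
        C[i] = sum % 2;                /* the bit left in this position */
        carry = sum / 2;               /* 1 if the column overflowed */
    }
    C[n] = carry;                      /* final carry becomes the extra bit */
}

int main(void)
{
    int A[] = {1, 1, 0, 1};            /* 11 in binary, LSB first */
    int B[] = {1, 0, 1, 1};            /* 13 in binary, LSB first */
    int C[5];
    add_binary(A, B, C, 4);
    for (int i = 4; i >= 0; i--)
        printf("%d", C[i]);            /* prints 11000, i.e. 24 */
    printf("\n");
    return 0;
}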

Worst-case analysis


  1. The worst-case running time of an algorithm gives us an upper bound on the running time for any input. Knowing it provides a guarantee that the algorithm will never take any longer.

  2. In analysis we consider only the leading term of a formula (e.g., an2), since the lower-order terms are relatively insignificant for large values of n.

  3. We also ignore the leading term’s constant coefficient, since constant factors are less significant than the rate of growth in determining computational efficiency for large inputs.

  4. We usually consider one algorithm to be more efficient than another if its worst-case running time has a lower order of growth. Due to constant factors and lower-order terms, an algorithm whose running time has a higher order of growth might take less time for small inputs than an algorithm whose running time has a lower order of growth.


Q. Express the function $n^3/1000 - 100n^2 - 100n + 3$ in terms of Θ-notation.


  1. $n^3/1000 - 100n^2 - 100n + 3$ is $\Theta(n^3)$



Q. Consider sorting n numbers stored in array A by first finding the smallest element of A and exchanging it with the element in A[1]. Then find the second smallest element of A, and exchange it with A[2]. Continue in this manner for the first n-1 elements of A. Write pseudocode for this algorithm, which is known as selection sort. What loop invariant does this algorithm maintain? Why does it need to run for only the first n - 1 elements, rather than for all n elements? Give the best-case and worst-case running times of selection sort in Θ-notation.


  1. Input: An n-element array A.

  2. Output: The array A with its elements rearranged into increasing order.

  3. The loop invariant of selection sort is as follows: at each iteration of the for loop of lines 1 through 10, the subarray A[1..i - 1] contains the i - 1 smallest elements of A in increasing order. After n - 1 iterations of the loop, the n - 1 smallest elements of A are in the first n - 1 positions of A in increasing order, so the nth element is necessarily the largest element; therefore we do not need to run the loop a final time. The best-case and worst-case running times of selection sort are $\Theta(n^2)$, because regardless of how the elements are initially arranged, on the ith iteration of the main for loop the algorithm always inspects each of the remaining n - i elements to find the smallest one remaining.


  1. for i = 1 to n - 1 do
  2.     min = i
  3.     for j = i + 1 to n do
  4.          // Find the index of the ith smallest element
  5.          if A[j] < A[min] then
  6.             min = j
  7.          end if
  8.     end for
  9.     Swap A[min] and A[i]
  10. end for


This yields a running time of $\sum_{i=1}^{n-1} (n - i) = \Theta(n^2)$
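A compact C sketch of the same procedure (0-indexed; the function name is mine):

void selection_sort(int A[], int n)
{
    for (int i = 0; i < n - 1; i++) {      /* only the first n - 1 positions */
        int min = i;
        for (int j = i + 1; j < n; j++)    /* scan the unsorted suffix */
            if (A[j] < A[min])
                min = j;
        int tmp = A[min];                  /* swap A[min] and A[i] */
        A[min] = A[i];
        A[i] = tmp;
    }
}

Both loops always run in full, which is why the best case is no better than the worst case.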





Q. Consider linear search again. How many elements of the input sequence need to be checked on the average, assuming that the element being searched for is equally likely to be any element in the array? How about in the worst case? What are the average-case and worst-case running times of linear search in Θ-notation? Justify your answers.


  1. If the element being searched for is equally likely to be any of the n array elements, then on average we check about half the array: the expected number of elements examined is

  2. $\sum_{k=1}^{n} k \cdot \frac{1}{n} = \frac{n+1}{2}$

In the worst case (the element is in the last position, or absent) all n elements are checked. Both the average-case and worst-case running times are therefore Θ(n).



Q. How can we modify almost any algorithm to have a good best-case running time?


  1. For a good best-case running time, modify an algorithm to first randomly produce output and then check whether or not it satisfies the goal of the algorithm.

  2. If so, produce this output and halt. Otherwise, run the algorithm as usual. It is unlikely that the random output will be correct, but in the best case the running time is only as long as it takes to check a solution.

  3. For example, we could modify selection sort to first randomly permute the elements of A, then check if they are in sorted order.

  4. If they are, output A. Otherwise run selection sort as usual. In the best case, this modified algorithm will have running time Θ(n).





Divide and Conquer

  1. The approach involves breaking the problem into several subproblems that are similar to the original problem but smaller in size, solving the subproblems recursively, and then combining these solutions to create a solution to the original problem.

  2. Divide the problem into a number of subproblems that are smaller instances of the same problem.

  3. Conquer the subproblems by solving them recursively. If the subproblem sizes are small enough, however, just solve the subproblems in a straightforward manner.

  4. Combine the solutions to the subproblems into the solution for the original problem.



MERGE SORT: MERGE and MERGE-SORT functions



  1. MERGE(A,p,q,r)
  2. n1 = q - p + 1
  3. n2 = r - q
  4. let L[1 ... n1 + 1] and R[1 ... n2 + 1] be new arrays
  5. for i = 1 to n1
  6.     L[i] = A[p+i-1]
  7. for j = 1 to n2
  8.     R[j] = A[q+j]
  9. L[n1 + 1] = ∞
  10. R[n2 + 1] = ∞
  11. i = 1
  12. j = 1
  13. for k = p to r
  14.     if L[i] ≤ R[j]
  15.          A[k] = L[i]
  16.          i = i + 1
  17.     else A[k] = R[j]
  18.          j = j + 1







  1. MERGE-SORT(A,p,r)
  2. if p < r
  3.    q = ⌊(p+r)/2⌋
  4.    MERGE-SORT(A,p,q)
  5.    MERGE-SORT(A,q+1,r)
  6.    MERGE(A,p,q,r)
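Both procedures carry over to C almost verbatim. In this sketch INT_MAX stands in for the ∞ sentinels, and the scratch arrays are C99 variable-length arrays (a teaching sketch; a production version would allocate scratch space once rather than on every call):

#include <limits.h>
#include <stdio.h>

/* Merge the sorted subarrays A[p..q] and A[q+1..r] (inclusive, 0-indexed). */
void merge(int A[], int p, int q, int r)
{
    int n1 = q - p + 1, n2 = r - q;
    int L[n1 + 1], R[n2 + 1];              /* one extra slot for the sentinel */
    for (int i = 0; i < n1; i++) L[i] = A[p + i];
    for (int j = 0; j < n2; j++) R[j] = A[q + 1 + j];
    L[n1] = INT_MAX;                       /* sentinels play the role of infinity */
    R[n2] = INT_MAX;
    for (int k = p, i = 0, j = 0; k <= r; k++)
        A[k] = (L[i] <= R[j]) ? L[i++] : R[j++];
}

void merge_sort(int A[], int p, int r)
{
    if (p < r) {
        int q = (p + r) / 2;               /* floor of the midpoint */
        merge_sort(A, p, q);
        merge_sort(A, q + 1, r);
        merge(A, p, q, r);
    }
}

int main(void)
{
    int A[] = {3, 41, 52, 26, 38, 57, 9, 49};
    merge_sort(A, 0, 7);
    for (int k = 0; k < 8; k++)
        printf("%d ", A[k]);               /* prints 3 9 26 38 41 49 52 57 */
    printf("\n");
    return 0;
}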





  1. Divide: The divide step just computes the middle of the subarray, which takes constant time. Thus, D(n) = Θ(1)

  2. Conquer: We recursively solve two subproblems, each of size \( \frac{n}{2} \) which contributes \( 2T(\frac{n}{2}) \) to the running time.

  3. Combine: We have already noted that the MERGE procedure on an n-element subarray takes time Θ(n),and so C(n) = Θ(n)

  4. When we add the functions D(n) and C(n) for the merge sort analysis, we are adding a function that is Θ(n) and a function that is Θ(1). This sum is a linear function of n, i.e. Θ(n). Adding it to the \( 2T (\frac{n}{2}) \) term from the “conquer” step gives the recurrence for the worst-case running time T(n) of merge sort:

\( T(n) = \begin{cases} \Theta(1) & \text{if } n = 1 \\ 2T(n/2) + \Theta(n) & \text{if } n > 1 \end{cases} \)


Q. Illustrate the operation of merge sort on the array A = ⟨3, 41, 52, 26, 38, 57, 9, 49⟩.


  1. Start by reading across the bottom of the recursion tree (the single elements) and then go up level by level.

  2. Merging pairs gives ⟨3, 41⟩, ⟨26, 52⟩, ⟨38, 57⟩, ⟨9, 49⟩; merging again gives ⟨3, 26, 41, 52⟩ and ⟨9, 38, 49, 57⟩; the final merge yields ⟨3, 9, 26, 38, 41, 49, 52, 57⟩.


Q. Rewrite the MERGE procedure so that it does not use sentinels, instead stopping once either array L or R has had all its elements copied back to A and then copying the remainder of the other array back into A.


  1. The following is a rewrite of MERGE which avoids the use of sentinels. Much like MERGE, it begins by copying the subarrays of A to be merged into arrays L and R. At each iteration of the while loop starting on line 14, it selects the next smallest element from either L or R to place into A. It stops if either L or R runs out of elements, at which point it copies the remainder of the other subarray into the remaining spots of A.



  1. MERGE(A,p,q,r)
  2. n1 = q - p + 1
  3. n2 = r - q
  4. let L[1 ... n1] and R[1 ... n2] be new arrays
  5. for i = 1 to n1
  6.     L[i] = A[p+i-1]
  7. end for
  8. for j = 1 to n2
  9.     R[j] = A[q+j]
  10. end for
  11. i = 1
  12. j = 1
  13. k = p
  14. while i ≤ n1 and j ≤ n2
  15.         if L[i] ≤ R[j] then
  16.            A[k] = L[i]
  17.            i = i + 1
  18.         else A[k] = R[j]
  19.            j = j + 1
  20.         end if
  21.         k = k + 1
  22. end while
  23. if i == n1 + 1 then
  24.         for m = j to n2 do
  25.             A[k] = R[m]
  26.             k = k + 1
  27.         end for
  28. end if
  29. if j == n2 + 1 then
  30.         for m = i to n1 do
  31.             A[k] = L[m]
  32.             k = k + 1
  33.         end for
  34. end if



Q. Use mathematical induction to show that when n is an exact power of 2, the solution of the following recurrence is T(n) = n lg n.

\( T(n) = \begin{cases} 2 & \text{if } n = 2 \\ 2T(n/2) + n & \text{if } n = 2^k \text{ for } k > 1 \end{cases} \)


  1. Since n is a power of two, we may write n = 2^k. If k = 1, then T(2) = 2 = 2 lg 2, so the base case holds. Suppose the claim is true for k; we show it is true for k + 1:

  2. \( T(2^{k+1}) = 2T\left(\frac{2^{k+1}}{2}\right) + 2^{k+1} = 2T(2^{k}) + 2^{k+1} \)

  3. \( = 2(2^k \lg 2^k) + 2^{k+1} \) (by the induction hypothesis)

  4. \( = k \cdot 2^{k+1} + 2^{k+1} \)

  5. \( = (k+1) \cdot 2^{k+1} \)

  6. \( = 2^{k+1}\lg(2^{k+1}) \)

  7. \( = n \lg n \)





Q. We can express insertion sort as a recursive procedure as follows. In order to sort A[1...n], we recursively sort A[1...n-1] and then insert A[n] into the sorted array A[1...n-1]. Write a recurrence for the running time of this recursive version of insertion sort.


  1. Let T(n) denote the running time for insertion sort called on an array of size n. We can express T(n) recursively as

  2. \( T(n) = \begin{cases}\Theta(1) & n \le c \\ T(n-1) + I(n) & \text{otherwise}\end{cases} \)

  3. where I(n) denotes the amount of time it takes to insert A[n] into the sorted array A[1...n-1]. Since the insertion may have to shift up to n - 1 elements, I(n) is $\Theta(n)$ in the worst case, and the recurrence solves to T(n) = $\Theta(n^2)$.
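The recurrence mirrors the structure of the code directly; here is a C sketch of the recursive version (0-indexed; names are mine):

/* Sort A[0..n-1]: recursively sort A[0..n-2], then insert A[n-1]. */
void rec_insertion_sort(int A[], int n)
{
    if (n <= 1)                          /* base case: Theta(1) */
        return;
    rec_insertion_sort(A, n - 1);        /* the T(n-1) term */
    int key = A[n - 1];
    int i = n - 2;
    while (i >= 0 && A[i] > key) {       /* the I(n) term: Theta(n) worst case */
        A[i + 1] = A[i];
        i--;
    }
    A[i + 1] = key;
}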



Q. The binary search algorithm repeats this procedure, halving the size of the remaining portion of the sequence each time. Write pseudocode, either iterative or recursive, for binary search. Argue that the worst-case running time of binary search is Θ (lg n).


  1. The following recursive algorithm gives the desired result when called as BinSearch(A, 1, n, v).



  2. BinSearch(A, a, b, v)
  3. if a > b then
  4.     return NIL
  5. end if
  6. m = ⌊(a+b)/2⌋
  7. if A[m] = v then
  8.     return m
  9. end if
  10. if A[m] < v then
  11.    return BinSearch(A, m+1, b, v)
  12. end if
  13. return BinSearch(A, a, m-1, v)


  14. Note that the initial call should be BinSearch(A, 1, n, v). Each call results in a constant number of operations plus a call to a problem instance where the quantity b - a falls by at least a factor of two. So the runtime satisfies the recurrence T(n) = T(n/2) + c, and therefore T(n) ∈ Θ(lg n).
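An iterative C version of the same idea, returning -1 in place of NIL (0-indexed; a sketch):

/* Return an index i with A[i] == v, or -1 if v is not present.
   A[0..n-1] must be sorted in ascending order. */
int bin_search(const int A[], int n, int v)
{
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;   /* avoids overflow of lo + hi */
        if (A[mid] == v)
            return mid;
        else if (A[mid] < v)
            lo = mid + 1;               /* discard the left half */
        else
            hi = mid - 1;               /* discard the right half */
    }
    return -1;
}

Each iteration halves the remaining range, so at most about lg n comparisons are made.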



Q. Observe that the while loop of lines 5–7 of the INSERTION-SORT procedure uses a linear search to scan (backward) through the sorted subarray A[1...j-1]. Can we use a binary search instead to improve the overall worst-case running time of insertion sort to Θ (n lg n)?


  1. A binary search wouldn't improve the worst-case running time.

  2. Insertion sort has to copy each element greater than key into its neighboring spot in the array.

  3. Doing a binary search would tell us how many elements need to be copied over, but it wouldn't rid us of the copying itself, which still takes Θ(n) time per insertion in the worst case.



Q. Describe a Θ(n lg n) time algorithm that, given a set S of n integers and another integer x, determines whether or not there exist two elements in S whose sum is exactly x.


  1. Use merge sort to sort the array A containing the elements of S in time Θ(n lg n), then scan with two pointers:
  2. i = 1
  3. j = n
  4. while i < j do
  5.    if A[i] + A[j] = x then
  6.       return true
  7.    else if A[i] + A[j] < x then
  8.       i = i + 1
  9.    else
  10.      j = j - 1
  11.   end if
  12. end while
  13. return false
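After sorting, the scan is a standard two-pointer pass. A C sketch (x is the target sum; names are mine):

#include <stdbool.h>

/* A[0..n-1] must already be sorted ascending, e.g. by merge sort. */
bool has_pair_sum(const int A[], int n, int x)
{
    int i = 0, j = n - 1;
    while (i < j) {
        int sum = A[i] + A[j];
        if (sum == x)
            return true;
        else if (sum < x)
            i++;        /* need a larger sum: advance the left pointer */
        else
            j--;        /* need a smaller sum: retreat the right pointer */
    }
    return false;
}

The sort costs Θ(n lg n) and the scan costs Θ(n), so the total is Θ(n lg n).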




  1. BUBBLESORT(A)
    
  2. for i = 1 to A.length - 1
  3.     for j = A.length downto i + 1
  4.        if A[j] < A[j-1]
  5.            exchange A[j] with A[j-1]


The best-case and worst-case running times of this version of bubble sort are both $\Theta(n^2)$.





Q. 2 Assume that the algorithms considered here sort the input sequences in ascending order. If the input is already in ascending order, which of the following are TRUE?
I. Quick sort runs in θ(n²) time
II. Bubble sort runs in θ(n²) time
III. Merge sort runs in θ(n) time
IV. Insertion sort runs in θ(n) time (GATE 2016 Set B)


  1. I and II only


  2. I and III only


  3. II and IV only


  4. I and IV only


Ans . D

Solution steps: As the input is already sorted, quick sort (with a corner-element pivot) runs in θ(n²) and insertion sort runs in θ(n).



Q.3 The worst case running times of Insertion sort, Merge sort and Quick sort, respectively, are: (GATE 2016 SET A)


  1. θ(n log n), θ(n log n), and θ(n²)


  2. θ(n²), θ(n²), and θ(n log n)


  3. θ(n²), θ(n log n), and θ(n log n)


  4. θ(n²), θ(n log n), and θ(n²)


Ans. (D)

Solution steps:

  1. Merge sort takes θ(n log n) in all cases: we always divide the array into two halves, sort the two halves recursively, and merge them. The recurrence is T(n) = 2T(n/2) + θ(n).

  2. Quick sort takes θ(n²) in the worst case. In quicksort, we take an element as pivot and partition the array around it. In the worst case, the picked element is always a corner element, and the recurrence becomes T(n) = T(n-1) + θ(n). An example scenario where the worst case happens: the array is sorted and the code always picks a corner element as pivot.

  3. Insertion sort takes θ(n²) in the worst case, as we need to run two loops. The outer loop picks elements one by one to insert at the right position; the inner loop both finds the position of the element to be inserted and moves all larger sorted elements one position ahead. Therefore the worst-case recurrence is T(n) = T(n-1) + θ(n).



Q. What is the number of swaps required to sort n elements using selection sort, in the worst case? (GATE 2009)


(A) θ(n)
(B) θ(n log n)
(C) θ(n²)
(D) θ(n² log n)


Ans. A


If there are n elements, then in the worst case the total number of swaps in selection sort is n - 1, so the number of swaps is θ(n).



Q.3 Consider the C function given below (the code listing is not reproduced here). Assume that the array listA contains n (> 0) elements, sorted in ascending order.
Which one of the following statements about the function ProcessArray is CORRECT? (GATE 2014 Set 3)


  1. It will run into an infinite loop when x is not in listA.


  2. It is an implementation of binary search.


  3. It will always find the maximum element in listA.


  4. It will return −1 even when x is present in listA.



Ans. 2. It is an implementation of binary search.

    The function is an iterative implementation of binary search:
    k keeps track of the current middle element;
    i and j keep track of the left and right ends of the current subarray.



Q. What is the output of the following program?

#include <stdio.h>
int fun(int n, int *f_p)
{
    int t, f;
    if (n <= 1)
    {
        *f_p = 1;
        return 1;
    }
    t = fun(n - 1, f_p);
    f = t + *f_p;
    *f_p = t;
    return f;
}

int main()
{
    int x = 15;
    printf("%d\n", fun(5, &x));
    return 0;
}
(GATE 2009)


  1. 6


  2. 8


  3. 14


  4. 15



Ans: B. The function computes the Fibonacci sequence (1, 1, 2, 3, 5, 8, ...): the return value is fib(n) and *f_p holds fib(n-1), so fun(5) returns 8.




Q.2 Which one of the following is the tightest upper bound that represents the number of swaps required to sort n numbers using selection sort? (GATE 2013)
  1. O(log n)


  2. O(n)

  3. O(n log n)

  4. O(n²)


Ans. 2


    Solution:

    To sort elements in increasing order, selection sort always picks the maximum element from the remaining unsorted array and swaps it with the last element of that remaining part.

    So it makes n - 1 swaps, which is O(n).



Q. Which of the following sorting algorithms has the lowest worst-case complexity? (GATE 2007)


1. Merge sort


2. Bubble sort


3. Quick sort


4. Selection sort


Ans. 1


ALGORITHM — WORST-CASE TIME COMPLEXITY
Merge sort — O(n log n)
Bubble sort — O(n²)
Quicksort — O(n²)
Selection sort — O(n²)

Q. Randomized quicksort is an extension of quicksort where the pivot is chosen randomly. What is the worst case complexity of sorting n numbers using randomized quicksort? (GATE 2001)


1. O(n)


2. O(n log n)


3. O(n²)


4. O(n!)


Ans. 3


Explanation:

Randomized quicksort has expected time complexity O(n log n), but its worst-case time complexity remains O(n²): in the worst case, the random choice can pick the index of a corner element every time.




Q. The number of elements that can be sorted in Θ(log n) time using heap sort is (GATE 2013)


1. Θ(1)

2. Θ(√(log n))

3. Θ(log n / log log n)

4. Θ(log n)


Ans. 3 — Θ(log n / log log n)


Solution:
  1. The time complexity of heap sort is Θ(m log m) for m input elements.
  2. For m = Θ(log n / (log log n)), the value of Θ(m log m) is Θ([log n / (log log n)] · log[log n / (log log n)]) = Θ([log n / (log log n)] · [log log n − log log log n]), which is Θ(log n).


Q. What is the value of k returned by the following code fragment? (GATE 2016)

    for (i = n/2; i <= n; i++)
        for (j = 2; j <= n; j = j * 2)
            k = k + n/2;
    return k;


Ans. Θ(n² log n)


    Solution:

    Here we have to find the value of k returned, not the time complexity.

    The outer loop runs n/2 times. The inner loop runs log n times (since 2^t = n gives t = log n), and each inner iteration adds n/2 to k, so one full run of the inner loop adds (n/2) · log n to k.

    Therefore the final value of k is (n/2) · (n/2) · log n, which is Θ(n² log n).



Q. Which one of the following in-place sorting algorithms needs the minimum number of swaps? (GATE 2006)


1. Quick sort


2. Insertion sort


3. Selection sort


4. Heap sort



Ans. (C) Selection sort


1. For selection sort, the number of swaps required is minimum: O(n) in the worst case.


Q. The worst case running time to search for an element in a balanced binary search tree with n·2ⁿ elements is (GATE 2012)


1. $$\theta (n\log n)$$


2. $$\theta ({n^2})$$


3. $$\theta (n)$$


4. $$\theta (\log n)$$



Ans. 3


The search time in a binary search tree depends on the form of the tree, that is, on the order in which its nodes were inserted. A pathological case: the nodes are inserted in increasing order of the keys, yielding something like a linear list (but with worse space consumption), with O(n) search time in that skew tree.

A balanced tree is a tree where every leaf is "not more than a certain distance" away from the root than any other leaf, so the height of the tree is kept as low as possible: log₂(m) for m elements.

So if a balanced binary search tree contains n·2ⁿ elements, the time to search for an item is log(n·2ⁿ) = log n + log(2ⁿ) = log n + n = Θ(n). So the answer is 3.



Q. Which one of the following is the tightest upper bound that represents the time complexity of inserting an object into a binary search tree of n nodes? (GATE 2013)


1. O(1)


2. O(log n)


3. O(n)


4. O(n log n)


Ans. 3


Solution:

To insert an element, we need to search for its place first. The search operation may take O(n) for a skewed tree like the following.

To insert 50, we would have to traverse all nodes: 10 - 20 - 30 - 40.


Q. A scheme for storing binary trees in an array X is as follows. Indexing of X starts at 1 instead of 0. The root is stored at X[1]. For a node stored at X[i], the left child, if any, is stored in X[2i] and the right child, if any, in X[2i+1]. To be able to store any binary tree on n vertices, the minimum size of X should be (GATE 2006)


1. log n


2. n


3. 2n + 1


4. \((2^n - 1)\)



Ans. (D) \((2^n-1)\)


1. The worst case is a right-skewed binary tree: each step to a right child doubles the index and adds one, so the node at depth d is stored at index 2^(d+1) − 1, and a chain of n nodes needs X to reach index 2ⁿ − 1. For example, in the binary tree below, node 'A' is stored at index 1, 'B' at index 3, 'C' at index 7 and 'D' at index 15.
       A
        \
         \
           B
            \
             \
               C
                 \
                   \
                     D
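The index pattern is easy to verify in code: stepping to a right child doubles the index and adds one (a small illustrative C sketch, not part of the original solution):

#include <stdio.h>

int main(void)
{
    int idx = 1;                       /* the root is stored at X[1] */
    for (int depth = 0; depth < 4; depth++) {
        printf("node at depth %d -> index %d\n", depth, idx);
        idx = 2 * idx + 1;             /* step to the right child */
    }
    /* prints indices 1, 3, 7, 15: four nodes already need 2^4 - 1 slots */
    return 0;
}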



Q. The preorder traversal sequence of a binary search tree is 30, 20, 10, 15, 25, 23, 39, 35, 42. Which one of the following is the postorder traversal sequence of the same tree? (GATE 2013)


1. 10, 20, 15, 23, 25, 35, 42, 39, 30


2. 15, 10, 25, 23, 20, 42, 35, 39, 30

3. 15, 20, 10, 23, 25, 42, 35, 39, 30

4. 15, 10, 23, 25, 20, 35, 42, 39, 30


Ans. 4


Solution:

In order to construct a binary tree from given traversal sequences, one of the traversal sequences must be the inorder traversal; the other can be either preorder or postorder.

We know that the inorder traversal of a binary search tree is always in ascending order, so the inorder traversal here is the ascending order of the given preorder traversal, i.e., 10, 15, 20, 23, 25, 30, 35, 39, 42.

Constructing the tree from these inorder and preorder traversals and reading off its postorder traversal gives 15, 10, 23, 25, 20, 35, 42, 39, 30, which is option 4.