1. In divide-and-conquer, we solve a problem recursively, applying three steps at each level of the recursion:

1. Divide the problem into a number of sub-problems that are smaller instances of the same problem.

2. Conquer the sub-problems by solving them recursively. If the sub-problem sizes are small enough, however, just solve the sub-problems in a straightforward manner.

3. Combine the solutions to the sub-problems into the solution for the original problem.

2. When the sub-problems are large enough to solve recursively, we call that the recursive case. Once the sub-problems become small enough that we no longer recurse, we say that the recursion “bottoms out” and that we have gotten down to the base case.

3. Recurrences can take many forms. For example, a recursive algorithm might divide sub-problems into unequal sizes, such as a 2/3 to 1/3 split. If the divide and combine steps take linear time, such an algorithm would give rise to the recurrence

4. T(n) = T(2n/3) + T(n/3) + Θ(n)

Aim: To find a maximum subarray of an array

1. Suppose we want to find a maximum subarray of the subarray A[low..high]. Divide-and-conquer suggests that we divide the subarray into two subarrays of as equal size as possible. That is, we find the midpoint, say mid, of the subarray, and consider the subarrays A[low..mid] and A[mid+1..high].

2. Any contiguous subarray A[i..j] of A[low..high] must lie in exactly one of the following places:

1. entirely in the subarray A[low..mid], so that low ≤ i ≤ j ≤ mid,

2. entirely in the subarray A[mid+1..high], so that mid < i ≤ j ≤ high, or

3. crossing the midpoint, so that low ≤ i ≤ mid < j ≤ high.

3. Therefore, a maximum subarray of A[low..high] must lie in exactly one of these places. In fact, a maximum subarray of A[low..high] must have the greatest sum over all subarrays entirely in A[low..mid], entirely in A[mid+1..high], or crossing the midpoint. We can find maximum subarrays of A[low..mid] and A[mid+1..high] recursively, because these two subproblems are smaller instances of the problem of finding a maximum subarray.

4. Any subarray crossing the midpoint is itself made of two subarrays A[i..mid] and A[mid+1..j], where low ≤ i ≤ mid and mid < j ≤ high. Therefore, we just need to find maximum subarrays of the form A[i..mid] and A[mid+1..j] and then combine them.

Maximum Subarray Algorithm

1. FIND-MAX-CROSSING-SUBARRAY(A, low, mid, high)
2. left-sum = -∞
3. sum = 0
4. for i = mid downto low
5.      sum = sum + A[i]
6.      if sum > left-sum
7.         left-sum = sum
8.         max-left = i
9. right-sum = -∞
10. sum = 0
11. for j = mid + 1 to high
12.      sum = sum + A[j]
13.      if sum > right-sum
14.          right-sum = sum
15.          max-right = j
16. return (max-left, max-right, left-sum + right-sum)

Maximum Subarray Algorithm - Divide and Conquer

1. FIND-MAXIMUM-SUBARRAY(A, low, high)
2. if high == low
3.    return (low, high,A[low]) // base case: only one element

4. else mid = ⌊(low + high)/2⌋
5.    (left-low,left-high, left-sum) = FIND-MAXIMUM-SUBARRAY(A, low, mid)

6.    (right-low, right-high, right-sum) = FIND-MAXIMUM-SUBARRAY(A, mid+1, high)
7.    (cross-low, cross-high, cross-sum) = FIND-MAX-CROSSING-SUBARRAY(A, low, mid, high)

8.    if left-sum > right-sum and left-sum > cross-sum

9.       return (left-low, left-high, left-sum)

10.    elseif right-sum > left-sum and right-sum > cross-sum
11.       return (right-low, right-high, right-sum)
12.    else return (cross-low, cross-high, cross-sum)
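The two procedures above can be sketched in Python (0-based indexing; the lowercased function names are mine):

```python
import math

def find_max_crossing_subarray(A, low, mid, high):
    # Best subarray of the form A[i..mid]
    left_sum = -math.inf
    s = 0
    max_left = mid
    for i in range(mid, low - 1, -1):
        s += A[i]
        if s > left_sum:
            left_sum = s
            max_left = i
    # Best subarray of the form A[mid+1..j]
    right_sum = -math.inf
    s = 0
    max_right = mid + 1
    for j in range(mid + 1, high + 1):
        s += A[j]
        if s > right_sum:
            right_sum = s
            max_right = j
    return max_left, max_right, left_sum + right_sum

def find_maximum_subarray(A, low, high):
    if high == low:
        return low, high, A[low]        # base case: one element
    mid = (low + high) // 2
    left = find_maximum_subarray(A, low, mid)
    right = find_maximum_subarray(A, mid + 1, high)
    cross = find_max_crossing_subarray(A, low, mid, high)
    # Return whichever of the three candidates has the greatest sum
    return max(left, right, cross, key=lambda t: t[2])
```

On the sample array from CLRS, `find_maximum_subarray(A, 0, len(A) - 1)` picks out the subarray with sum 43.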

Q. What does FIND-MAXIMUM-SUBARRAY return when all elements of A are negative?

1. It will return the position of the least negative element. Since every element is negative, extending a subarray only decreases its sum, so as the cross sums are computed, the most positive one must come from the shortest possible spans.

2. The algorithm doesn't consider length-zero subarrays, so the returned subarray must have length 1.

Q. Write pseudocode for the brute-force method of solving the maximum-subarray problem. Your procedure should run in Θ(n²) time.

1. Algorithm 1: Brute Force Algorithm to Solve Maximum Subarray Problem

2. left = 1
3. right = 1

4. max = A[1]

5. curSum = 0

6. for i = 1 to n do // Increment left end of subarray
7.    curSum = 0

8.    for j = i to n do // Increment right end of subarray
9.       curSum = curSum + A[j]

10.       if curSum > max then
11.          max = curSum

12.          left = i

13.          right = j
14.       end if

15.    end for
16. end for

Q. Develop a nonrecursive, linear-time algorithm for the maximum-subarray problem.

1. Algorithm 2: linear time maximum subarray(A)
2. M = -∞
3. low_M, high_M = null
4. M_r = 0
5. low_r = 1
6. for i = 1 to A.length do
7.     M_r = M_r + A[i]
8.     if M_r > M then
9.        low_M = low_r
10.        high_M = i
11.        M = M_r
12.     end if
13.     if M_r < 0 then
14.        M_r = 0
15.        low_r = i + 1
16.     end if
17. end for
18. return ( low_M, high_M, M )
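The same algorithm sketched in Python (0-based). M_r tracks the best sum of a subarray ending at position i; once it drops below zero it can never help a later subarray, so the running start is reset to i + 1:

```python
import math

def linear_max_subarray(A):
    M = -math.inf                      # best sum seen so far
    low_M = high_M = None
    M_r = 0                            # best sum ending at the current index
    low_r = 0                          # left end of that running subarray
    for i in range(len(A)):
        M_r += A[i]
        if M_r > M:
            M, low_M, high_M = M_r, low_r, i
        if M_r < 0:                    # a negative prefix can never help later
            M_r = 0
            low_r = i + 1
    return low_M, high_M, M
```

A single pass over the array, so the running time is Θ(n).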

If A = (a_ij) and B = (b_ij) are square n × n matrices, then in the product C = A · B we define the entry c_ij, for i, j = 1, 2, ..., n, by

$$c_{ij} = \sum_{k = 1}^{n} a_{ik} \cdot b_{kj}$$

1. We must compute n² matrix entries, and each is the sum of n values. The following procedure takes n × n matrices A and B and multiplies them, returning their n × n product C. We assume that each matrix has an attribute rows, giving the number of rows in the matrix.

2. SQUARE-MATRIX-MULTIPLY(A,B)

3. n = A.rows
4. let C be a new n*n matrix
5. for i = 1 to n
6.    for j = 1 to n
7.        c_ij = 0
8.        for k = 1 to n
9.            c_ij = c_ij + a_ik * b_kj
10. return C

Time Complexity: Because each of the triply nested for loops runs exactly n iterations, and each execution of line 9 takes constant time, the SQUARE-MATRIX-MULTIPLY procedure takes Θ(n³) time.
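A plain-Python sketch of SQUARE-MATRIX-MULTIPLY on lists of lists (0-based; the function name is mine):

```python
def square_matrix_multiply(A, B):
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):          # three loops of n iterations: Θ(n³)
                C[i][j] += A[i][k] * B[k][j]
    return C
```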

To compute the matrix product C = A · B, we assume that n is an exact power of 2, so that each of the n × n matrices splits evenly into four n/2 × n/2 submatrices.

1. Algorithm 3: Strassen(A, B)

2. if A.length == 1 then
3.    return A[1][1] * B[1][1]

4. end if

5. Let C be a new n by n matrix

6. A11 = A[1..n/2][1..n/2]

7. A12 = A[1...n/2][n/2 + 1...n]

8. A21 = A[n/2 + 1...n][1...n/2]

9. A22 = A[n/2 + 1...n][n/2 + 1...n]

10. B11 = B[1...n/2][1...n/2]
11. B12 = B[1...n/2][n/2 + 1...n]

12. B21 = B[n/2 + 1...n][1...n/2]

13. B22 = B[n/2 + 1...n][n/2 + 1...n]

14. S1= B12 - B22
15. S2= A11 + A12
16. S3= A21 + A22
17. S4= B21 - B11
18. S5= A11 + A22
19. S6= B11 + B22
20. S7= A12 - A22
21. S8= B21 + B22
22. S9= A11 - A21
23. S10= B11 + B12
24. P1 = Strassen(A11, S1)
25. P2 = Strassen(S2, B22)
26. P3 = Strassen(S3, B11)
27. P4 = Strassen(A22, S4)
28. P5 = Strassen(S5, S6)
29. P6 = Strassen(S7, S8)
30. P7 = Strassen(S9, S10)
31. C[1...n/2][1...n/2] = P5 + P4 - P2 + P6
32. C[1...n/2][n/2 + 1...n] = P1 + P2
33. C[n/2 + 1...n][1...n/2] = P3 + P4
34. C[n/2 + 1...n][n/2 + 1...n] = P5 + P1 - P3 - P7
35. return C

Time Complexity: the recurrence is T(n) = 7T(n/2) + Θ(n²), which solves to Θ(n^(lg 7)) ≈ Θ(n^2.81) time.
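A Python sketch of the Strassen procedure above for n × n lists with n an exact power of 2. The helper names (`_add`, `_sub`, `_quadrants`) are mine; the combinations follow the standard Strassen products, in which S1 = B12 − B22:

```python
def _add(X, Y):
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def _sub(X, Y):
    return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def _quadrants(M):
    h = len(M) // 2
    return ([row[:h] for row in M[:h]], [row[h:] for row in M[:h]],
            [row[:h] for row in M[h:]], [row[h:] for row in M[h:]])

def strassen(A, B):
    if len(A) == 1:
        return [[A[0][0] * B[0][0]]]            # base case: 1x1 matrices
    A11, A12, A21, A22 = _quadrants(A)
    B11, B12, B21, B22 = _quadrants(B)
    P1 = strassen(A11, _sub(B12, B22))          # S1 = B12 - B22
    P2 = strassen(_add(A11, A12), B22)          # S2 = A11 + A12
    P3 = strassen(_add(A21, A22), B11)          # S3 = A21 + A22
    P4 = strassen(A22, _sub(B21, B11))          # S4 = B21 - B11
    P5 = strassen(_add(A11, A22), _add(B11, B22))
    P6 = strassen(_sub(A12, A22), _add(B21, B22))
    P7 = strassen(_sub(A11, A21), _add(B11, B12))
    C11 = _add(_sub(_add(P5, P4), P2), P6)      # P5 + P4 - P2 + P6
    C12 = _add(P1, P2)
    C21 = _add(P3, P4)
    C22 = _sub(_sub(_add(P5, P1), P3), P7)      # P5 + P1 - P3 - P7
    top = [r1 + r2 for r1, r2 in zip(C11, C12)]
    bot = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bot
```

Seven recursive products on half-size matrices plus Θ(n²) additions give T(n) = 7T(n/2) + Θ(n²).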

1. Let's use the iteration method to figure out the running time of merge sort. We know that for merge sort

2. T(1) = 1

3. T(n) = 2 T(n/2) + n

4. Using the iteration method, we expand the recurrence until we notice a pattern:

Solution:

1. T(n) = n + 2T(n/2)
2.      = n + n + 4T(n/4)
3.      = n + n + n + 8T(n/8)
4.      = n + n + n + n + 16T(n/2^4)
5. After 'k' iterations
6.      T(n) = kn + 2^k T(n/2^k)
7. Assume n = 2^k; then k = lg n and
8.      T(n) = n lg n + n T(1)
9.      T(n) = Θ(n lg n)
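The unrolled pattern can be checked numerically; a small sketch (assuming the base case T(1) = 1):

```python
from functools import lru_cache

# The merge sort recurrence: T(1) = 1, T(n) = 2T(n/2) + n.
@lru_cache(maxsize=None)
def T(n):
    return 1 if n == 1 else 2 * T(n // 2) + n

# For n = 2^k the closed form is T(n) = n lg n + n, exactly.
for k in range(1, 11):
    n = 2 ** k
    assert T(n) == n * k + n
```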

Q. Solve the recurrence using the iteration method: T(n) = c + T(n/2)

1. T(n) = c + T(n/2)
2.      = c + c + T(n/4)
3.      = c + c + c + T(n/8)
4.      = c + c + c + c + T(n/2^4)
5. After 'k' iterations
6.      T(n) = c + c + c + ... + c + T(n/2^k)
7. Assume n = 2^k; then k = lg n and
8.      T(n) = k * c + T(1)
9.      T(n) = c lg n + T(1) = Θ(lg n)

Q. Solve the recurrence using the iteration method: T(n) = n + 2T(n/2)

1. T(n) = n + 2T(n/2)
2.      = n + 2(n/2 + 2T(n/4))
3.      = n + n + 4T(n/4)
4.      = n + n + 4(n/4 + 2T(n/8))
5.      = n + n + n + 8T(n/8)
6. T(n) = 3n + 2^3 T(n/2^3)
7. After 'k' iterations it is now:
8. T(n) = kn + 2^k T(n/2^k)
9. Assume n = 2^k, k = lg n
10. T(n) = n lg n + n T(1)
11. T(n) = Θ(n lg n)

1. A recursion tree is useful for visualizing what happens when a recurrence is iterated. It diagrams the tree of recursive calls and the amount of work done at each call. For example, consider the recurrence T(n) = 2T(n/2) + n²: the root of its recursion tree does n² work, its two children do (n/2)² each, their four children do (n/4)² each, and so on.

2. Summing across each row of the tree gives the total work done at that level: n², n²/2, n²/4, ...

3. This is a geometric series with ratio 1/2, so in the limit the sum is O(n²).

Q. T(n) = T(n/4) + T(n/2) + n2
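The notes leave this one unworked. By the same recursion-tree argument, the level sums form a geometric series with ratio 1/16 + 1/4 = 5/16 < 1, so T(n) = Θ(n²); the constant solves c = c/16 + c/4 + 1, giving c = 16/11 ≈ 1.45. A numeric sketch (T(1) = 1 and floor division are my assumptions):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    return 1 if n <= 1 else T(n // 4) + T(n // 2) + n * n

# T(n)/n² should stay bounded, near 16/11 ≈ 1.45.
for k in (10, 15, 20):
    n = 2 ** k
    assert 1.0 < T(n) / n ** 2 < 2.0
```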

1. “Cookbook” for solving recurrences of the form:

2. $$T(n) = aT(\frac{n}{b}) + f(n)$$

3. where a ≥ 1, b > 1, and f(n) > 0

4. Then, depending on how f(n) compares with $$n^{\log_b a}$$, there are three cases:

5. f(n) is asymptotically smaller or larger than $$n^{\log_b a}$$ by a polynomial factor $$n^{\varepsilon}$$, or

6. f(n) is asymptotically equal to $$n^{\log_b a}$$

7. Case 1: if f(n) = O($$n^{\log_b a - \epsilon}$$) for some ε > 0, then T(n) = Θ($$n^{\log_b a}$$)

8. Case 2: if f(n) = Θ($$n^{\log_b a}$$), then T(n) = Θ($$n^{\log_b a} \lg n$$)

9. Case 3: if f(n) = Ω($$n^{\log_b a + \epsilon}$$) for some ε > 0, and if a·f(n/b) ≤ c·f(n) for some constant c < 1 and all sufficiently large n, then T(n) = Θ(f(n))
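For polynomial driving functions f(n) = n^d (a restriction I'm making for the sketch; the function name is mine), the case analysis can be mechanized. Note that the Case 3 regularity condition holds automatically for n^d with d > log_b a:

```python
import math

def master_case(a, b, d):
    """Classify T(n) = a*T(n/b) + n^d into the three master-theorem cases."""
    crit = math.log(a, b)              # exponent of the watershed n^(log_b a)
    if math.isclose(d, crit):
        return 2                       # T(n) = Θ(n^d lg n)
    return 1 if d < crit else 3        # Θ(n^crit) vs. Θ(n^d)

# merge sort: a=2, b=2, f(n)=n   -> case 2, T(n) = Θ(n lg n)
# Strassen:   a=7, b=2, f(n)=n²  -> case 1, T(n) = Θ(n^lg 7)
```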

Q. The time complexity of computing the transitive closure of a binary relation on a set of n elements is known to be (GATE 2005)

1. O (n)

2. O (n log n)

3. O(n^{3/2})

4. O(n3)

Ans . 4

1. In mathematics, the transitive closure of a binary relation R on a set X is the smallest transitive relation on X that contains R. If the original relation is transitive, the transitive closure will be that same relation; otherwise, the transitive closure will be a different relation. It can be computed with Warshall's algorithm, which runs in Θ(n³) time.
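Warshall's algorithm is the standard Θ(n³) way to compute the closure; a sketch over a 0/1 adjacency matrix (the function name is mine):

```python
def transitive_closure(R):
    """Warshall's algorithm: three nested loops of n iterations -> Θ(n³)."""
    n = len(R)
    T = [row[:] for row in R]          # copy so R is not modified
    for k in range(n):                 # allow k as an intermediate element
        for i in range(n):
            for j in range(n):
                if T[i][k] and T[k][j]:
                    T[i][j] = 1
    return T
```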

Q. The recurrence relation capturing the optimal execution time of the Towers of Hanoi problem with n discs is (GATE 2012)

1. T(n)=2T(n-2)+2

2. T(n)=2T(n-1)+n

3. T(n)=2T(n/2)+1

4. T(n)=2T(n-1)+1

Ans . 4

Let the three pegs be A, B, and C respectively. Our goal is to move n discs from A to C with the help of peg B. The steps to do this are as follows:

1. Move n−1 discs from A to B. This leaves the $$n^{th}$$ disc alone on peg A --- T(n-1)

2. Move the $$n^{th}$$ disc from A to C --- 1

3. Move the n−1 discs from B to C so they sit on the $$n^{th}$$ disc --- T(n-1)

4. By adding the above we get
T(n) = T(n-1) + T(n-1) + 1
i.e. T(n) = 2T(n-1) + 1
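The move count from the three steps above can be checked directly (a sketch; the function name is mine):

```python
def hanoi_moves(n):
    """T(0) = 0; T(n) = 2T(n-1) + 1, from the three steps above."""
    return 0 if n == 0 else 2 * hanoi_moves(n - 1) + 1

# Unrolling the recurrence gives the closed form T(n) = 2^n - 1.
assert all(hanoi_moves(n) == 2 ** n - 1 for n in range(12))
```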