DAA Question

MODULE - I

Group – A
(Multiple Choice Type Questions)
Each of 1 mark

1. (i) An algorithm in which we divide the problem into subproblems and then combine the
subsolutions to form a solution to the original problem is known as

a) Brute Force b) Divide and Conquer c) Greedy Algorithm d) None of the mentioned

(ii) For an algorithm, which is the most important characteristic that makes it acceptable?

a) Fast b) Compact c) Correctness and Precision d) None of the mentioned


(iii) An algorithm which tries all the possibilities until results are satisfactory, and is
generally time consuming, is

a) Brute Force b) Divide and Conquer c) Dynamic Programming Algorithms d) None of the mentioned
(iv) For a recursive algorithm

a) a base case is necessary and is solved without recursion b) A base case is not necessary
c) does not solve a base case directly d) None of the mentioned
(v) An algorithm which uses past results to find new results is

a) Brute Force b) Divide and Conquer c) Dynamic Programming Algorithm d) None of the mentioned
(vi) The worst case complexity for merge sort is

a) O(n) b) O(log n) c) O(n²) d) O(n log n)


(vii) The worst case occurs in quick sort when

a) pivot is the median of the array b) pivot is the smallest element c) pivot is the middle element d) None of the mentioned
(viii) The time complexity of binary search is given by

a) constant b) quadratic c) exponential d) None of the mentioned


(ix) Which of the following case does not exist in complexity theory?

a) Best case b) Worst case c) Average case d) Null case


(x) The big-theta notation for the function f(n) = 2n³ + n − 1 is

a) n b) n² c) n³ d) n⁴

Group – B
(Short Answer Type Questions)

Each of 5 marks
2. Find the best case and worst-case time complexity of quick sort.

The best case time complexity of Quick Sort is O(n log n). This occurs when the partitions
are as evenly balanced as possible: their sizes are either equal or within 1 of each other.

The worst-case time complexity of Quick Sort is O(n²). With a first- or last-element pivot,
this occurs when the array is already sorted or reverse sorted, since every partition is then
maximally unbalanced.
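As an illustration, here is a minimal quick sort sketch using the Lomuto partition with the last element as pivot (one common pivot rule, not the only one). With this rule, an already-sorted input produces the maximally unbalanced partitions described above.

```cpp
#include <vector>
#include <algorithm>
using namespace std;

// Lomuto partition: last element as pivot; returns the pivot's final index.
int partition(vector<int>& a, int lo, int hi) {
    int pivot = a[hi], i = lo;
    for (int j = lo; j < hi; j++)
        if (a[j] < pivot) swap(a[i++], a[j]);
    swap(a[i], a[hi]);
    return i;
}

// Recursive quick sort; on an already-sorted array every partition is
// maximally unbalanced, giving the O(n^2) worst case described above.
void quickSort(vector<int>& a, int lo, int hi) {
    if (lo >= hi) return;
    int p = partition(a, lo, hi);
    quickSort(a, lo, p - 1);
    quickSort(a, p + 1, hi);
}
```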

3. Compute the time complexity of binary search in worst case and average case.

The average case time complexity of Binary Search is O(log N).

The worst case time complexity of Binary Search is also O(log N). The worst case occurs when
the search element is absent or is located only on the final probe, so the interval must be
halved about log₂N times before the search terminates.

4. Discuss different asymptotic notations in brief.

Asymptotic notations are mathematical tools used to represent the time complexity of
algorithms for asymptotic analysis. They allow you to analyze an algorithm’s running time
by identifying its behavior as its input size grows. There are mainly three asymptotic
notations: Big-O notation, Omega notation, and Theta notation.

Big-O notation represents the upper bound of the running time of an algorithm, giving the
worst-case complexity of an algorithm.

Omega notation represents the lower bound of the running time of an algorithm,
providing the best-case complexity of an algorithm.

Theta notation bounds the function from above and below, and so gives a tight bound on the
growth rate of an algorithm's running time.
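The three notations can be stated formally; a standard formulation (for non-negative functions and sufficiently large n) is:

```latex
f(n) = O(g(n))      \iff \exists\, c > 0,\ n_0 : 0 \le f(n) \le c\,g(n) \quad \forall n \ge n_0
f(n) = \Omega(g(n)) \iff \exists\, c > 0,\ n_0 : 0 \le c\,g(n) \le f(n) \quad \forall n \ge n_0
f(n) = \Theta(g(n)) \iff f(n) = O(g(n)) \text{ and } f(n) = \Omega(g(n))
```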

Group – C
(Long Answer Type Questions)
Each of 10 marks

5. What are the characteristics of an algorithm? Consider the following recurrence

T(n) = 4T(n/2)+n

Obtain the asymptotic bound using recursion tree method. 4+6

Characteristics of an algorithm:
• Input: An algorithm takes zero or more well-defined values as input.
• Output: At the end of an algorithm, you will have one or more outcomes.
• Unambiguity: A perfect algorithm is defined as unambiguous, which means that its
instructions should be clear and straightforward.
• Finiteness: An algorithm must be finite. Finiteness in this context means that the
algorithm should have a limited number of instructions, i.e., the instructions should
be countable.
• Effectiveness: Every instruction must be basic enough to be carried out exactly and in
a finite amount of time.
• Language independence: An algorithm must be language-independent, which
means that its instructions can be implemented in any language and produce the
same results.
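For the recurrence T(n) = 4T(n/2) + n, the recursion tree has 4^i nodes at level i, each costing n/2^i, so level i contributes 2^i · n in total; the tree has log₂n levels of internal nodes plus 4^(log₂n) = n² leaves. Summing the levels:

```latex
T(n) = \sum_{i=0}^{\log_2 n - 1} 4^i \cdot \frac{n}{2^i} + 4^{\log_2 n}\, T(1)
     = n \sum_{i=0}^{\log_2 n - 1} 2^i + n^2\, T(1)
     = n\,(n - 1) + n^2\, T(1)
     = \Theta(n^2)
```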
6. Solve the following recurrence by using substitution method.
T(n)=2T(n/2)+n
Explain how reliability of a system is determined using dynamic programming. 5+5
T(N) = 2T(N/2) + N                - (i)
T(N/2) = 2T(N/2²) + N/2           - (ii)
T(N/2²) = 2T(N/2³) + N/2²         - (iii)

Putting equation (ii) into equation (i), we get

T(N) = 2[2T(N/2²) + N/2] + N
T(N) = 2²T(N/2²) + 2N             - (iv)

Now, putting equation (iii) into equation (iv), we get

T(N) = 2²[2T(N/2³) + N/2²] + 2N
T(N) = 2³T(N/2³) + 3N             - (v)

From this we can conclude that, after k substitutions,

T(N) = 2^k T(N/2^k) + kN          - (vi)

Now let N/2^k = 1, so that 2^k = N and
k = log₂N                         - (vii)

Putting equation (vii) into equation (vi), we get
T(N) = 2^(log₂N) T(1) + N log₂N = N·T(1) + N log₂N
T(N) = O(N log N)

The reliability of a system is the probability that it performs its intended function without
failure for a given time period. In the classical reliability design problem, a system consists
of n devices (stages) connected in series, and the reliability of the whole system is the
product of the stage reliabilities. Reliability can be improved by duplicating devices: if
stage i uses mi copies of a device with reliability ri, the stage reliability becomes
1 − (1 − ri)^mi. Given a cost ci per copy and a total budget, dynamic programming determines
how many copies to place at each stage: it processes the stages one by one, and for each
affordable number of copies at the current stage it combines that choice with the optimal
solutions already computed for the previous stages, keeping only the non-dominated
(cost, reliability) pairs. The entry with the highest reliability within the budget in the
final table gives the optimal design.

7. Explain the algorithm of merge sort using divide and conquer technique. What is the
significance of Big oh Notation? 7+3

Merge sort is a sorting algorithm that uses the divide and conquer technique to sort an
array of elements. The basic idea is to divide the array into two subarrays of roughly equal
size, sort each subarray recursively, and then merge the two sorted subarrays into one
sorted array.

The algorithm can be described as follows:

1. Base case: If the array has zero or one element, it is already sorted and no further
action is needed.
2. Recursive case: If the array has more than one element, do the following steps:
1. Divide: Find the middle index of the array and split it into two subarrays: left
and right.
2. Conquer: Sort the left and right subarrays recursively by calling merge sort
on them.
3. Combine: Merge the two sorted subarrays into one sorted array by using a
helper function that compares the elements from both subarrays and puts
them in the correct order.

Significance of Big-O notation: Big-O gives an upper bound on an algorithm's growth rate, so
it characterizes worst-case behaviour independently of hardware and constant factors, and it
allows algorithms to be compared by how their running time scales with input size.
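The divide, conquer and combine steps described above can be sketched as follows (a minimal illustration, not a tuned implementation):

```cpp
#include <vector>
using namespace std;

// Combine: merge the two sorted halves a[lo..mid] and a[mid+1..hi].
void merge(vector<int>& a, int lo, int mid, int hi) {
    vector<int> tmp;
    int i = lo, j = mid + 1;
    while (i <= mid && j <= hi)
        tmp.push_back(a[i] <= a[j] ? a[i++] : a[j++]);
    while (i <= mid) tmp.push_back(a[i++]);
    while (j <= hi)  tmp.push_back(a[j++]);
    for (int k = lo; k <= hi; k++) a[k] = tmp[k - lo];
}

// Divide: split at the middle index; Conquer: sort the halves recursively.
void mergeSort(vector<int>& a, int lo, int hi) {
    if (lo >= hi) return;              // base case: 0 or 1 element
    int mid = lo + (hi - lo) / 2;
    mergeSort(a, lo, mid);
    mergeSort(a, mid + 1, hi);
    merge(a, lo, mid, hi);
}
```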

Answer Keys:

Group – A
(Multiple Choice Type Questions)
1. (i) b (ii) c (iii) a (iv) a (v) c (vi) d (vii) b (viii) d (ix) d (x) c

Group – B
(Short Answer Type Questions)
2.

3.

4.

Group – C
(Long Answer Type Questions)
5.
6.

7.

MODULE - II

Group – A
(Multiple Choice Type Questions)
Each of 1 mark

1. (i) Which of the following problems cannot be solved using a greedy approach?
a) Huffman code b) Minimum spanning tree c) Job scheduling d) 0-1 Knapsack

(ii) The travelling salesman problem can be solved in:

a) Polynomial time using dynamic programming algorithm
b) Polynomial time using branch-and-bound algorithm
c) Exponential time using dynamic programming algorithm or branch-and-bound algorithm
d) Polynomial time using backtracking algorithm

(iii) The time complexity of 0/1 knapsack, where n is the number of items and W is the
capacity of the knapsack, is

a) O(W) b) O(n) c) O(nW) d) O(log nW)

(iv) In the development of dynamic programming, the value of an optimal solution is computed in

a) Top down fashion b) Bottom up fashion c) Both d) None

(v) Which algorithm strategy builds up a solution by choosing the option that looks the best
at every step?

a) greedy method b) branch and bound c) dynamic programming d) divide and conquer

(vi) Fractional knapsack problem is also known as

a) 0/1 knapsack problem b) Continuous knapsack problem c) Divisible knapsack problem d) Non continuous knapsack problem

(vii) Which of the following is/are property/properties of a dynamic programming problem?

a) Evolutionary Approach b) Require More Time c) Greedy Approach d) Optimal Substructure and Overlapping Subproblems

(viii) Which of the following is used for solving the N Queens Problem?

a) Greedy algorithm b) Dynamic programming c) Backtracking d) Sorting


(ix) Time complexity of matrix chain multiplication is

a) O(n²) b) O(n) c) O(n log n) d) O(n³)

(x) Which of the following problems is not solved by Dynamic programming?

a) 0/1 knapsack problem b) Matrix chain multiplication problem c) Travelling Salesman problem d) Fractional knapsack problem

Group – B
(Short Answer Type Questions)
Each of 5 marks
2. What are the salient differences between Dynamic Programming and Greedy approach?
What are the basic characteristics of Dynamic Programming? 3+2

Feature: Greedy method / Dynamic programming

• Feasibility: In a greedy algorithm, we make whatever choice seems best at the moment, in
the hope that it will lead to a globally optimal solution. / In dynamic programming, we make
a decision at each step considering the current problem and the solutions to previously
solved subproblems, to calculate the optimal solution.
• Optimality: In the greedy method there is sometimes no guarantee of getting an optimal
solution. / Dynamic programming is guaranteed to generate an optimal solution, as it
generally considers all possible cases and then chooses the best.
• Recursion: A greedy method follows the problem-solving heuristic of making the locally
optimal choice at each stage. / Dynamic programming is an algorithmic technique usually
based on a recurrence that uses previously calculated states.
• Memoization: Greedy is more memory-efficient, as it never looks back or revises previous
choices. / Dynamic programming requires a table for memoization, which increases its memory
complexity.
• Time complexity: Greedy methods are generally faster; for example, Dijkstra's shortest
path algorithm takes O(E log V + V log V) time. / Dynamic programming is generally slower;
for example, the Bellman–Ford algorithm takes O(VE) time.
• Fashion: The greedy method computes its solution by making choices in a serial forward
fashion, never looking back or revising previous choices. / Dynamic programming computes its
solution bottom-up or top-down by synthesizing it from smaller optimal subsolutions.
• Example: Fractional knapsack. / 0/1 knapsack problem.

Dynamic Programming is an algorithmic approach that builds up the solution to a problem by
solving its subproblems recursively. It stores the solutions to subproblems and reuses them
when necessary to avoid solving the same subproblems multiple times. Dynamic Programming is
used to obtain the optimal solution and is useful for problems where the optimal solution
can be obtained by combining optimal solutions to subproblems.

3. State fractional knapsack problem. What are the differences between fractional and 0/1
knapsack problem? 2+3

The Fractional Knapsack Problem is an optimization problem where you are given a
knapsack with a limited weight capacity and a set of items with different weights and
values. The goal is to determine which items to include in the knapsack so that the total
value of the items is maximized while the total weight does not exceed the knapsack’s
capacity. In this problem, you are allowed to break items to maximize the total value of
the knapsack.

The main differences between the Fractional Knapsack Problem and the 0/1 Knapsack
Problem are:

• Fractional Knapsack: solved using a greedy approach. / 0/1 Knapsack: solved using a
dynamic programming approach.
• Fractional Knapsack: items can be broken to maximize the total value of the knapsack. /
0/1 Knapsack: items cannot be broken.
• Fractional Knapsack: has optimal substructure. / 0/1 Knapsack: also has optimal
substructure.
• Fractional Knapsack: generally faster and simpler. / 0/1 Knapsack: generally slower and
more complex.

4. Consider the matrices P, Q and R which are 10 × 20, 20 × 30 and 30 × 40 matrices
respectively. What is the minimum number of multiplications required to multiply the
three matrices?

The three given matrices can be multiplied in two orders:

Case I:

First, we multiply QR, then multiply P by the result. The number of scalar multiplications
for Q(20×30) × R(30×40) = 20 × 30 × 40 = 24000.

Then, multiplying P(10×20) by (QR)(20×40) = 10 × 20 × 40 = 8000.

Therefore, total number of multiplications = 24000 + 8000 = 32000.

Case II:

First, we multiply PQ, then multiply R by the result. The number of scalar multiplications
for P(10×20) × Q(20×30) = 10 × 20 × 30 = 6000.

Then, multiplying (PQ)(10×30) by R(30×40) = 10 × 30 × 40 = 12000.

Therefore, total number of multiplications = 6000 + 12000 = 18000.

From case I and case II we conclude that the minimum number of multiplications is 18000.

Group – C
(Long Answer Type Questions)
Each of 10 marks

5. Explain TSP Problem. Find out the shortest path of Travelling Salesman problem (TSP) of
the following graph with starting vertex 1 of Salesman.

Given a set of cities and distances between every pair of cities, the problem is to find the
shortest possible route that visits every city exactly once and returns to the starting point.

Let φ be the source vertex, i.e., vertex 1; then the distances of the other vertices from
the source are:
𝑔(𝜑, 2) = 10
𝑔(𝜑, 3) = 15
𝑔(𝜑, 4) = 20

Now, the minimum distances covering two vertices:

g(2, {3}) = C2,3 + g(φ, 3) = 35 + 15 = 50
g(2, {4}) = C2,4 + g(φ, 4) = 25 + 20 = 45
g(3, {2}) = C3,2 + g(φ, 2) = 35 + 10 = 45
g(3, {4}) = C3,4 + g(φ, 4) = 30 + 20 = 50
g(4, {3}) = C4,3 + g(φ, 3) = 30 + 15 = 45
g(4, {2}) = C4,2 + g(φ, 2) = 25 + 10 = 35

Now, the minimum distances covering three vertices:

g(2, {3,4}) = min{ C2,3 + g(3, {4}), C2,4 + g(4, {3}) } = min{ 35 + 50, 25 + 45 } = 70
g(3, {2,4}) = min{ C3,2 + g(2, {4}), C3,4 + g(4, {2}) } = min{ 35 + 45, 30 + 35 } = 65
g(4, {2,3}) = min{ C4,2 + g(2, {3}), C4,3 + g(3, {2}) } = min{ 25 + 50, 30 + 45 } = 75

Now, the minimum distance covering all four vertices:

g(1, {2,3,4}) = min{ C1,2 + g(2, {3,4}), C1,3 + g(3, {2,4}), C1,4 + g(4, {2,3}) }
             = min{ 10 + 70, 15 + 65, 20 + 75 } = 80
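The result can be cross-checked by brute force. The sketch below (function name is illustrative) enumerates every tour of the 4-city example, using the symmetric cost matrix implied by the distances in the working above.

```cpp
#include <vector>
#include <algorithm>
using namespace std;

// Cost matrix for the 4-city example (vertices 1..4 mapped to indices 0..3),
// assumed symmetric, taken from the distances used in the working above.
const int C[4][4] = {
    { 0, 10, 15, 20},
    {10,  0, 35, 25},
    {15, 35,  0, 30},
    {20, 25, 30,  0}
};

// Try every ordering of cities 2..4, starting and ending at city 1.
int tspBruteForce() {
    vector<int> perm = {1, 2, 3};   // indices of cities 2, 3, 4
    int best = 1000000000;
    do {
        int cost = C[0][perm[0]] + C[perm[0]][perm[1]]
                 + C[perm[1]][perm[2]] + C[perm[2]][0];
        best = min(best, cost);
    } while (next_permutation(perm.begin(), perm.end()));
    return best;
}
```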

6. Define Matrix Chain Multiplication. Give the algorithm for matrix chain multiplication.
Find the time complexity of the algorithm. 2+6+2

Matrix Chain Multiplication is an optimization problem that can be solved using dynamic
programming. Given a sequence of matrices, the goal is to find the most efficient way to
multiply these matrices together. The problem is not actually to perform the
multiplications, but merely to decide the sequence of the matrix multiplications involved.

The time complexity of the Matrix Chain Multiplication problem, solved using dynamic
programming, is O(n^3). This is because there are O(n^2) unique sub-problems to any
given problem and for every such sub-problem there could be O(n) splits possible.
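A sketch of the standard bottom-up O(n³) dynamic programming algorithm the question asks for, where m[i][j] is the minimum number of scalar multiplications needed to compute the product of matrices i through j (function name is illustrative):

```cpp
#include <vector>
#include <climits>
#include <algorithm>
using namespace std;

// p holds the dimensions: matrix Ai is p[i-1] x p[i], so n = p.size() - 1.
// m[i][j] = minimum scalar multiplications to compute the product Ai..Aj.
int matrixChainOrder(const vector<int>& p) {
    int n = (int)p.size() - 1;                 // number of matrices
    vector<vector<int>> m(n + 1, vector<int>(n + 1, 0));
    for (int len = 2; len <= n; len++) {       // chain length
        for (int i = 1; i + len - 1 <= n; i++) {
            int j = i + len - 1;
            m[i][j] = INT_MAX;
            for (int k = i; k < j; k++) {      // try every split point
                int cost = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j];
                m[i][j] = min(m[i][j], cost);
            }
        }
    }
    return m[1][n];
}
```

For the P, Q, R instance from Group B (dimensions 10, 20, 30, 40) this returns 18000, matching the hand computation.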

7. State n-queens problem and Explain 8-queens problem using backtracking. 4+6
The N-Queens problem is to place n queens on an n × n chessboard so that no two queens
attack each other by being in the same row, column or diagonal.

The 8-queens problem is a classic puzzle that involves placing eight queens on a
chessboard in such a way that no two queens can attack each other. A queen can attack
another queen if they are on the same row, column, or diagonal. One way to solve this
problem is to use a backtracking algorithm, which tries different positions for the queens
and discards those that are not valid.

The basic idea of the backtracking algorithm is to start from the first row and try to
place a queen in each column of that row. If the queen can be placed safely, meaning it
does not conflict with any previously placed queen, then we move to the next row and
repeat the process. If we move past the last row with all queens placed safely, then we
have found a valid solution and we can print it or store it. If we cannot place a queen in
any column of a row, then we backtrack to the previous row and try a different column for
the last placed queen. We keep doing this until we find all possible solutions or we
exhaust all options.

The following pseudocode illustrates the backtracking algorithm for the 8-queens
problem:

#include <iostream>
#include <cstdlib>
using namespace std;

// A global array to store the positions of the queens
// board[i] = j means there is a queen at row i and column j
int board[8];

// A function to check if a queen can be placed at row i and column j
// It returns true if it is safe, false otherwise
bool isSafe(int i, int j) {
    for (int k = 0; k < i; k++) {
        // Check if there is a queen in the same column
        if (board[k] == j) return false;
        // Check if there is a queen on the same diagonal
        if (abs(board[k] - j) == abs(k - i)) return false;
    }
    // If none of the above conditions hold, it is safe to place the queen
    return true;
}

// A recursive function to find all solutions for n queens starting from row i
void solveNQueens(int i, int n) {
    // Base case: if i == n, all rows are filled and we have found a solution
    if (i == n) {
        // Print or store the solution
        for (int k = 0; k < n; k++) cout << board[k] << " ";
        cout << endl;
        return;
    }
    // Recursive case: try each column in row i
    for (int j = 0; j < n; j++) {
        // Check if it is safe to place a queen at row i and column j
        if (isSafe(i, j)) {
            board[i] = j;             // place the queen
            solveNQueens(i + 1, n);   // move to the next row
            board[i] = -1;            // remove the queen and backtrack
        }
    }
}

Answer Keys:

Group – A
(Multiple Choice Type Questions)
1. (i) d (ii) c (iii) c (iv) b (v) a (vi) b

(vii) d (viii) c (ix) d (x) d

Group – B
(Short Answer Type Questions)
2.

3.

4. 18000

Group – C
(Long Answer Type Questions)
5. 80
6.

7.

MODULE - III

Group – A
(Multiple Choice Type Questions)
Each of 1 mark

1. (i) The Depth First Search traversal of a graph will result in?

a) Linked List b) Tree c) Graph with back edges d) Array

(ii) What is the running time of Dijkstra’s algorithm using the binary min-heap method?

a) O(V) b) O (V log V) c) O(E) d) O (E log V)

(iii) Dijkstra’s Algorithm cannot be applied on ______________

a) Directed and weighted graphs b) Graphs having negative weight function c) Unweighted graphs d) Undirected and unweighted graphs

(iv) Which of the following is false in the case of a spanning tree of a graph G?
a) It is tree that spans G b) It is a subgraph of the G c) It includes every vertex of the G
d) It can be either cyclic or acyclic

(v) Dijkstra’s Algorithm is the prime example for ___________

a) Greedy algorithm b) Branch and bound c) Backtracking d) Dynamic programming

(vi) Which of the following algorithms can be used to most efficiently determine the presence of a
cycle in a given graph

a) Depth First Search b) Breadth First Search c) Prim's Minimum Spanning Tree Algorithm
d) Kruskal's Minimum Spanning Tree Algorithm

(vii) A person wants to visit some places. He starts from a vertex and then wants to visit every place
connected to this vertex and so on. What algorithm he should use?

a) Depth First Search b) Prim's algorithm c) Kruskal's algorithm d) Breadth First Search

(viii) Which of the following algorithm can be used to solve the Hamiltonian path problem
efficiently?
a) branch and bound b) iterative improvement c) divide and conquer d) greedy
algorithm

(ix) Backtracking algorithm is implemented by constructing a tree of choices called as?

a) State-space tree b) State-chart tree c) Node tree d) Backtracking tree

(x) What approach is being followed in Floyd Warshall Algorithm?

a) Greedy Technique b) Dynamic Programming c) Linear Programming d) Backtracking

Group – B
(Short Answer Type Questions)
Each of 5 marks
2. Explain the basic principle of Backtracking and list the applications of Backtracking.

Backtracking is a technique for solving problems that involve searching for a solution
among many possible options. Backtracking tries to build a solution incrementally, one
step at a time, and discards any partial solution that does not satisfy some constraints.
Backtracking can be used to solve problems such as finding all possible ways to arrange n
queens on a chessboard, solving Sudoku puzzles, or generating all permutations of a given
string.
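One of the listed applications, generating all permutations of a string, can be sketched with the classic choose/explore/un-choose pattern (a minimal illustration):

```cpp
#include <string>
#include <vector>
#include <algorithm>
using namespace std;

// Build each permutation one position at a time; undo (backtrack) the last
// choice after exploring it, then try the next option for that position.
void permute(string& s, int pos, vector<string>& out) {
    if (pos == (int)s.size()) {      // complete permutation built
        out.push_back(s);
        return;
    }
    for (int i = pos; i < (int)s.size(); i++) {
        swap(s[pos], s[i]);          // choose
        permute(s, pos + 1, out);    // explore
        swap(s[pos], s[i]);          // un-choose (backtrack)
    }
}
```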

3. What is the principal difference between dynamic programming and divide and conquer
techniques? What is meant by the principle of optimality? 3+2

The principal difference between dynamic programming and divide and conquer is that
dynamic programming exploits overlapping subproblems to optimize the solution, while
divide and conquer works on non-overlapping (independent) subproblems to simplify the
problem. Dynamic programming stores the results of subproblems in a table or an array
and reuses them whenever needed. Divide and conquer divides the problem into smaller,
independent subproblems and combines their solutions to obtain the final answer.

The principle of optimality is a concept in dynamic programming which states that an
optimal solution to a problem can be obtained by recursively solving smaller subproblems
in an optimal way. In other words, if a problem has an optimal solution, then any
subproblem of that problem also has an optimal solution, and the optimal solution of the
original problem can be constructed from the optimal solutions of the subproblems.

4. Explain Floyd’s Algorithm for all pair shortest path algorithm with example and analyze its
efficiency

Floyd's algorithm is a method for finding the shortest paths between all pairs of vertices in
a weighted graph. It works by iteratively updating a matrix of distances with the minimum
distance between any two vertices using any intermediate vertex. The algorithm can
handle positive or negative edge weights, but not negative cycles.

An example of Floyd's algorithm is shown below for a directed graph with four vertices and
four weighted edges: 0→1 (5), 0→3 (10), 1→2 (3), 2→3 (1).

Initial matrix of distances:

|   | 0 | 1 | 2 | 3 |
|---|---|---|---|---|
| 0 | 0 | 5 | ∞ | 10 |
| 1 | ∞ | 0 | 3 | ∞ |
| 2 | ∞ | ∞ | 0 | 1 |
| 3 | ∞ | ∞ | ∞ | 0 |

Iteration k = 0 (vertex 0 as intermediate): no entries change, since no edge enters vertex 0.

Iteration k = 1 (vertex 1 as intermediate): d[0][2] = d[0][1] + d[1][2] = 5 + 3 = 8.

|   | 0 | 1 | 2 | 3 |
|---|---|---|---|---|
| 0 | 0 | 5 | 8 | 10 |
| 1 | ∞ | 0 | 3 | ∞ |
| 2 | ∞ | ∞ | 0 | 1 |
| 3 | ∞ | ∞ | ∞ | 0 |

Iteration k = 2 (vertex 2 as intermediate): d[0][3] = min(10, 8 + 1) = 9 and d[1][3] = 3 + 1 = 4.

|   | 0 | 1 | 2 | 3 |
|---|---|---|---|---|
| 0 | 0 | 5 | 8 | 9 |
| 1 | ∞ | 0 | 3 | 4 |
| 2 | ∞ | ∞ | 0 | 1 |
| 3 | ∞ | ∞ | ∞ | 0 |

Iteration k = 3 (vertex 3 as intermediate): no entries change, since no edge leaves vertex 3.

Final matrix of distances (same as after k = 2):

|   | 0 | 1 | 2 | 3 |
|---|---|---|---|---|
| 0 | 0 | 5 | 8 | 9 |
| 1 | ∞ | 0 | 3 | 4 |
| 2 | ∞ | ∞ | 0 | 1 |
| 3 | ∞ | ∞ | ∞ | 0 |

The efficiency of Floyd's algorithm is O(n^3), where n is the number of vertices in the
graph. This is because it performs n iterations, and each iteration takes O(n^2) time to
update the matrix.
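The iterative matrix updating described above can be sketched as follows (a minimal version; INF stands in for the ∞ entries):

```cpp
#include <vector>
#include <algorithm>
using namespace std;

const int INF = 1000000000;   // stands in for the "infinity" entries

// Classic triple loop: after iteration k, dist[i][j] holds the shortest
// i -> j distance using only intermediate vertices from {0, ..., k}.
vector<vector<int>> floydWarshall(vector<vector<int>> dist) {
    int n = (int)dist.size();
    for (int k = 0; k < n; k++)
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                if (dist[i][k] < INF && dist[k][j] < INF)
                    dist[i][j] = min(dist[i][j], dist[i][k] + dist[k][j]);
    return dist;
}
```

Running it on the four-vertex example gives the same final matrix as the hand iterations.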

Group – C
(Long Answer Type Questions)
Each of 10 marks

5. What is minimum spanning tree?

Generate and compute the cost of minimum cost spanning tree for the following graph using
Prim's algorithm.

What are the differences between Prims and Kruskal algorithm? 2+5+3

A minimum spanning tree (MST) is a way of connecting all the nodes of a graph with the
least possible total edge weight. A graph is a set of points (nodes) and lines (edges) that link
them. An edge-weighted graph assigns a numerical value (weight) to each edge, which could
represent distance, cost, or any other quantity. A spanning tree is a subset of edges that
forms a tree (no cycles) and includes all the nodes. An MST is a spanning tree that has the
smallest sum of edge weights among all possible spanning trees.

…[missing]…
Prims and Kruskal algorithm are two methods for finding a minimum spanning tree in a
graph. The main differences between them are:

• Prim's algorithm starts from a single vertex and grows the tree by adding the cheapest
edge connected to it. / Kruskal's algorithm considers all edges globally and grows a forest
by adding the cheapest edge that does not form a cycle.
• Prim's uses a priority queue to store the candidate edges. / Kruskal's uses a disjoint-set
data structure to check for cycles.
• Prim's is more efficient for dense graphs. / Kruskal's is more efficient for sparse graphs.

6. Write down Dijkstra algorithm. Using Dijkstra’s Algorithm, find the shortest distance from
source vertex ‘S’ to remaining vertices in the following graph-

4+6
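Since the graph for this question is not reproduced here, the following is only a generic sketch of Dijkstra's algorithm using a binary heap (O(E log V)); the adjacency-list format and the tiny graph used below are illustrative assumptions, not the question's graph.

```cpp
#include <vector>
#include <queue>
#include <functional>
using namespace std;

const int INF = 1000000000;

// adj[u] holds (neighbour, weight) pairs; returns shortest distances from src.
vector<int> dijkstra(const vector<vector<pair<int,int>>>& adj, int src) {
    int n = (int)adj.size();
    vector<int> dist(n, INF);
    // min-heap of (distance, vertex) pairs
    priority_queue<pair<int,int>, vector<pair<int,int>>, greater<>> pq;
    dist[src] = 0;
    pq.push({0, src});
    while (!pq.empty()) {
        auto [d, u] = pq.top(); pq.pop();
        if (d > dist[u]) continue;           // skip stale heap entries
        for (auto [v, w] : adj[u])
            if (dist[u] + w < dist[v]) {
                dist[v] = dist[u] + w;       // relax edge (u, v)
                pq.push({dist[v], v});
            }
    }
    return dist;
}
```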

7. Find the optimal solution for the fractional knapsack problem making use of greedy
approach. Consider-
n=5

w=60 kg

(w1, w2, w3, w4, w5) = (5, 10, 15, 22, 25)
(b1, b2, b3, b4, b5) = (30, 40, 45, 77, 90)
What are the differences between Fractional Knapsack and 0/1 Knapsack? What is
Topological sorting? 5+3+2
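A greedy sketch for this instance (standard value/weight-ratio ordering; the struct and function names are illustrative): sort items by decreasing value/weight ratio, take whole items while they fit, then a fraction of the next item.

```cpp
#include <vector>
#include <algorithm>
using namespace std;

struct Item { double weight, value; };

// Greedy: sort by value/weight ratio, take whole items while they fit,
// then take a fraction of the next item to fill the remaining capacity.
double fractionalKnapsack(vector<Item> items, double capacity) {
    sort(items.begin(), items.end(), [](const Item& a, const Item& b) {
        return a.value / a.weight > b.value / b.weight;
    });
    double total = 0;
    for (const Item& it : items) {
        if (capacity >= it.weight) {
            total += it.value;          // take the whole item
            capacity -= it.weight;
        } else {
            total += it.value * (capacity / it.weight);  // take a fraction
            break;
        }
    }
    return total;
}
```

For the given instance (weights 5, 10, 15, 22, 25; values 30, 40, 45, 77, 90; capacity 60 kg) the ratios are 6, 4, 3, 3.5, 3.6, so the greedy order is items 1, 2, 5, 4, 3; the maximum value obtained is 230.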

Greedy solution: the value/weight ratios are (6, 4, 3, 3.5, 3.6). In decreasing ratio order
we take item 1 (5 kg, value 30), item 2 (10 kg, value 40) and item 5 (25 kg, value 90)
completely, using 40 kg. The remaining 20 kg is filled with a 20/22 fraction of item 4,
contributing 77 × 20/22 = 70. Maximum total value = 30 + 40 + 90 + 70 = 230.
1. The 0/1 knapsack problem is solved using a dynamic programming approach. / The fractional
knapsack problem is solved using a greedy approach.
2. The 0/1 knapsack problem has optimal substructure. / The fractional knapsack problem also
has optimal substructure.
3. In the 0/1 knapsack problem, we are not allowed to break items. / In the fractional
knapsack problem, we can break items to maximize the total value of the knapsack.
4. The 0/1 knapsack problem finds the most valuable subset of items whose total weight is at
most the capacity. / The fractional knapsack problem can fill the capacity exactly, because
fractions of items may be taken.
5. In the 0/1 knapsack problem we take objects in whole (integer) units. / In the fractional
knapsack problem we can take objects in fractions.
6. The 0/1 knapsack problem does not have the greedy-choice property. / The fractional
knapsack problem does have the greedy-choice property.

Topological sorting is a way of ordering the nodes of a directed acyclic graph (DAG) such that
for every edge from node u to node v, u comes before v in the ordering. Topological sorting
can be used to find a valid sequence of tasks that depend on each other, such as courses in a
curriculum or jobs in a pipeline.
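Kahn's algorithm is one common way to compute such an ordering; the in-degree/queue approach below is a minimal sketch (DFS finishing times are an equally valid alternative):

```cpp
#include <vector>
#include <queue>
using namespace std;

// Kahn's algorithm: repeatedly output a vertex with in-degree 0 and remove
// its outgoing edges. Returns an empty vector if the graph has a cycle.
vector<int> topologicalSort(const vector<vector<int>>& adj) {
    int n = (int)adj.size();
    vector<int> indeg(n, 0), order;
    for (int u = 0; u < n; u++)
        for (int v : adj[u]) indeg[v]++;
    queue<int> q;
    for (int u = 0; u < n; u++)
        if (indeg[u] == 0) q.push(u);       // vertices with no prerequisites
    while (!q.empty()) {
        int u = q.front(); q.pop();
        order.push_back(u);
        for (int v : adj[u])
            if (--indeg[v] == 0) q.push(v); // all prerequisites of v done
    }
    return (int)order.size() == n ? order : vector<int>{};  // cycle check
}
```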
Answer Keys:

Group – A
(Multiple Choice Type Questions)
1. (i) b (ii) d (iii) b (iv) d (v) a (vi) a (vii) d

(viii) a (ix) a (x) b

Group – B
(Short Answer Type Questions)
2.

3.

4.

Group – C
(Long Answer Type Questions)
5. 99

6.

7.
MODULE - IV

Group – A
(Multiple Choice Type Questions)
Each of 1 mark

1. (i) Hamiltonian path problem is _________

a) NP problem b) N class problem c) P class problem d) NP complete problem

(ii) Which of the following shortest path algorithms cannot detect the presence of a
negative weight cycle in a graph?

a) Bellman Ford Algorithm b) Floyd Warshall Algorithm c) Dijsktra’s Algorithm d) None

(iii) Minimum spanning tree is an attribute of

a) Arrays b) Weighted graphs c) Unweighted graphs d) None

(iv) If a minimum of 3 colors is needed to properly color a graph, then its chromatic number is

a) 3 b) 1 c) 2 d) none

(v) Of the following given options, which one of the following is a correct option that
provides an optimal solution for 4-queens problem?

a) (3,1,4,2) b) (2,3,1,4) c) (4,3,2,1) d) (4,2,3,1)

(vi) BFS on a graph G = (V, E) has running time

a) O(|V|+|E|) b) O(|V|) c) O(|E|) d) None

(vii) Which of the following algorithms solves the all-pair shortest path problem?

a) Dijkstra’s Algorithm b) Floyd’s Warshall’s c) Prim’s d) Kruskal’s

(viii) Minimum number of unique colors required for vertex coloring of a graph is called?

a) vertex matching b) chromatic index c) chromatic number d) color number

(ix) What is vertex coloring of a graph?


a) A condition where any two vertices having a common edge should always have same
color

b) A condition where all vertices should have a different color

c) A condition where all vertices should have same color

d) A condition where any two vertices having a common edge should not have same color

(x) Which design strategy stops the execution when it finds a solution, and otherwise
restarts the problem from the top?

a) Divide and conquer b) Backtracking c) Branch and bound d) Dynamic programming

Group – B
(Short Answer Type Questions)
Each of 5 marks
2. Distinguish between breadth first search and depth first search with proper example.

1. Stands for: BFS stands for Breadth First Search. / DFS stands for Depth First Search.
2. Data structure: BFS uses a queue for finding the shortest path. / DFS uses a stack.
3. Definition: BFS is a traversal approach in which we first walk through all nodes on the
same level before moving on to the next level. / DFS is a traversal approach in which the
traversal begins at the root node and proceeds through the nodes as far as possible until
we reach a node with no unvisited neighbours.
4. Technique: BFS can find a single-source shortest path in an unweighted graph, because it
reaches a vertex with the minimum number of edges from the source. / In DFS, we might
traverse more edges to reach a destination vertex from the source.
5. Conceptual difference: BFS builds the tree level by level. / DFS builds the tree
sub-tree by sub-tree.
6. Approach used: BFS works on the concept of FIFO (First In First Out). / DFS works on the
concept of LIFO (Last In First Out).
7. Suitable for: BFS is more suitable for searching vertices closer to the given source. /
DFS is more suitable when there are solutions away from the source.
8. Suitability for decision trees: BFS considers all neighbours first and is therefore not
suitable for decision-making trees used in games or puzzles. / DFS is more suitable for
game or puzzle problems: we make a decision, explore all paths through it, and stop if it
leads to a winning situation.
9. Time complexity: The time complexity of BFS is O(V + E) when an adjacency list is used
and O(V²) when an adjacency matrix is used. / The time complexity of DFS is also O(V + E)
with an adjacency list and O(V²) with an adjacency matrix, where V is the number of
vertices and E the number of edges.
10. Visiting of siblings/children: In BFS, siblings are visited before the children. / In
DFS, children are visited before the siblings.
11. Removal of traversed nodes: In BFS, a node is removed from the queue once all its
neighbours have been discovered. / In DFS, visited nodes are pushed on the stack and popped
when there are no more nodes to visit from them.
12. Backtracking: In BFS there is no concept of backtracking. / DFS is a recursive
algorithm that uses the idea of backtracking.
13. Applications: BFS is used in applications such as bipartite graph testing and shortest
paths. / DFS is used in applications such as acyclicity testing and topological ordering.
14. Memory: BFS requires more memory. / DFS requires less memory.
15. Optimality: BFS is optimal for finding the shortest path (in unweighted graphs). / DFS
is not optimal for finding the shortest path.
16. Space complexity: In BFS, the space complexity is more critical as compared to time
complexity. / DFS has lower space complexity because at any time it needs to store only a
single path from the root to a leaf node.
17. Speed: BFS is slow as compared to DFS. / DFS is fast as compared to BFS.
18. Trapping in loops: On infinite graphs, BFS does not get trapped going down a single
path. / DFS may descend forever along one path and be trapped in an infinite loop.
19. When to use: When the target is close to the source, BFS performs better. / When the
target is far from the source, DFS is preferable.

3. Differentiate Feasible and Optimal Solution. Write short notes on Brute Force Algorithm.
3+2

Feasible Solution vs Optimal Solution:

1. A feasible solution satisfies all the constraints of a problem; an optimal solution maximizes or minimizes the objective function of the problem.
2. There can be more than one feasible solution for a given problem; there can be only one optimal solution, or multiple equivalent optimal solutions, for a given problem.
3. A feasible solution may or may not be optimal; an optimal solution is always feasible.
4. Finding a feasible solution is usually easier than finding an optimal solution; finding an optimal solution is usually harder than finding a feasible solution.
5. A feasible solution can be improved by applying optimization techniques; an optimal solution cannot be improved further by applying optimization techniques.

A brute force approach is an approach that generates all the possible solutions in search of a satisfactory solution to a given problem. The brute force algorithm tries out all the possibilities until a satisfactory solution is found.
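As an illustrative sketch (the subset-sum instance and function name below are invented, not from the text), a brute-force search that tries every subset until one meets the target:

```python
from itertools import combinations

def subset_sum_brute(nums, target):
    """Try every subset until one sums to the target (2^n possibilities)."""
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return list(combo)
    return None  # no subset works
```

Checking all 2^n subsets is what makes brute force simple but time-consuming.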

4. Compare backtracking and branch bound techniques. What are the searching techniques
that are commonly used in Branch-and-Bound method? 4+1

Backtracking vs Branch-and-Bound:

1. Approach: Backtracking is used to find all possible solutions available to a problem; when it realises it has made a bad choice, it undoes the last choice by backing up, and it searches the state-space tree until it finds a solution. Branch-and-bound is used to solve optimization problems; when it realises that the current partial solution cannot lead to a better solution than the best one already found, it abandons that partial solution, and it searches the state-space tree completely to obtain the optimal solution.
2. Traversal: Backtracking traverses the state-space tree in DFS (depth-first search) manner; branch-and-bound may traverse the tree in any manner, DFS or BFS.
3. Function: Backtracking involves a feasibility function; branch-and-bound involves a bounding function.
4. Problems: Backtracking is used for solving decision problems; branch-and-bound is used for solving optimization problems.
5. Searching: In backtracking, the state-space tree is searched until a solution is obtained; in branch-and-bound, the optimal solution may be present anywhere in the state-space tree, so the tree needs to be searched completely.
6. Efficiency: Backtracking is more efficient; branch-and-bound is less efficient.
7. Applications: Backtracking is useful in solving the N-queens problem, sum of subsets, the Hamiltonian cycle problem, and the graph coloring problem; branch-and-bound is useful in solving the knapsack problem and the travelling salesman problem.
8. Scope: Backtracking can solve almost any constraint-satisfaction problem (chess, sudoku, etc.); branch-and-bound cannot, since it needs a bounding function over an objective.
9. Nodes: In backtracking, nodes of the state-space tree are explored in depth-first order; in branch-and-bound, nodes may be explored in depth-first or breadth-first order.
10. Next move: In backtracking, the next move from the current state can lead to a bad choice; in branch-and-bound, the next move is always towards a better solution.
11. Solution: In backtracking, the search stops on the first successful solution found in the state-space tree; in branch-and-bound, the entire state-space tree is searched in order to find the optimal solution.

The searching techniques commonly used in the branch-and-bound method are FIFO (breadth-first) search, LIFO (depth-first) search, and least-cost (LC, or best-first) search.

Group – C
(Long Answer Type Questions)
Each of 10 marks

5. Describe graph coloring problem and write an algorithm for m-coloring problem?
Describe the backtracking algorithm to colour a graph. 6+4

Graph coloring problem is a way of assigning colors to the vertices of a graph such that no
two adjacent vertices share the same color. This problem has many applications in
scheduling, map coloring, register allocation, etc. One of the variations of this problem is
the m-coloring problem, which asks whether it is possible to color a graph using at most
m colors.

Algorithm for graph coloring:
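A minimal backtracking sketch of the m-coloring algorithm in Python; the function names and the adjacency-list representation are assumptions for illustration:

```python
def m_coloring(graph, m):
    """Backtracking: try each of m colors on vertex v; recurse on v+1 if safe.

    graph is an adjacency list (graph[v] = neighbours of v).
    Returns a list of colors 1..m per vertex, or None if m colors do not suffice.
    """
    n = len(graph)
    colors = [0] * n  # 0 means uncolored

    def safe(v, c):
        # no adjacent vertex may already carry color c
        return all(colors[u] != c for u in graph[v])

    def solve(v):
        if v == n:
            return True  # every vertex colored
        for c in range(1, m + 1):
            if safe(v, c):
                colors[v] = c
                if solve(v + 1):
                    return True
                colors[v] = 0  # backtrack: undo the bad choice
        return False

    return colors if solve(0) else None
```

For example, a triangle needs 3 colors, so `m_coloring([[1, 2], [0, 2], [0, 1]], 2)` fails while `m = 3` succeeds.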

6. Why do we perform topological sorts only on DAGs? Explain. Write the applications of
depth first search algorithm. Explain Hamiltonian cycles with examples. 5+2+3

We perform topological sorts only on DAGs (directed acyclic graphs) because they have a defining property: they do not contain any cycles. This guarantees that there is an ordering of the vertices such that for every edge (u, v), u comes before v in the ordering; if the graph contained a cycle, no such ordering could exist, since each vertex on the cycle would have to come before itself. A topological sort can be used to determine the dependencies of tasks or events in a DAG, such as scheduling courses or compiling modules.
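As an illustrative sketch, Kahn's algorithm (one standard way to topologically sort) repeatedly removes vertices of in-degree 0; the example DAG below is invented:

```python
from collections import deque

def topological_sort(graph):
    """Kahn's algorithm: repeatedly output vertices with in-degree 0.

    graph maps each vertex to its list of successors. Returns an ordering,
    or None if the graph contains a cycle (i.e. it is not a DAG).
    """
    indeg = {u: 0 for u in graph}
    for u in graph:
        for v in graph[u]:
            indeg[v] += 1
    queue = deque(u for u in graph if indeg[u] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in graph[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    # if a cycle exists, its vertices never reach in-degree 0
    return order if len(order) == len(graph) else None
```

On a cyclic graph the function returns None, which is exactly why topological sorting is restricted to DAGs.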

Applications of Depth First Search:


1. Detecting cycle in a graph
2. Path Finding
3. Topological Sorting
4. To test if a graph is bipartite
5. Finding Strongly Connected Components of a graph
6. Solving puzzles with only one solution
7. Web crawlers
8. Maze generation.
9. Model checking

A Hamiltonian cycle (or Hamiltonian circuit) is a path that visits each vertex exactly once
such that there is an edge (in the graph) from the last vertex to the first vertex.
e.g. {0, 3, 4, 2, 1, 0 } is a Hamiltonian cycle.

(0)--(1)--(2)
| /\ |
| / \ |
|/ \|
(3)--------(4)
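A backtracking search for a Hamiltonian cycle can be sketched as follows; the adjacency list is my reading of the drawing above, and the helper names are my own:

```python
ham_graph = {0: [1, 3], 1: [0, 2, 3, 4], 2: [1, 4], 3: [0, 1, 4], 4: [1, 2, 3]}

def hamiltonian_cycle(graph, n):
    """Backtracking: extend a path one vertex at a time; accept when all n
    vertices are used and the last vertex connects back to the first."""
    path = [0]
    used = {0}

    def extend():
        if len(path) == n:
            return path[0] in graph[path[-1]]  # closing edge exists?
        for v in graph[path[-1]]:
            if v not in used:
                path.append(v)
                used.add(v)
                if extend():
                    return True
                used.discard(path.pop())  # backtrack
        return False

    return path + [0] if extend() else None
```

On this graph the search finds a cycle such as {0, 1, 2, 4, 3, 0}; the {0, 3, 4, 2, 1, 0} cycle from the text is the same cycle traversed in the opposite direction.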

7. What are the steps for dynamic programming? What are the advantages of dynamic
programming method over divide & conquer method? Explain Kruskal’s Algorithm to find
MST with example. 3+2+5

The steps for dynamic programming are:

1. Define the subproblems and the optimal substructure of the original problem.
2. Find a recurrence relation that relates the optimal solutions of the subproblems to the
optimal solution of the original problem.
3. Solve the subproblems using bottom-up approach and store the results in a table or an
array.
4. Construct the optimal solution of the original problem from the results of the
subproblems.
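The four steps above can be illustrated with the 0/1 knapsack problem; this is a sketch with invented example data:

```python
def knapsack(weights, values, capacity):
    """Bottom-up DP. Subproblem: dp[i][w] = best value using the first i items
    within weight w. Recurrence:
        dp[i][w] = max(dp[i-1][w], dp[i-1][w - weights[i-1]] + values[i-1])
    """
    n = len(weights)
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):                      # step 3: fill the table
        for w in range(capacity + 1):
            dp[i][w] = dp[i - 1][w]                # skip item i-1
            if weights[i - 1] <= w:                # or take it, if it fits
                dp[i][w] = max(dp[i][w],
                               dp[i - 1][w - weights[i - 1]] + values[i - 1])
    # step 4: reconstruct the chosen items from the table
    chosen, w = [], capacity
    for i in range(n, 0, -1):
        if dp[i][w] != dp[i - 1][w]:               # item i-1 was taken
            chosen.append(i - 1)
            w -= weights[i - 1]
    return dp[n][capacity], sorted(chosen)
```

For weights [2, 3, 4], values [3, 4, 5] and capacity 5, the table yields value 7 by taking the first two items.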

Dynamic programming has some advantages over divide and conquer, such as:

1. It avoids recomputing the same subproblems multiple times, which can save time
and space.
2. It handles problems with overlapping subproblems efficiently, which plain
divide and conquer does not: divide and conquer solves each overlapping
subproblem again every time it reappears.
3. For optimization problems with overlapping subproblems, it finds the optimal
solution in polynomial time, whereas the equivalent naive divide-and-conquer
recursion typically takes exponential time.

Kruskal's algorithm is a greedy method to find a minimum spanning tree (MST) of a


weighted graph. An MST is a subset of edges that connects all the vertices with the
minimum total edge weight. Kruskal's algorithm works by sorting the edges in ascending
order of their weights and adding them to the MST one by one, as long as they do not
create a cycle. For example, consider the following graph:

A---5---B
| \ |
3 6 4
| \|
C---2---D
The sorted edges are: CD (2), AC (3), BD (4), AB (5), AD (6). The algorithm starts with an
empty MST and adds the edges in order:

1. CD: no cycle, add to MST


2. AC: no cycle, add to MST
3. BD: no cycle, add to MST
4. AB: creates a cycle with AC, CD and BD, skip
5. AD: creates a cycle with AC and CD, skip

The final MST is CD, AC, BD with a total weight of 9.
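A sketch of Kruskal's algorithm with a simple union-find, run on the example graph above (vertex labels A-D mapped to 0-3; the code structure is my own):

```python
def kruskal(n, edges):
    """Sort edges by weight; add each edge unless union-find reports that its
    endpoints are already connected (which would create a cycle)."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):          # ascending order of weight
        ru, rv = find(u), find(v)
        if ru != rv:                       # no cycle: merge the two trees
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
    return mst, total

# (weight, u, v) with A=0, B=1, C=2, D=3
example_edges = [(5, 0, 1), (3, 0, 2), (6, 0, 3), (4, 1, 3), (2, 2, 3)]
```

Running `kruskal(4, example_edges)` reproduces the walkthrough above: CD, AC and BD are accepted, AB and AD are skipped, and the total weight is 9.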

Answer Keys:

Group – A
(Multiple Choice Type Questions)
1. (i) b (ii) c (iii) a (iv) a (v) c (vi) d (vii) b (viii) d (ix) d (x)

Group – B
(Short Answer Type Questions)
2.

3.

4.

Group – C
(Long Answer Type Questions)
5.

6.

7.

MODULE - V

Group – A
(Multiple Choice Type Questions)
Each of 1 marks

1. (i) What does a polynomial time complexity mean?

a) The amount of time taken to complete an algorithm is independent of the number of
inputted elements
b) The amount of time taken to complete an algorithm is independent from the number of
elements inputted
c) The amount of time taken to complete an algorithm is proportional to the power of 2 of
the number of items inputted.
d) The time taken to complete an algorithm will increase at a smaller rate as the number of
elements inputted grows.

(ii) Travelling Salesman Problem is

a) NP Hard b) NP c) NP Complete d) None of these

(iii) A Problem L is NP Complete if and only if

a) L is NP-Hard b) L is NP and NP-Hard c) L is NP d) L is Non- Polynomial

(iv) Problems that cannot be solved by any algorithm are called?

a) Tractable problems b) Intractable problems c) Undecidable problems d) Decidable problems

(v) Which of the following is known to be not an NP-Hard Problem?

a) Vertex Cover Problem
b) 0/1 Knapsack problem
c) Maximal Independent Set Problem
d) Travelling Salesman Problem

(vi) Which one is true of the following?


a) All NP hard problems are NP complete
b) All NP complete problems are NP hard
c) Some NP complete problems are NP hard
d) None of these

(vii) _________ is the class of decision problems that can be solved by non-deterministic
polynomial algorithms.

a) NP b) P c) NP-Hard d) NP-Complete

(viii) How many conditions have to be met if an NP- complete problem is polynomially
reducible?

a) 1 b)2 c)3 d)4

(ix) Which of the following problems is not NP complete?

a) hamiltonian circuit b) bin packing c) partition problem d) halting problem

(x) To which of the following class does a CNF-satisfiability problem belong?

a) NP class b) P class c) NP complete d) NP hard

Group – B
(Short Answer Type Questions)
Each of 5 marks
2. Define the classes NP Hard and NP Complete? Discuss what you mean by polynomial
reductions? 3+2
NP Hard
A problem X is NP-hard if it satisfies the following:
1. If we can solve X in polynomial time, then we can solve all NP problems in
polynomial time.
2. Every problem in NP can be converted (reduced) to X in polynomial time.

NP Complete
A problem is NP-complete if:
1. It is in NP.
2. It is NP-hard.

Polynomial reductions are a way of comparing the difficulty of different
computational problems. A problem A is polynomially reducible to a problem B
(written A ≤p B) if there is an algorithm that can transform any instance of A
into an instance of B in polynomial time, such that the answer to the
transformed instance is the same as the answer to the original instance.

3. If A ≤p B and B ∈ P, then prove that A ∈ P.
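A standard proof sketch, written as a short LaTeX derivation:

```latex
\textbf{Claim.} If $A \le_p B$ and $B \in \mathrm{P}$, then $A \in \mathrm{P}$.

\textbf{Proof.} Since $A \le_p B$, there is a reduction $f$ computable in time
$O(n^k)$ for some constant $k$, such that $x \in A \iff f(x) \in B$.
Since $B \in \mathrm{P}$, there is an algorithm deciding $B$ in time $O(m^c)$
on inputs of length $m$, for some constant $c$.

To decide whether $x \in A$ (with $|x| = n$): compute $f(x)$, then run the
decider for $B$ on $f(x)$. Computing $f(x)$ takes $O(n^k)$ steps, so
$|f(x)| = O(n^k)$; running the decider then takes
$O\big((n^k)^c\big) = O(n^{kc})$ steps. The total running time is polynomial
in $n$, hence $A \in \mathrm{P}$. \qed
```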

4. Explain with an example vertex cover problem.


A vertex cover of a graph is a set of vertices that includes at least one endpoint of every
edge in the graph. For example, consider the following graph:

A---B
/\/\
C---D---E
One possible vertex cover is {A, B, D}, since these three vertices touch all the edges in the
graph. Another possible vertex cover is {B, C, D, E}, which is larger but still valid. The vertex
cover problem is to find the minimum size of a vertex cover for a given graph, which is NP-
hard in general.
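A brute-force search for a minimum vertex cover of the example graph can be sketched as follows; the edge list is my reading of the drawing, and the exponential running time reflects the NP-hardness mentioned above:

```python
from itertools import combinations

def min_vertex_cover(vertices, edges):
    """Brute force: try vertex subsets of increasing size until one
    touches at least one endpoint of every edge."""
    for r in range(len(vertices) + 1):
        for subset in combinations(vertices, r):
            s = set(subset)
            if all(u in s or v in s for u, v in edges):
                return s
    return set(vertices)  # the full vertex set always covers every edge

# edges of the example graph, read from the drawing
cover_edges = [("A", "B"), ("A", "C"), ("A", "D"),
               ("B", "D"), ("B", "E"), ("C", "D"), ("D", "E")]
```

On this graph the minimum cover has size 3, matching {A, B, D} from the text; the larger cover {B, C, D, E} is valid but not minimum.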

Group – C
(Long Answer Type Questions)
Each of 10 marks

5. Distinguish between NP-hard and NP-complete problems. Explore the strategy to prove
that a problem is NP-hard. State and proof Cook’s theorem. 3+3+4

NP-hard vs NP-Complete:

1. NP-hard problems (say X) can be solved if and only if there is an NP-complete
problem (say Y) that can be reduced to X in polynomial time; NP-complete
problems can be solved by a non-deterministic algorithm/Turing machine in
polynomial time.
2. An NP-hard problem does not have to be in NP; an NP-complete problem must be
both in NP and NP-hard.
3. Time is unknown for NP-hard problems; time is known, as it is fixed, for
NP-complete problems.
4. An NP-hard problem need not be a decision problem; an NP-complete problem is
exclusively a decision problem.
5. Not all NP-hard problems are NP-complete; all NP-complete problems are
NP-hard.
6. NP-hardness is typically used for optimization problems; NP-completeness is
used for decision problems.
7. Examples of NP-hard problems: the halting problem, the vertex cover problem,
etc. Examples of NP-complete problems: determining whether a graph has a
Hamiltonian cycle, determining whether a Boolean formula is satisfiable or not,
the circuit-satisfiability problem, etc.

To prove that a problem is NP-hard, you need to find a polynomial-time reduction from a
known NP-hard problem to your problem. This means that you can transform any instance
of the NP-hard problem into an instance of your problem in polynomial time, such that
solving your problem would also solve the NP-hard problem. For example, you could reduce
the subset sum problem, which is NP-hard, to your problem by constructing such a
transformation. This would show that your problem is at least as hard as the subset sum
problem, and therefore NP-hard.
Cook's theorem states that the Boolean satisfiability problem (SAT) is NP-complete, meaning
that any problem in NP can be reduced to SAT in polynomial time. A proof sketch of Cook's
theorem is as follows:

1. Assume that there is a non-deterministic Turing machine M that decides an arbitrary
problem in NP in time p(n), where n is the size of the input and p is some polynomial
function.
2. Construct a Boolean formula F that encodes the computation of M on any input x of
length n, such that F is satisfiable if and only if M accepts x.
3. The formula F has variables corresponding to the tape cells, the state, and the head
position of M at each time step from 0 to p(n).
4. The formula F has clauses that enforce the following constraints:
a. The initial configuration of M on x is correct.
b. The transition function of M is respected at each time step.
c. The final configuration of M is accepting.
5. The formula F can be constructed in polynomial time from M and x, and has size
polynomial in p(n).
6. Therefore, SAT is NP-hard, and since SAT is also in NP, it follows that SAT is NP-complete.

6. Explain in detail about Maximum Flow Problem. Define Ford – Fulkerson Method. 7+3

The maximum flow problem is a type of optimization problem in graph theory. It involves
finding the largest amount of flow that can go from a source node to a sink node in a
network, where each edge has a limit on how much flow it can carry. The problem can be
used to model many real-world situations, such as transportation, communication, and
resource allocation. There are different algorithms to solve the maximum flow problem,
such as Ford-Fulkerson, Edmonds-Karp, and push-relabel.

The Ford-Fulkerson method is an algorithm for finding the maximum flow in a network.
It works by repeatedly finding augmenting paths from the source to the sink and adding
their flow values to the total flow. The algorithm terminates when no more
augmenting paths exist.
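A sketch of the Ford-Fulkerson method using BFS to find augmenting paths (the Edmonds-Karp variant mentioned above); the capacity network in the usage note is an invented example:

```python
from collections import deque

def max_flow(capacity, s, t):
    """Ford-Fulkerson with BFS augmenting paths (Edmonds-Karp).

    capacity is a dict of dicts (capacity[u][v] = residual capacity of edge
    u->v) with an entry for every vertex; it is updated in place.
    """
    flow = 0
    while True:
        # BFS for an augmenting path in the residual network
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, cap in capacity[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow  # no augmenting path left: flow is maximum
        # collect the path edges and find the bottleneck capacity
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(capacity[u][v] for u, v in path)
        # push the bottleneck along the path; add reverse (residual) edges
        for u, v in path:
            capacity[u][v] -= bottleneck
            capacity[v][u] = capacity[v].get(u, 0) + bottleneck
        flow += bottleneck
```

For example, on the network {'s': {'a': 3, 'b': 2}, 'a': {'b': 1, 't': 2}, 'b': {'t': 3}, 't': {}} the maximum flow from s to t is 5.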

7. Explain the satisfiability problem and write the algorithm.

The satisfiability problem (SAT) is the problem of determining if a Boolean formula is
satisfiable or unsatisfiable. Satisfiable means that the Boolean variables can be assigned
values such that the formula turns out to be true. Unsatisfiable means that it is not
possible to assign such values.

One algorithm to solve SAT is based on the idea of converting the formula into conjunctive
normal form (CNF), which is a conjunction (AND) of clauses, where every clause is a
disjunction (OR) of literals. A literal is a variable or its negation. For example, (x1 + x2)(-x1
+ -x2) is a CNF with two clauses and four literals.

The algorithm works as follows:

1. Convert the formula into CNF using logical equivalences and De Morgan's laws.
2. For each clause in the CNF, do the following:
- If the clause contains both a literal and its negation, then the clause is true and can be
ignored.
- If the clause contains only one literal, then assign the value that makes it true and
propagate this value to the rest of the formula using unit propagation. This may simplify
or eliminate some clauses.
- If the clause is empty, then the formula is unsatisfiable and the algorithm terminates
with a negative answer.
3. If there are no more clauses left, then the formula is satisfiable and the algorithm
terminates with a positive answer and a satisfying assignment.
4. If there are still clauses left, but none of them can be simplified or eliminated by the
previous steps, then choose a literal that appears in some clause and assign it a value
arbitrarily. This creates two branches: one where the literal is true and one where it is
false. Apply unit propagation to each branch and recursively apply the algorithm to each
branch until either one of them returns a positive answer or both of them return a
negative answer.
5. If one branch returns a positive answer, then return that answer and assignment. If
both branches return a negative answer, then backtrack to the previous choice point and
try the opposite value for the literal. If there are no more choices left, then return a
negative answer.
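The steps above are essentially the DPLL procedure; a compact sketch, assuming clauses are given in CNF as lists of signed integers (a common convention, e.g. [[1, 2], [-1, -2]] for (x1 + x2)(-x1 + -x2)):

```python
def dpll(clauses):
    """DPLL-style sketch: unit propagation, then branch on a literal and
    backtrack. Returns a set of true literals, or None if unsatisfiable."""

    def simplify(clauses, lit):
        # assign lit True: drop satisfied clauses, delete the falsified literal
        out = []
        for c in clauses:
            if lit in c:
                continue            # clause satisfied, ignore it
            if -lit in c:
                c = c - {-lit}
                if not c:
                    return None     # empty clause: conflict
            out.append(c)
        return out

    def solve(clauses, assignment):
        # step 2: unit propagation
        while True:
            unit = next((next(iter(c)) for c in clauses if len(c) == 1), None)
            if unit is None:
                break
            clauses = simplify(clauses, unit)
            if clauses is None:
                return None
            assignment = assignment | {unit}
        if not clauses:
            return assignment       # step 3: all clauses satisfied
        # step 4: branch on an arbitrary literal, trying both values
        lit = next(iter(clauses[0]))
        for choice in (lit, -lit):
            reduced = simplify(clauses, choice)
            if reduced is not None:
                result = solve(reduced, assignment | {choice})
                if result is not None:
                    return result
        return None                 # step 5: both branches failed

    return solve([set(c) for c in clauses], frozenset())
```

For the example above, the solver returns an assignment making exactly one of x1, x2 true; for the contradiction [[1], [-1]] it returns None.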

Answer Keys:
Group – A
(Multiple Choice Type Questions)
1. (i) c (ii) c (iii) b (iv) c (v) b (vi) b
(vii) a (viii) b (ix) d (x) c

Group – B
(Short Answer Type Questions)
2.
3.
4.

Group – C
(Long Answer Type Questions)
5.
6.

7.
