Credit goes to ZigZag
1 point possible (graded)
The ONLY thing we are interested in when designing programs is that they return the correct answer.
- True
- False
The answer is false because we are also interested in the program's efficiency. Sometimes we will even prefer a program that is less precise but more efficient (see, for example, approximate solutions to the traveling salesman problem).
1 point possible (graded)
When determining asymptotic complexity, we discard all terms except for the one with the largest growth rate.
- True
- False
The answer is true because the term asymptotic refers to very large values of our variable (often called n, denoting either a given integer or the length of a given list). As n grows, the terms with smaller growth rates become insignificant next to the dominant one. For example, for n = 1,000,000, the n^2 term of n^2 + 1000n is a thousand times larger than the 1000n term.
1 point possible (graded)
Bisection search is an example of linear time complexity
- True
- False
The answer is false because bisection search halves the portion of the list being searched on every iteration of its loop. A program that does this cannot have linear time complexity, because the number of times we can divide n by two grows logarithmically, not linearly. The time complexity of bisection search is therefore logarithmic: from 1 = n/(2^i), where i is the number of times we divide n by two, we solve for i using a logarithm, giving i = log2(n).
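As an illustration, here is a minimal iterative sketch of bisection search on a sorted list (the names low, high and mid are mine, not from the lecture):

def bisection_search(L, e):
    """Return True if e is in the sorted list L."""
    low, high = 0, len(L) - 1
    while low <= high:
        mid = (low + high) // 2      # middle index of the current interval
        if L[mid] == e:
            return True
        elif L[mid] < e:
            low = mid + 1            # e can only be in the upper half
        else:
            high = mid - 1           # e can only be in the lower half
    return False

Each iteration halves the interval between low and high, so at most about log2(n) iterations are needed.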
1 point possible (graded)
For large values of n, an algorithm that takes 20000n^2 steps has better time complexity (takes less time) than one that takes 0.001n^5 steps
- True
- False
The answer is true because when we talk about time complexity we discard any constant additive or multiplicative factors. Here we just have to compare n^2 to n^5. Both are of the form n^c, but n^5 has the larger exponent, and the larger c is, the faster n^c grows, making it worse in terms of time complexity.
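To see where the crossover happens, set 20000n^2 = 0.001n^5: this gives n^3 = 2*10^7, i.e. n ≈ 271. A throwaway snippet (my own, not part of the problem) confirms it:

for n in [100, 271, 272, 1000]:
    quadratic = 20000 * n**2    # steps taken by the first algorithm
    quintic = 0.001 * n**5      # steps taken by the second algorithm
    print(n, "quadratic wins" if quadratic < quintic else "quintic wins")

From n = 272 on, the 20000n^2 algorithm always takes fewer steps, no matter how large the constants are.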
1 point possible (graded)
Indirection, as talked about in lecture, means you have to traverse the list more than once.
- True
- False
The answer is false because, as we've seen in lecture, indirection is a way of referring to something using a name, reference, or container instead of the value itself.
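For example (a small illustrative snippet of my own, not from the lecture), a name can refer to a list object rather than holding the values themselves:

grades = [88, 92, 75]
scores = grades      # 'scores' refers to the same list object, not a copy
scores[0] = 90       # mutating through one name...
print(grades[0])     # ...is visible through the other: prints 90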
1 point possible (graded)
The complexity of binary search on a sorted list of n items is O(log n).
- True
- False
The answer is true because binary search is just another name for bisection search, so the analysis from Problem 1-3 applies.
1 point possible (graded)
The worst case time complexity for selection sort is O(n^2).
- True
- False
The answer is true because selection sort has a loop that takes n steps to complete nested inside another loop that takes n steps to complete. This gives us O(n^2) time complexity.
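A minimal sketch of selection sort, assuming the usual swap-based formulation (the lecture's version may differ in details):

def selection_sort(L):
    """Sort the list L in place in increasing order."""
    for i in range(len(L)):                      # outer loop: n iterations
        min_index = i
        for j in range(i + 1, len(L)):           # inner loop: up to n iterations
            if L[j] < L[min_index]:
                min_index = j
        L[i], L[min_index] = L[min_index], L[i]  # place the smallest remaining item at i

The nested loops perform about n^2/2 comparisons regardless of the input, hence O(n^2).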
1 point possible (graded)
The base case for the recursive version of merge sort from lecture is checking ONLY for the list being empty.
- True
- False
The answer is false because the base case for merge sort checks for a list of length 0 (empty) or 1, since such a list is already sorted.
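A sketch of the standard recursive formulation (details may differ from the lecture's code, but the base case is the same):

def merge_sort(L):
    """Return a new sorted copy of the list L."""
    if len(L) < 2:       # base case: a list of length 0 or 1 is already sorted
        return L[:]
    mid = len(L) // 2
    left = merge_sort(L[:mid])
    right = merge_sort(L[mid:])
    # merge the two sorted halves
    result, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    result.extend(left[i:])     # append whatever remains of either half
    result.extend(right[j:])
    return result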
For each of the following expressions, select the order of growth class that best describes it from the following list: O(1), O(log (n)), O(n), O(n*log(n)), O(n^c) or O(c^n). Assume c is some constant.
- O(n) most important term : 0.0000001n
- O(n^c) most important term : 0.0001n^2
- O(n) most important term : 20n
- O(n^c) most important term : 5n^7
- O(n^c) most important term : n^200
- O(n^c) most important term : 30n^2
- O(n*log(n)) most important term : n*log(n)
- O(1) most important term : 3
- O(c^n) most important term : 5^n
- O(c^n) most important term : 2^n
1 point possible (graded)
Consider the following Python procedure. Specify its order of growth.
def modten(n):
    return n%10
The answer is O(1). It takes a single modulo operation; the size of n is irrelevant.
1 point possible (graded)
Consider the following Python procedure. Specify its order of growth.
def multlist(m, n):
    '''
    m is the multiplication factor
    n is a list.
    '''
    result = []
    for i in range(len(n)):
        result.append(m*n[i])
    return result
The answer is O(n). This procedure goes through the list once, and each of its len(n) iterations does one multiplication and one append (which we assume takes constant time). That gives n + n + 2 operations (the + 2 accounts for the return statement and the initial assignment of result), and 2n + 2 is O(n) time complexity.
1 point possible (graded)
Consider the following Python procedure. Specify its order of growth.
def foo(n):
    if n <= 1:
        return 1
    return foo(n/2) + 1
The answer is O(log n): each call halves n, so the number of calls satisfies 1 = n/(2^i), following the same mathematical reasoning explained in Problem 1-3.
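To see the logarithmic growth concretely, here is a throwaway instrumented variant (my own, not from the problem) that counts the calls:

def foo_counted(n, calls=1):
    """Same recursion as foo, but also returns the number of calls made."""
    if n <= 1:
        return 1, calls
    value, calls = foo_counted(n / 2, calls + 1)
    return value + 1, calls

for n in [8, 1024, 10**6]:
    print(n, foo_counted(n)[1])   # roughly log2(n) + 1 calls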
1 point possible (graded)
Consider the following Python procedure. Specify its order of growth.
def recur(n):
    if n <= 0:
        return 1
    else:
        return n*recur(n-1)
The answer is O(n). Even though this is a recursive call, n decreases by 1 on each call, meaning that the number of calls grows linearly with n. (For n >= 0 this function computes the factorial n!.)
1 point possible (graded)
Consider the following Python procedure. Specify its order of growth.
def baz(n):
    for i in range(n):
        for j in range(n):
            print(i, j)
The answer is O(n^2). This is a typical case of quadratic complexity: the nested loops each run n times. Because they are nested rather than executed sequentially, we cannot add their respective time complexities; we have to multiply them, and n*n equals n^2.
1 point possible (graded) You have 2 attempts for this problem.
Here is code for linear search that uses the fact that a set of elements is sorted in increasing order:
def search(L, e):
    for i in range(len(L)):
        if L[i] == e:
            return True
        if L[i] > e:
            return False
    return False
Consider the following code, which is an alternative version of search.
def newsearch(L, e):
    size = len(L)
    for i in range(size):
        if L[size-i-1] == e:
            return True
        if L[i] < e:
            return False
    return False
Which of the following statements is correct? You may assume that each function is tested with a list L whose elements are sorted in increasing order; for simplicity, assume L is a list of positive integers.
With this one you had to read the two pieces of code really meticulously. The answer is: search and newsearch return the same answers for lists L of length 0, 1, or 2.
The trick is that in newsearch the two if statements do not inspect the same element, so they do not behave like complementary if/else branches. The second condition is if L[i] < e:, not if L[size-i-1] < e:. On iteration i, the first statement checks the i-th element counting from the end of the list, while the second checks the i-th element counting from the start. The two functions are therefore only guaranteed to agree for empty lists, singleton lists, and lists of two elements.
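A three-element example (my own test case, not from the problem) where the two functions disagree:

L = [1, 2, 3]
print(search(L, 2))      # True: search finds 2 at index 1
print(newsearch(L, 2))   # False: on i = 0, L[2] is 3 (not 2), then L[0] = 1 < 2 returns False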
1 point possible (graded)
Answer the questions below based on the following sorting function. If it helps, you may paste the code in your programming environment. Study the output to make sure you understand the way it sorts.
def swapSort(L):
    """ L is a list of integers """
    print("Original L: ", L)
    for i in range(len(L)):
        for j in range(i+1, len(L)):
            if L[j] < L[i]:
                # the next line is a short
                # form for swap L[i] and L[j]
                L[j], L[i] = L[i], L[j]
                print(L)
    print("Final L: ", L)
Does this function sort the list in increasing or decreasing order? (items at lower indices being smaller means it sorts in increasing order, and vice versa)
- Increasing
- Decreasing
Here you just have to concentrate on one line of the code: if L[j] < L[i]:. Since the j loop ranges over (i+1, len(L)), it iterates over the items that come after position i. Swapping whenever L[j] is smaller than L[i] means that no item after position i can be smaller than the item at position i, so the smallest item ends up first, then the second smallest, and so on: the list is sorted in increasing order.
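For example, running swapSort on a small list (output hand-checked):

swapSort([3, 1, 2])
# prints:
# Original L:  [3, 1, 2]
# [1, 3, 2]
# [1, 2, 3]
# Final L:  [1, 2, 3]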
1 point possible (graded)
What is the worst case time complexity of swapSort? Consider different kinds of lists when the length of the list is large.
The answer here is O(n^2). We have nested loops that iterate up to n times each, giving a time complexity of O(n^2) (see Problem 4-5). Note that swapSort always performs all n(n-1)/2 comparisons whatever the input, so the worst case costs the same as the best case.
1 point possible (graded)
If we make a small change to the line for j in range(i+1, len(L)): such that the code becomes:
def modSwapSort(L):
    """ L is a list of integers """
    print("Original L: ", L)
    for i in range(len(L)):
        for j in range(len(L)):
            if L[j] < L[i]:
                # the next line is a short
                # form for swap L[i] and L[j]
                L[j], L[i] = L[i], L[j]
                print(L)
    print("Final L: ", L)
What happens to the behavior of swapSort with this new code?
Answer: modSwapSort sorts in descending order. The inner loop now compares every element, including L[i] itself, with the current element L[i], and whenever an earlier element L[j] is smaller than L[i] the swap moves the larger value toward the front of the list. Each pass of the outer loop therefore leaves the prefix L[0..i] in descending order, and the final list is sorted in descending order.
To make this clearer, it can help to write the array down with a pen and then, in pencil, mark and erase where i and j point on each iteration of each loop.
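For example, running modSwapSort on the same small list as above (output hand-checked):

modSwapSort([3, 1, 2])
# prints:
# Original L:  [3, 1, 2]
# [1, 3, 2]
# [3, 1, 2]
# [3, 2, 1]
# Final L:  [3, 2, 1]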