I need help understanding how the author arrived at the answer to problem 11 in the Big O chapter.
The problem goes like this:
The following code prints all strings of length k where the characters are in sorted order. It does this by generating all strings of length k and then checking whether each is sorted. What is its runtime?
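To make the setup concrete, here is a minimal sketch of what I understand the code to do (my own Python reconstruction, not the book's listing; the alphabet and names are my assumptions):

```python
from itertools import product

def print_sorted_strings(alphabet: str, k: int) -> None:
    # product() yields all c^k length-k strings over an alphabet of c characters
    for chars in product(alphabet, repeat=k):
        s = "".join(chars)
        # sortedness check: O(k) character comparisons
        if all(s[i] <= s[i + 1] for i in range(len(s) - 1)):
            # printing the string: O(k) if we count one character per operation
            print(s)

print_sorted_strings("abc", 2)  # c = 3, k = 2: prints aa, ab, ac, bb, bc, cc
```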
In general, printing a constant-length string is considered a constant-time operation as well, but if we want to be precise, let's take printing a single character as the basic operation: printing a string of length k then costs O(k).
Since there are O(c^k) possible strings, and for each of them we have to check whether it is sorted (O(k)) and print it (another O(k)), the total complexity becomes O(c^k (k + k)) = O(2 c^k k).
But multiplying a function by a constant factor doesn't change its complexity, so the answer remains O(c^k k).
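Written out as one chain (just restating the arithmetic above):

```latex
\underbrace{c^k}_{\text{candidate strings}}
\cdot \Bigl( \underbrace{O(k)}_{\text{sortedness check}}
           + \underbrace{O(k)}_{\text{printing}} \Bigr)
= O(2\,c^k k) = O(c^k k)
```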