Partitioning Souvenirs

Given a list of integers, we want to know if there is a way to split it evenly into three parts, so that the sum of each part is the same as the others.

Problem given in week six of the edX MOOC Algs200x Algorithmic Design and Techniques by the UC San Diego.

This 3-partition problem is not too different from the classic 2-partition one, for which I described the well-known dynamic programming solution in the previous post. As before, we build a table where the rows represent the sums we want to get and the columns the elements in the collection we are about to consider.
However, we have to change a bit the meaning of the value that we push in each cell. This time we check two of the three tentative subcollections, and we want to keep track of how many of them could have the row index as their sum, given the elements of the input list available in that column.

Consider this input as an example:
[3, 1, 1, 2, 2]
We are looking for three subcollections, each having a sum of three. The table has six columns and four rows, including a dummy first one. We initialize all its cells to zero, and we loop on all the "real" cells applying rules close to the ones we have seen for the 2-partition problem, with slight variations.
(a) If the column element matches the row index, I increase the value coming from the left-hand cell, up to a maximum of 2.
(b) If there is no match, but the row index can be reached by adding the column element to a sum already achievable in the previous column, I still increase the value coming from the left-hand cell, up to a maximum of 2.
(c) Otherwise, I copy the value of the left-hand cell to the current one.
The result should be reflected by this table:
And the answer to the original question is yes only if the bottom-right value in the table is two.

Here is my python code to implement this algorithm.
def solution(values):
    total = sum(values)
    if len(values) < 3 or total % 3:  # 1
        return False
    third = total // 3
    table = [[0] * (len(values) + 1) for _ in range(third + 1)]  # 2

    for i in range(1, third + 1):
        for j in range(1, len(values) + 1):  # 3
            ii = i - values[j - 1]  # 4
            if values[j - 1] == i or (ii > 0 and table[ii][j - 1]):  # 5
                table[i][j] = 1 if table[i][j - 1] == 0 else 2
            else:
                table[i][j] = table[i][j - 1]  # 6

    return table[-1][-1] == 2
1. If dividing the sum of values by three gives a remainder, or if there are fewer than three elements in the list, for sure there is no way to 3-partition the list.
2. Build the table as discussed above. Note the zero as default value, even in the dummy top row - it is not formally correct, but those values are not used anywhere.
3. Loop on all the "real" cells.
4. Precalculate the row for the (b) check described above.
5. The first part of the condition is the (a) check above. If it fails, we pass to the second part, using the row calculated in (4). If either condition is true, the value of the current cell is increased, up to 2.
6. Vanilla case: moving to the right, we keep the value already calculated for the previous cell.
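
A quick sanity check on the example input - the expected split being [3], [1, 2], [1, 2]:
print(solution([3, 1, 1, 2, 2]))  # True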

It looks easy, once one sees it, doesn't it?

Actually, a tad too easy, as Shuai Zhao pointed out - see below in the comments. The problem is that the (b) check, as described above, is too simple. Before using a value, I have to ensure it has not already been used on the same row. Things are getting complicated; better to explain them in another post.

I pushed my python code and a few test cases to GitHub. The latest version is the patched code, which works also for Shuai's test. Go back in the history if you want to see the solution described here.

2-partition problem

Given a list of integers, we want to know if we can split it evenly into two parts.

There is a well-known, elegant, and relatively fast dynamic programming solution to this problem.

Say that this is the list
[3, 1, 1, 2, 2, 1]
Since the sum of its elements is ten, we'll have a positive answer to the problem if we can find a subcollection with a sum of five.

To check it, we build a table having rows from zero to the sum of the subcollection we are looking for - five in this case. Actually, the zeroth row is pretty useless here; I keep it just because it makes indices less confusing in the code. The columns represent the partial collections of elements from the input list: column zero is for the empty collection, column one contains just the first element (3 in the example), column two the first two items (3 and 1 here), up to the last one that keeps them all.

The content of each cell is the answer to the question: is there a combination of elements in the subcollection specified by the column that has the sum specified by the row?

So, for instance, table[2][3] means: could I get 2 as a sum from [3, 1, 1]? The answer is yes, because of the latter two elements.

The bottom-right cell in the table contains the answer to the original problem.

Let's construct the table. Whatever I put in the topmost row is alright, since I won't use it in any way. Its cells would represent the answer to the question: could I get a sum of zero from a collection going from empty (leftmost cell) up to one including all the elements in the original input (rightmost cell)? Logically, we should put a True inside each of them but, since we don't care, I leave a slightly misleading False instead. Forgive me, but this lets me initialize the table with more ease, considering that the first cell of every other row should be initialized to False, since it is impossible to get a nonzero sum from an empty collection.

Now let's scan all the cells in the table, from (1, 1) to the bottom-right one, moving from left to right, row by row.
If the currently added element in the list has the same value as the row index (that is, the total we are looking for), we can put a True in the cell.
If the cell on the immediate left contains a True, we can, again, safely put a True in it. Adding an element to the collection won't change the positive answer we already got.
If the first two checks don't hold, I try to get the total by adding the current value to a sum already achievable without it: if the cell in the previous column, at the row given by the current row index minus the current value, is True, then, bang, True again.

At the end of the loop, we should get a table like the one shown here below.
(a) The cell (1, 2) is set to True because the column represents the subcollection {3, 1}, whose latest element equals the row index.
(b) The cell (1, 4) is True because (1, 3) is True.
(c) The cell (4, 2) is True because of cell (3, 1): the left adjacent column, moving up 1 (the latest element in the current subcollection {3, 1}).

Checking the bottom-right cell, we have a True, so the answer to our original question is yes.

Here is my python implementation of this algorithm:
def solution(values):
    total = sum(values)  # 1
    if total % 2:
        return False

    half = total // 2
    table = [[False] * (len(values) + 1) for _ in range(half + 1)]  # 2

    for i in range(1, half + 1):
        for j in range(1, len(values) + 1):  # 3
            if values[j-1] == i or table[i][j-1]:  # 4
                table[i][j] = True
            else:  # 5
                ii = i-values[j-1]
                if ii > 0 and table[ii][j-1]:
                    table[i][j] = True

    return table[-1][-1]  # 6
1. If the sum of values is not an even number, we already know that the list can't be split evenly.
2. Build the table as described above. Don't pay attention to the topmost row, it's just a dummy.
3. Loop on all the "real" cells, skipping the leftmost ones, which are left initialized to False.
4. See above, cases (a) and (b), as described and visualized in the picture.
5. This code implements case (c). I get the tentative row index in ii. If the relative cell in the left adjacent column is available and set to True, the current cell is set to True too.
6. Get the solution to the problem.
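
A quick check on the example list seen above:
print(solution([3, 1, 1, 2, 2, 1]))  # True, e.g. [3, 2] and [1, 1, 2, 1]
print(solution([3, 1, 1, 2]))  # False, the sum is odd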

I pushed my python code and a few test cases on GitHub.

Other Dynamic Programming problems

Sixth and last week of the edX MOOC Algs200x Algorithmic Design and Techniques by the UC San Diego, again on Dynamic Programming. Just three problems, fully described in this pdf.

The first one, named "Maximum Amount of Gold", states that you have a bag of given capacity, and you see n gold bars of (possibly) different weights. Push as much gold as you can into the bag.

It is easy to see that it is a variation on the classic 0/1 knapsack problem. Here the bars all have the same unitary value, so we just need to know their weights to build our solution. Not much sweat to solve it; anyway, I pushed to GitHub first a python script solving the generic problem, then one tailored to the specific requirements of this problem.

The other two problems are much more challenging. "Partitioning Souvenirs" is a 3-partition problem; before solving it, I practiced with the more common 2-partition version. "Maximizing the Value of an Arithmetic Expression" is well described, step by step, in the course, and I guess I solved it easily only because of this intensive training. You can see my python implementation on GitHub.

Longest Common Subsequence of Three Sequences

And finally, the last one of this group of Dynamic Programming problems. Actually, from the algorithmic point of view this is the least interesting one, being just a variation on the previous one. Now we have in input three sequences instead of two, and we still have to get the longest common subsequence among them.

The interest here is all in extending the algorithm to work with a three-dimensional cache. Basically just an implementation issue, which each programming language could solve in its own way.

Here is how I did it in python:
def solution_dp(a, b, c):
    cube = []
    for _ in range(len(c) + 1):  # 1
        sheet = [[0] * (len(b) + 1)]
        for _ in range(len(a)):
            sheet.append([0] * (len(b) + 1))
        cube.append(sheet)

    for i in range(1, len(cube)):
        for j in range(1, len(cube[0])):
            for k in range(1, len(cube[0][0])):
                if a[j - 1] == b[k - 1] == c[i - 1]:  # 2
                    cube[i][j][k] = cube[i - 1][j - 1][k - 1] + 1
                else:
                    cube[i][j][k] = max(cube[i - 1][j][k], cube[i][j - 1][k], cube[i][j][k - 1])

    return cube[-1][-1][-1]
1. If you compare this code with the one for the 2-sequences problem, you would see that the difference is all in this extra for-loop. Now the cache is a three-dimensional matrix (actually, it is not a cube but a parallelepiped; you can guess why I used the wrong name here).
2. The comparison is now three-way. Luckily, python helps us keep it readable.

Once you manage the three-level loop correctly, the job is done.
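
A minimal check, just to see it at work - the expected longest common subsequences being, e.g., [1, 2] or [1, 3]:
print(solution_dp([1, 2, 3], [1, 3, 2], [1, 2, 3]))  # 2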

I have pushed the complete python script and its test case to GitHub.

Longest Common Subsequence of Two Sequences

Close to the previous problem, where we had to compute the minimum edit distance between two strings; here we have to get the maximum number of common elements, in the same order, between two sequences.

The similarity drives us to look again for a Dynamic Programming solution (adding to the hint that these problems are part of the same lot).

Here is my python solution:
def solution_dp(lhs, rhs):
    table = [[0] * (len(rhs) + 1)]  # 1
    for _ in range(1, len(lhs) + 1):
        table.append([0] * (len(rhs) + 1))

    for i in range(1, len(table)):
        for j in range(1, len(table[0])):  # 2
            if lhs[i - 1] == rhs[j - 1]:
                table[i][j] = table[i-1][j-1] + 1  # 3
            else:
                table[i][j] = max(table[i - 1][j], table[i][j - 1])  # 4
    return table[-1][-1]
1. The cache is created as in the previous problem, bidimensional, with extra dummy row and column to keep the code simple.
2. Again, we loop on all the "real" cells in the cache, from left to right, top to bottom.
3. Here is where the algorithm changes. If the corresponding elements in the input sequences match, we take the counter stored in the top-left diagonal cell, increased by one, as the current value.
4. If it is a mismatch, we don't increase anything; we just take the bigger value coming from the two possible alternatives.

And that's all. Richard Bellman, who came up with this technique, was a genius.

Python script and testcase pushed to GitHub.

Computing the Edit Distance Between Two Strings

Given two strings, we should compute their edit distance. It is a well-known problem, commonly solved by Dynamic Programming.

As we should expect, the idea is very close to the one seen in the previous problem, with the noticeable difference that here we are working on two lists, so our cache is going to be a bidimensional matrix, and the complexity of the algorithm moves to the O(n * m) realm, where n and m are the sizes of the two strings in input.

def solution_dp(lhs, rhs):
    table = [list(range(len(rhs) + 1))]  # 1
    for k in range(1, len(lhs) + 1):
        table.append([k] + [0] * len(rhs))

    for i in range(1, len(table)):
        for j in range(1, len(table[0])):  # 2
            if lhs[i - 1] == rhs[j - 1]:
                table[i][j] = table[i-1][j-1]  # 3
            else:
                table[i][j] = min(table[i - 1][j], table[i][j - 1], table[i - 1][j - 1]) + 1  # 4

    return table[-1][-1]  # 5
1. This is our cache. Instead of having a single dummy cell, here we have both a zeroth row and a zeroth column, filled with the distances from the empty string (0, 1, 2, ...) and not touched anymore. Again, not strictly a necessity, still the code is much more readable this way.
2. Let's loop on all "real" elements in the matrix.
3. If the corresponding characters in the strings are the same, we have a match, meaning the edit distance won't change, so we just copy into the current cell the value of its top-left diagonal neighbor.
4. Otherwise we have seen a change. Since we are looking for the minimal distance, we get the lowest value among the top, left, and top-left diagonal cells, and increase it by one.
5. At the end of the loop, the bottom-right cell contains the result.
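
A quick check on a classic pair of strings:
print(solution_dp("editing", "distance"))  # 5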

Incredibly simple, isn't it?

Python code and testcase on GitHub.

Primitive Calculator

We always start from 1, and we are given the positive integer we should get to. We can apply just three operations: multiply by 2, multiply by 3, or add one. What is the minimum number of operations that gives us the expected result? And which sequence of numbers is generated?

This problem is close to the previous one, about changing money. After all, they are all part of the same lot about Dynamic Programming.

The first part of the code follows almost naturally from looking at the money changer:
cache = [0] * (target + 1)  # 1
for i in range(1, len(cache)):  # 2
    cache[i] = cache[i-1] + 1
    if i % 2 == 0:
        cache[i] = min(cache[i], cache[i // 2] + 1)
    if i % 3 == 0:
        cache[i] = min(cache[i], cache[i // 3] + 1)
1. Reserve a cache for all the intermediate results; again, a dummy zeroth element makes our code simpler.
2. Loop on all the elements, checking all the possible alternatives. The local minimum is kept and used in the next steps.

Now the last element of the cache holds the answer to the first question (strictly speaking, it stores the length of the generated sequence, that is, the number of operations plus one). To get the second answer, we need to backtrack the cache, identifying the choice we made at each step.
result = [1] * cache[-1]  # 1
for i in range(1, cache[-1]):  # 2
    result[-i] = target  # 3
    if cache[target-1] == cache[target] - 1:  # 4
        target -= 1
    elif target % 2 == 0 and (cache[target // 2] == cache[target] - 1):  # 5
        target //= 2
    else:  # 6 # target % 3 == 0 and (cache[target // 3] == cache[target] - 1):
        target //= 3
return result
1. This is the list we are going to return. We know its size, stored in the last element of the cache; since I have to provide an initialization value, I use 1, the right value for its leftmost element.
2. I am going to set each value of the result list but the leftmost one, which is already correctly set.
3. I know the rightmost value is the target passed by the user.
4. If the previous element in the cache is the current value of the cache minus one, I got there by adding one, so here I apply the inverse operator, decreasing target by one.
5. If the current target is divisible by 2, and the cache at position target divided by two holds the value of the current cache element minus one, we got there by multiplying by two. So I apply the inverse to backtrack.
6. Otherwise we got there multiplying by three. I could have written the full elif condition, as shown in the comment. The point is that, by construction, we have only three ways to get to an element in the cache, and this must be the third one.

Python code and testcase on GitHub.

Money Change Again

Already seen as a greedy problem, now we have to approach it with Dynamic Programming.

We have an integer, representing a total amount that we want to reach by adding coins of given denominations. Our job is to find the minimum number of coins required.

If the denominations are such that we can always guarantee a safe greedy choice, the greedy solution is better. However if, as in this case, the denominations are [1, 3, 4], we have to look somewhere else.

A recursive approach is safe, but it would lead to an exponential time complexity.

Let's solve it instead using a Dynamic Programming approach:
DENOMINATIONS = [1, 3, 4]

# ...

cache = [0] * (target + 1)  # 1

for i in range(1, target + 1):  # 2
    cache[i] = cache[i-1] + 1  # 3
    for coin in DENOMINATIONS[1:]:  # 4
        if coin <= i:
            other = cache[i-coin] + 1
            cache[i] = min(cache[i], other)

return cache[-1]  # 5
1. Each step needs to have a look at previous results, so we keep them in a cache. Notice that we also keep a dummy extra value; not a strict necessity, but it keeps our code simpler.
2. Let's loop on all the "real" elements - the zeroth one, a dummy, is left untouched with its original value of 0.
3. Assuming that among the available denominations there is "1", we can always select it. This is usually a safe assumption. If we couldn't rely on it, our code would have to be more complex, and we should also consider the case where a result can't be generated at all.
4. Let's check all the other denominations. If the current one can be used, go and fetch from the cache the result for the current amount minus the coin, add one to it, and put it in the cache at the current position, if it is less than the currently calculated value.
5. Finally return the last calculated value.

I pushed test case and python script to GitHub.

Almost five Dynamic Programming problems

I'm following the edX MOOC Algs200x Algorithmic Design and Techniques by the UC San Diego, and I've just completed week five, which is about Dynamic Programming. To pass it, I had to solve a few problems, presented in this pdf.

Here I present them; my annotated python solutions are in the following posts.

Money Change Again

Given an integer amount, and an (infinite) number of coins of given denominations, return the minimum number of coins needed to change the full amount.

In the (sort of disappointing) previous week, they presented a greedy approach to this problem. When it works, it is cool and fast. However, to use it we have to prove that each greedy selection is safe, and that is true only for some sets of available denominations. Otherwise we must follow a different approach. And here enters Dynamic Programming.

It is a simple problem, and a good introduction to this technique.

Primitive Calculator

We should get to a given integer, starting from 1 and applying just a limited set of operations. Can we minimize their number?

If you see the similarity with the previous problem, you have already done a large part of the job.

Edit Distance Between Two Strings

Given two strings, return their edit distance. This is a classic problem that can be solved by Dynamic Programming. The two previous ones need a linear cache; here we should push our intermediate solutions into a matrix. Things are a bit more complicated, however you can find lots of documentation about this problem on the net.

Longest Common Subsequence

Given two sequences, find the largest shared subsequence. It is close to the previous problem; here we are interested just in the commonalities between the two input sequences. Similar structure, some implementation differences. This problem, too, is well known and studied.

Longest Common Subsequence of Three Sequences

Extension of the previous one; now we have three sequences to be compared. The point is not so much Dynamic Programming anymore - since the structure is essentially the same - but how the programming language you use for implementing the solution manages a three-dimensional matrix.

The closest pair of points

Given n points on a plane, find the smallest distance between a pair of (distinct) points.

Let's see a naive solution first:
def solution_naive(points):
    size = len(points)
    if size < 2:  # 1
        return 0.0

    result = float('inf')  # 2
    for i in range(size - 1):
        for j in range(i+1, size):
            result = min(result, distance(points[i], points[j]))
    return result
1. If only one point, or no point at all, is passed, the minimum distance defaults to zero.
2. Otherwise, initialize it to infinity, then loop on all the possible pairs of points to find the one with the minimum distance.

Where the distance function could be implemented as:
import math


def distance(a, b):
    return math.sqrt((a[0]-b[0])**2 + (a[1]-b[1])**2)

The flaw of this naive implementation is its obvious O(n^2) time complexity, meaning that it rapidly becomes useless as the number of points in input grows.

Let's apply a divide and conquer approach to this problem instead. The thing is that, to do that, we need a bit of mathematics that is beyond my knowledge. You could read this document by Subhash Suri from UC Santa Barbara for more details.

First we sort the points by their x component, then we call the recursive function:
def solution_dac(points):
    points.sort()
    return closest_distance(points)
We first divide the problem until we get pieces so simple that they can be solved with the naive algorithm, then we apply a merging part to "conquer" each step:
def closest_distance(points):
    size = len(points)
    if size < 4:  # 1
        return solution_naive(points)

    lhs = points[:size // 2]  # 2
    rhs = points[size // 2:]

    left = closest_distance(lhs)
    right = closest_distance(rhs)
    result = min(left, right)
    if result == 0.0:
        return result

    median = (lhs[-1][0] + rhs[0][0]) / 2  # 3
    closers = []
    for i in range(len(lhs)):
        if abs(lhs[i][0] - median) < result:
            closers.append(lhs[i])
    for i in range(len(rhs)):
        if abs(rhs[i][0] - median) < result:
            closers.append(rhs[i])
    closers.sort(key=lambda p: p[1])  # 4

    for i in range(len(closers) - 1):  # 5
        for j in range(i + 1, min(i + 6, len(closers))):
            if abs(closers[i][1] - closers[j][1]) < result:
                result = min(result, distance(closers[i], closers[j]))
    return result
1. For up to 3 points we can use the naive approach.
2. Split the list of points into two parts, left-hand side and right-hand side, and recurse the algorithm on them. If we see a minimal distance of zero, meaning two overlapping points, there is no use in going on.
3. Draw a line between the rightmost point in the left part of the point collection and the leftmost in the right part. Then push to a buffer list all the points whose distance to this median line is less than the currently found closest distance.
4. Sort these points by their "y" component.
5. And here happens the magic. There is a theorem stating that, in this condition, we can stop checking at the next 6 points when looking for a closer distance.

The cutoff at 6 ensures that the time complexity of this algorithm is O(n log n).

Full python code on GitHub.

Majority element

Given a list of integers, return 1 if there is an element that appears more than half the time in the list, otherwise 0.

Naive solution

Loop on all the items in the list, counting each of them. If we find a count bigger than half the size of the list, we have a positive result.

We can try to apply some improvements to this algorithm. For instance, we can stop looking for new candidates as soon as we pass the first element after the half size, since a majority element must show up at least once before that point; and we don't need to check each element from the beginning of the list, since if there is an instance of that element before the current point we have already counted it, and if there is not, there is no use in checking it. Still, the time complexity is O(n^2), so we can use it only for very short input lists.

Divide and conquer

We divide the problem until we reach the base case of one or two elements - which can be solved immediately. Then we "conquer" each step by determining which element is the majority in the current subinterval. I described in detail my python code implementing this approach in the previous post.

Sorting

This is a simple improvement over the naive solution; still, it is powerful enough to give better results than the divide and conquer approach.

The idea is to sort the input data and then compare the i-th element against the one half the list size ahead. If they are the same, we have a positive result, since a majority element must fill more than half of the sorted list. The most expensive part of this approach is the sorting phase, O(n log n).
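
A minimal sketch of this idea - the function name is my choice:
def solution_sorting(data):
    data = sorted(data)
    half = len(data) // 2
    # a majority element occupies more than half the sorted list,
    # so some cell must match the one half a list size ahead
    for i in range(len(data) - half):
        if data[i] == data[i + half]:
            return True
    return False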

Hash table

If we don't mind using some relevant extra space, we can scan the input list, pushing into a hash table each new element that we find, and increasing its value each time we see it again. Then we scan the values in the hash table, looking for one bigger than half the list size.

The python Counter collection allows us to solve this problem in a couple of lines:
from collections import Counter


def solution_hash(data):
    return Counter(data).most_common(1)[0][1] > len(data) // 2
The time complexity is linear.

Boyer–Moore

The Boyer–Moore algorithm leads to a fun, elegant, and superfast solution.

We perform a first scan of the list, looking for a candidate:
def bm_candidate(data):
    result = None
    count = 0
    for element in data:  # 1
        if count == 0:  # 2
            result = element
            count = 1
        elif element == result:  # 3
            count += 1
        else:
            count -= 1

    return result
1. Loop on all the elements.
2. If the candidate counter is zero, we take the current element as the new candidate, setting the count to one.
3. If we see another instance of the current candidate, we increase the counter; otherwise we decrease it.

This scan is guaranteed to identify the majority element, if it exists. However, if there is no majority element, it returns just the last candidate seen. So we need to perform another linear scan, as sketched below, to ensure that the candidate is actually the majority element.
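
The wrapping function could then be as simple as this - again, its name is my choice:
def solution_bm(data):
    candidate = bm_candidate(data)
    # second linear scan: verify the candidate is an actual majority
    return data.count(candidate) > len(data) // 2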

Nonetheless, Boyer–Moore easily wins against the other algorithms seen here.

See my full python code for this problem, and the others from the same lot, on GitHub.

A few divide and conquer problems

The exercises for week four of the edX MOOC Algs200x Algorithmic Design and Techniques by the UC San Diego are, in my opinion, a bit disappointing. I would have preferred if they had pushed me to think more about how the divide and conquer strategy applies to different situations. Still, there are a few interesting points that we can get from them.

Binary Search

Task: Given in input two lists of integers, the first one assured to be sorted, output a list of integers where each value is the index at which the positionally matching element of the second list is found in the first one, or -1 if missing.

There is almost nothing to say about this problem. It is just a matter of implementing binary search, more or less as sketched below.
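
A plain iterative binary search, as a refresher:
def binary_search(data, key):
    # data is assumed sorted in natural order
    lo, hi = 0, len(data) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if data[mid] == key:
            return mid
        if data[mid] < key:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1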

Majority Element

Task: Given a list of integers, return true if it contains a majority element, that is, there is an element repeated more than half the size of the list.

Solving this problem with a divide and conquer approach is not the best idea one could have. Even though it is certainly better than a naive approach, the common alternatives that should spring to the mind of any programmer (just a couple of hints: sorting, hashing) look more attractive. And I haven't even started talking about the amazing Boyer–Moore algorithm.

My python solution is a recursive function that returns the majority element in the closed subinterval [left..right] of the passed data list, or None if there is no such beast:
def majority(data, left, right):
    size = right - left + 1
    if size < 3:  # 1
        return data[left] if data[left] == data[right] else None

    pivot = left + size // 2  # 2
    lhs = majority(data, left, pivot)
    rhs = majority(data, pivot + 1, right)

    if lhs is None or rhs is None:  # 3
        return lhs if lhs is not None else rhs
    if lhs == rhs:
        return lhs

    for candidate in (lhs, rhs):  # 4
        count = 0
        for i in range(left, right + 1):
            if data[i] == candidate:
                count += 1
        if count > size // 2:
            return candidate
    return None
1. Base case: when we have just one or two elements, it is immediate to get the majority.
2. The divide part of the algorithm. Split the list in two and call the function recursively on the left and right sides.
3. Base cases for the conquering. If at least one side returned None, return the defined element, if any, or None. Otherwise, if both candidate majorities are the same, return it. Notice the explicit checks against None: a plain truth test would misbehave when the majority element is 0.
4. The expensive part of the algorithm. We have to decide which candidate to choose, and to do that we just count the occurrences of both over the entire interval. If neither has the majority here, we return None.

In the next post I provide different approaches to this problem.

Improving Quick Sort

Task: Improve the "plain" quick sort algorithm by adapting it to support 3-way partitioning.

I didn't spend any time at all selecting a decent pivot; I just always picked the leftmost element. Don't mind that, just focus on the partitioning, sketched here below.
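
For reference, a 3-way partitioning around the leftmost element could follow the Dutch national flag scheme - this is just an illustration of the technique, not necessarily the code I pushed:
def partition3(data, left, right):
    pivot = data[left]
    lo, i, hi = left, left, right
    while i <= hi:
        if data[i] < pivot:
            data[lo], data[i] = data[i], data[lo]
            lo += 1
            i += 1
        elif data[i] > pivot:
            data[i], data[hi] = data[hi], data[i]
            hi -= 1
        else:
            i += 1
    return lo, hi  # data[lo..hi] now contains only pivot values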

Number of Inversions

Task: Count the number of inversions in a (partially sorted) list of integers.

Here the point is tweaking a merge sort algorithm to keep track of the detected inversions, along the lines of the sketch below.
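
Just as a sketch of the idea, a simplified version that returns a sorted copy along with the inversion count:
def count_inversions(data):
    if len(data) < 2:
        return data, 0
    mid = len(data) // 2
    left, left_inv = count_inversions(data[:mid])
    right, right_inv = count_inversions(data[mid:])
    merged, inversions = [], left_inv + right_inv
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            # right[j] jumps over all the remaining left elements
            inversions += len(left) - i
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged, inversions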

Organizing a Lottery

Given in input a list of segments on the x axis, each item being a 2-tuple of integers (begin and end), and a list of integers representing points on the same axis, count for each point how many segments it intersects.

I spent quite a long time trying to figure out how to solve this problem using a divide and conquer approach. After a while, I decided to implement a solution based on sorting instead. Then I asked on the course forum about the right way to solve this exercise. Well, it looks like it was meant to be solved with sorting. How disappointing.

My solution is based on the consideration that I can get the number of segments intersecting a point simply by counting the segment openings, minus the segment closings, seen up to there.

Here is my python code:
def solution_sort(segments, points):
    s_open = []  # 1
    s_close = []

    for segment in segments:
        s_open.append(segment[0])
        s_close.append(segment[1])

    s_open.sort()  # 2
    s_close.sort()
    points.sort()

    results = []
    count = 0  # 3
    oi = 0  # 4
    ci = 0
    size = len(segments)
    for point in points:  # 5
        while oi < size and s_open[oi] <= point:  # 6
            oi += 1
            count += 1
        while ci < size and s_close[ci] < point:  # 7
            ci += 1
            count -= 1
        results.append(count)

    return results
1. I put in the s_open list all the beginnings of segments, and in s_close their endings.
2. Segments and points had better be sorted.
3. Running count of the segments currently open.
4. I am going to linearly scan the open and close lists; oi and ci are the indexes of their current positions.
5. The scan is ruled by the points; being sorted, I start from the leftmost.
6. Each segment opening detected up to the point adds to the count of the currently open segments.
7. Decrease the count of open segments when an end of segment is seen. But be careful: the close should be strictly less than the point.

Finding the Closest Pair of Points

Task: Given n points in a bidimensional space, find the smallest distance between a pair of them.

This is not an easy problem. Fortunately, it is also a well-known one. See, for instance, this paper by Subhash Suri. In this other post, I show my python solution to this problem following that algorithm.

I pushed on GitHub the full python code for these problems.

Half a dozen of greedy problems

A small collection of greedy problems, as found in week three of edX MOOC Algs200x Algorithmic Design and Techniques by the UC San Diego.

This third week of the course is kind of disappointing, since they removed access to their online problem verifier for unregistered users. Besides, these kinds of problems lose a large part of their appeal if you already know they can be solved following a greedy approach. The spoiler is too big: you just have to verify that the greedy step is safe. Still, let's have a look at them. As usual, I provide my full python code on GitHub.

Changing Money

Task: Find the minimum number of coins needed to change the input value (an integer) into coins with denominations 1, 5, and 10.

Solution: Start by considering the highest denomination; it is safe to remove that value from the input as long as we can, since this obviously leads to the minimum possible result.

This is the core loop of the algorithm:
for coin in DENOMINATIONS:
    counter = leftover // coin  # 1
    if counter > 0:
        result += counter
        leftover %= coin   # 2

    if leftover == 0:
        break
1. It would be boring to subtract one coin after the other as long as we can; better to integer-divide the amount by the current denomination.
2. The modulo of the above division gives the amount that still has to be reduced to coins.
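
For completeness, this is how the fragment could be wrapped in a full function - names are my assumptions:
DENOMINATIONS = [10, 5, 1]  # highest first, as the greedy selection requires


def change(amount):
    result = 0
    leftover = amount
    for coin in DENOMINATIONS:
        counter = leftover // coin
        if counter > 0:
            result += counter
            leftover %= coin
        if leftover == 0:
            break
    return result


print(change(28))  # 6: two 10s, one 5, three 1s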

Maximizing the Value of a Loot

Task: Implement an algorithm for the fractional knapsack problem.

Solution: Sort the input values, a list of tuples of value and weight, according to their value per unit. Then get the required quantity from it.

There is a simple way to sort it like that when python is your implementing language:
loot = sorted(items, key=lambda item: item[0] / item[1])
Notice that I'm keeping the natural ordering, from smallest to greatest, so I check the values from right to left, as in the sketch below.
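
A minimal sketch of the whole greedy procedure, assuming items is a list of (value, weight) tuples - the function name is mine:
def max_loot(capacity, items):
    loot = sorted(items, key=lambda item: item[0] / item[1])
    result = 0.0
    while capacity > 0 and loot:
        value, weight = loot.pop()  # the best value per unit is rightmost
        taken = min(weight, capacity)
        result += value * taken / weight
        capacity -= taken
    return result


print(max_loot(50, [(60, 20), (100, 50), (120, 30)]))  # 180.0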

Maximizing Revenue in Online Ad Placement

Task: Given two integer sequences of the same size, partition them into pairs such that the sum of their products is maximized, and return that value.

Solution: There is no trick with negative values; you can just sort both sequences and multiply the items with matching indexes. Conceptually, we should start from the highest values down to the lowest ones; actually, it is all the same.

To extract the same-index components from two (or more) iterables in python we can use the handy zip() function, after sorting both sequences:
profits.sort()
clicks.sort()
for p, c in zip(profits, clicks):
    result += p * c

Collecting Signatures

Task: Given a list of 2-tuples representing segments, find the minimum number of points such that each segment contains at least one point.

Solution:
segments.sort(key=lambda item: item[1])  # 1

results = [segments[0][1]]  # 2
for segment in segments:
    if segment[0] > results[-1]:
        results.append(segment[1])  # 3
1. Sort the segments in natural order by the ending point of each one. Notice the python way of specifying which component has to be used to sort the list.
2. Initialize the result list with the first point: the ending of the first (sorted) segment.
3. When we find a segment that begins to the right of the last chosen point, we add its ending point to the result list. We can easily prove that this is a safe move, so our greedy algorithm is OK.

Maximizing the Number of Prize Places in a Competition

Task: Given a positive integer, represent it as a sum of the maximum number of distinct positive integers.

Solution: The simplest case is when we can use the natural sequence until the input value is reached. This approach works for 6
6 = 1 + 2 + 3
but it doesn't work, for instance, for 2 or 8
2 = 2
8 = 1 + 2 + 5
The key point is that, before adding an element to the result list, we should ensure we keep enough room for another element bigger than the current one.
So, for instance, we get 2 in input. We would like to put 1 in the result list, but if we did, we would be left with another 1 that we can't add, since no duplicates are allowed; so we skip it and add 2 instead, completing our job.
In the case of 8, 1 and 2 can be safely added to the result list. When we consider 3, our leftover is 5; if we put 3 in the list, we are left with a 2, which is useless, so we discard 3 and put 5 in the result list instead.

So we modify our pushing of the natural sequence into the result list with this check:
if unassigned - i <= i:
    results.append(unassigned)
    break
When we enter the "if", we have reached the last element of the result list. A full sketch follows.
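
Putting it all together, a possible full function - its name and structure are my choice:
def solution(number):
    results = []
    unassigned = number
    for i in range(1, number + 1):
        if unassigned - i <= i:
            # no room left for a bigger element, take all the rest
            results.append(unassigned)
            break
        results.append(i)
        unassigned -= i
    return results


print(solution(8))  # [1, 2, 5]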

Maximizing Your Salary

Task: Compose the largest number out of a set of integers.

Solution: The trick is that the integers in input can be 1, 2, or 3 digits wide. If they were all one-digit numbers, the job would be easy: just sort them in reversed natural order and join them up.

Here we have to sort them in such a way that '2' goes to the left of '21', so that the input ['21', '2'] leads to a result of 221.

My idea was to compare the numbers in the input list as if all of them had 3 digits, extending the shorter ones by duplicating their rightmost digit as required. So, I would think of the previous example as ['211', '222'], and that leads to the expected result.

To do that in python, I created a filler function and used it in sorting this way:
def filler(token):
    while len(token) < 3:
        token += token[-1]
    return token

# ...

tokens.sort(key=filler, reverse=True)
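
A quick check on the example discussed above:
tokens = ['21', '2']
tokens.sort(key=filler, reverse=True)
print(''.join(tokens))  # 221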
Again, please have a look on GitHub for details.

Last Digit of the Sum of Fibonacci Numbers

Another couple of problems from the same lot as the one previously discussed. Now we have to calculate the last digit of the (full or partial) sum of Fibonacci numbers.

These problems, too, are presented in the edX MOOC Algs200x Algorithmic Design and Techniques by the UC San Diego, week 2, so I suggest you give them a try and then compare your solutions with my python scripts.

Last Digit of the Sum of Fibonacci Numbers

Given an integer 𝑛, find the last digit of the sum 𝐹0 + 𝐹1 + · · · + 𝐹𝑛.

Considering that n could be as big as 10^14, the naive solution of summing up all the Fibonacci numbers as we calculate them leads to the result too slowly.

We'd better take notice of the hint about experimenting a bit with small test cases, looking for anything interesting to extrapolate. Actually, after a while I found out that the sum of the first n Fibonacci numbers is just one less than the Fibonacci number of index n + 2.

Fib(1) + Fib(2) + ... + Fib(n) = Fib(n+2) - 1

We know that Fib(3) = Fib(2) + Fib(1), which we can rewrite as
Fib(1) = Fib(3) - Fib(2)
Let's do the same for all the subsequent Fibonacci numbers up to n.
Fib(2) = Fib(4) - Fib(3)
Fib(3) = Fib(5) - Fib(4)
...
Fib(n) = Fib(n+2) - Fib(n+1)
If we add up all the elements on the left side, we get that the required sum is
Fib(3) - Fib(2) + Fib(4) - Fib(3) + 
Fib(5) - Fib(4) + ... + Fib(n+2) - Fib(n+1)
Considering that Fib(2) is 1, it simplifies to
Fib(n+2) - 1
Another useful consideration is that, since we need just the last digit, we can use the result of the previous exercise and limit our loop following the Pisano period.

Putting it all together, this is my solution:
PISANO = 60  # 1


def solution(number):
    if number < 2:
        return number

    number %= PISANO

    results = [1, 1]
    for _ in range(number):  # 2
        results.append((results[-1] + results[-2]) % 10)  # 3

    return (results[-1] - 1) % 10  # 4
1. Instead of working on the requested number directly, I take its modulo 60, which is the Pisano period for 10 (see the clarification further down, in the section on the partial sum).
2. I have initialized the list named 'results' with the values of Fib(1) and Fib(2); now I loop number times, since I am looking for Fib(number + 2).
3. I'm not interested in the actual Fibonacci number, just its last digit, so here I apply modulo 10 to each newly calculated element. This way I avoid scaling to big numbers.
4. I apply the modulo operator once again to get rid of the unfortunate case of a possibly negative result.

The naive script, the Pisano script, and a test case for both are on GitHub.

Last Digit of the Partial Sum of Fibonacci Numbers

Given two non-negative integers 𝑚 and 𝑛, where 𝑚 ≤ 𝑛, find the last digit of the sum 𝐹𝑚 + 𝐹𝑚+1 + ... + 𝐹𝑛.

The trick used in the previous problem won't work here, and I couldn't think of any similarly clever approach. So I resorted to making full use of Pisano, hoping it would be enough.
PISANO = 60


def solution(left, right):
    assert left <= right  # 1

    fib_mods = [0, 1]  # 2
    for _ in range(PISANO - 2):
        fib_mods.append((fib_mods[-1] + fib_mods[-2]) % 10)

    left %= PISANO  # 3
    right %= PISANO
    if right < left:
        right += PISANO

    result = 0
    for i in range(left, right + 1):
        result += fib_mods[i % PISANO]  # 4
    return result % 10  # 5
1. Not strictly required by the problem, where we can assume the input data is clean.
2. I fill this list with all the Fibonacci numbers modulo 10 within one Pisano period.
3. I don't need to loop on all the actually requested numbers; I can safely stay within the limits of the Pisano period. With one major caveat: the right limit should not become smaller than the left one.
4. Loop on the interval, ensuring that I stay within the Pisano limits.
5. Return the last digit of the sum, as required.

Let's try to clarify point 3.

The idea of the algorithm is to work with the Pisano period for 10. Instead of calculating all the Fibonacci numbers in the range, adding them up, and finally extracting modulo ten from the result, we work with the small numbers in the Pisano 60 period.

Calculating the Pisano number for each value in [m, n], adding them all up, and then returning the sum modulo 10 would already be a good solution. However, consider that n - m could be huge. It's not a good idea to add up all those numbers when we can get rid of all the repetitions, since they are not relevant; we can limit the work to the bare minimum, looping at most 60 times.

That's the rationale for considering m and n modulo 60. Still, there is an issue: what if m % 60 is bigger than n % 60?

Say that we want to know the result for m = 58 and n = 123.

58 % 60 is 58, but 123 % 60 is 3. We need to adjust the end value of the loop: let's add 60 to the right value; now we are sure it is bigger than the left one.

Also for this problem I have pushed the naive script, the Pisano script, and a test case for both on GitHub. See the GitHub folder for all the stuff about the week 2 exercises.

Fibonacci modulo with Pisano period

Dealing with Fibonacci numbers easily becomes unmanageable for big numbers. However, if we are not interested in the full number, but just in its modulo, there is a handy trick that can help us.

The thing is that, if we have a look at the Fibonacci sequence after applying a modulo operator to each element, we notice that we always get a periodic sequence.
Modulo 2 leads to an infinite repetition of three elements: 0, 1, 1
Modulo 3, to eight elements: 0, 1, 1, 2, 0, 2, 2, 1
Modulo 10, to sixty elements, et cetera.
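
Just to see the periodicity in action, this tiny snippet prints the first Fibonacci numbers modulo 3:
previous, current = 0, 1
sequence = []
for _ in range(16):
    sequence.append(previous)
    previous, current = current, (previous + current) % 3
print(sequence)  # [0, 1, 1, 2, 0, 2, 2, 1, 0, 1, 1, 2, 0, 2, 2, 1]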

There is no known way to derive the period length - called the Pisano period - from the modulo, but we know that each Pisano period starts with "0, 1", and this signature is found only at the beginning of a period.

Based on this fact, the edX MOOC Algs200x Algorithmic Design and Techniques by the UC San Diego, week 2, proposes three problems. I strongly suggest you take the course before reading on, getting back here to compare your solutions with mine.

Here is the first one, Calculate Fibonacci modulo m.

Given two integers 𝑛 and 𝑚, output 𝐹𝑛 mod 𝑚.

The issue is that n could be up to 10^18. The upper limit for m, fixed at 10^5, is less worrying.

A first naive solution consists in calculating the required Fibonacci number, applying the modulo operator to it, and returning the result. Easy-peasy. However, my naive python implementation started to get too slow for values around a mere 10^5.

To use the Pisano period, we first have to get its length. The only reasonable way I could think of was generating the modulo sequence, checking for the first repetition of the starting signature "0, 1".
def pisano(modulo):
    previous = 1
    current = 1

    result = 1
    while not (previous == 0 and current == 1):  # 1
        buffer = (previous + current) % modulo  # 2
        previous = current
        current = buffer

        result += 1

    return result
1. Loop until the starting signature is found
2. Get the next Fibonacci-modulo number

Let's now write a function to get a Fibonacci-modulo number. We could use a plain Fibonacci function instead, and then apply the modulo operator to its result; this custom function saves the cost of adding big numbers between them.
def fibonacci(number, modulo):
    if number < 2:
        return number

    results = [1, 1]
    for _ in range(number - 2):
        results.append((results[-1] + results[-2]) % modulo)

    return results[-1]
Now, the solution itself is almost trivial.
def solution(number, modulo):
    return fibonacci(number % pisano(modulo), modulo)
Full python code, for both the trivial and Pisano case, and test case on GitHub.

HackerRank DFS: Connected Cell in a Grid

In this HackerRank problem, we are given in input an n x m matrix containing just 0s and 1s as elements. We should give as output the size of the largest available region.

As suggested by the name of the problem, we should think of a solution in terms of graphs, and more specifically of the Depth First Search algorithm on them.

To better understand what we are required to get, let's have a look at the provided example:
[1, 1, 0, 0]
[0, 1, 1, 0]
[0, 0, 1, 0]
[1, 0, 0, 0]
This matrix has two so-called "regions": one takes just the 1 in the bottom-left corner, the other all the remaining ones. Obviously, the largest one is the latter, so we are expected to return 5.

Notice that two 'alive' cells are considered connected if they have distance one, horizontally, vertically, or diagonally.

Following the hint, I implemented my python solution applying DFS to each cell in the matrix.
def solution(matrix):
    result = 0
    for i in range(len(matrix)):
        for j in range(len(matrix[0])):
            result = max(result, connected(matrix, i, j))  # 1
    return result
1. The result is initialized to zero; when connected() returns a value higher than the current result, it takes the place of the best solution found so far.

Now I just have to write the connected() function. Based on DFS, it recursively calls itself looking for all the cells connected to the current one. Slight variation: it returns the size of the group. For efficiency reasons, I decided to implement it in a destructive way: the matrix is modified to keep track of the visited cells. This is usually not a good idea, but in a problem like this we can live with it.
def connected(matrix, i, j):
    if not (0 <= i < len(matrix)) or not (0 <= j < len(matrix[0])):
        return 0  # 1

    if matrix[i][j] != 1:
        return 0  # 2

    result = 1  # 3
    matrix[i][j] = 0  # 4

    for ii in range(i-1, i+2):
        for jj in range(j-1, j+2):
            if i != ii or j != jj:
                result += connected(matrix, ii, jj)  # 5

    return result
1. If we are stepping out of the matrix borders, there's nothing to do. Just return zero.
2. If the current cell is not "alive", it is not part of any group; return zero.
3. Otherwise, count it for the current group.
4. This is the soft spot I mentioned above. Instead of keeping track of the visited nodes, I simply set the current one to "dead". It doesn't harm the algorithm, since we want to count each node just once; still, it could be surprising for the caller to see its matrix changed.
5. Go and visit all the nodes connected to the current one. The two for-loops select the indices in the current cell's adjacency, the "if" excludes the cell itself. We check whether the selected position refers to an actual connected node in (1) and (2), and then apply the recursion.

And that's it. The full python code and the test case I used to help me get to the solution are on GitHub.

HackerRank Recursion: Davis' Staircase

The point of this HackerRank problem is calculating a sort of uber-Fibonacci number. Since we want an efficient solution, we should immediately think of a dynamic programming approach, or at least of some kind of memoization.

Let's call the Davis number of a staircase the number of ways we can climb it, with the rule that we can take 1, 2, or 3 steps at a time.

First thing, we want to identify the base cases. That's pretty easy.
  1. Our staircase has just one step: we have only one way to climb it, taking 1 step.
  2. We can climb it in a single two-step jump, or we can take 1 step and then fall back to the previous case. A total of two ways of climbing.
  3. A single three-step jump is one way to get to the top; otherwise, we can take one step and fall back to case (2), or take a two-step jump and fall back to case (1). This means 1 + 2 + 1 = 4 ways of climbing this staircase.
The generic case is determined by the three preceding ones, something like this:
result = davis_number(n-1) + davis_number(n-2) + davis_number(n-3)
Given that, I skipped writing a plain recursive solution to the problem and moved directly to a Python script that uses a cache to get the result via DP.

I cached the base step results identified above in a list (and I defined a constant that makes explicit the requirement stating that a staircase cannot have more than 36 steps):
cache = [1, 2, 4]
MAX_STEPS = 36
And then I wrote my solution:
def solution(steps):
    assert 0 < steps <= MAX_STEPS  # 1
    if steps <= len(cache):  # 2
        return cache[steps - 1]

    for _ in range(steps - len(cache)):  # 3
        cache.append(cache[-1] + cache[-2] + cache[-3])

    return cache[-1]  # 4
1. Without this assertion, the code would become unstable for negative input values (maybe an IndexError exception, maybe simply a wrong result) and would take a very long time for large positive input values. Better a predictable AssertionError instead.
2. The result has already been calculated and is available in the cache. Just return it.
3. Let's calculate the required value. To do that, we need all the previous values, so we extend the cache, calculating all of them in order.
4. Finally, we return the rightmost element in the cache, that is, the required Davis' number.

If MAX_STEPS were a bigger number, it could be a good idea to predetermine the full size of the cache, filling it with zeroes and pushing the actual values as the solutions are created. This would require an extra variable to keep track of the current last Davis number in the cache, and it would make the code slightly more complicated. Here I found it better to keep the code simple.

Full Python script and testcase on GitHub.
