
From GIF to PNG with ImageMagick

Once you have installed ImageMagick in your environment (on Ubuntu, you could simply install the imagemagick and libmagick++-dev packages via apt-get), it is quite easy to write a simple application to test that you have everything ready to work with this handy library.

One of the easiest things we can do is converting an existing image from its original format to another one. Let's use this functionality to write a sort of "hello world" application.

I am working in C++ (that's why I have installed the libmagick++-dev package) on a 64 bit Debian Linux machine (and the Debian side of things explains why I installed the packages using apt-get). I am using the GCC C++ compiler, and Eclipse as IDE. So, be prepared to apply some changes to fit the example to your requirements.

I created a new C++ project, adding some information to let the IDE find the required includes and libraries.

In Properties, C/C++ Build, Settings

- In the GCC C++ Compiler, Includes page I added "/usr/include/ImageMagick"
- In the GCC C++ Linker, Libraries page I added:
-L: /usr/lib/x86_64-linux-gnu/
-l: Magick++

So, now Eclipse knows where to find the Magick include directories, and where to find the shared object libMagick++.so that my executable needs to run properly. If you run ldd on the resulting program file, you will see that it also depends on other Magick shared objects, namely libMagickWand.so and libMagickCore.so.

I have put a GIF file in my /tmp directory, and I want my (silly) little program to convert it to PNG. Here is the code.
#include <Magick++.h>

int main()
{
    Magick::Image image; // 1
    image.read("/tmp/simple.gif"); // 2
    image.write("/tmp/simple.png"); // 3
}
1. An empty Magick image.
2. ImageMagick loads the passed file into the image. What if such a file does not exist? A Magick::ErrorBlob exception would be thrown. And since I have not wrapped my code in a try-catch block, this would result in a (small) catastrophe (a version with basic error handling is sketched right after these notes).
3. The image is saved to the newly specified file (if the application can actually write it). Notice that Magick is smart enough to understand which format to use according to the passed file name extension.
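
Since a missing input file would kill the program with an unhandled exception, here is a minimal sketch of the same conversion with basic error handling added. This is just my variation on the example above; depending on the ImageMagick version installed, the explicit call to Magick::InitializeMagick() may or may not be strictly required:
#include <iostream>
#include <Magick++.h>

int main(int, char* argv[])
{
    Magick::InitializeMagick(argv[0]); // library setup, harmless if your version does not need it

    try
    {
        Magick::Image image;
        image.read("/tmp/simple.gif");
        image.write("/tmp/simple.png");
    }
    catch(const Magick::Exception& ex) // base class of ErrorBlob and all the other Magick errors
    {
        std::cerr << "Conversion failed: " << ex.what() << std::endl;
        return 1;
    }
}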

Go to the full post

Number of disc intersections - linear solution

I described this Codility problem in a previous post. There I took an alluring path that drove me to a weak O(N square) solution. Then I tried to save the day, tweaking that algorithm so that, in the best case, it could lead to an acceptable O(N log N) result. But we could not avoid acknowledging that we still had an O(N square) time complexity in the worst case.

Thinking a bit more about the nature of the problem, we can find a much more elegant algorithm, one that gives us a shiny O(N) time complexity.

A different approach

Instead of focusing on the discs, let's check how a generic point contributes to the total number of intersections. We just need to know how many discs have started before that point, and how many are starting exactly there. An example should clarify the matter.

The red vertical line in the picture represents the current point.

We have three discs (A, B, C - represented by black lines) that have already started, and a new one (D - the blue line) that starts there.

It is easy to see that we'll have three new intersections: AD, BD, CD. The blue line accounts for one intersection with each black line.

Things get a bit more complicated when many new discs start at the same point.
Say that we have three running discs (A, B, C - black lines), and three new ones (D, E, F - blue lines) starting there (at the red line).

Similarly to the previous example, we'll have a new bunch of intersections deriving from the contact between the first group and the second, namely, AD, AE, AF; BD, BE, BF; CD, CE, CF. That is 3 * 3.

But this is just part of the story: now we also need to consider the intersections internal to the second group. In this case there are three of them: DE, DF, EF.

It is easy to count them by hand, but what is the rule that determines this number?

Mathematically speaking, we are looking for the k-combinations of n elements, where k is the number of parts involved in each intersection (in our case it is always 2), and n is the number of elements we are dealing with (the new discs, represented by blue lines). This n-choose-k value is formally named the binomial coefficient (n k), and is calculated using the formula n! / (k! * (n-k)!).

Since k = 2 (we want the number of intersections between pairs of discs), the only variable left is n, the number of new discs, and we can simplify the formula to n! / (2! * (n-2)!) = n * (n-1) / 2.

Let's get the number of intersections using it.

In the first case there is just one new disc, so we have 3 intersections, 1 blue against 3 blacks, plus 1-choose-2 for the blue: n * (n - 1) / 2 = 1 * 0 / 2 = 0. That means that in the first case we have just the three intersections.

What happens in the case of 2 new discs against 3 already existing? We'll have 6 intersections, 2 blues * 3 blacks, plus 2-choose-2 for the blues: 2 * 1 / 2 = 1. A total of seven intersections.

If we have three blues and three blacks, we'll have 3 * 3 = 9 intersections, and 3-choose-2 for the blues alone: (3 2) = 3 * 2 / 2 = 3. Twelve intersections.
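
Before looking at the real implementation, the counting rule is easy to sketch in a few lines of C++. This is just an illustration of mine, with names invented for the occasion:
#include <cassert>

int pairsAmong(int n) // n-choose-2, the simplified binomial coefficient
{
    return n * (n - 1) / 2;
}

int pointContribution(int open, int started) // intersections added at a single point
{
    return open * started + pairsAmong(started);
}

int main()
{
    assert(pointContribution(3, 1) == 3);  // first example: three intersections
    assert(pointContribution(3, 2) == 7);  // second example: seven
    assert(pointContribution(3, 3) == 12); // third example: twelve
}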

The code
int beta(const std::vector<int>& input)
{
  const SIZE_T SIZE = input.size();

  if(SIZE < 2)
    return 0;
  if(SIZE > MAX_ELEMENTS)
    return -1;

  std::pair<int, int> disks[SIZE]; // 1
  for(SIZE_T i = 0; i < input.size(); ++i)
  {
    ++disks[(int)i < input[i] ? 0 : i - input[i]].first;
    ++disks[i+input[i] >= input.size() ? input.size() - 1 : i + input[i]].second;
  }

  int result = 0;
  int open = 0; // 2
  for(SIZE_T i = 0; i < input.size(); ++i)
  {
    result += open * disks[i].first + (disks[i].first * (disks[i].first - 1)) / 2; // 3
    if(result > MAX_INTERSECTING_PAIRS)
      return -1;

    open += disks[i].first - disks[i].second; // 4
  }

  return result;
}
1. Same data structure as used in the previous implementation, but here it has a different meaning. Each pair represents a point in the interval [0, input size); its first component keeps track of how many discs start at that point (note that starting points falling below zero are clamped to zero, for the same reason discussed in the previous post); its second component represents the number of discs ending there.
2. Initially we have no "open" discs.
3. This is the core of the algorithm. The first part calculates the intersections between the already existing discs and the number of discs starting in this point. The second part is the binomial coefficient discussed above.
4. Add to "open" the number of discs starting here, and subtract the one that terminates here.

Go to the full post

Number of disc intersections - O(N*M) solution

This Codility problem, and a first inefficient solution to it, is described in the previous post. In a few words, we have a vector of elements, each of them representing a segment on the x axis, and we want to calculate how many intersections these segments have among them. Looping on all the elements and comparing each one against the others is a plain O(N square) algorithm, which leads to horrible performance as the size of the input grows.

Here I try to make that naive solution more acceptable, tweaking a few details of the algorithm. In some circumstances, this could even lead to an O(N log N) solution that would make anyone happy. Still, the worst case keeps its dismaying O(N square) time complexity.

Some considerations

There's no need to check values to the left of the first disc center (i.e. zero) or to the right of the last one. We can simply clip the extremes that exceed those limits; no information is lost.

If we sort the discs, so that the first is the one "more to the left" and the last is the one "more to the right", we can simplify the comparison between the current disc and the following ones, stopping at the first non-intersecting couple.

To sort the discs, we should use a buffer container so that we can use the STL sort() function. I am going to use a raw array of std::pair's, where first is the left limit of a disc and second the right one. Notice that the default ordering of std::pair, as specified by the standard (first component first, then second), is just what we need.
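
Just to make that last point concrete, here is a minimal sketch of mine (not part of the original post) showing that the default ordering of std::pair sorts by the left limit first, then by the right one:
#include <algorithm>
#include <iostream>
#include <utility>

int main()
{
    std::pair<int, int> disks[] = { {3, 7}, {0, 4}, {0, 2}, {2, 5} }; // {left, right} limits

    std::sort(std::begin(disks), std::end(disks)); // lexicographic comparison: first, then second

    for(const std::pair<int, int>& disk : disks)
        std::cout << '[' << disk.first << ", " << disk.second << "] ";
    std::cout << std::endl; // prints: [0, 2] [0, 4] [2, 5] [3, 7]
}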

Refactoring the code
int beta(const std::vector<int>& input)
{
  const SIZE_T SIZE = input.size(); // 1

  if(SIZE < 2)
    return 0;
  if(SIZE > MAX_ELEMENTS)
    return -1;

  std::pair<int, int> disks[SIZE]; // 2
  for(SIZE_T i = 0; i < SIZE; ++i)
  {
    disks[i].first = i - input[i] < 0 ? 0 : i - input[i]; // 3
    disks[i].second = (int64_t)input[i] + i >= input.size() ? input.size() - 1 : input[i] + i; // 4
  }

  std::sort(disks, disks + SIZE);

  int result = 0;
  for(SIZE_T i = 0; i < SIZE - 1; ++i) // 5
  {
    for(SIZE_T j = i + 1; j < SIZE; ++j) // 6
    {
      if(disks[i].second >= disks[j].first)
      {
        if(result == MAX_INTERSECTING_PAIRS)
          return -1;
        ++result;
      }
      else // 7
      {
        break;
      }
    }
  }
  return result;
}
1. In theory, any C++ compiler should be aggressive enough to optimize away a call to size(), but I got the impression the Codility compiler isn't. And this algorithm is very sensitive: losing a few millis here and there could be fatal.
2. Prepare to put the discs in this raw array, so that we can sort them.
3. Each first represents the left limit of a disc. As said above, we clip it at zero.
4. Beware of overflow. A value could be (close to) INT_MAX, so increasing it requires a 64 bit integer.
5. Looping on all the discs (but the last one).
6. Looping on all the following discs but ...
7. ... since we are dealing with an ordered copy of the original collection of discs, when we see the first one that does not intersect, we can stop looping.

Algorithm complexity

The data buffer for sorting costs us an extra O(N) space complexity, but this is allowed by the problem.

The sorting costs O(N log N) in terms of time complexity. In our best case this is the complexity of the algorithm.

Normally we have an O(N*M) time complexity, where M is the average number of following discs each disc has to be compared against before finding a non-intersecting one. The best case happens when there is no intersection at all, and this part of the algorithm becomes less expensive than the sorting. In the worst case, M tends to N/2, so the complexity is again O(N square).

It is easy to write a test case that checks the worst case scenario for this algorithm:
TEST(TestBeta, CaseLastHuge)
{
  std::vector<int> input;
  input.resize(MAX_ELEMENTS); // 1
  input.back() = MAX_VALUE;

  boost::posix_time::ptime start = boost::posix_time::microsec_clock::local_time();

  ASSERT_EQ(MAX_ELEMENTS - 1, beta(input));

  boost::posix_time::time_duration diff = boost::posix_time::microsec_clock::local_time() - start;
  ASSERT_LT(diff.total_milliseconds(), 7); // 2
}
1. We have the biggest possible vector of elements; all of them are zero but the last one, which is huge.
2. I'd like this case to be solved in a few milliseconds, but that is not what happens (at least on my machine).

Even if this solution gets a whopping 100/100 on Codility, it is not the best solution you can achieve. We'll see in the next post a much more elegant and performant algorithm.

Go to the full post

Number of disc intersections - O(N square) solution

The beta 2010 test from the Codility training requires knowing, at least at an intuitive level, about binomial coefficients. Otherwise, considering the short time available for pondering on such problems, and the pressure you could feel if this is part of an interview process, you are doomed. Or you have to fall back to a sub-optimal solution.

The problem looks quite innocent. We have an array of integers, each element representing a one-dimensional "disc" having as center the index of the element itself and as radius the element's value. So, if A[3] is 1, this means that we have a "disc" centered on three with a radius of one. You can visualize it as the segment [2, 4]. Our job is to write a function that counts the intersections among these "discs".

Setup

As usual, I started working on the problem thinking of a C++ implementation, and writing a few test cases (I use gtest) to help me design the code. Here are three of the tests I have written, with a few constants and a type definition that help keep the code readable:
const int MAX_INTERSECTING_PAIRS = 10000000; // 1
const unsigned int MAX_ELEMENTS = 100000; // 2
const int MAX_VALUE = 2147483647; // 3

typedef std::vector<int>::size_type SIZE_T;

int beta(const std::vector<int>& input); // 4

TEST(TestBeta, CaseSimple) // 5
{
  std::vector<int> input = {1, 5, 2, 1, 4, 0};
  ASSERT_EQ(11, beta(input));
}

TEST(TestBeta, CaseOverflow) // 6
{
  std::vector<int> input = { 0, MAX_VALUE, 0 };
  ASSERT_EQ(2, beta(input));
}

TEST(TestBeta, CaseTooMany) // 7
{
  std::vector<int> input;
  input.resize(MAX_ELEMENTS + 1);

  ASSERT_EQ(-1, beta(input));
}
1. If the solution is getting huge, we are allowed to (actually, we have to) abort the job and return an error code (minus one).
2. Ditto if there are too many elements in the array.
3. We can assume that the values in input are 32 bit signed integers; that means this is the max value we could get.
4. Declaration for the function we have to implement.
5. Test case provided by Codility. We have six segments, [-1, 1], [-4, 6], [0, 4], [2, 4], [0, 8], [5, 5], that should give eleven intersections. Note that the last element has radius zero, so it collapses to a single point, at x = 5.
6. Codility problems are often written to lead to surprises in the extreme cases. Better to think of a few test cases in that neighborhood. Here I ensure that a huge "disc" is checked correctly. I expect just two intersections here.
7. Let's check what happens when the input is too big.

An inefficient solution

The first solution that jumped to my mind is naive but effective, at least for a small number of elements. Just loop on all the discs, comparing the right limit of the current one against the left limit of all the following ones. The good thing about this algorithm is that it is pretty easy to write. You should have it written and tested in a jiffy. On the flip side, it has an O(N square) time complexity, meaning it would take forever to run on a large input vector.
int beta(const std::vector<int>& input)
{
  if(input.size() < 2) // 1
    return 0;
  if(input.size() > MAX_ELEMENTS) // 2
    return -1;

  int result = 0;
  for(int64_t i = 0; i < static_cast<int64_t>(input.size() - 1); ++i) // 3
    for(int64_t j = i+1; j < static_cast<int64_t>(input.size()); ++j) // 4
      if(i + input[i] >= j - input[j]) // 5
      {
        if(result == MAX_INTERSECTING_PAIRS) // 6
          return -1;
        ++result;
      }

  return result;
}
1. If there are less than two "discs", we can't have any intersection.
2. No more than MAX_ELEMENTS are allowed in input.
3. Let's loop on all the discs (but the last one; at that point all the job would already be done).
4. As said above, I compare each disc against all the others. More precisely, I compare it against all the following ones, since the intersections with the previous ones have already been checked by the previous iterations.
5. Check if the "i" disc overlaps the "j" one. If you wondered why I have used int64_t integers to represent the vector's indices, here is the answer. Otherwise, in case of huge values stored in the vector, we could get an overflow here. I could have used a more natural type for the indices and written this line as something like "if((int64_t)input[i] + input[j] + i - j >= 0)". Use whichever version looks more readable to you.
6. We have an intersection; before keeping track of it, we should check that we haven't already reached the limit.

Testing time duration

To see how this implementation is bound to fail, we could write a test case that checks how long it takes to run. Boost has a time library that comes in handy for this task:
#include <boost/date_time/posix_time/posix_time.hpp>

// ...

TEST(TestBeta, Case20K)
{
  std::vector<int> input;
  input.resize(20000); // 1

  boost::posix_time::ptime start = boost::posix_time::microsec_clock::local_time();

  ASSERT_EQ(0, beta(input));

  boost::posix_time::time_duration diff = boost::posix_time::microsec_clock::local_time() - start;
  ASSERT_LT(diff.total_milliseconds(), 5); // 2
}
1. In this test I don't care much about the vector content; twenty thousand zeroes are good enough to test the time complexity of my function.
2. I want checking this vector to take less than five milliseconds. Tune this value to your system and expectations.

Improve the result

We have got a solution, and this is sweet, but it is a weak one (it scores 65/100). Now we have a couple of alternatives: working on this code to make it more performant, or looking for a better algorithm that solves the structural issues of the current one. More about it in the following posts.

Go to the full post

Covering prefix

Another Codility training test is the alpha 2010 one, also known as prefix_set.

We have an array of integers, and we want to identify its shortest prefix that contains all the values of the complete lot. To get the sense of the problem, there is nothing better than a few test cases.

I am coding in C++, and I use gtest as test framework, but you should be able to port this stuff to your preferred development platform with limited effort:
typedef std::vector<int>::size_type SIZE;
const SIZE MAX_SIZE = 1000000; // 1

int alpha(const std::vector<int>& input); // 2

TEST(TestAlpha, CaseSimple) // 3
{
  std::vector<int> input = {2, 2, 1, 0, 1};

  ASSERT_EQ(3, alpha(input)); 
}

TEST(TestAlpha, CaseSimple2) // 4
{
  std::vector<int> input = {2, 2, 1, 2, 1};

  ASSERT_EQ(2, alpha(input));
}

TEST(TestAlpha, CaseHugeFlat) // 5
{
  std::vector<int> input;
  input.resize(MAX_SIZE, 0);

  ASSERT_EQ(0, alpha(input));
}

TEST(TestAlpha, CaseHuge) // 6
{
  std::vector<int> input;
  input.reserve(MAX_SIZE);
  input.resize(MAX_SIZE - 1, 1);
  input.push_back(42);

  ASSERT_EQ(MAX_SIZE - 1, alpha(input));
}
1. The biggest beast we could meet has a one million element size.
2. Our function gets in input a vector of int, and returns the index of the last element we need to complete the required subset.
3. Test case provided by Codility. Given in input {2, 2, 1, 0, 1}, we should return three, the index of the element containing the value 0, because the next element contains a 1, which has already shown up.
4. A slight variation of (3). For {2, 2, 1, 2, 1} a return value of two is expected.
5. A huge vector containing just copies of the same value. We should return zero.
6. Variation on (5), the last element is the only different one, we should return its index.

Point (1) leads to a couple of constraints that I didn't formalize here, to keep the solution as simple as possible (but see below). The vector size should be in the interval (0 .. 1,000,000], and the values in the vector in [0 .. 1,000,000).

We are looking for something that is more difficult to say than to see: the position of the first occurrence of the last value to show up in the vector. Let's see it in CaseSimple. There are three values in its input vector: zero, one, and two. The last one to show up is zero, and it appears in only one instance, at position three. So three is the value that we have to return.

The algorithm should be something like this: loop on all the items in the vector; if the current item holds a value that we have not met yet, its index becomes the candidate to be returned. Pretty simple, isn't it?

We just need to keep track of each possible vector value, to have a way to detect which values we have already seen. I used a vector of bool for that:
int alpha(const std::vector<int>& input)
{
  std::vector<bool> check;
  check.resize(input.size(), false); // 1

  SIZE candidate = MAX_SIZE; // 2
  for(SIZE i = 0; i < input.size() ; ++i)
  {
    SIZE cur = input[i];
    if(!check[cur]) // 3
    {
      check[cur].flip(); // 4
      candidate = i; // 5
    }
  }

  return candidate;
}
1. The support vector, initialized with all-false values.
2. Initialize the candidate solution to a bad value, so that the caller could know if anything weird happened.
3. I consider only the values that have not been checked before.
4. Mark the current value as already checked.
5. The current index will be returned as the solution, if we don't find any better candidate.

This code is fast and it is accepted by Codility with full marks. Still, we could do better, even if only for personal fun. Let's add some checks to the code to enforce the limits on size and values for the input vector. I want my function to throw an int exception when it detects something wrong, as shown by these test cases:
TEST(TestAlpha, CaseEmpty) // 1
{
  std::vector<int> input;

  ASSERT_THROW(alpha(input), int);
}

TEST(TestAlpha, CaseTooBig) // 2
{
  std::vector<int> input;
  input.resize(MAX_SIZE + 1);

  ASSERT_THROW(alpha(input), int);
}

TEST(TestAlpha, CaseOverflow) // 3
{
  std::vector<int> input;
  input.resize(1, 1);

  ASSERT_THROW(alpha(input), int);
}
1. There should be at least one item in the vector. Passing an empty vector should result in an (int) exception.
2. The vector size should be smaller than one million.
3. Any value in the vector should be smaller than its size.

It is easy (and cheap) to enforce these requirements in the code:
int alpha(const std::vector<int>& input)
{
  if(input.empty() || input.size() > MAX_SIZE) // 1
    throw 0;

  std::vector<bool> check;
  check.resize(input.size(), false);

  SIZE candidate = MAX_SIZE;
  for(SIZE i = 0; i < input.size() ; ++i)
  {
    SIZE cur = input[i];
    if(cur >= input.size()) // 2
      throw 1;

    if(!check[cur])
    {
      check[cur].flip();
      candidate = i;
    }
  }

  return candidate;
}
1. Check the vector input size.
2. Check each value against the vector size.

Go to the full post

Codility equi

Interviews for software developer positions are a sort of Russian roulette, you never know what kind of question could be shot at you.
Codility is a site claiming to be "filtering out job candidates who cannot write correct programs". I'd say they have too high an opinion of themselves; they just filter out candidates who are not accustomed to their tests. In any case, it is worth checking them out.

There is a training page on Codility, where a few sample problems are listed, and you can try to solve them. I have checked out their Equi one or, as they call it, the mother of all tasks. As I already noted, they are on the overstating side of life.

My solution is for C++ but, being a conceptually simple problem, could be easily ported to many other programming languages.

In half an hour, we have to write a function that takes in input an STL vector of int and returns, again as an int, its "equilibrium index", if such a beast exists, otherwise -1.

That index is the position of an element such that the sum of the elements on its left matches the sum of the ones on its right. So, for instance, passing { 1, 2, 42, 1, 1, 1 } we should get back 2, since 1 + 2 equals 1 + 1 + 1. Simple, isn't it? There are a couple of things we should pay attention to, though. Most interestingly, they say that the "expected worst-case time complexity is O(N)", alluding to the fact that we could get in input a vector with a size in the order of millions, and that would lead an O(N square) algorithm to fail miserably.

Like any TDD developer, I started by writing a fake implementation for the required function, and a few test cases. Something like this:
int equi(const std::vector<int>& input)
{
  if(input.empty()) // 1
    return -1;

  // 2

  return -1;
}

TEST(TestEqui, CaseSimple) // 3
{
  std::vector<int> input = { 1, 2, 42, 1, 1, 1 };
  int pivot = equi(input);
  ASSERT_EQ(2, pivot);
}

TEST(TestEqui, CasePlain) // 4
{
  std::vector<int> input = { -7, 1, 5, 2, -4, 3, 0 };
  int pivot = equi(input);
  ASSERT_EQ(3, pivot);
}

TEST(TestEqui, CaseNot) // 5
{
  std::vector<int> input = { 7, 1, 5, 2, 4, 3, 0 };
  int pivot = equi(input);
  ASSERT_EQ(-1, pivot);
}

TEST(TestEqui, CaseBigNum) // 6
{
  std::vector<int> input = { INT_MAX, 1, 0, INT_MIN };
  int pivot = equi(input);
  ASSERT_EQ(-1, pivot);
}
1. If I get in input an empty vector, there is nothing to do, return the "NOT FOUND" value.
2. Here will go the meat.
3. The test case I have informally described above.
4. Test case gently provided by the Codility guys.
5. Let's also check a case where the result should be "NOT FOUND".
6. I have found out that Codility problems often pay special attention to limit cases. Here I check the case of overflow.

It is difficult to test the time complexity for this problem: I should have written a test case where the input vector had a few million elements, and I should have put a time constraint on it. Kind of overkill in this situation. Considering that we are given a mere half an hour to provide a solution, we have no time for it. We should rely instead on our software designer common sense.

Firstly, I wrote an intuitive solution that works fine in normal cases, but fails if we push on it a bit:
typedef std::vector<int>::size_type SIZE;

int equi(const std::vector<int>& input) // !!! SLOPPY VERSION !!!
{
  if(input.empty())
    return -1;

  for(SIZE index = 0; index < input.size(); ++index) // 1
  {
    int left = 0; // 2
    for(SIZE i = 0; i < index; ++i)
      left += input[i];

    int right = 0; // 3
    for(SIZE i = index + 1; i < input.size(); ++i)
      right += input[i];

    if(left == right) // 4
      return index;
  }
  return -1;
}
1. We are looking for the index of an element in the vector, so I loop on all of them.
2. Let's sum up all the elements on the left of the candidate index, then (point 3) I'll get the right sum, and finally (point 4) I'll compare them.
3. I have two for loops inside another for loop. The external loop has time complexity O(N), both internal ones O(N/2) on average. The result is an O(N square) algorithm. No problem at all for small N's, but we can foresee a catastrophe for large N's.
4. Notice that I stored the left and right values in plain int's. This is alright for normal cases, but what if I play with big numbers? As the test CaseBigNum will show, overflow is lurking behind the internal loops.

One could ask: why bother with such a puny solution? Wouldn't it be better to go straight to a more robust one? Well, a better solution might not exist, or we might not be able to come up with it in a reasonable time. And a solution that works in a fair number of cases is usually considered better than nothing at all. Besides, this first puny solution helped me to write a few test cases, leading to a better knowledge of the problem.

We need to refactor our solution. The first issue is easily solved: instead of int, we should use a larger data type to store the sums of elements. The biggest standard C/C++ integer type is "long long", let's go for it.

The second issue is tougher. We don't want a loop inside a loop, that's for sure. That means we need to change approach, we need to be smarter. Instead of thinking of the vector as a single fixed collection of elements, let's think of it as two changing collections. Initially, the left one is empty and the right one contains all the elements but the first, our initial pivot candidate. We compare their sums; if they don't match, we move on and consider the next element:
int equi(const std::vector<int>& input)
{
  if(input.empty())
    return -1;

  long long sum = 0; // 1
  for(SIZE i = 0; i < input.size(); ++i) // 2
    sum += input[i];

  long long left = 0;
  long long right = sum;
  for(SIZE pivot = 0; pivot < input.size(); ++pivot) // 3
  {
    right -= input[pivot]; // 4
    if(left == right)
      return pivot;
    left += input[pivot]; // 5
  }

  return -1;
}
1. Notice that I'm using long long.
2. First O(N) loop, I get the sum of all the elements in the collection.
3. Another O(N) loop. We could expect to usually find the pivot in the middle of our collection, so we can estimate an average O(N/2) time complexity for this step.
4. Initially, right is the total sum; each time I check a pivot, I remove its value from the right collection.
5. I'm shifting to the right: before moving on, let's add the just-checked candidate pivot to the left collection.

We still have two loops, in the worst case both O(N), but now they are at the same level: we have to sum their costs, not multiply them, to get the function complexity. Meaning that we can officially enjoy an O(N) solution.

Go to the full post

Adding reversed numbers

One of the most popular SPOJ classical problems is number 42 (a good one indeed), codename ADDREV, which is about reversing numbers.

We are asked to read couples of (small) positive integers, reverse them, add them up, reverse the result, and return it.

So, if we get in input 24 and 1, we should add 42 and 1, and return 34. Easy, isn't it?

There shouldn't be any issue in solving this problem. In any case, here is a possible implementation of its central part, a function (so simple that, although I wrote it thinking in C++, it could pass for C or Java, and be easily ported to many other languages with minimal, if any, changes) that gets an integer in input and returns it reversed:
int revert(int input) // 1
{
    int result = 0;
    while(input > 0)
    {
        result = result * 10 + (input % 10); // 2
        input /= 10;
    }
    return result;
}
1. Using a plain int instead of its unsigned cousin is sloppy. What if the user passes a negative number? He gets back a zero! Maybe this behavior makes sense, maybe not.
2. A problem I recently solved (Alien numbers) required a similar approach. Here I divide the current input value by 10; the remainder is the digit I am interested in, and I add it to the current result (after multiplying the result by ten), and so on till I have processed the complete number.

If you are not sure of what is going on here, you should try to execute the function by hand. It helps a lot.
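
To complete the exercise, a minimal driver of mine could look like the sketch below. I am assuming the usual SPOJ input format, a first line with the number of couples, followed by one couple per line:
#include <iostream>

// revert() as defined above

int main()
{
    int cases = 0;
    std::cin >> cases; // how many couples we are going to read

    for(int i = 0; i < cases; ++i)
    {
        int first, second;
        std::cin >> first >> second;
        std::cout << revert(revert(first) + revert(second)) << '\n';
    }
}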

Go to the full post

SPOJ Prime generator

Another well known source of programming problems is SPOJ, the Sphere Online Judge. Among the SPOJ classical problems, the first (interesting) one is about building a prime generator, with a few specific requirements.

I guess the C++ solution I am about to describe here won't be considered a SPOJ top notch one because, instead of focusing on writing the most performant piece of code possible, I have been more interested in writing something that is easy to understand. Everyone has their own priorities, I guess.

The code should give as output all the prime numbers in a closed interval [low, high] with a span less than or equal to 100,000. The biggest number we are interested in is a mere 1,000,000,000. So, no big performance issues, if you are using a decent machine.

Given the relatively small numbers involved, we can safely use the sieve of Eratosthenes to check if a given number is a prime or not, as already discussed in a previous post.

I found it interesting to design a solution based on a class, like this:
class PrimeGenerator
{
private:
    enum { MAX_VALUE = 1000000000, MAX_DELTA = 100000 }; // 1
    std::vector<int> primes_; // 2
public:
    PrimeGenerator(); // 3
    std::vector<int> findPrimes(int low, int high); // 4
};
1. In the real world, I'd prefer to have these values configurable, but for our toy generator a couple of constants will suffice.
2. Here I'm going to store a few thousand prime numbers, enough of them to let the sieve work fast on any expected input interval. Given the requested MAX_VALUE, working with int (on a 64 bit architecture) is more than enough.
3. The constructor takes care of the boring and time-expensive job of setting up the private primes vector.
4. So that this function would be (relatively) blazing fast.

My solution makes sense if we plan to make extensive use of this prime generator; otherwise the pain I take in the constructor to calculate all those prime numbers could be a sheer waste of time.

Here is the constructor:
PrimeGenerator::PrimeGenerator()
{
    primes_.reserve(3401); // 1
    primes_.push_back(2);
    primes_.push_back(3); // 2

    const int MAX_DIV = static_cast<const int>(std::ceil(std::sqrt(static_cast<double>(MAX_VALUE)))); // 3
    for(int i = 5; i <= MAX_DIV; i += 2) // 4
    {
        bool prime = true;
        for(std::vector<int>::iterator it = primes_.begin(); it != primes_.end(); ++it)
        {
            if(i % *it == 0)
            {
                prime = false;
                break;
            }
        }
        if(prime)
            primes_.push_back(i);
    }
}
1. What is this 3401 magic number? Well, I cheated. The sqrt(1,000,000,000) is a bit less than 31,623, and in [1 .. 31,623] there are 3401 prime numbers. So, knowing that these are the prime numbers that we need (if you wonder why, you could check my previous post on the sieve of Eratosthenes to clear your doubts - hopefully), I reserved this exact number of elements for the primes vector. This saves some memory (re)allocation time.
2. The first cases are so trivial, I decided to resolve them by hand.
3. What?! Don't be fooled by this longish line; it only says that I take the square root of the biggest value my toy generator manages, rounded up to the next integer and converted to an int (counterintuitively, the ceiling function ceil() returns a double; if you think about it, it makes sense, but it could still be perplexing at first sight). This value is 31,623, but I would have felt bad using another magic number in so few lines, given that it is so easy to calculate.
4. Check only odd numbers up to the square root of the max value; it doesn't make sense to check even numbers, since we already know they are not primes. When I find a prime, I add it to the private vector of primes, and so on till I have checked all 31K+ values. A smarter prime calculator would cache those values in a file, so that I would take this pain just once (a possible sketch of this idea follows these notes). But let's assume this piece of code is meant to be loaded once in memory and stay there alive and running for a long time, so that optimizing its startup wouldn't be very useful.
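
Even though I am not going to use it here, a sketch of that caching idea could be something like this. loadPrimes() and savePrimes() are hypothetical helpers of mine, and the file format (one integer per line) is just an example:
#include <fstream>
#include <vector>

bool loadPrimes(std::vector<int>& primes, const char* filename)
{
    std::ifstream in(filename);
    if(!in)
        return false; // no cache yet, the caller falls back to the full computation

    int value;
    while(in >> value)
        primes.push_back(value);
    return !primes.empty();
}

void savePrimes(const std::vector<int>& primes, const char* filename)
{
    std::ofstream out(filename);
    for(std::vector<int>::const_iterator it = primes.begin(); it != primes.end(); ++it)
        out << *it << '\n';
}
The constructor would then call loadPrimes() first, and fall back to the full computation (followed by a savePrimes()) only when the cache is not available.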

Once the vector containing that bunch of prime numbers is ready, I can call the only other method in the class public interface to get all the prime numbers in a given interval:
std::vector<int> PrimeGenerator::findPrimes(int low, int high)
{
    if(low < 1 || high < 1 || low > high || high > MAX_VALUE || high - low > MAX_DELTA) // 1
    {
        std::cout << "Bad input values" << std::endl;
        return std::vector<int>();
    }

    std::vector<int> output;
    output.reserve((high - low) / 2); // 2

    int actualLow = low; // 3
    if(low <= primes_.back())
        for(std::vector<int>::iterator it = primes_.begin(); it != primes_.end() && *it <= high; ++it)
            if(*it >= low)
                output.push_back(*it);
    if(!output.empty()) // 4
        actualLow = output.back() + 1;

    int maxRoot = static_cast<int>(std::ceil(std::sqrt(static_cast<double>(high)))); // 5
    for(int i = actualLow; i <= high; ++i)
    {
        bool prime = true;
        for(std::vector<int>::iterator it = primes_.begin(); it != primes_.end() && *it <= maxRoot; ++it)
        {
            if(i % *it == 0)
            {
                prime = false;
                break;
            }
        }
        if(prime)
            output.push_back(i);
    }

    return output;
}
1. Let's check the input values before starting the real job. You might want to throw an exception in case of unexpected values. I take a milder approach here, returning no prime numbers (and logging a message to the standard output console).
2. Saving some memory allocation time. The number of primes in a given interval is not easy to estimate. But we can safely assume they are usually fewer than half the size of the interval, and the exceptions to this empirical rule are mostly meaningless.
3. We have already calculated a bunch of prime numbers. It is worth checking among them before doing the same expensive job again.
4. If we have found at least one element in the cached vector of primes, we start checking the values in the interval above it.
5. Time to apply Eratosthenes, in a restricted version, given what we already know. We don't need to check the values against all the primes we have; it is enough to check them up to the square root of the actual max value.

As usual, I wrote a number of tests to drive the code development. This post is already too long to include them, but I guess you can figure them out.
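
For completeness, here is a minimal sketch of what a couple of those test cases could look like. These are mine, not the ones from the original post, and the expected values are easy to verify by hand:
TEST(TestPrimeGenerator, CaseSmallInterval)
{
    PrimeGenerator generator;
    std::vector<int> primes = generator.findPrimes(1, 10);

    ASSERT_EQ(4u, primes.size()); // 2, 3, 5, 7
    EXPECT_EQ(2, primes[0]);
    EXPECT_EQ(7, primes[3]);
}

TEST(TestPrimeGenerator, CaseBadInput)
{
    PrimeGenerator generator;
    EXPECT_TRUE(generator.findPrimes(42, 1).empty()); // low greater than high
}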

Go to the full post