Minimum Scalar Product

In its small dataset version, problem A of the 2008 Code Jam Round 1A, known as Minimum Scalar Product, was solved by almost all the users. Still, the large dataset was a harder nut to crack, with less than half the participants getting it right. This probably means that the real issue in this problem is the time complexity of the solving algorithm, and the data types involved. That makes sense, because the question looks easy to solve, at least in its abstract form, but we need to pay some attention to its special cases.

In a few words, the problem boils down to this request: given two vectors in input, reorder their elements so as to minimize their scalar product.

Some theory

Your mathematical background should tell you what the scalar product of two vectors is. It goes by a couple of other names, dot product and inner product. It takes two vectors of the same size and gives back a single value (just a "scalar"). The result is calculated by summing the products of the elements in the same position in the two vectors (an "inner" product). If you wonder about the "dot" name, it is just a matter of notation: a dot is traditionally used to represent the operation.

Your geometrical background could give you a hint on how to reorder the vectors. You should remember that a square is the rectangle with the maximum area among those having the same perimeter. Here we want to minimize the result, so we'll stay as far away from that case as we can: pair the biggest elements of one vector with the smallest of the other. We can easily do it by sorting the vectors, one in ascending order, the other in descending order.

Algorithms and data types

Following the large dataset requirements, we need to be prepared to accept input values up to 100,000 in absolute value (positive or negative). This doesn't look like much, but remember we are multiplying, so we need to be able to perform operations like 100,000 * 100,000, and 10,000,000,000 does not fit in a 32-bit integer. Meaning that we should work with 64-bit integers.

We should also be prepared to work with sequences of hundreds of values, meaning that we can't spend too much time looking for the minimal product among the elements. But this point is already covered by the bit of theory given above. We'll sort the vectors, which costs O(N log N) for each one, before multiplying, a linear O(N) operation.


Given what I said above, I am going to define a function with this prototype:
int64_t minScalarProduct(std::vector<int64_t> x, std::vector<int64_t> y)
Actually, you could be smarter than me and use the cheaper "naked" int in input and the more expensive 64-bit int only in output. But this sloppiness helps me write more concise, readable, and elegant code, as we'll see in a moment.

I used the explicit int64_t to avoid portability issues, letting the compiler match it to long or long long, according to the actual platform.

Some tests (using googletest) that I wrote for my C++11 code:
#include <cstdint>
#include <vector>
#include <stdexcept>
#include <gtest/gtest.h>

int64_t minScalarProduct(std::vector<int64_t> x, std::vector<int64_t> y);

TEST(MinScalar, Test1) // 1
{
    std::vector<int64_t> x { 1, 3, -5 };
    std::vector<int64_t> y { -2, 4, 1 };

    EXPECT_EQ(-25, minScalarProduct(x, y));
}

TEST(MinScalar, Test2)
{
    std::vector<int64_t> x { 1, 2, 3, 4, 5 };
    std::vector<int64_t> y { 1, 0, 1, 0, 1 };

    EXPECT_EQ(6, minScalarProduct(x, y));
}

TEST(MinScalar, BiggestNegativeOne) // 2
{
    std::vector<int64_t> x(800, 100000);
    std::vector<int64_t> y(800, -100000);

    EXPECT_EQ(-8000000000000LL, minScalarProduct(x, y));
}

TEST(MinScalar, BadSized) // 3
{
    std::vector<int64_t> x { 1, 2 };
    std::vector<int64_t> y { 1 };

    EXPECT_THROW(minScalarProduct(x, y), std::range_error);
}
1. A couple of tests are provided in the problem description. I have simply rewritten them here.
2. I wrote a few tests for the extreme cases; this is probably the most demanding one. The maximum expected size is 800, and the biggest expected absolute value is 100,000. I create one all-positive vector and one all-negative, and then I ask for their minimal scalar product. I expect -8,000,000,000,000 as a result.
3. This check was not required by the problem, but I couldn't stop myself from being a bit paranoid and asking what happens if the input vectors have different sizes. I decided to let the function throw a C++ standard range_error exception. It would have been cleaner to create a specific exception type.


In the end, I came up with this code:
#include <cstdint>
#include <vector>
#include <algorithm>
#include <functional>
#include <numeric>
#include <stdexcept>

int64_t minScalarProduct(std::vector<int64_t> x, std::vector<int64_t> y)
{
    if (x.size() != y.size())
        throw std::range_error("check input sizes"); // 1

    std::sort(x.begin(), x.end()); // 2
    std::sort(y.begin(), y.end(), std::greater<int64_t>()); // 3

    return std::inner_product(x.begin(), x.end(), y.begin(), static_cast<int64_t>(0)); // 4
}
1. As specified by the BadSized test case, I throw an exception to signal weird input.
2. Sort the first vector "naturally" (ascending).
3. Sort the second vector in descending order.
4. The STL provides inner_product to calculate the scalar product. Using it makes the code sleeker, but it still brings a couple of nuisances. It multiplies the container values according to their type, and this is usually what we want to do. That's why I forced the vector underlying type to be a 64-bit integer. Each product is added to an accumulator whose type is taken from the last value passed to the function, so its type is fundamental, too. If I didn't cast it to int64_t, I would still have an overflow issue.
