This problem is better (?) described in the Passing-cars Codility test, part of their train page, prefix sums section.

In my opinion, a few test cases are a better way to clarify it. I have written them in C++ with the Google Test framework, but you can easily adapt them to your preferred language/xUnit environment:

```cpp
TEST(PaCa, Given) // 1
{
    std::vector<int> input;
    input.push_back(0);
    input.push_back(1);
    input.push_back(0);
    input.push_back(1);
    input.push_back(1);

    ASSERT_EQ(5, solution(input));
}

TEST(PaCa, Minimal1) // 2
{
    std::vector<int> input(2);
    input[0] = 1;

    ASSERT_EQ(0, solution(input));
}

TEST(PaCa, Huge3) // 3
{
    std::vector<int> input(50000);
    std::vector<int> more(50000, 1);
    input.insert(input.end(), more.begin(), more.end());

    ASSERT_EQ(-1, solution(input));
}
```

1. This is the test case provided by Codility. The input is { 0, 1, 0, 1, 1 }. The first zero is followed by three ones, the second zero by two more. The expected result is five.

2. A first minimal test case I have written (I spare you the other ones). With { 1, 0 } as input, no one follows the only available zero. The expected result is zero.

3. A test case that works on the biggest possible input. The first half of the elements are all zeros, the second half only ones. Each zero is followed by 50,000 ones and, having 50,000 zeros, the expected result would be 2,500,000,000: two and a half billion. However, a clause in the problem says that if the result is bigger than one billion we should return a sort of error code, minus one.

**Inefficient solution**

Anyone should easily find this solution. It works fine for small inputs, but its O(N**2) time complexity makes it less and less usable as the input size grows.

```cpp
int solution(std::vector<int>& input) // 1
{
    if(input.size() < 2) // 2
        return 0;

    unsigned couples = 0; // 3
    for(unsigned i = 0; i < input.size() - 1; ++i) // 4
    {
        if(input[i]) // 5
            continue;

        for(unsigned j = i + 1; j < input.size(); ++j) // 6
        {
            if(input[j]) // 7
            {
                ++couples;
            }
        }
    }

    return couples > 1000000000 ? -1 : couples; // 8
}
```

1. Usually we would expect the input parameter to be passed as a const reference, and here there is no real reason to pass it as a non-const one, besides the fact that it is a problem requirement. We'll see in the third proposed solution how this could be useful to reduce the space complexity.

2. Trivial cases. In real-life code we should also have checked for a size bigger than expected, maybe throwing an exception.

3. The worst case tested in Huge3 tells us that a plain 32-bit signed int might not be enough to hold the number of couples, and for the Codility compiler "int" means "32-bit int". The unsigned version of int is more than enough for our purposes.

4. Loop on all the input elements but the last one. We are looking for zeros, and we don't care whether the last element is one of them, since obviously it can't be followed by anything.

5. If the element is "not zero", it is not what we are looking for; get to the next iteration.

6. Loop on all the following elements looking for ones. Being an asymptotically O(N) loop inside another O(N) loop, this is the weak spot of the algorithm. We'll have to move it out of here somehow.

7. If the current element is "not zero", we have found another couple. I could have put the test on the one-billion limit here; if you expect many huge inputs leading to a "-1" result, that could be a better choice. I assumed instead that it is a relatively rare case, so I decided not to pay the price of this repeated check and moved it to (8). Another effect of having the check here is that I could have given couples the "plain int" status.

8. If the number of couples is too big, return the error value; otherwise implicitly cast couples to plain int (which, at this point, we know is safe) and return it.

**Time (and space) linear solution**

We have already seen in previous problems how to trade space for time complexity. The idea here is to create a temporary buffer, the same size as the input vector, where we store, for each position, the sum of all the ones from that position rightwards. Then it is only a matter of adding up the values we are interested in.

Here the C++ STL numeric algorithm partial_sum() comes in handy; it does exactly what we need, once we pay attention to the fact that we have to look at the input reversed, from its rightmost element to the leftmost one.

Here is the refactored solution:

```cpp
int solution(std::vector<int>& input)
{
    if(input.size() < 2)
        return 0;

    std::vector<int> buffer(input.size());
    std::partial_sum(input.rbegin(), input.rend(), buffer.rbegin()); // 1

    unsigned couples = 0;
    for(unsigned i = 0; i < input.size() - 1; ++i) // 2
    {
        if(input[i] == 0)
        {
            couples += buffer[i];
        }
    }

    return couples > 1000000000 ? -1 : couples;
}
```

1. Scan the input vector in reverse order, and store the calculated partial sums, likewise reversed, in the local buffer.

2. Having extracted the internal loop and placed it in (1), the original external loop gets much simpler. If the current input element is a zero, I add the current partial sum (from buffer) to the value I am going to return.

This is a good solution, and it gets a 100% from Codility. Still, it shouldn't really be accepted: if you read their requirements carefully, you'll see that an O(1) worst-case space complexity is expected.

**Dirtier but space constant**

Noticing that we are allowed to modify the input in-place (usually not a good idea, since it could cause unpleasant surprises to the caller), we could think of a way to combine the input and the buffer in a single place. Again, this is usually not a good idea: we are violating the single responsibility principle / separation of concerns, and this is going to make our code less readable and hence less robust to changes. However, let's assume we are in a case where memory is at a premium, and refactor the code a second time, now to get rid of the local buffer.

We could twist the partial sum algorithm so that we store the calculated partial sum in-place only when the current value is zero; otherwise we wipe out the input value. The change in the algorithm is so radical that we can't reuse the STL function, and we have to write our own variation.

```cpp
int solution(std::vector<int>& input)
{
    if(input.size() < 2)
        return 0;

    int sum = 0; // 1
    for(std::vector<int>::reverse_iterator it = input.rbegin(); it != input.rend(); ++it)
    {
        sum += *it; // 2
        *it = *it == 0 ? sum : 0; // 3
    }

    unsigned couples = std::accumulate(input.begin(), input.end(), static_cast<unsigned>(0)); // 4
    return couples > 1000000000 ? -1 : couples;
}
```

1. Keep track of the current partial sum value.

2. Add the current sequence value to the partial sum.

3. Here is the tricky bit. If the current value is zero, we put the partial sum in. Otherwise we flag it as uninteresting.

4. We need to sum up all the partial sums stored in the input vector. We are not interested in the wiped-out values but, since we have marked them with a zero, we can just sum all the elements in the vector. I'm using the STL function accumulate(); notice that I explicitly tell it to use zero as an unsigned value for its starting point.

**The other way round**

As Wen-Kai suggests in his comment to this post, we can get the same result by counting the zeros instead. Think of how each 1-element assumes a different weight according to the number of zeros before it, since each of those zeros is going to count it in its own sum.

If we perform a slightly customized version of the partial sum on the zeros, storing the results where the original 1-elements were, we get a mirror image of the previous solution:

```cpp
int solution(std::vector<int>& input)
{
    int sum = 0; // 1
    for(std::vector<int>::iterator it = input.begin(); it != input.end(); ++it)
    {
        if(*it == 0) // 2
            ++sum;
        else
            *it = sum; // 3
    }

    unsigned couples = std::accumulate(input.begin(), input.end(), static_cast<unsigned>(0)); // 4
    return couples > 1000000000 ? -1 : couples;
}
```

1. Now "sum" is the number of zeros that we have already scanned.

2. A new zero detected.

3. The current element gets as its weight the number of zeros that come before it.

4. As before, now it is just a matter of summing up the partial results.

You can get sleeker code by combining the partial sum loop with the accumulate one, following Wen-Kai's suggestion here below.

For this problem, we can count the times the 1 is counted after each 0 and summarize the counts to get the result.

For example, the array [ 0, 1, 0, 1, 1 ], the count of the 1 after each 0 is [ 0, 1, 0, 2, 2 ]. We summarize the counts for each 1 to get 5.

To reduce the space complexity, we can use a variable starting from 0 to store the current count of zeros, and another variable starting from 0 to store the result. When we encounter a 0 in the vector, we increment the count. When we encounter a 1, we add the count to the result.

Here is the code sample:

```cpp
int solution( std::vector< int > const& input )
{
    unsigned int cnt = 0;
    unsigned int result = 0;

    for ( size_t idx = 0; idx < input.size(); ++idx )
    {
        if ( input[ idx ] == 1 )
        {
            result += cnt;
        }
        else
        {
            ++cnt;
        }
    }

    return ( result > 1000000000 ) ? -1 : result;
}
```

Neat! Instead of counting the ones for each zero, you modify on the fly the weight of each single one according to the number of zeros it depends on.

Nice and elegant solution, thank you for sharing it.