# Find the number of set bits in the result of multiplication of big numbers

You are given a number that may be too big to fit in any standard type; however, it is provided in the form [a, b, c, d, …], where every entry is the number of a bit that is set in the binary representation, so you can read it easily. For example, [1, 3, 4] represents 1101 = 13. You then multiply the number by 3. How many bits are set to 1 in the binary representation of the result?
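To make the representation concrete, here is a small hypothetical helper (the name `decode` and the 1-indexed convention are assumptions based on the example) that turns the list back into a plain integer:

```cpp
#include <vector>

// decode the [a, b, c, ...] form, where every entry is the
// 1-indexed position of a set bit, into a plain integer
unsigned decode(const std::vector<int>& bits)
{
    unsigned value = 0;
    for (int p : bits)
        value |= 1u << (p - 1);
    return value;
}
```

For instance, `decode({1, 3, 4})` gives 1101 in binary, i.e. 13.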

This is a real interview question that was asked to a friend of mine quite recently. To figure out the solution, we first need to consider what multiplication by 3 really means. Let’s take an example.
```
   01101
x  00011
--------
   01101
+  11010
--------
  100111
```
As you can see, the result is the sum of the number itself and the number shifted left by 1. The former is obvious, as it is the result of multiplying by 1. The latter is what multiplication by 2 does. In the general case, multiplication by 2^n shifts the number left by n positions.
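This identity is easy to cross-check with ordinary integers (a throwaway sketch; the function name is mine):

```cpp
// check that multiplying by 3 equals a shift-by-one plus the number
bool timesThreeIdentityHolds(int limit)
{
    for (int x = 0; x < limit; ++x)
        if (3 * x != (x << 1) + x)
            return false;
    return true;
}
```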

Coming back to our problem, we get [1, 3, 4] x 3 = ([1, 3, 4] << 1) + [1, 3, 4] = [2, 4, 5] + [1, 3, 4]. Now we may answer the question: which bits will be set after this addition? Recalling binary arithmetic, we just need to implement binary addition of numbers represented in the form [2, 4, 5] and [1, 3, 4]. The answer is the number of positions in the result.

```
#include <set>
#include <vector>
using namespace std;

int getNumberOfSetBits(const vector<int>& number)
{
    // collect the bit positions of the number and of the number
    // shifted left by 1 (every position increased by one)
    multiset<int> bits(number.begin(), number.end());
    for (int p : number)
        bits.insert(p + 1);

    // binary addition: two bits at the same position cancel out
    // and produce a carry one position higher
    int count = 0;
    while (!bits.empty())
    {
        int p = *bits.begin();
        bits.erase(bits.begin());
        if (!bits.empty() && *bits.begin() == p)
        {
            // a collision: remove both bits, carry into p + 1
            bits.erase(bits.begin());
            bits.insert(p + 1);
        }
        else
        {
            // position p is set in the result
            count++;
        }
    }
    return count;
}
```

The code could be optimized further, but this version keeps the idea behind the algorithm visible.
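For inputs small enough to fit in 64 bits, the routine can be cross-checked against plain arithmetic. This sketch assumes a GCC/Clang compiler for `__builtin_popcountll`, and the function name is mine:

```cpp
#include <vector>

// brute-force cross-check: decode the position list,
// multiply by 3, and count the set bits directly
int setBitsAfterTimesThree(const std::vector<int>& bits)
{
    unsigned long long value = 0;
    for (int p : bits)
        value |= 1ull << (p - 1);
    return __builtin_popcountll(3 * value);
}
```

`setBitsAfterTimesThree({1, 3, 4})` returns 4, matching 13 * 3 = 39 = 100111.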

Regards

Joe Kidd


# The length of the longest sequence of 0s in the bit representation

We are given an integer value and we are expected to return the length of the longest sequence of 0s in its binary representation, with a solution no worse than O(log N). For example, for 167 = 10100111 the answer is 2.

O(log N) for this problem? It sounds like it calls for some freakishly tricky, outstanding solution that pushes the limits of algorithmics and then some! Let’s take a look at the code first, and then explain what we are doing.

```
#include <algorithm>
using namespace std;

// assumes N >= 0
int maxZeroLen(int N)
{
    int maxLen = 0;
    int curLen = 0;
    while (N)
    {
        int remainder = N % 2;
        N = N >> 1;
        if (!remainder)
            curLen++;
        else
            curLen = 0;
        maxLen = max(maxLen, curLen);
    }
    return maxLen;
}
```

Yes, it is that simple. Sorry if you expected some magic here. When we divide by 2, we shrink the problem geometrically: 1/2, 1/4, 1/8, and so on, so the loop runs once per bit, which makes the solution O(log N). I really like this problem, as it makes you think about what logarithmic complexity really is.

Best Regards

Joe Kidd

# Bit tricks and tricky questions

These problems appear in interviews pretty often. I had them spread over lots of places, so just to put things in order a little I gathered them in one place. On the other hand, they are small enough that they don’t deserve separate posts.

Problem: Add two numbers without using arithmetic operations.

We need to note that the XOR of two numbers is their sum whenever no position produces a carry. E.g.

5 = 101
2 = 010
————— XOR
7 = 111

So we may use this property, remembering to consider the carry at every step.

```
int add(int x, int y)
{
    int carry = 1;
    // loop while we still have a carry
    while (carry)
    {
        // all the positions where a carry appears
        carry = x & y;
        // the sum without the carry
        x = x ^ y;
        // move the carry one position to the left
        // and try to add again
        y = carry << 1;
    }
    return x;
}
```

Problem: The lowest of three numbers without using comparison

What about this solution?

```
// works for non-negative integers only
int c = 0;
while (x && y && z)
{
    x--; y--; z--;
    c++;
}
return c;
```

The other solution uses only bit operations, which makes it even more interesting. Obviously, if we know a way to choose the smaller of two numbers, we can extend it to three of them. For two numbers the solution could be:

```
// mask will contain only 0s or only 1s, depending on whether
// the difference between x and y is positive or negative
// (two's complement); assumes x - y does not overflow.
// CHAR_BIT comes from <climits>.
int mask = (x - y) >> (sizeof(int) * CHAR_BIT - 1);
// if the mask contains only zeros, x >= y,
// so (x - y) & mask disappears and we get y.
// if the mask contains only 1s, the difference
// remains and we get y + x - y = x.
int min = y + ((x - y) & mask);
```
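Extending the trick to three numbers is then just a matter of nesting. A sketch, assuming x - y never overflows and that right-shifting a negative int is arithmetic (true on mainstream compilers); `min2` and `min3` are names of my choosing:

```cpp
#include <climits>

// min of two ints via the sign-mask trick
int min2(int x, int y)
{
    int mask = (x - y) >> (sizeof(int) * CHAR_BIT - 1);
    return y + ((x - y) & mask);
}

// min of three ints: nest the two-number version
int min3(int x, int y, int z)
{
    return min2(x, min2(y, z));
}
```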

Problem: Multiply by seven without using multiplication operator.

```
// multiply by 8, then subtract the number: 7x = 8x - x
int x8 = x << 3;
int x7 = x8 - x;
```
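The same shift-and-add idea generalizes to other constants; for instance (illustrative helpers):

```cpp
// 7x = 8x - x
int timesSeven(int x) { return (x << 3) - x; }

// 10x = 8x + 2x
int timesTen(int x) { return (x << 3) + (x << 1); }
```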

Problem: Swap two numbers without using an extra variable

```
a = b - a;
b = b - a; // b = b - (b - a) = the old a
a = a + b; // a = (b - a) + a = the old b
```
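A well-known alternative does the same with XOR, which avoids the overflow concerns of the subtraction version (a sketch; the self-swap guard is needed because XOR-ing a variable with itself zeroes it):

```cpp
// swap two ints without an extra variable, using XOR
void swapXor(int& a, int& b)
{
    if (&a == &b) return; // a self-swap would zero the variable
    a ^= b;
    b ^= a; // b = b ^ (a ^ b) = the old a
    a ^= b; // a = (a ^ b) ^ old a = the old b
}
```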

Problem: Check if a number is a power of 2

```
// N && ... excludes 0, which is not a power of 2
return N && (N & (N - 1)) == 0;
```

# Check if a number is a power of 4

Input: A number N
Output: True if N is a power of 4, False otherwise
Problem: We are given a number and we need to check if it’s a power of 4
Example: 64, result = True
Complexity: Time O(log log n), Memory O(1)

This problem looks trivial: we could solve it iteratively, computing consecutive powers of four and comparing them against N. But if it were that simple, it wouldn’t appear here; moreover, we have a complexity limit of O(log log n). Checking every power of 4 would be fairly efficient, especially using bit shifts instead of actual multiplication, but not efficient enough, as it gives an O(log n) solution.

Before we will discuss the real solution let’s take a look at the binary representation of powers of 4.

```4^0 = 0000001
4^1 = 0000100
4^2 = 0010000
4^3 = 1000000
```

As we can see, a single bit is set, and only at an odd position. How can we check this condition in an effective way? We can use the powerful binary search again.

1. Divide the bit representation of the number into two halves.
2. Only one half may be non-zero; if this condition is not met, the number has more than one bit set and is not a power of 4.
3. Choose the non-zero half and go back to step 1, until a single bit 1 remains; then go to step 4.
4. Check if that bit is at an odd position; if not, return false, as the number is not a power of 4; otherwise go to step 5.
5. Flip the bit from step 4; if the number now equals zero, it is a power of 4, otherwise it is not.

This algorithm would give us the expected O(log log n) complexity.
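The steps above could be sketched roughly as follows for a 32-bit value (`isPowerOf4Search` is an illustrative name, not from the original):

```cpp
// binary search over the bit representation: repeatedly halve it,
// keep the non-zero half, and reject when both halves are non-zero
bool isPowerOf4Search(unsigned int N)
{
    if (N == 0) return false;
    int pos = 1;     // 1-indexed position of the low bit of the current half
    int width = 32;  // width of the current half
    while (width > 1)
    {
        width /= 2;
        unsigned int low = N & ((1u << width) - 1);
        unsigned int high = N >> width;
        if (low != 0 && high != 0)
            return false;    // more than one bit set
        if (low != 0)
            N = low;         // keep the non-zero half
        else
        {
            N = high;
            pos += width;
        }
    }
    // the single remaining bit must sit at an odd position
    return pos % 2 == 1;
}
```

Every iteration halves the number of candidate bit positions, so for a w-bit word it takes O(log w) = O(log log n) steps.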

There is also another solution that is easier to implement and works in O(1) time. As we saw in the example above, the bits of a power of 4 sit at odd positions. Let’s go the opposite way and think about even positions: no even position may be set. Secondly, a power of 4 is also a power of 2, so exactly one bit is set. If both conditions are met, the number is a power of 4.

```
bool isPowerOf4(unsigned int N)
{
    // mask with bits set at the even positions
    unsigned int mask = 0xAAAAAAAA;
    // note the parentheses: == binds tighter than &
    return N != 0 && (N & mask) == 0 && (N & (N - 1)) == 0;
}
```

This solution is prettier, but I prefer the one with binary search, as it shows an interesting application of that algorithm.

Best Regards

Joe Kidd