In this lesson, we find the number of positions at which the bits of the two given inputs differ.
Introduction
In this question, we wil...
There is an easy solution to count the number of 1 bits in a value, with C++20:
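For illustration, here is a minimal sketch of that idea; the function name `hammingDistance` and the 32-bit unsigned parameters are assumptions for the example, not a definitive implementation:

```cpp
#include <bit>       // std::popcount (C++20)
#include <cstdint>
#include <iostream>

// XOR the inputs so that exactly the differing bit positions are set,
// then let std::popcount count how many of those bits are 1.
int hammingDistance(std::uint32_t a, std::uint32_t b) {
    return std::popcount(a ^ b);
}

int main() {
    std::cout << hammingDistance(1, 4) << '\n';  // 0b001 vs 0b100 -> prints 2
}
```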
See en.cppreference.com/w/cpp/numeric/...
Hey @pgradot,
This works fine, and the time is O(1). But the built-in popCount function would take O(k) time, where k is the number of bits in an integer, so your code has to go through all of the bits present in an integer one by one.
That is unlike Brian Kernighan's Algorithm, which figures out the output here in just 2 iterations by applying the & operator to n and (n - 1); it is sketched just below.
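For comparison, a minimal sketch of Brian Kernighan's technique (the function name is illustrative): n &= (n - 1) clears the lowest set bit, so the loop runs once per differing bit rather than once per bit position.

```cpp
#include <cstdint>
#include <iostream>

// Brian Kernighan's bit count: n &= (n - 1) clears the lowest set bit,
// so the loop body executes once per 1 bit in (a ^ b).
int hammingDistanceKernighan(std::uint32_t a, std::uint32_t b) {
    std::uint32_t n = a ^ b;  // set bits mark the positions where a and b differ
    int count = 0;
    while (n != 0) {
        n &= (n - 1);         // drop the lowest set bit
        ++count;
    }
    return count;
}

int main() {
    std::cout << hammingDistanceKernighan(1, 4) << '\n';  // prints 2 after 2 iterations
}
```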
As Tony B mentioned below, I don't believe Big-O notation is relevant.
That consideration put aside, how do you know that std::popcount() is O(k)?
@pgradot,
popCount returns the number of 1 bits in the value of x. How do you think it does that? How does it know whether there is a 1 bit or a 0 bit at any given position? The popCount function runs through all the bit positions, which means k bits for any integer, so the time is directly proportional to the number of bits in the integer; a sketch of that kind of position-by-position loop is included below. Also, since integers have a fixed number of bits, the time to run the algorithm is, at any point, directly proportional to that bit count.
**Note:** In small computations, big-O won't matter, but you should always write optimized algorithms in any software product.
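To make the position-by-position idea concrete, here is a hedged sketch of such a loop; it is not necessarily how std::popcount is actually implemented, just the k-iteration approach described above:

```cpp
#include <cstdint>
#include <iostream>

// Examines every one of the k = 32 bit positions of a 32-bit value,
// regardless of how many of those bits are actually set.
int countBitsByPosition(std::uint32_t n) {
    int count = 0;
    for (int pos = 0; pos < 32; ++pos) {  // always k iterations
        count += static_cast<int>((n >> pos) & 1u);
    }
    return count;
}

int main() {
    std::cout << countBitsByPosition(1u ^ 4u) << '\n';  // prints 2
}
```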
In both of your implementations for hammingDistance(), you start with a ^ b and then you count the bits, using 2 different techniques. Why on earth would std::popcount() use a less optimized technique?
Furthermore, you say:
Isn't that O(32)?
Then you say:
The integer is still 32-bit wide, so why do you have a different conclusion?
Note: a smart guy said almost 50 years ago that "premature optimization is the root of all evil".
Hey @pgradot,
You covered everything, except that my first approach is not optimized, and the popCount function runs through all the bit positions, which isn't optimized either.
Why O(1)? Here is what the LeetCode experts have to say about Hamming Distance; check out their Brian Kernighan's algorithm solution: leetcode.com/problems/hamming-dist... You'll understand why this is O(32), or in simple terms O(1), time.
Hey Tony,
You need to understand that integers have a fixed number of bits, so that number never changes, and hence the algorithm is always O(1).
Bubble sorting an array, on the other hand, has to traverse the elements present in the array, so the work grows with the input: an array of 5 elements needs only a few iterations, while a million items need vastly more. So the running time grows with n, the number of elements in the array (see the bubble-sort sketch at the end of this comment).
Note: The number of bits in an integer is fixed and never changes, which is a constant. This algorithm (Brian Kernighan's) would also require fewer iterations than the bit-shift approach.
And as for the bit-shift approach, since the size (i.e., the number of bits) of an integer is fixed, we have a constant time complexity, which is O(1).
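To make the bubble-sort contrast concrete, here is a minimal, illustrative sketch: the sort's work is bounded by the length of the input vector, whereas the bit-count loops above are bounded by a fixed 32-bit width.

```cpp
#include <cstddef>
#include <iostream>
#include <utility>
#include <vector>

// Plain bubble sort: the amount of work depends on v.size() (n elements),
// unlike a bit count over a fixed-width integer, whose loop bound never changes.
void bubbleSort(std::vector<int>& v) {
    const std::size_t n = v.size();
    for (std::size_t pass = 0; pass + 1 < n; ++pass) {
        for (std::size_t i = 0; i + 1 < n - pass; ++i) {
            if (v[i] > v[i + 1]) {
                std::swap(v[i], v[i + 1]);
            }
        }
    }
}

int main() {
    std::vector<int> v{5, 1, 4, 2, 3};
    bubbleSort(v);                          // 5 elements: only a handful of comparisons
    for (int x : v) std::cout << x << ' ';  // prints 1 2 3 4 5
    std::cout << '\n';
}
```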
Hey Tony,
Yes, when N gets bigger, the number of iterations grows too, as I've explained in my earlier comment. I never said that an O(32) exists in Big-O notation... but you aren't understanding big-O complexity properly.
O(32) means 32 bits for an integer, and as this is a fixed number, we can round it to constant O(1) time (a small sketch follows after this comment).
Check this: https://leetcode.com/problems/hamming-distance/discuss/?currentPage=1&orderBy=most_relevant&query=O%2832%29
If this doesn't convince you at all, my bad :(
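Another way to see the same point is std::bitset, where the fixed width of 32 is baked into the type, so the amount of counting work never depends on the input values. A minimal sketch, assuming 32-bit inputs (the function name is illustrative):

```cpp
#include <bitset>
#include <cstdint>
#include <iostream>

// The width (32) is fixed in the type, so counting the differing bits of
// two 32-bit integers never depends on their values: O(32), i.e. O(1).
int hammingDistanceBitset(std::uint32_t a, std::uint32_t b) {
    return static_cast<int>(std::bitset<32>(a ^ b).count());
}

int main() {
    std::cout << hammingDistanceBitset(1, 4) << '\n';  // prints 2
}
```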
Hey @leouofa,
Having knowledge of how bit-level operations work is a game changer: you can solve problems in less time.
Coding interviews are intimidating to most programmers. Learning these bit tricks, DSA, and some problem solving can make you a better programmer and a better decision maker when choosing an approach/algorithm for a software problem.
Ref - stackoverflow.com/questions/209691...