Daily Coding Problem is a website which will send you a programming challenge to your inbox every day. I want to show beginners how to solve some o...
I’d use a set and iterate over the list: for each element, I’d check whether the set already contains it; if it does, it means we found a positive answer, and if not, I’d add `k - arr[i]` to the set.

Thank you for sharing!
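For readers following along, here's a minimal Java sketch of that set-based idea (the class and method names are mine, not Andrei's code):

```java
import java.util.HashSet;
import java.util.Set;

public class TwoSumSet {
    // Returns true if any two distinct elements of arr add up to k.
    static boolean hasPairWithSum(int[] arr, int k) {
        Set<Integer> complements = new HashSet<>();
        for (int value : arr) {
            // If some earlier element's complement equals this value, we have a pair.
            if (complements.contains(value)) {
                return true;
            }
            complements.add(k - value);
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(hasPairWithSum(new int[] {10, 15, 3, 7}, 17)); // true
    }
}
```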
What is the Big-O of this solution, though?
In the worst case, if we have...

- 2 elements: we will do a `contains()` on a set with 0 elements, then an `add()`, then a `contains()` on a set with 1 element, then an `add()`.
- 3 elements: all of the above, plus a `contains()` on a set with 2 elements, then an `add()`.

Assuming we skip the last `add()` when we know we've reached the end of the list, the worst case would be calling `contains()` N times, on sets of size 0 through N-1, plus calling `add()` N-1 times.

This is definitely less space-efficient than my solution, because of the set, but is it more time-efficient? What do you think?
I think it depends on how the set works under the hood.

At first, because there is only one for loop, one might think it's only O(n). But how does the `contains()` method actually work? I have just read here that the set will internally iterate over its elements and compare each one to the object passed as a parameter.

Another way of making sure that there isn't another for loop going on behind the scenes is to use a `map`, because accessing an item requires O(1) time (as far as I know).

So it could be that the `Set` solution is just as memory-efficient as the double-`for` loop. I'd love to do a benchmark of this.

If you do, please let me know what the results are! Thanks!
Ah, here is the answer already.
It's O(N) for sure. Sets (based on HashMaps) in Java have O(1) amortized for lookup and insert.
Very clever! I like this answer.
yo!🔥
I thought the same, and this is a way more efficient runtime.
Hi
Why not sort the data first? If it is sorted, you can easily do a dichotomic (binary) search for the second number and get O(n log n). Because sorting can be done in the same complexity, you get something better than the O(n²) algorithm above, with a space complexity of O(1).
You could also sort using a binary tree; then it becomes trivial to search for the second number in the tree while keeping the same time complexity (with a space complexity of O(n), though).
If you use Andrei's map solution, you don't know how much space the map will take (it depends on the implementation), but you get a time complexity of O(n). If you use the set, you get the same thing as my first solution.
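Here's a rough Java sketch of that sort-then-binary-search idea (my own code, with an extra check so that an element can't be paired with itself):

```java
import java.util.Arrays;

public class TwoSumSorted {
    // Sort, then binary-search for k - arr[i] for every element: O(n log n) time.
    static boolean hasPairWithSum(int[] arr, int k) {
        int[] sorted = arr.clone(); // copy so the caller's array is untouched; sort in place for O(1) extra space
        Arrays.sort(sorted);
        for (int i = 0; i < sorted.length; i++) {
            int j = Arrays.binarySearch(sorted, k - sorted[i]);
            if (j >= 0 && j != i) {
                return true; // complement found at a different position
            }
            // Edge case: the complement is sorted[i] itself, so look for a duplicate right next to it.
            if (j == i && ((i > 0 && sorted[i - 1] == sorted[i])
                    || (i + 1 < sorted.length && sorted[i + 1] == sorted[i]))) {
                return true;
            }
        }
        return false;
    }
}
```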
Joan and Andrei both provided alternative solutions to this problem. I wondered which of our solutions was best in the worst case, so I created a Java Microbenchmark Harness (JMH) example which runs each of these pieces of code to time them. So I present...
A mini-tutorial on using the Java Microbenchmark Harness to benchmark Java code
Setup
Use Maven to set up a simple JMH project:
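(This is the standard archetype invocation from the JMH documentation; swap in your own `groupId` and `artifactId`.)

```
mvn archetype:generate \
  -DinteractiveMode=false \
  -DarchetypeGroupId=org.openjdk.jmh \
  -DarchetypeArtifactId=jmh-java-benchmark-archetype \
  -DgroupId=org.sample \
  -DartifactId=test \
  -Dversion=1.0
```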
My package is `daily`, so I use that in place of `<org.sample>`, and I set the `artifactId` to `001` because this is the first Daily Coding Problem I'm solving in this fashion. This makes a directory called `001` with the following structure:
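```
# layout produced by the JMH archetype (assuming package daily and artifactId 001)
001/
├── pom.xml
└── src/
    └── main/
        └── java/
            └── daily/
                └── MyBenchmark.java
```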
The contents of `MyBenchmark.java` are as follows:
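It's the stock archetype template (license header omitted), with our `daily` package at the top:

```java
package daily;

import org.openjdk.jmh.annotations.Benchmark;

public class MyBenchmark {

    @Benchmark
    public void testMethod() {
        // This is a demo/sample template for building your JMH benchmarks.
        // Put your benchmark code here.
    }

}
```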
...but we're going to delete all of this and write our own code.
Benchmarking
Christophe Schreiber has a good example of using JMH on Dev.To. We need to send parameters to our methods, though. In the worst-case scenario, we'll have a very long array in which no two numbers add to `k`. These numbers should also all be unique so that the compiler can't do any optimisations and so that we need to continually add them to Andrei's `Set`.

I will be using this file on GitHub by Bruno Doolaeghe as a template:
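Adapted to this problem, the benchmark class ends up looking roughly like the sketch below (this is the shape of it, not the exact file; `Solutions.doubleForLoop`, `Solutions.setBased` and `Solutions.sortAndSearch` are placeholders for the three implementations being compared):

```java
package daily;

import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Param;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.infra.Blackhole;

@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MICROSECONDS)
@State(Scope.Benchmark)
public class MyBenchmark {

    // The array length is a JMH parameter, so more sizes can be added for follow-up runs.
    @Param({"10000000"})
    public int length;

    private int[] numbers;
    private int k;

    @Setup
    public void createWorstCaseInput() {
        // Unique, consecutive numbers and a target no pair can reach:
        // every contains() misses and every complement gets added to the Set.
        numbers = new int[length];
        for (int i = 0; i < length; i++) {
            numbers[i] = i;
        }
        k = -1;
    }

    @Benchmark
    public void doubleForLoop(Blackhole blackhole) {
        blackhole.consume(Solutions.doubleForLoop(numbers, k));
    }

    @Benchmark
    public void setBased(Blackhole blackhole) {
        blackhole.consume(Solutions.setBased(numbers, k));
    }

    @Benchmark
    public void sortAndSearch(Blackhole blackhole) {
        blackhole.consume(Solutions.sortAndSearch(numbers, k));
    }
}
```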
All code is available on my GitHub. The results of the benchmarking are below. Please note that I don't use JMH very often so this may not be the optimal way to lay out this benchmark. If you have any suggestions, be sure to let me (politely) know :)
Even in the worst-case scenario with a 10-million element array, I see no difference between the three methods in terms of time:
Because of the uncertainties on the three benchmarks, they could all be the same (they could all be `0.053 us/op`). That's anticlimactic!

Follow-up with arrays with more elements:
Does the double-`for` win??

Interesting stats, though. But could it be that the N-squared approach never really got to its worst-case scenario by not iterating through all the elements?
If you're crazy about the complexity's O, I have the impression it would be better to start by sorting the array and then going through it from both the beginning and the end, but only once, so it becomes O(n log n). It's probably less readable though.
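A rough sketch of that sort-plus-two-pointers idea (just to illustrate; not part of the benchmarks above):

```java
import java.util.Arrays;

public class TwoSumTwoPointers {
    // Sort once, then walk inwards from both ends: O(n log n) overall.
    static boolean hasPairWithSum(int[] arr, int k) {
        int[] a = arr.clone();
        Arrays.sort(a);
        int left = 0;
        int right = a.length - 1;
        while (left < right) {
            int sum = a[left] + a[right];
            if (sum == k) {
                return true;   // two distinct elements add up to k
            } else if (sum < k) {
                left++;        // need a bigger sum
            } else {
                right--;       // need a smaller sum
            }
        }
        return false;
    }
}
```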
Andrei's solution was very smart as well, and I guess depending on the complexity of `contains` and `add` of the Set object, you could get an algorithm with the same complexity.

I enjoyed this! I read the problem and decided to have a go myself and see what our different solutions looked like. I did the same, except in the embedded loop I just started at array.length and went down to 0 instead.
You bring up a good point. I should hide my solutions for future coding challenges like this.
I would even check whether the outer number is greater than or equal to the number k. That way we could even skip some iterations of the for loops.
This sounds reasonable to me but note that the problem doesn't state whether the numbers in the array can be negative!
If I got this question in a coding interview, I'm not sure I would make that assumption.
I was actually thinking more along the lines of:

Let n = {3, 5, 10, 15, 17} and k = 13. 15 and 17 should not be considered at all, since they are greater.
The numbers are not supposed to be negative. The worst case remains the same.
But if n is meant to be sorted, we can then check only the left side of n, up to the number closest to k that is still less than k. That would change the approach in general, I would think.
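For what it's worth, here's a rough sketch of that skip-the-large-values tweak on top of the double for loop (only valid under the non-negative assumption discussed above):

```java
public class TwoSumSkipLarge {
    // Double for loop with an early skip: if all numbers are non-negative,
    // a value strictly greater than k can never be part of a pair summing to k.
    static boolean hasPairWithSum(int[] arr, int k) {
        for (int i = 0; i < arr.length; i++) {
            if (arr[i] > k) {
                continue; // skip: cannot contribute to a pair (k itself could still pair with a 0)
            }
            for (int j = i + 1; j < arr.length; j++) {
                if (arr[i] + arr[j] == k) {
                    return true;
                }
            }
        }
        return false;
    }
}
```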