I recently learnt about this through a very practical example, so I'm sharing it here with you.
As you may already know, Dispatchers.IO is optimised for IO-bound operations like making a network call or reading a file, while Dispatchers.Default is optimised for CPU-intensive tasks.
But why is that?
How do they work under the hood?
Let me share with you an example. Take a look at the following code snippet:
fun dispatchersTester() {
    // Keep track of the threads used. ConcurrentHashMap, because
    // multiple coroutines write to it from different threads.
    val threads = ConcurrentHashMap<Long, String>()
    val job = GlobalScope.launch(Dispatchers.Default) {
        repeat(100) {
            launch {
                // Record the current thread
                threads[Thread.currentThread().id] = Thread.currentThread().name
                // Simulate a blocking network call
                Thread.sleep(1_000)
            }
        }
    }
    // Wait for the job above to finish and measure the duration
    GlobalScope.launch {
        val timeMs = measureTimeMillis {
            job.join()
        }
        Log.d(TAG, "Took ${threads.keys.size} threads and $timeMs ms")
    }
}
In a nutshell, we create 100 coroutines on the Default dispatcher and simulate a network call in each one with a 1-second sleep.
After running it a couple of times, I got the following results:
Took 4 threads and 25075 ms
Took 4 threads and 25048 ms
Took 4 threads and 25060 ms
As you can see, it used at most 4 threads and took a whopping 25 seconds to complete the task.
That is because Dispatchers.Default sizes its thread pool to the number of CPU cores (this device has 4). With only 4 threads, the 100 sleeping coroutines run in batches of 4; 100 / 4 = 25 batches of 1 second each, which comes to roughly 25 seconds.
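You can check the core-count claim on your own machine. Here is a minimal sketch (the function name is my own, not from the snippet above) that counts the distinct worker threads Dispatchers.Default actually uses:

```kotlin
import kotlinx.coroutines.*
import java.util.concurrent.ConcurrentHashMap

// Count the distinct worker threads used when 100 coroutines
// run on the given dispatcher.
fun countThreadsUsed(dispatcher: CoroutineDispatcher): Int = runBlocking {
    val threadNames = ConcurrentHashMap.newKeySet<String>()
    List(100) {
        launch(dispatcher) {
            threadNames += Thread.currentThread().name
            Thread.sleep(100) // block briefly so every pool thread gets work
        }
    }.joinAll()
    threadNames.size
}

fun main() {
    // Dispatchers.Default sizes its pool to the CPU core count (minimum 2),
    // so the count below should not exceed the number of cores.
    val cores = Runtime.getRuntime().availableProcessors()
    println("cores=$cores, Default used ${countThreadsUsed(Dispatchers.Default)} threads")
}
```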
Now, let's change the dispatcher to IO and check out the results:
Took 64 threads and 2011 ms
Took 64 threads and 2009 ms
Took 64 threads and 2005 ms
This time the system used 64 threads and completed the task in just 2 seconds.
Dispatchers.IO allows up to 64 threads by default (or the number of cores, if that is higher). With 100 coroutines and 64 threads, the work runs in two batches (64 + 36), each blocking for 1 second, so the whole task finishes in roughly 2 seconds.
Notice how changing only the dispatcher made this huge difference.
Why is that?
The IO dispatcher is optimised for tasks where threads mostly wait: they trigger something, like a network request, and then block until the result arrives. Since a blocked thread costs almost no CPU, the IO dispatcher keeps a much larger pool, and while one thread is blocked, the others keep making progress.
The Default dispatcher, by contrast, assumes its threads are busy doing actual computation, so it has no reason to create more threads than there are CPU cores.
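In practice, this is why blocking calls are usually wrapped in withContext(Dispatchers.IO). A small sketch (the readConfig name and the file are just for illustration):

```kotlin
import kotlinx.coroutines.*
import java.io.File

// Wrap a blocking read in withContext(Dispatchers.IO): the calling
// coroutine suspends while one of IO's (up to 64) threads does the blocking.
suspend fun readConfig(path: String): String =
    withContext(Dispatchers.IO) {
        File(path).readText() // blocking call, fine on the IO dispatcher
    }

fun main() = runBlocking {
    val tmp = File.createTempFile("config", ".txt").apply { writeText("ok") }
    println(readConfig(tmp.absolutePath)) // prints "ok"
    tmp.delete()
}
```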
Let’s see what Dispatchers.Default is good at.
Instead of simulating the network call, let’s simulate a CPU intensive task.
Replace Thread.sleep(1_000)
with (1..100_000).map { it * it }
This goes from 1 to 100,000 and computes the square of each number.
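For reference, here is the whole experiment condensed into one self-contained function (my own compact rewrite, not the article's exact code):

```kotlin
import kotlinx.coroutines.*
import kotlin.system.measureTimeMillis

// Run 100 CPU-bound coroutines on the given dispatcher and return the
// elapsed time in ms. Absolute numbers vary by machine; the relative
// difference between dispatchers is what matters.
fun timeCpuWorkOn(dispatcher: CoroutineDispatcher): Long = runBlocking {
    measureTimeMillis {
        List(100) {
            launch(dispatcher) {
                (1..100_000).map { it * it } // CPU-intensive work
            }
        }.joinAll()
    }
}

fun main() {
    println("IO:      ${timeCpuWorkOn(Dispatchers.IO)} ms")
    println("Default: ${timeCpuWorkOn(Dispatchers.Default)} ms")
}
```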
Let’s run it with Dispatchers.IO:
Took 51 threads and 249 ms
Took 54 threads and 546 ms
Took 57 threads and 402 ms
Now, let's change the dispatcher to Default:
Took 4 threads and 154 ms
Took 4 threads and 107 ms
Took 4 threads and 121 ms
Even though the absolute difference is small here, the Default dispatcher still took about half the time of the IO dispatcher. For CPU-bound work, running more threads than there are cores only adds context-switching overhead, so the smaller pool wins.
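As a closing note, since kotlinx.coroutines 1.6 you can also carve a slice out of a dispatcher with limitedParallelism when the defaults don't fit your workload. A sketch (the property name and the value 100 are arbitrary choices of mine):

```kotlin
import kotlinx.coroutines.*

// A view over Dispatchers.IO limited to (here) 100 concurrent threads.
// Notably, a limitedParallelism view of Dispatchers.IO may exceed the
// default 64-thread limit, because IO draws from a shared elastic pool.
@OptIn(ExperimentalCoroutinesApi::class)
val bulkIoDispatcher: CoroutineDispatcher = Dispatchers.IO.limitedParallelism(100)

fun main() = runBlocking {
    launch(bulkIoDispatcher) {
        println("running on ${Thread.currentThread().name}")
    }.join()
}
```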
Hope it helps,
Cheers.