It's no secret that I'm pretty enthusiastic about Kotlin as a programming language, despite a few shortcomings and strange design choices. I got the chance to work on a medium-sized project using Kotlin, Kotlin coroutines and the coroutine-driven server framework Ktor. Those technologies have a lot of merit; however, I've found them difficult to work with in comparison to, say, plain old Spring Boot.
Disclaimer: I do not intend to "bash" any of these technologies; my intention is to share my user experience and explain why I will refrain from using them in the future.
Debugging
Consider the following piece of code:
suspend fun retrieveData(): SomeData {
    val request = createRequest()
    val response = remoteCall(request)
    return postProcess(response)
}

private suspend fun remoteCall(request: Request): Response {
    // do suspending REST call
}
Let us assume we want to debug the retrieveData function. We place a breakpoint in the first line. Then we start the debugger (IntelliJ in my case), and it stops at the breakpoint. Nice. Now we perform a Step Over (skipping the call to createRequest). That works too. However, if we Step Over again, the program will just run. It will NOT stop after remoteCall.
Why is that? The JVM debugger is attached to a Thread object. For all intents and purposes, this is a very reasonable choice. However, when coroutines enter the mix, one thread no longer does one thing. Look closely: remoteCall(request) calls a suspending function - though we don't see that in the syntax at the call site. So what happens? We tell the debugger to "step over" the method call. The debugger runs the code for the remote call and waits.
This is where things get difficult: the current thread (to which our debugger is bound) is only an executor for our coroutine. When we call a suspending function, at some point that function will yield. This means that a different Thread will continue the execution of our method. We have effectively tricked the debugger.
The only workaround I've found is to place a breakpoint on the line I want to reach, rather than using Step Over. Needless to say, this is a major pain. And apparently it's not just me.
Furthermore, during general debugging it is very hard to pin down what a single coroutine is currently doing, as it jumps between threads. Sure, coroutines are named, and you can configure logging to print not only the thread name but the coroutine name as well, but in my experience the mental effort required to debug coroutine-based code is a lot higher than for thread-based code.
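One mitigation worth mentioning: kotlinx.coroutines ships a debug mode that appends the coroutine name to the carrier thread's name, which helps correlate log lines. A minimal sketch, assuming kotlinx-coroutines-core is on the classpath (the property is normally passed as -Dkotlinx.coroutines.debug on the JVM command line, before any coroutine classes are loaded):

```kotlin
import kotlinx.coroutines.CoroutineName
import kotlinx.coroutines.runBlocking

fun main() {
    // Enable coroutine debug mode; in production you would set this via
    // -Dkotlinx.coroutines.debug instead of programmatically.
    System.setProperty("kotlinx.coroutines.debug", "on")
    runBlocking(CoroutineName("request-handler")) {
        // In debug mode the thread name carries the coroutine name, so any
        // log pattern that prints the thread name (e.g. %thread in logback)
        // shows which coroutine is currently running on it.
        println(Thread.currentThread().name)
    }
}
```

This does not fix the Step Over problem, but it at least makes the thread-hopping visible in logs.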
Binding data to the REST call
When working on a microservice, a common pattern is to receive a REST call with some form of authentication, and pass the same authentication along for all internal calls to other microservices. In the most trivial case, we at least want to keep the username of the caller.
However, what if those calls to other microservices are nested ten levels deep in our call stacks? Surely we don't want to pass along an authentication object as a parameter in each and every function. We need some form of "context" which is implicitly present.
In traditional thread-based frameworks such as Spring, the solution to this problem is a ThreadLocal object. This allows us to bind any kind of data to the current thread. As long as one thread corresponds to the processing of one REST call (which you should always aim for), this is exactly what we need. A good example of this pattern is Spring's SecurityContextHolder.
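To make the pattern concrete, here is a minimal sketch of a ThreadLocal-backed holder. UserContext is a hypothetical name; Spring's real SecurityContextHolder is considerably more elaborate, but the core idea is the same:

```kotlin
// A hypothetical per-thread context holder, plain JVM, no framework needed.
object UserContext {
    private val currentUser = ThreadLocal<String?>()

    fun set(username: String) = currentUser.set(username)
    fun get(): String? = currentUser.get()
    fun clear() = currentUser.remove()
}

fun main() {
    // A servlet filter would typically set this at the start of a request...
    UserContext.set("alice")
    // ...and any code running on the same thread can read it later,
    // no matter how deep in the call stack, without parameter passing.
    check(UserContext.get() == "alice")
    // The filter clears it when the request is done, so pooled threads
    // don't leak state into the next request.
    UserContext.clear()
    check(UserContext.get() == null)
}
```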
With coroutines, the situation is different. A ThreadLocal will no longer do the trick, because your workload jumps from one thread to another; it is no longer the case that one thread accompanies a request during the entirety of its lifetime. Kotlin coroutines instead offer the CoroutineContext. In essence, it is little more than a map which is carried alongside the coroutine (no matter which thread it currently runs on). It has a horribly over-engineered API and is cumbersome to use, but that is not the main issue here.
The real problem is that coroutines do not inherit the context automatically.
For example:
suspend fun sum(): Int {
    val jobs = mutableListOf<Deferred<Int>>()
    for (child in children) {
        jobs += GlobalScope.async { // we lose our context here!
            child.evaluate()
        }
    }
    return jobs.awaitAll().sum()
}
Whenever you call a coroutine builder, such as async, runBlocking or launch, you will - by default - lose your current coroutine context. You can avoid this by passing the context into the builder method explicitly, but god forbid you ever forget to do that (the compiler won't care!).
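A sketch of what that explicit passing looks like, assuming kotlinx-coroutines-core; AuthContext, currentUser and handleRequest are hypothetical names invented for this illustration:

```kotlin
import kotlinx.coroutines.*
import kotlin.coroutines.CoroutineContext
import kotlin.coroutines.coroutineContext

// A custom context element carrying the caller's username.
data class AuthContext(val username: String) : CoroutineContext.Element {
    companion object Key : CoroutineContext.Key<AuthContext>
    override val key: CoroutineContext.Key<AuthContext> get() = Key
}

// Reads the element back out of the current coroutine's context.
suspend fun currentUser(): String? = coroutineContext[AuthContext]?.username

suspend fun handleRequest(): String {
    // Without the explicit ctx argument, GlobalScope.async starts with a
    // fresh context and currentUser() inside it would return null.
    val ctx = coroutineContext
    val user = GlobalScope.async(ctx) { currentUser() ?: "anonymous" }
    return user.await()
}

fun main() = runBlocking(AuthContext("alice")) {
    check(handleRequest() == "alice")
}
```

Note that forgetting the `ctx` argument still compiles cleanly, which is exactly the trap described above.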
A child coroutine could start off with an empty context, and if a lookup for a context element finds nothing, the parent coroutine's context could be consulted. However, that does not happen in Kotlin; the programmer is required to do this manually, every single time.
If you are interested in the details of this issue, I recommend having a look at this blog post.
synchronized will no longer do what you think it does
When working with locks or synchronized blocks in Java, the semantics I have in mind are usually along the lines of "nobody else can enter while I'm in this block". The "nobody else" part of course implies that there is an "identity" of some sort, which in this case is the Thread. That should raise a big red warning sign in your head by now.
Let's consider the following example:
val lock = ReentrantLock()

suspend fun doWithLock() {
    lock.withLock {
        callSuspendingFunction()
    }
}
This is dangerous. Even if callSuspendingFunction() does nothing harmful, the code will not behave as you might think it does:
- Enter the lock
- Call the suspending function
- The coroutine yields, the current thread still holds the lock
- Another thread picks up our coroutine
- We are the same coroutine, but we do not own the lock anymore!
The number of potentially conflicting, deadlocking or otherwise unsafe scenarios is staggering. You might argue that we "just" need to engineer our code to handle that. I would agree, however we are talking about the JVM. There is a vast ecosystem of libraries out there. And they are not prepared to handle it.
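For completeness, kotlinx.coroutines does ship a coroutine-aware lock, Mutex, whose ownership travels with the coroutine rather than the thread. A minimal sketch, assuming kotlinx-coroutines-core on the classpath:

```kotlin
import kotlinx.coroutines.*
import kotlinx.coroutines.sync.Mutex
import kotlinx.coroutines.sync.withLock

val mutex = Mutex()
var counter = 0

suspend fun incrementSafely() = mutex.withLock {
    val current = counter
    delay(1)              // suspension point: the carrier thread may change here
    counter = current + 1 // still safe: the mutex belongs to the coroutine,
                          // not the thread, so it survives the switch
}

fun main() = runBlocking {
    // 100 concurrent increments; with a plain ReentrantLock around a
    // suspension point this pattern would be broken.
    (1..100).map { launch(Dispatchers.Default) { incrementSafely() } }.joinAll()
    check(counter == 100)
}
```

This solves the problem for your own code, but not for the thread-based libraries discussed next, which know nothing about Mutex.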
The upshot here is: the moment you start using coroutines, you forfeit the possibility to use a lot of Java libraries, simply because they expect a thread-based environment.
Throughput vs. Horizontal Scaling
A big advantage of coroutines for the server-side is that a single thread can handle a lot more requests; while one request waits for a database response, the same thread can happily serve another request. In particular for I/O bound tasks, this can increase the throughput.
However, as this blog post has hopefully demonstrated to you, there is a non-zero cost associated with using coroutines, on many levels.
The question which arises is: is the benefit worth that cost? In my opinion, the answer is no. In cloud and microservice environments, there should always be some scaling mechanism, whether it is Google App Engine, AWS Elastic Beanstalk, or some form of Kubernetes. Those technologies simply spawn new instances of your microservice on demand when the load increases. Therefore, the throughput one individual instance can handle is much less important, and considering all the hoops we would have to jump through to use coroutines, this greatly reduces their value.
Coroutines have their place
Coroutines are still useful. When developing client-side UIs where there is only one UI thread, coroutines can help improve your code structure while complying with the requirements of your UI framework. I've heard that this works pretty well on Android. Coroutines are an interesting topic, but for the server side I feel that we are not quite there yet. The JVM developers are currently working on fibers (Project Loom), which are in essence also coroutines, but with the goal of playing nicely with the existing JVM infrastructure. It will be interesting to see how this effort develops, and how JetBrains will react to it with respect to their own coroutine solution. In the best possible scenario, Kotlin coroutines will simply "map" to fibers in the future, and debuggers will be smart enough to handle them properly.
Top comments (21)
Your arguments apply to basically every async framework/toolkit on the JVM, including event-loop-based, actor-based, and effect-monad-based ones. This applies to Spring WebFlux as well, so basically the only thing you have ascertained is that asynchronous programming on the JVM requires some trade-offs and/or progress in the tooling space. The final argument about "just spawning new instances of your thread-per-request Spring-based macroservices" is plain misinformation and praise of laziness: you suggest spawning countless instances of the JVM (of all runtimes probably the most resource-hungry), using thread blocking, just because you feel mildly inconvenienced by a different programming paradigm, when the same can be achieved with a significantly smaller number of running instances.
Source: I'm a big data / reactive systems engineer working in Scala.
That's not a real argument. If it were, all of us would still code in C, or even assembler, because it is "more efficient". We're not talking about "minor inconveniences" here; each of the problems listed in the blog post are potential dealbreakers in their own right.
What kind of information do you want to convey with your "source"? Is this the modern version of "I'm a pro so you better believe it"? lol
Are you seriously comparing C vs. a JVM language with sync I/O vs. async I/O? For I/O-bound apps, a non-blocking Java app can easily be more efficient than a blocking app in any compiled-to-native language. Your arguments are minor inconveniences - it's not a deal-breaking problem that you can't use Step Over in the debugger. It's not a deal-breaking problem that threads are shifted under your code. It's just inconvenient to your habits, and it's not only possible but relatively simple to learn to debug async code on the JVM with current tooling.
The "source" clause was meant to convey that I'm not pulling my opinion out of a hat. Asynchronicity, thread shifting and non-blocking I/O are completely normal in the Scala ecosystem.
Having to add a new breakpoint is a potential deal breaker? Sorry, in my debugging sessions I've got anywhere from 10-20 going at a time and I add and remove them as the need arises.
Having to pass state along as a parameter to functions (unless you can get the coroutine context going) is a practice I'd encourage anyways over thread local. I've had to rewrite stuff at work because it depended on ThreadLocal instead of a supplied state, and reactive code has the same "issues" as coroutines. We had issues of our own figuring out how to get a reactive context working with Flux/Mono code so we wrote our own stuff to deal with it. Took a day, give or take, and we can use that elsewhere.
The only point you've made that's a potential deal breaker is regarding locks and synchronized blocks. That's not something I considered and thank you for pointing it out. Roman commented on this thread; I'd love to hear his take on that because I've never read anything about whether they're needed or how they're supposed to be dealt with (if at all). Maybe you came across a legitimate bug.
That said, I've been fortunate enough that in almost 20 years of Java work I've yet to work with threads to such a level I really needed synchronization or locks. I've found it's far easier to create immutable objects in spite of the added boilerplate they require; then you don't have to worry about race conditions. I have yet to come across an instance where that's not possible, but I'm sure they're out there and when that comes up this would be good to know.
Lukasz' argument applies perfectly. It has nothing to do with efficiency. You're having problems dealing with a different way of handling something you're used to doing. Instead of approaching this as "here are some gotchas I found in coroutines and how to work around them", you came at it as "they're crap and I recommend avoiding them". And because you legitimately think an extra breakpoint is a problem, I'm not inclined to rate your opinion on this very highly.
You should respond to Roman. He's got a bit of insight on Kotlin and coroutines. His feedback might be useful.
+1 to the counter-arguments in this thread.
I think Kotlin's coroutines are an impressive piece of engineering, but if there's something I regret, it's that they encourage writing code in an imperative style. Note that that's not a problem with coroutines per se; they're low-level, powerful constructs that can be used to build higher-level libraries.
With imperative style it's only a matter of time plus sufficient amount of different hands and a few deadlines that the code becomes spaghetti. Then you add some state to it and it becomes a non-parallelizable mess.
I'd encourage you to try to avoid sharing state and actually pass the arguments you need. Even though it might seem annoying initially, it will help make your functions pure, with all the advantages of that (parallelization, for example). Also, there are patterns to mitigate the problem (e.g. the Reader monad in the functional world). Studying functional programming would help you see things in a different way.
Regarding locks, the documentation is clear that you should use Mutex, if you really have to. You could also try to use thread-safe data structures. Still, I believe that you could probably find a way to avoid sharing variables. Maybe look into actors?
I would have to agree with you on the debugger's issue though, but at least there's a workaround.
Thanks for sharing your experience. You write:
Can you please elaborate on what that means? Can you give some kind of self-contained example that demonstrates this problem?
I've updated the blog post. Unfortunately this topic is quite complex and would require an article in its own right. I've added a link to an external blog post which discusses exactly this issue in detail.
Now it makes sense. The blog post you've linked to describes a pre-release version of the kotlinx.coroutines library, which indeed used to have this problem. Before making the stable 1.0 release we introduced the concept of "Structured Concurrency", which makes inheritance of the coroutine context the default behavior and solves a host of other problems. That is why I could not understand how you could still be having this problem. What version of the kotlinx.coroutines library were you using?
Btw, you can read more about structured concurrency here: medium.com/@elizarov/structured-co...
Can you provide an example for libraries that expect a thread-based environment without coroutines? Besides, in my experience with Android, the cumbersome debugger variables are more problematic than stepping over a suspend function.
For instance, I'm using guava caches a lot. Basically, you provide a loading function which is executed on a cache miss and produces the missing value. The cache implementation takes care of eviction policies etc. It's really a powerful library. However, the function you pass in is called under a lock, and guava protects the programmer from recursive calls loading the same key. Furthermore, concurrent requests for the same missing key always result in just a single call to the loader function. All of that is based on threads. Having the loader function use coroutines simply won't work.
I agree that this is something that should be fixed by the guava developers. Nevertheless, I still recommend that JVM developers should start using coroutines as soon as possible.
How would the guava developers ever go about "fixing" this? They can neither assume nor rule out the presence of coroutines. Coroutines will split the JVM ecosystem in half - and I know which side I'm on. I do have some hope for Project Fiber, a JVM extension which will bring JVM built-in coroutines. The big advantage is that fibers would be part of the JVM itself, so existing thread-based libraries and tooling could play nicely with them.
The big folly is to assume that in the presence of coroutines we do not need synchronization primitives any longer. It's quite the contrary, we need them more than ever before, except we disarmed ourselves by forfeiting the tools which have been working and well understood for years. As long as thread code and coroutine/fiber code don't play nice with each other - through JVM extensions, compiler verification or black magic - I'm not going to touch coroutines anymore. I've had my fair share of bad experiences.
kotlinx.coroutines.sync.Mutex is a mutex implementation that works with coroutines. If guava were using this mutex implementation, then it might work. Alternatively, guava could release all its internal locks when executing your loading function. If your loading function needs special synchronization, you can still implement that without relying on the guava locking. Besides, recursively loading the same key seems like a strange edge case. One should not query guava from within the loading function that is supposed to retrieve the very same key. The protection from this error is nice to have but not a major dealbreaker.
Indeed it is, and not what the developer wants. Which is why guava raises an exception if this case occurs. But take a step back and think about how guava accomplishes that: it checks internally whether the current thread holds the loading lock, and if so, it checks the key it wants to load. If it is the same as during the previous call, the exception is thrown. Now, let's assume the loader is async. What would that mean? Well, the coroutine may yield during loading (e.g. when performing an HTTP request to fetch the data) and the host thread will merrily move along and pick up the next coroutine. It still holds the lock, however. This has two fatal consequences:
Using coroutines in an environment which isn't specifically crafted for them (read: 99.9% of all JVM libraries in existence) means opening Pandora's box. This is precisely what the fancy presentations will not tell you. And it is the reason why I refactored a lot of code to eliminate coroutines entirely. I've never looked back.
Either you revert it, or you adapt your code to cope with new paradigms. As outlined, this should be easy to fix for the guava developers. In the meantime, a possible workaround is to wrap your loading functions inside a runBlocking scope. According to the docs:
In fact, I believe that Kotlin enables a more gradual transition to coroutines than building coroutines into the JVM. Adding a few Kotlin coroutines might be easier than switching a myriad of libraries to a new JVM version.
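A sketch of that workaround, with hypothetical function names standing in for a real cache loader; it confines all suspension to a runBlocking event loop so the thread-based caller never observes a thread switch:

```kotlin
import kotlinx.coroutines.delay
import kotlinx.coroutines.runBlocking

// Hypothetical suspending fetch, standing in for a real HTTP call.
suspend fun fetchRemotely(key: String): String {
    delay(10) // suspension point, confined inside runBlocking below
    return "value-for-$key"
}

// The blocking adapter: the thread that enters this function is the thread
// that returns from it, so thread-bound mechanisms (such as guava's loading
// lock) keep working as designed.
fun blockingLoader(key: String): String = runBlocking {
    fetchRemotely(key)
}

fun main() {
    check(blockingLoader("user-42") == "value-for-user-42")
}
```

Note the trade-off: the carrier thread blocks for the duration of the load, so inside the loader you give up the throughput benefit that coroutines were supposed to provide.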
These are all important issues to be aware of. I come to a different conclusion. Here's what has happened in other languages with coroutines: tools, frameworks, and best practices are developed and these problems either go away or become managed very naturally. I suspect that once that happens, we will all be using coroutines so much that junior developers will be using them without knowing that these issues ever existed.
In the meantime, we have to be careful about all these important issues you brought up... and we sometimes have to place another break-point. :D
You can find several counterarguments in Kotlin Slack channel, where I've posted a link to your article:
kotlinlang.slack.com/archives/C1CF...
Martin, you should give it another try and use it with the best practices. I think the guys at JB rushed a bit in making all of these features available to people (though that's why it's called experimental).
Either way, I would have totally agreed with you a while back, but now they have figured out how to use it properly. As for the things you can do with coroutines, I don't think there is any other tech out there that allows this... it's far more capable than plain async or event loops.
For example, I use it to make the code very straightforward for things like complex order executions, where I interact with various exchanges, send orders, cancel them, await cancellations, and combine these simple executions to create more complex ones... or cancel them and see how the other, smaller orders get cancelled properly. It is very ugly to do this in the blocking or future world... but it's poetry if you use coroutines.
Another thing I use it for is telegram bots... each user's conversation is one very synchronous-looking piece of code, which is very simple to understand and change. For me there is no performance chase; it's just code readability.
How often do you normally "suspend" in a coroutine? Are you required to suspend every time you perform some IO task? (In order to "give up" the thread to someone else).
I thought naïvely that you would just start your coroutine at the start of each new HTTP request entering the server, and then keep the coroutine until the request is done. But I guess that will block the thread, just like in JS or so?
I was hoping coroutines could be an escape from Futures/Promises. They have similar problems: for every time you call a function, you have to know whether it (or a child of it) is asynchronous. This makes, for instance, HTML template rendering in JS almost impossible.
I don't fully understand the issue with threads. I see that the problem might exist when e.g. you call coroutine1->coroutine2 and you have synchronized between them coroutine1->synchronized->coroutine2 in that case it is true the call to coroutine2 might end up on a different thread and the lock won't be unlocked. That issue might exist when one coroutine calls another or coroutine calls another async api which might be adjusted to it (e.g. CompletableFuture, Reactor, RxJava, etc).
But I don't understand your example with the Guava cache. You won't be able to pass a suspend function to guava, hence the thread which returns to Guava will be the same. In order to call coroutines from non-coroutine code you need a boundary call (runBlocking, GlobalScope.async, etc.) which protects you from the thread flip. I cannot even imagine a situation where I could get this issue (except when you call synchronized from coroutines, which you should probably never do).
The other issues - yes, debugging is quite an annoying thing, which you can overcome by having multiple breakpoints (I would describe it as a medium-level inconvenience). Also, it is not really an issue of coroutines as a concept or implementation; it is one of the Kotlin IntelliJ plugin's debugging capabilities (which from my layman's perspective shouldn't be very difficult to fix - just add a hidden breakpoint at the next line). Passing down context isn't an issue anymore. We pass tenant-related info using the coroutine context.
Sounds like a pain. I wonder why it is so much better in JavaScript, and especially in C#; coroutines are the bread and butter there.