Right now, I implement this with:
```kotlin
import kotlinx.coroutines.channels.Channel
import java.util.concurrent.atomic.AtomicBoolean

class LazyValue<T>(private val inner: suspend () -> T) {
    // Closed once the value has been computed; receivers suspend until then.
    private val latch = Channel<Unit>(Channel.RENDEZVOUS)
    private val cond = AtomicBoolean()
    private var value: T? = null

    suspend fun get(): T {
        if (this.cond.compareAndSet(false, true)) {
            // First caller computes the value, then releases the waiters.
            this.value = this.inner()
            this.latch.close()
        } else {
            // Every other caller suspends until the channel is closed.
            // (receiveCatching() instead of the now-deprecated receiveOrNull().)
            this.latch.receiveCatching()
        }
        @Suppress("UNCHECKED_CAST")
        return this.value as T
    }
}
```
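For illustration, here is a hypothetical caller of the class above; `loadConfig` is a made-up stand-in for any expensive suspend function:

```kotlin
// Hypothetical usage of LazyValue; loadConfig() is a placeholder suspend function.
suspend fun loadConfig(): String = "config loaded"

val config = LazyValue { loadConfig() }

suspend fun handleRequest() {
    // The first caller runs loadConfig(); all other callers suspend until it finishes,
    // then every call returns the same cached value.
    println(config.get())
}
```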
That said, this kind of primitive seems reasonable enough to add by default.
You can use `val lazyValue = GlobalScope.async(Dispatchers.Unconfined, start = CoroutineStart.LAZY) { inner() }`
and even provide your own dispatcher if you need one to offload the computation.
I don't think such a shorthand is worth its own primitive: we don't have suspend getters and thus can't have a `by lazy`-like API.
But let's keep this issue open and see whether there is demand for a suspendable lazy.
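As a rough, self-contained sketch of that suggestion (`inner()` stands in for whatever suspend computation you want to defer):

```kotlin
import kotlinx.coroutines.*

// Placeholder for the expensive suspend computation.
suspend fun inner(): String {
    delay(100)
    return "computed once"
}

@OptIn(DelicateCoroutinesApi::class) // GlobalScope, as in the suggestion above
val lazyValue: Deferred<String> =
    GlobalScope.async(Dispatchers.Unconfined, start = CoroutineStart.LAZY) { inner() }

fun main() = runBlocking {
    println(lazyValue.await()) // the first await starts the computation
    println(lazyValue.await()) // later awaits reuse the completed result
}
```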
My main problem is that I'm trying to find a way to confine the evaluation to the scope of the first invocation, which is probably not the way I should be doing this.
I'll think a bit more about this and then report back.
So, I've been thinking about this a decent amount, and I think that my general conclusion is that it would be nice to have efficient versions of Channel<Unit> and Channel<Nothing>.
I've managed to reduce most of my code to using channels, but I have a few strange cases:
1. `Channel<Unit>`, sort of like a `CyclicBarrier`, i.e. a leader waits for N signals before it performs an action. In this case, we're not actually waiting for any particular data, just synchronisation. One particular case of this is waiting for N subscribers to a `BroadcastChannel` before broadcasting anything, which seems like something that could be useful to support by default.
2. `Channel<Nothing>`, like a `CountDownLatch(1)`, i.e. a leader closes the channel once it's done performing an action and followers call `receiveOrNull` on that channel (see the sketch after this list).
3. `AtomicInteger`s and `AtomicBoolean`s to ensure exactly one coroutine counts up or down to a particular threshold. I think that this is always going to be there, and it's still my opinion that if a synchronisation primitive can be done efficiently with atomics, it should be.

So, I guess that I can create separate issues for 1 and 2 and close this.
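A rough sketch of case 2, for illustration only: `OneShotLatch`, `complete` and `await` are made-up names, and `receiveCatching` is used in place of the since-deprecated `receiveOrNull`.

```kotlin
import kotlinx.coroutines.*
import kotlinx.coroutines.channels.Channel

// Channel<Nothing> used like a CountDownLatch(1): nothing is ever sent,
// the leader just closes the channel and the followers wake up.
class OneShotLatch {
    private val done = Channel<Nothing>()

    fun complete() = done.close()                  // leader signals completion

    suspend fun await() { done.receiveCatching() } // followers suspend until closed
}

fun main() = runBlocking {
    val latch = OneShotLatch()
    repeat(3) { i ->
        launch {
            latch.await()
            println("follower $i released")
        }
    }
    delay(100)       // the leader performs its action...
    latch.complete() // ...and releases everyone
}
```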
Wow, lazy blocks a thread while waiting? Are you kidding me?? I thought Kotlin is all about coroutines.
@alamothe
`lazy` is intended for expensive allocation/initialization, to defer it until needed. For a suspending equivalent, there is `async` with `start = LAZY`.

Having this or memoization would be helpful for lazily connecting to services and databases. We have a command line app that sometimes needs to connect to everything and sometimes nothing. Something like `by suspend lazy` would be great.
@ConnorWGarvey You can already do something similar without delegation:
https://github.com/LouisCAD/Splitties/blob/49e2ee566730aaeb14b4fa9e395a677c3f214dba/modules/coroutines/src/commonMain/kotlin/splitties/coroutines/SuspendLazy.kt#L26
Example usage:
```kotlin
val heavyThing = myScope.suspendLazy { // Or GlobalScope if app global
    val stuff = withContext(Dispatchers.IO) { getStuffFromStorage() }
    initThatHeavyThingWithASuspendFunction(stuff)
}

heavyThing().doStuff()
...
heavyThing().doMoreStuff()
```
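For readers who don't want to follow the link, a rough single-file sketch of the same idea (the real Splitties implementation linked above differs in details):

```kotlin
import kotlinx.coroutines.*
import kotlin.coroutines.CoroutineContext
import kotlin.coroutines.EmptyCoroutineContext

// Sketch only: a lazily started async whose result is exposed as a suspend () -> T,
// so callers write heavyThing() instead of a property access.
fun <T> CoroutineScope.suspendLazy(
    context: CoroutineContext = EmptyCoroutineContext,
    initializer: suspend CoroutineScope.() -> T
): suspend () -> T {
    val deferred = async(context, start = CoroutineStart.LAZY, block = initializer)
    return { deferred.await() }
}
```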
> lazy is intended for expensive allocation/initialization, to defer it until needed.

Precisely, so why would it block a thread? It makes zero sense.
@alamothe Because allocation always blocks a thread; it's actually the CPU looking for space in memory, and moving things around if needed.
Making it suspend by default would just make it a little more expensive.
So, it makes complete sense to me that Kotlin stdlib's lazy is the way it is.
If it still doesn't make sense to you, I encourage you to learn more about what happens when you want to allocate memory for some objects, and also, learn how coroutines work under the hood.
And again, I linked an implementation plus an example of a suspendLazy, feel free to try it if it better suits your use cases. I definitely use it, and it leverages the stdlib's own lazy, which I also use alone in some cases.
Hi @LouisCAD,
allocation does not always block the thread; you should reconsider the definition of "blocking operation".
Hi @alamothe,
the _default_ lazy implementation uses a lock, so if you use a blocking operation in the lazy block, then that lazy is blocking, that's all.
You can use this suggestion to build an asynchronous implementation.
@fvasco I disagree, allocation is always a blocking operation, and the time the thread requesting the allocation is blocked can vary depending on multiple factors like size requested and state of the memory.
I'm talking low-level here.
Now, I'm tired of this discussion; there are two solutions posted and nothing more to say, so I'm withdrawing.
Hi @LouisCAD,
I respect your thoughts, but I fear that the statement "allocation is always a blocking operation" leads to "use Dispatchers.IO for each allocation", and I think that is not the right way.
I'm talking low-level here.
"Thread" and "blocked thread" is an abstraction of Operating System, not a CPU's one, but please correct me if I wrong. Allocation can be -generally- performed by running thread, not by a blocked one.
Thank you.
I get your point now @fvasco and thought about that after writing my message:
Ultimately, all code has a "blocking" part, regardless of what is blocking it, but not all code blocks for long, and not all blocking code should be abstracted away to be supposedly less blocking.
Allocation is indeed performed by the calling thread, so unless the object hierarchy being allocated is very big and you need the thread back quickly (e.g. a UI thread), the most efficient way is to let it run on the calling thread, without involving any coroutines that wouldn't improve performance at all.
I think @alamothe has a misunderstanding of the purpose of lazy. Its goal is not to avoid blocking a thread, but to run the computation (an allocation is a short one, in a way) only once it's needed, and share the result with future callers.
If the code in the lazy { } lambda takes a significant time to execute, significant enough that other threads are being blocked by the lock and it becomes an issue, then it probably makes sense to use another strategy, like the solution you already linked, @fvasco, or the one I shared before, building on top of it.
`Deferred<T>` is perfectly lazy itself, just use `async(start = CoroutineStart.LAZY) { }` to compute stuff.
Same as Kotlin doesn't support suspending properties.
It's trivial to implement on the compiler end; it's very painful for us using the language.
@LouisCAD You're getting too technical for something that's trivial to do. By your logic, suspending functions can't exist either, yet they do.
@alamothe I don't see evidence that it's "trivial to do". If it really is, then you know better, which means you can submit a KEEP.
> By your logic, suspending functions can't exist either, yet they do.
My logic, when stretched by you, but then it's no longer my logic.
> Same as Kotlin doesn't support suspending properties.
This is off topic, and there are valid reasons (API design related) to have coroutineContext be the only suspend val to be allowed. Other use cases just need to buy themselves a pair of parentheses.
> This is off topic, and there are valid reasons (API design related) to have `coroutineContext` be the only `suspend val` to be allowed. Other use cases just need to buy themselves a pair of parentheses.
By that logic 🙂 we don't need properties at all. It's just a pair of parens.
@alamothe You are completely ignoring the fact that when you read code, you expect a property to return immediately, while a suspending function is the opposite. But then, again, this is off-topic, so go on Kotlin's Slack if you want to debate that.
What does "immediately" mean? Does by lazy return immediately?
How about if it spends 10s doing CPU work vs 0.1s I/O work? Which one is immediate?
This is best left to code owners to decide. It's like saying you'll forbid naming variables with uppercase letters because you've deemed that whoever reads the code expects lowercase letters. It's not for you to make that judgement.
@LouisCAD we have been using your implementation of suspendLazy with great success. Thank you, sir!
Actually, I have a question regarding the implementation. What if it's never called? It looks like it will hang the enclosing coroutineScope. Is there an easy fix?
```kotlin
@Test
fun testSuspendLazy() = runBlocking {
    coroutineScope {
        val l = suspendLazy {
            println("hello")
        }
        // Hangs here
    }
}
```
Yes, use GlobalScope or equivalent in this case, or put it in a parent scope that will be cancelled or lives forever.
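For context, a minimal reproduction of the gotcha using a plain lazily started async (the linked suspendLazy builds on the same mechanism): a child left in the lazy NEW state keeps coroutineScope from completing until it is started or cancelled.

```kotlin
import kotlinx.coroutines.*

fun main() = runBlocking {
    coroutineScope {
        val d = async(start = CoroutineStart.LAZY) { println("hello") }
        // Without one of these, coroutineScope waits on the unstarted child forever:
        d.cancel() // or d.start() / d.await() when the value is actually needed
    }
    println("scope completed")
}
```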
It's definitely a gotcha. I changed it not to call async until necessary:
```kotlin
private class SuspendLazySuspendingImpl<out T>(
    val coroutineScope: CoroutineScope,
    val context: CoroutineContext,
    val initializer: suspend CoroutineScope.() -> T
) : SuspendLazy<T> {
    private var deferred: Deferred<T>? = null

    override suspend operator fun invoke(): T {
        if (deferred == null) {
            deferred = coroutineScope.async(context, block = initializer)
        }
        return deferred!!.await()
    }
}
```
Do you see any problems here? (our code is single-threaded)
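If calls could ever come from more than one thread, a Mutex-guarded variant along these lines (a sketch, with illustrative names) would make concurrent first callers agree on a single Deferred:

```kotlin
import kotlinx.coroutines.*
import kotlinx.coroutines.sync.Mutex
import kotlinx.coroutines.sync.withLock
import kotlin.coroutines.CoroutineContext
import kotlin.coroutines.EmptyCoroutineContext

// Hypothetical thread-safe variant of the class above; unnecessary when all
// calls are confined to a single thread.
class MutexGuardedSuspendLazy<T>(
    private val coroutineScope: CoroutineScope,
    private val context: CoroutineContext = EmptyCoroutineContext,
    private val initializer: suspend CoroutineScope.() -> T
) {
    private val mutex = Mutex()
    private var deferred: Deferred<T>? = null

    suspend operator fun invoke(): T {
        val d = mutex.withLock {
            deferred ?: coroutineScope.async(context, block = initializer).also { deferred = it }
        }
        return d.await()
    }
}
```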
I think the problem with a scope that lives forever is that it will never get garbage collected, i.e. it is a memory leak.