In letalvoj/loonyssh/tree/bug-report I am getting either
[error] 216 | case `cast128-cbc`
[error] | ^
[error] |children of class EncryptionAlgorithm were already queried before value cast128-cbc was discovered.
[error] |As a remedy, you could move value cast128-cbc on the same nesting level as class EncryptionAlgorithm.
or java.lang.OutOfMemoryError.
EncryptionAlgorithm is an enum and `cast128-cbc` is one of its cases. It is used inside a case of yet another enum, SSHMsg.
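For readers outside the repo, a minimal sketch of the shape involved (the real definitions live in the linked project; `NameList` here is a hypothetical stand-in, and the case names beyond `cast128-cbc` are illustrative):

```scala
// Hypothetical stand-in for the repo's NameList wrapper
case class NameList[A](names: List[A])

enum EncryptionAlgorithm:
  // ~18 parameter-less cases in the real enum; hyphenated names need backticks
  case `cast128-cbc`, `aes256-ctr`

enum SSHMsg:
  // the enum case that nests a NameList of the other enum
  case KexInit(encryptionAlgorithms: NameList[EncryptionAlgorithm])
```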
The proposed remedy is not very helpful, since I can hardly change the nesting of the values inside an enum.
I tried to minimize the example, but without success.
Beware! Once the build passes due to some change in the code (like commenting out the cause), it keeps passing even when it should not, due to _incremental compilation_. It is enough to run `;clean;compile` and the build starts failing again.
The following chain of givens is involved when the issue emerges:
- SSHReader[SSHMsg.KexInit] (Reader.scala#L101)
- SSHReader[NameList[EncryptionAlgorithm]] (Reader.scala#L118)
- EnumSupport[EncryptionAlgorithm] (EnumSupport.scala#L13)
- Mirror.SumOf[EncryptionAlgorithm]

Funny enough, if the NameList[EncryptionAlgorithm] is not nested inside SSHMsg.KexInit, then the second summon[SSHReader[NameList[EncryptionAlgorithm]]] compiles just fine.
Things I tried:
- moving EncryptionAlgorithm to the same file where SSHMsg.KexInit is defined, which did not help
- moving the enums and other data structures to a separate module, which fixes the issue but _hangs the compilation_

After several minutes I get

Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "classloader-cache-cleanup-0"
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "task-progress-report-thread"
java.lang.OutOfMemoryError

Those are the objects which eat all the memory:

[screenshot of the heap dump, not preserved here]

This seems to be the thread which produces them:

[screenshot of the thread dump, not preserved here]
@letalvoj could you check if #9083 fixed this?
I will check as soon as these changes get into the nightly build, which currently seems to be lagging behind by a week (maybe cc https://github.com/lampepfl/dotty/issues/8734?).
Thanks for the heads up, fixed in https://github.com/lampepfl/dotty/pull/9101
@anatoliykmetyuk well, using 0.25.0-bin-20200603-cc8d6c3-NIGHTLY it seems like I am still getting the OOME 💥
I'm guessing from the stack trace that you have an inline def which recursively calls itself and ends up being inlined an unbounded number of times (I didn't check, but writeProduct looks like it could fit).
@smarter thx for the tip, it's actually the following bad boy which is causing the issue
https://github.com/letalvoj/loonyssh/blob/master/src/main/scala/in/vojt/loonyssh/enums/EnumSupport.scala#L19-L31
The following does not compile:
```scala
try
  val value = mirror.fromProduct(Product0).asInstanceOf[E]
  _fromName[E,ts,ls](i+1) + (key -> value)
catch
  // ignoring parametric types...
  // Tuple.Size[mirror.MirrorElemLabels] does not work since the type info gets lost
  case _:IndexOutOfBoundsException => _fromName[E,ts,ls](i+1)
```
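To make the blow-up mechanism concrete (my sketch, not code from the repo): the recursive call appears both in the `try` body and in the `catch` handler, so every inlining step duplicates the rest of the recursion, roughly like:

```scala
// Hypothetical illustration: an inline method with TWO recursive calls
// in its body. Inlining expands both branches, so unfolding to depth n
// materialises on the order of 2^n copies of the body at the call site.
inline def unfold(inline n: Int): Int =
  inline if n <= 0 then 0
  else 1 + unfold(n - 1) + unfold(n - 1) // both calls get inlined
```

With 18 cases to walk through, that is roughly 2^18 expansions from a single call site.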
Please note the weird try-catch, which is a workaround for Tuple.Size[mirror.MirrorElemLabels] not working at all. I am listing out the parameter-less cases of an enum, since the mirror does not have a valueOf function: https://github.com/lampepfl/dotty-feature-requests/issues/121
Rewriting it to
```scala
val additional =
  try
    val value = mirror.fromProduct(Product0).asInstanceOf[E]
    Map(key -> value)
  catch
    // ignoring parametric types...
    // Tuple.Size[mirror.MirrorElemLabels] does not work since the type info gets lost
    case _:IndexOutOfBoundsException => Map.empty
_fromName[E,ts,ls] ++ additional
```
Hence both the OOME and the
`children of class EncryptionAlgorithm were already queried`
error are fixed now. 🥳 Thx. The enum in question had 18 cases (hence the inline limit was not reached), yet 2^18 copies of itself was apparently just .... waaay too much.
There could be a warning for "recursive inline method calls itself more than once". I might not be the last person making this mistake.
Yeah, there's a -Xmax-inlines parameter, but that only limits the depth of recursive inline calls, so it's not enough to prevent a blow-up when the same inline method calls itself multiple times; maybe we could additionally keep track of the total number of inlinings done from one call site.
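For reference, a sketch of how that limit can be raised today for code that legitimately needs deep inline recursion (the `-Xmax-inlines` flag exists in Scala 3; the exact default has varied between releases):

```scala
// build.sbt — or pass the flag directly to scalac
scalacOptions += "-Xmax-inlines:64"
```

Note this only defers the failure for a genuinely exponential expansion like the one above; the real fix is making sure the inline method calls itself at most once per body.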