Dagger: Processor error on missing type while traversing too far up component dependency chain.

Created on 4 Dec 2017 · 46 comments · Source: google/dagger

Module 'test-a':

public final class A {
  @Inject A() {}
}

@Component
public interface ComponentA {
  A a();
}

Module 'test-b' which has implementation project(':test-a'):

public final class B {
  @Inject B(A a) {}
}

@Component(dependencies = ComponentA.class)
public interface ComponentB {
  B b();
}

Module 'test-c' which has implementation project(':test-b'):

public final class C {
  @Inject C(B b) {}
}

@Component(dependencies = ComponentB.class)
public interface ComponentC {
  C c();
}

fails with:

> Task :test-c:compileDebugJavaWithJavac FAILED
error: cannot access ComponentA
  class file for test.a.ComponentA not found
  Consult the following stack trace for details.
  com.sun.tools.javac.code.Symbol$CompletionFailure: class file for test.a.ComponentA not found
1 error

which makes sense, because 'test-c' isn't meant to see 'test-a' as it's an implementation detail of 'test-b', but why is Dagger in 'test-c' trying to do _anything_ with ComponentA? Once it reaches ComponentB and sees the required B type exposed shouldn't it stop?
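For context, the failure can be reproduced with plain javac by mimicking what implementation does: put test-b on the compile classpath but leave test-a off it. This is only a sketch; the jar and processor paths are illustrative, not actual file names from the report.

```shell
# Sketch: test-a.jar deliberately absent from the classpath, as Gradle's
# `implementation` visibility arranges when compiling module test-c.
javac -cp test-b.jar \
      -processorpath dagger-compiler-deps.jar \
      ComponentC.java
# The Dagger processor walks ComponentB's @Component(dependencies = ...)
# annotation, asks javac to complete test.a.ComponentA, and javac throws
# Symbol$CompletionFailure.
```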

Labels: P3, build, dagger, feature request


All 46 comments

We check to make sure there's no scope cycle in the dependency chain. The following would be an error

@Blue
@Component
public interface ComponentA {
  A a();
}

@Yellow
@Component(dependencies = ComponentA.class)
public interface ComponentB {
  B b();
}

@Blue
// ^ cycle! Maybe you meant @Green?
@Component(dependencies = ComponentB.class)
public interface ComponentC {
  C c();
}

There may be other reasons; a full stack trace may help find the one you're encountering, but this is the first that comes to mind.

Hmm, that message is the entirety of the output from javac when invoked by itself outside of Gradle. There's no stack trace from the processor.

This is hurting my team too (50+ developers). Let me know if you need any help reproducing or making progress.

I don't think there's much to do here in terms of making progress - this is working as intended.

You can subvert this by defining dummy base-interfaces for your components and using those as the dependencies. Something like this:

interface Dependency {
  MyType myType();
  void inject(Thing thing);
}

@Component(modules = ...)
interface DependencyImpl extends Dependency {}

@Component(dependencies = Dependency.class) // note: Dependency.class, not DependencyImpl.class
interface MyComponent {
  ...
}

But that's definitely a hack, and could expose you to weird scope cycles.
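For completeness, wiring the hack together at runtime might look like this. This is a sketch: `DaggerDependencyImpl` and `DaggerMyComponent` are the factory classes Dagger would generate for the interfaces above, and the `dependency(...)` builder method name is derived from the dependency type's simple name.

```java
// Sketch: assembling the components from the hack above.
// Dagger-generated names (Dagger<ComponentName>) are illustrative.
Dependency dependency = DaggerDependencyImpl.create();

MyComponent component = DaggerMyComponent.builder()
    .dependency(dependency) // satisfies @Component(dependencies = Dependency.class)
    .build();
```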

Hi Ron,
As you mentioned, the hack could cause weird scope cycles. I think Dagger should honor Gradle's definition of an "implementation" dependency and perform the scope-cycle check some other way instead.

There is no way for Dagger to know that information.

@ronshapiro I bumped into the same issue while trying to modularize our app, with a component in each module. We currently have the main app module's AppComponent depending on a library component in the datasource module, and datasource itself depends on another library module that also exposes a component. We get the same compilation error.

I tried to add scopes for each component, but that didn't change anything.

I've recreated this setup in a sample app https://gitlab.com/codepond/dagger-modules

It would be great if you could have a look.

Thanks and Happy TuBiShvat! :)

We fully understand the issue you're seeing, that's not the problem.

I think the best recommendation I have now is not to use implementation with dagger components in your component dependencies chain.
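Concretely, that would mean a build file along these lines for test-b (a sketch; it trades isolation for visibility, since test-a now leaks transitively to test-c):

```groovy
// test-b/build.gradle — sketch: api instead of implementation
dependencies {
    // api exposes test-a on test-c's compile classpath, so Dagger in
    // test-c can resolve ComponentA while validating the dependency chain
    api project(':test-a')
}
```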

Thanks for the tip, but the main reason we are modularizing the app is to improve build times by using implementation, so that's not an option for us.

In a post you made a few days ago you said this is working as intended. Is this still the case? So we're not going to see a fix for this?

Hi @ronshapiro, we are also modularizing our app with reducing build time as one of the main goals, and I can imagine more developers will run into the same situation. So I hope you guys can figure out a solution. Thanks!

The implementation/api model also allows me to abstract away the inner workings of a module. I understand why Dagger needs to see the entire graph. But is there any way to prevent the app module from receiving transitive dependencies (which would happen if we used api) that it does not require?

Setting transitive to false when declaring the dependency is not an option, since I only want a few dependencies (like internal custom modules) to stop leaking abstractions. External library dependencies should still remain transitive.

@JakeWharton I face a case like that too. We use Android modules to separate our codebase by responsibility, and we create a Dagger module and Dagger component inside each Android module. We then declare all of those Dagger dependencies inside the app module, giving us a flat dependency graph.

Each Android module keeps its implementation details, but the Dagger dependencies live in app. That's the hack we did.

This approach doesn't scale if the Dagger modularization gets too deep.

Hi @radityagumay, are your "android-modules" Gradle modules, and by "flat dependency" do you mean you only have two levels of modules? I'd appreciate it if you could share a sample project. Thanks!

I'd like to bring this issue up again. I understand why it happens and why it isn't purely Dagger's fault, but the build tools (Gradle in this case) and Dagger don't work well together. There are only two solutions at the moment for Gradle: add the dependencies with api to the library module and fully expose them (not great for modularization), or add the missing dependencies as compileOnly (or maybe even annotationProcessor / kapt) to the modules that need them, which isn't good for modularization and isolation either.

I don't have any hands-on experience with Blaze or Bazel, but I heard it can solve this issue with its finer granularity. But for many people those tools aren't feasible.

How could Dagger avoid this issue? By turning off the cyclic dependency check?

This issue still hasn't been fixed after more than a year. I agree with @vRallev that this is not purely Dagger's fault. I am not sure whether we could fix this by adding more annotations to expose the respective objects to the call site.
Using api pulls the dependencies into parent modules, which will hurt our build time.

Do you have any suggestions? @ronshapiro

Running into this again.

I think Dagger should support this case by giving up on validation when it reaches an annotation whose referenced types cannot be resolved. This means that the scope annotation of that referenced component dependency also almost certainly cannot be resolved in the current context as it's been entirely encapsulated which means there can be no cycle. Even _if_ there is a cycle, it's hidden from the consumer and therefore irrelevant for what the check is enforcing. Correct semantics are still maintained.

This check is already best effort. There doesn't seem to be a reason to make it fail in situations where the user is actually participating in the best practice of using implementation.

@JakeWharton I thought the whole problem was that the compiler was exiting in this case without even letting Dagger see that the type doesn't exist?

We can't catch Symbol$CompletionFailure, as that's a javac-internal type.

I would have thought you'd see an ErrorType or something prior to this, and that trying to resolve that type is what caused the exception.

I just inserted some print statements and found that the exception is being thrown here: https://github.com/google/dagger/blob/9e0baea74aa2b2ef83f78898bac44851b9840f30/java/dagger/internal/codegen/ComponentDescriptorValidator.java#L163

I don't know of a good solution to this besides a compiler option to ignore this entirely, and even that seems like a best effort approach. We can't catch that exception (catching CompletionFailure does bizarre things), and we don't have any way to even know that the annotation might not complete correctly.

Please consider adding support for performing graph-level validations on demand, when developers opt in. That would enable developers to add one library once at the top of the graph with the whole transitive classpath and pay the price of the validation traversal only once.

It feels strange that Dagger would run validation that is only applicable in cases where the @Component.dependencies is a class that happens to be another Component.

To illustrate this point, here's an easy workaround for the issue in the original example:

// Module test-b
interface FooInterface {
  B b();
}

@Component(dependencies = ComponentA.class)
public interface ComponentB extends FooInterface {}

// Module test-c
@Component(dependencies = FooInterface.class)
public interface ComponentC {
  C c();
}

This prevents Dagger from traversing up the Component dependency chain, so we won't run into the missing class file error.

But stepping back a bit, since scope validation can be easily circumvented as shown above - would it be reasonable to not run it at all for Component dependencies? In this world we'd be saying that Component dependencies are simply a declaration of a Component's required dependencies, but does not represent a Component "graph", which I think makes sense since we don't require Component dependencies to be Dagger Components.

But stepping back a bit, since scope validation can be easily circumvented as shown above - would it be reasonable to not run it at all for Component dependencies?

The scope validation for component dependencies has caught numerous bugs, so we're not inclined to turn it off. There's only so much we can do, so the workaround _has_ to work, and you're on your own if you'd like to disable that check.

Furthermore, it's generally not common to compile against partial classpaths. I understand the desire, but I'm not sure we should be changing Dagger to support something that isn't standard.

This is indeed cryptic... it should at least tell us which Dagger module/component it's trying to retrieve the class for...

If anyone needs to work around the issue, here is a Kotlin DSL snippet for Android which adds the runtime classpath to the javac compilation task. You need to add this to the module that generates the Dagger component:

// Requires imports: com.android.build.gradle.AppExtension,
// com.android.build.gradle.internal.publishing.AndroidArtifacts,
// org.gradle.api.artifacts.type.ArtifactTypeDefinition
project.run {
  afterEvaluate {
    configure<AppExtension> {
      applicationVariants.forEach { variant ->
        val compileWithJavac = variant.javaCompileProvider
        val runtimeClasspath = variant.runtimeConfiguration

        compileWithJavac.configure {
          doFirst {
            // restrict the copied runtime configuration to jar artifacts only
            val runtimeClasspathJars = runtimeClasspath
              .copyRecursive()
              .apply {
                val artifactType = AndroidArtifacts.ARTIFACT_TYPE
                val jar = ArtifactTypeDefinition.JAR_TYPE
                attributes.attribute(artifactType, jar)
              }
              .fileCollection { true }
            classpath = classpath.plus(runtimeClasspathJars)
          }
        }
      }
    }
  }
}

Hi @Dkhusainov, I was trying to implement your workaround in my project (using Groovy) but didn't have much luck. Would you be able to share a larger example?

I'm trying to refactor this into a generic solution that will work for Android applications as well as tests.

@ronshapiro: what do you think about an opt-in flag to disable this validation? Would you be willing to upstream such a change?

Currently this error completely blocks using dagger with our multi-module application.

afterEvaluate {
  android {
//    libraryVariants.forEach { variant ->  //com.android.library
    applicationVariants.forEach { variant ->//com.android.application
      def compileWithJavac = variant.javaCompileProvider
      def runtimeClasspath = variant.runtimeConfiguration

      compileWithJavac.configure {
        doFirst {
          def copiedConfiguration = runtimeClasspath.copyRecursive()

          //filter for jars only
          def artifactType = AndroidArtifacts.ARTIFACT_TYPE
          def jar = ArtifactTypeDefinition.JAR_TYPE
          copiedConfiguration.attributes.attribute(artifactType, jar)

          def runtimeClasspathJars = copiedConfiguration.fileCollection { true }
          classpath = classpath.plus(runtimeClasspathJars)
        }
      }
    }
  }
}

@Dkhusainov, thanks so much, that was really helpful.

I have a weird issue with your workaround, though: rebuilding sometimes fails, and the same Dagger error re-appears.

I've created a sample project to reproduce the issue:
https://github.com/snepalnetflix/dagger-transitive-dep

To reproduce do the following:

  1. build my project. See the error: "e: error: cannot access SingletonFeatureObject"
  2. change line 34 in home/build.gradle from "implementation project(":feature")" to "api project(":feature")"
  3. build and run the project.
  4. change line 34 back to "implementation"
  5. build and run the project, everything works this time

Maybe Gradle or Android Studio is caching something?

Looks like the jar from the feature project is not ready by the time the javac task in the main project runs.

Try this script:

afterEvaluate {
  android {
//    libraryVariants.forEach { variant ->  //com.android.library
    applicationVariants.forEach { variant ->//com.android.application
      def compileWithJavac = variant.javaCompileProvider
      def runtimeClasspath = variant.runtimeConfiguration

      //filter for jars only
      def runtimeClasspathJars = runtimeClasspath.copyRecursive()
      def artifactType = com.android.build.gradle.internal.publishing.AndroidArtifacts.ARTIFACT_TYPE
      def jar = ArtifactTypeDefinition.JAR_TYPE
      runtimeClasspathJars.attributes.attribute(artifactType, jar)

      compileWithJavac.configure {
        doFirst {
          def runtimeClasspathJarsFiles = runtimeClasspathJars.fileCollection { true }
          classpath = classpath.plus(runtimeClasspathJarsFiles)
        }
      }

      def compileWithJavacTask = compileWithJavac.get()
      runtimeClasspathJars
        .buildDependencies
        .getDependencies(compileWithJavacTask)
        .forEach { compileWithJavacTask.dependsOn(it) }
    }
  }
}

I attempted to implement the above workaround, but ran into a snag in our all-kotlin project: the error in this case is thrown from the kaptDebugKotlin task (or other variant-specific tasks). I poked around a bit and couldn't find a decent way to manipulate this task's classpath in the same way, so I opted for a slightly different workaround (in kotlin DSL, in the top-level project's build.gradle.kts):

val appProject = project
gradle.afterProject {
    if (this != appProject && this.plugins.hasPlugin("com.android.library")) {
        appProject.dependencies {
            implementation(this@afterProject)
        }
    }
}

This was our old by-hand workaround (add every new gradle module in the project as an implementation dependency of the top-level app module); I simply automated it. From a build-time perspective, it seems to be neutral, but I haven't tested extensively.

That would only work if all the dependencies you need to patch are in your project. What if there's a remote dependency which exposes a Subcomponent and has implementation dependencies of its own?

It fails with the same error even in the following case.
Module 'app' which has implementation project(':test-a'):

@Component(modules=[ModuleA::class])
interface AppComponent

Module 'test-a' which has implementation project(':test-b'):

@Module(includes = [ModuleB::class])
object ModuleA

Module 'test-b'

@Module
object ModuleB {
    @Provides
    fun b(): B = BImpl()
}

The workaround is to include the test-b module as a compileOnly dependency in the app module. Dagger team, can we expect a solution for this?
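For reference, the compileOnly variant of the workaround might look like this in the app module's build file (a sketch based on the module names above):

```groovy
// app/build.gradle — sketch of the compileOnly workaround
dependencies {
    implementation project(':test-a')
    // visible to javac/kapt at compile time only, so Dagger can resolve
    // ModuleB; at runtime ModuleB's classes still arrive transitively
    // through test-a's own implementation dependency
    compileOnly project(':test-b')
}
```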

Looking at the original example from Jake:

public final class A {
  @Inject A() {}
}

@Component
public interface ComponentA {
  A a();
}
public final class B {
  @Inject B(A a) {}
}

@Component(dependencies = ComponentA.class)
public interface ComponentB {
  B b();
}

ComponentB references ComponentA with an annotation, which makes ComponentA part of its public signature. Even if it didn't reference ComponentA, it is referencing B and B has A in its public constructor signature. So using an implementation dependency is not right in this case, it should be an api dependency. All the other examples in this thread have the same issue. See also the user guide chapter on how to recognize api and implementation usage of a dependency.

@ronshapiro already provided a workaround of having an interface for your component and only making that interface part of your API while keeping the implementation separate. I don't see any other way around this. It's a price you pay for having everything statically analysed and generated, which requires having everything on the public signatures instead of making it an implementation detail (like you would inside a Guice module).

@oehme can you check if this makes sense in #1671

I have something similar there but it is a 3rd party dependency that never exposed in the public API.

That seems like a different issue if I understand it correctly - The class in question is completely hidden and Dagger doesn't need to know about it at all, but still fails. I think that's a case that should work.

ComponentB references ComponentA with an annotation, which makes ComponentA part of its public signature. Even if it didn't reference ComponentA, it is referencing B and B has A in its public constructor signature. So using an implementation dependency is not right in this case, it should be an api dependency.

The question clearly states that test-c isn't meant to see test-a, so why should test-b expose class A to test-c by adding an api dependency on test-a in its Gradle build file?

The question clearly states that test-c isn't meant to see test-a

But it does see it, because it's exposed on B's signatures (both constructor and annotations). You as a human might not consider the constructor part of the API (because it's not really meant to be called by others), but a machine can't know that. A type is either exposed or not. There is no "partly exposed". You'd have to hide it behind an interface to solve that. This is the workaround that Ron already pointed out.
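This can be seen without Dagger at all. The following minimal Java sketch (where A and B are hypothetical stand-ins for the classes in the original example) shows that B's constructor signature carries A, so any tool resolving B must also be able to resolve A:

```java
// A and B are stand-ins for the classes from the original example;
// no Dagger involved. This only demonstrates that B's constructor
// signature references A, so resolving B forces resolving A.
public class SignatureExposure {
    static final class A {}
    static final class B {
        B(A a) {}
    }

    public static void main(String[] args) {
        // B declares exactly one constructor, B(A), so A is part of
        // B's class-file signature even though callers never use it.
        Class<?>[] params =
            B.class.getDeclaredConstructors()[0].getParameterTypes();
        System.out.println("B's constructor parameter: "
            + params[0].getSimpleName());
    }
}
```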

Here's an updated workaround for Android.

It works for both apt and kapt and properly wires task dependencies.
Tested with AGP 3.5 and Kotlin 1.3.61.

fun Project.aptRuntime2CompileClasspath() = afterEvaluate {

  @Suppress("UNCHECKED_CAST")
  val variants: DomainObjectSet<BaseVariant> = when (val android = the<BaseExtension>()) {
    is AppExtension     -> android.applicationVariants
    is LibraryExtension -> android.libraryVariants
    else                -> error("Unrecognized android extension $android")
  } as DomainObjectSet<BaseVariant>

  for (variant in variants) {
    val compileJavaWithJavac = variant.javaCompileProvider
    val runtimeClasspath = variant.runtimeConfiguration
    /**
     * jar inside intermediates/runtime_library_classes,
     * which is produced by running bundleLibRuntime${targetFlavor}
     */
    val runtimeClasspathJars = runtimeClasspath.copyRecursive().apply {
      val attributeArtifactType = AndroidArtifacts.ARTIFACT_TYPE
      val runtimeClasspathArtifact = "android-classes"
      attributes.attribute(attributeArtifactType, runtimeClasspathArtifact)
    }
    val runtimeClasspathJarsTasks: TaskDependency = runtimeClasspathJars.buildDependencies
    val runtimeClasspathJarsFiles = runtimeClasspathJars.fileCollection { true }
    //javac apt
    compileJavaWithJavac.configure {
      dependsOn(runtimeClasspathJarsTasks)
      /** classpath supplement */
      doFirst {
        classpath = classpath.plus(runtimeClasspathJarsFiles)
      }
    }
    //kotlin kapt
    val variantKaptTaskName = "kapt${variant.name.capitalize()}Kotlin"
    if (variantKaptTaskName !in tasks.names) continue
    tasks.named(variantKaptTaskName).configure {
      dependsOn(runtimeClasspathJarsTasks)
      /** classpath supplement */
      val kaptTask = this
      val kotlinCompileTask: AbstractCompile = javaClass.getField("kotlinCompileTask").run {
        isAccessible = true
        get(kaptTask)
      } as AbstractCompile
      kaptTask.doFirst {
        kotlinCompileTask.classpath = kotlinCompileTask.classpath.plus(runtimeClasspathJarsFiles)
      }
    }
  }
}

It adds the module's runtime classpath to the compile classpath seen by the annotation processor.

@Dkhusainov thanks for sharing this. It looks really detailed and well put together. Considering that it is a complex workaround, I'd like to ask about the overhead it adds. Have you been using it without problems? Does it break Gradle's lazy configuration?

No problems so far.

Overhead and performance depend on your project setup. In theory: if you have only a few root modules that generate @Components (the only place where aptRuntime2CompileClasspath needs to be applied), and a tree of jvm/aar modules that you switched back to implementation (after being forced to use api for Dagger), the overall build time of the whole project should improve slightly thanks to compilation avoidance and fewer compile dependencies for each module.

@Dkhusainov Thanks for that. Can we run it somehow from Groovy?

@Dkhusainov It seems like it no longer works with AGP 4.0 and Kotlin 1.3.72.

Talking this over with @Chang-Eric, I think we're willing to add a flag to disable the validation.

It's kind of low on our priority list right now, though, since there are already a couple of workarounds on this issue.

The workarounds are scary. A flag would be ideal, because it allows you to easily change the behavior between CI and local builds when needed.

Please add a flag!

We also really need this flag in our project, so please add the flag!

