Composition over inheritance: Gradle vs Maven

01 December 2021

Tags: gradle maven composition inheritance


In general, when I read comments about Maven vs Gradle, I realize that people focus on the cosmetics (XML vs Groovy/Kotlin), when from my point of view that is the least interesting aspect of the comparison. In this article, I want to focus on one particular aspect which differentiates the two build tools: the famous composition over inheritance paradigm. In several areas (POM files, lifecycle), Apache Maven uses inheritance, while Gradle uses composition. It is a particularly important difference which completely changes the way we think about building software.

Inheritance in Maven builds

A typical Maven project is built with a pom.xml file, which declares everything the module needs:

  • the dependencies

  • the build plugins and their configuration

Very quickly, it turns out that there are common things that you want to share between modules:

  • they would use the same compiler options

  • they would use the same plugins and configuration

  • they would apply a number of common dependencies

  • etc.

Let’s imagine that we have a project which consists of 3 modules:

  • a library module, pure Java

  • an application module which uses the library and the Micronaut Framework

  • a documentation module which provides a user manual for the application using Asciidoctor

The idiomatic way to solve the problem of sharing the configuration of the library and application modules (which are both Java) in Maven is to define a so-called "parent POM" which declares all of these common things, for example:

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.mycompany</groupId>
    <artifactId>common-config</artifactId>
    <version>1.0.0</version>
    <packaging>pom</packaging>
    <name>Common Config</name>

    <!-- shared compiler options, plugins and dependencies would go here -->
    <properties>
        <maven.compiler.source>11</maven.compiler.source>
        <maven.compiler.target>11</maven.compiler.target>
    </properties>
</project>
To simplify things, we could call this a "convention": by convention, all modules which will use this parent POM will apply all those plugins and dependencies (note, there are subtleties if you use <pluginManagement> or <dependencyManagement>).
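To illustrate those subtleties, here is a sketch (coordinates are illustrative): a dependency declared under <dependencyManagement> only pins a version for children which explicitly declare that dependency, while one declared under <dependencies> is inherited by every child, whether it needs it or not.

```xml
<!-- fragment of a parent POM; coordinates are illustrative -->
<dependencyManagement>
    <dependencies>
        <!-- only pins the version: children still have to declare the dependency -->
        <dependency>
            <groupId>org.apache.commons</groupId>
            <artifactId>commons-lang3</artifactId>
            <version>3.12.0</version>
        </dependency>
    </dependencies>
</dependencyManagement>
<dependencies>
    <!-- forced onto every child module -->
    <dependency>
        <groupId>org.slf4j</groupId>
        <artifactId>slf4j-api</artifactId>
        <version>1.7.36</version>
    </dependency>
</dependencies>
```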

A "child POM" like our application pom only has to declare the parent to "inherit" from it:

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>com.mycompany</groupId>
        <artifactId>common-config</artifactId>
        <version>1.0.0</version>
    </parent>
    <artifactId>application</artifactId>
</project>
This model works really well when all modules have a lot in common. The inheritance model also makes it simple to override things (child values override parent values). In the example above, we don’t have to specify the groupId and version of our module because it will be inherited from the parent.

However, this model comes with a number of drawbacks:

  • as soon as different modules share different sets of dependencies, or use different sets of plugins, you have to create different parents and an inheritance model between parents. Unfortunately this is the case here, since only our library and application modules have something in common. It won’t be a surprise for many that you have to exclude dependencies just because they came through parent POMs…

  • you can only have a single parent, meaning that you cannot inherit from a framework parent POM and from your own conventions.

  • it’s not great for performance, because you end up configuring a lot of things which will never be necessary for your particular "child" module.

  • overriding values is sometimes much more complicated and you have to start relying on obscure syntaxes like combine.children="append" (see this excellent blog post for details).

Those limitations are quickly reached when you are using a framework like Micronaut or Spring Boot. Because those frameworks are built with developer productivity in mind, they come with their own "parent POMs" which make the lives of developers easier by avoiding the copy and paste of hundreds of lines of XML. They also need to provide this parent POM because they come with their own Maven plugin, which works around the limitations of the lifecycle model.

But then, we have a problem: on one side, you have this "parent POM" which is provided by the framework, and on the other side, you have your own "parent POM" which is providing, say, the company-specific conventions (like checkstyle configuration, coordinates of Maven repositories for publication, etc.).

In order to be able to use both conventions, you have to create a new parent POM, and you have no choice but to make your company convention parent POM inherit from the framework POM: obviously you can’t change the framework POM itself! This is problematic, because it means that for every release of the framework, you have to update your company convention parent POM. It is also problematic for another reason: not all the modules of your multi-project build are "Spring Boot" or "Micronaut" applications. Some of them may be simple Java libraries which are used by your app, but do not require the framework. As a consequence, you have to create multiple parents, and duplicate the configuration in each of those POM files.
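A sketch of what this chaining looks like (coordinates are illustrative; the single parent slot is consumed by the framework POM, so the company conventions must be duplicated in a second, framework-free parent for plain library modules):

```xml
<!-- company-parent/pom.xml: illustrative sketch -->
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <!-- the one and only parent slot is taken by the framework -->
    <parent>
        <groupId>io.micronaut</groupId>
        <artifactId>micronaut-parent</artifactId>
        <version>3.2.0</version>
    </parent>
    <groupId>com.mycompany</groupId>
    <artifactId>company-parent</artifactId>
    <version>1.0.0</version>
    <packaging>pom</packaging>
    <!-- company conventions (checkstyle, publication repositories, …) go here,
         and must be copied into a separate parent for non-framework modules -->
</project>
```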

This inheritance problem surfaces in different places in Maven. Another one is, as I mentioned, the "lifecycle", which works in phases. Basically, in Maven everything is executed linearly: if you want to run install, then you have to execute everything which comes before that phase, which includes, for example, test. This may sound reasonable, but the model quickly falls apart: it is no surprise that every single plugin has to implement its own -DskipTests-like flag, in order to avoid doing work which shouldn’t be done. I had an interesting use case when implementing the GraalVM native Maven plugin, which requires configuring the Surefire plugin to pass extra arguments. Long story short: this isn’t possible with Maven. Consequence: the only workaround is the multiplication of Maven profiles, which a user has to understand, maintain, and remember.

Composition in Gradle builds

Gradle builds use a very different model: composition. In a nutshell, in a Gradle project you don’t explain how to build, but what you build: that is, you would say "this is a library", or "this is a CLI application" or "this is a documentation module". Because a library exposes an API and an application doesn’t, those are different things, so their conventions, and capabilities, are different.

The way you "say" this in a Gradle build is by applying plugins.

A typical Java library would apply the java-library plugin, while an application would apply the application plugin, and a documentation project would apply, say, the asciidoctor plugin. What do a Java library project and a documentation project have in common? Almost nothing. A Java library has Java sources, a number of dependencies, code quality plugins applied, etc. The documentation module, on its side, is a set of markdown or asciidoc files, and resources. The layout of the projects is different, the conventions are different, and the sets of plugins are different. Java projects may share the same conventions for source layout, but those are obviously different for the docs. In addition, there’s no reason to let the user declare "implementation" dependencies on the documentation project: it doesn’t make sense, so it should be an error to do so.

On the other hand all those modules may share a number of things:

  • they are all published to a Maven repository

  • they need to use the same Java toolchain

  • they need to comply with the security policies of your company

The way Gradle solves this problem is by composing plugins:

  • a plugin can "apply" another plugin

  • each plugin is guaranteed to be applied only once, even if several plugins use it

  • a plugin can "react" to the application of other plugins, allowing fine-grained customizations
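As a sketch of how such composition looks in practice, a precompiled script plugin (for example in buildSrc) can itself apply other plugins and set company-wide defaults; the plugin names and toolchain version below are illustrative:

```groovy
// buildSrc/src/main/groovy/com.mycompany.conventions.gradle
// Illustrative sketch of a convention plugin composing other plugins.
plugins {
    id 'java'        // applied at most once, even if other plugins also apply it
    id 'checkstyle'  // company code quality rules
}

// company-wide defaults
java {
    toolchain {
        languageVersion = JavaLanguageVersion.of(17)
    }
}
```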

So in the example above, the application use case can easily be solved: first, you’d have your own "convention plugin" which defines your company conventions (e.g. apply the checkstyle plugin with a number of rules). Then, you’d have the Micronaut application plugin, which is already written for you. Finally, your application module would simply apply both plugins:

plugins {
    id 'com.mycompany.conventions' version '1.0.0'
    id 'io.micronaut.application' version '3.0.0'
}

micronaut {
    version '3.2.0'
}
What becomes more interesting is that you can (and you actually should) create your own "component types" which apply a number of plugins. In the example above, we could replace the use of the 2 plugins with a single one:

plugins {
    id 'com.mycompany.micronaut-application' version '3.0.0'
}
Note how we moved the configuration of the micronaut version to our convention plugin. I’m not going to explain how to write a custom Gradle plugin in this blog post, but the code of this plugin would very much look like this:

plugins {
    id 'com.mycompany.conventions' version '1.0.0'
    id 'io.micronaut.application' version '3.0.0'
}

micronaut {
    version '3.2.0'
}
Does it look familiar? Yes it does, this is exactly what we had in the beginning: composition is slowly happening! I encourage you to take a look at this documentation for further details about writing your own convention plugins.

Interestingly, as I said, Gradle plugins are allowed to react to the presence of other plugins. This makes it particularly neat for dynamically defining more tasks depending on the context. For example, a plugin can do:

pluginManager.withPlugin('io.micronaut.application') {
    // configure the Micronaut application plugin
}
pluginManager.withPlugin('io.micronaut.library') {
    // configure the Micronaut library plugin
}
pluginManager.withPlugin('org.springframework.boot') {
    // configure the Spring Boot plugin
}
This is very resilient to the fact that plugins may be applied in any order, and they can combine with each other to provide higher-level constructs. It also makes it possible to give users a choice: you can provide a single convention plugin which knows what to do if the user prefers Spring Boot over Micronaut.

In the end, com.mycompany.micronaut-application is defined as a combination of the com.mycompany.conventions and io.micronaut.application plugins. Instead of declaring how to build your company application, you simply described what it is.

This only touches the surface of the Gradle world, but when I read that Gradle is "just Ant on steroids", nothing could be more wrong. Gradle in this case is much superior, because it focuses on convention over configuration, while providing better constructs for it than Maven does.

But let’s come back to our multi-project example: each of the modules would apply a different convention plugin (which is also why it’s important that the allprojects pattern dies):

  • library would apply the com.mycompany.library plugin

  • application would apply the com.mycompany.application plugin

  • docs would apply the com.mycompany.docs plugin

The com.mycompany.library plugin would, for example, apply the java-library and com.mycompany.conventions plugins. The com.mycompany.application plugin would, for example, apply the io.micronaut.application and com.mycompany.conventions plugins (knowing that the io.micronaut.application plugin itself applies the application plugin and more, such as the GraalVM plugin). The com.mycompany.docs plugin would, for example, apply the org.asciidoctor.jvm.convert and com.mycompany.conventions plugins.

You’ll notice how those actually combine together, making it easier to maintain and upgrade builds: should you change the company conventions, all you have to do is release a new version of the convention plugin.


In this post, I have explained a major difference in how Maven and Gradle envision build configuration. While both of them are designed with convention over configuration in mind, the inheritance model of Maven makes it difficult to build conventions on top of each other without duplication. Gradle, on the other hand, uses a composition model which makes it possible to design your own conventions while being aware of other plugins applied by the user: Gradle builds are more flexible and more maintainable.

As a complement, you might be interested in the posts below.


Multi-repository development made easy

04 November 2021

Tags: gradle multirepo development micronaut

Are you working in a multi-repository setup?

In general, things start getting messy as soon as you have a feature which requires changes to more than one repository. For example, you may have a core repository, and a module repository, and the feature that you’re working on for module requires API changes in core.

If so, it’s likely that you’ve been annoyed by the fact that to be able to test the changes to module, you minimally had to publish a local snapshot of core to your local Maven repository. While this can kind of work locally, it’s easy to forget to publish from time to time, and therefore to think that a change works when it actually relies on an outdated dependency.

Things get more complicated as soon as CI is involved, or when you want to share the results of work in progress with your colleagues, for example for review:

  • did you ever have to explain that they had to check out core/some-branch, publish it to Maven local, then check out module/some-feature-branch and test it?

  • did you ever realize too late that you forgot to push changes to master so that they could try?

  • did you ever complain that to make this happen on CI, you actually had to eagerly merge your feature branch to core, just so that the other repository, on a feature branch, could see it?

  • did you ever want to see if your modules simply do not break with the latest master, without having to change anything in your build scripts?

If you answered yes to any of those questions, then I’m glad to say there’s a solution!

The underlying problem is that using Maven SNAPSHOTs to deal with multi-repository development is not good enough. It cannot model the complexity of multi-repository development, with features being developed concurrently on different branches. Using SNAPSHOTs (binary dependencies) to coordinate projects leads to hard-to-diagnose bugs and broken integration processes. You typically have to eagerly push changes, or wait for snapshots to be published on a shared repository, just so that you can actually verify that the integration with other modules works. Those problems do not happen in a single-repository world, because all changes are integrated at once.

I faced this very same problem with Micronaut: I’m currently working on a feature which involves changes to multiple repositories at once.

That’s minimally 4 different projects, and a change to any of them is a pain to deal with. With my experience with Gradle, I knew there was a better way.

A plugin to make it easier!

Today, I’m happy to announce a new Gradle plugin which aims at making multi-repository development a breeze: Included Git repositories plugin.

This plugin lets you import Git repositories as source dependencies, without having to change your dependency declarations. What does that mean? In the example above, it means that I can explain, when I’m working on module, that it needs to build against core/some-branch: Gradle will then automatically checkout the project, build the branch and substitute any binary dependency corresponding to core with the source dependency.

In a nutshell, the configuration would look like this:

gitRepositories {
	include('core') {
		uri = 'https://github.com/mycompany/core.git' // hypothetical URL
		branch = 'some-branch'
	}
}
That’s it! No need to change your build scripts to update dependency coordinates, Gradle will do the magic!

It completely changes the way of thinking about multi-repository development, because CI, or colleagues, would not need special instructions to build your particular branch: everything is known upfront.

Of course, you’re going to tell me that, well, that’s cool, but it still requires you to push your changes to the remote repository even to test things locally. Well, a good multi-repository development story must integrate both the local and the remote experience. This is why this plugin actually makes it a breeze to support this pattern.

There are actually 2 ways to handle this. The first one is to tell Gradle that instead of checking out the sources, it can simply use a local copy. In this case, the plugin will simply ignore whatever you declared in the gitRepositories block for that repository, and use whatever is available locally. For this you’d set a local.git.<repoName> Gradle property (in your gradle.properties file) pointing to your local copy. In the example above, I would for example add a local.git.core property pointing to my local copy of core.
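For example, the property would look like this (the path is illustrative):

```properties
# gradle.properties: use my local checkout of 'core' instead of cloning it
local.git.core=/home/me/checkouts/core
```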

Alternatively, if you keep things organized into checkout directories like I do, it’s likely that you have all your Micronaut-related projects in a single micronaut-projects directory. In this case, by setting the auto.include.git.dirs Gradle property to the micronaut-projects directory, the plugin will automatically map the directory names found in that micronaut-projects directory to included Git repository names. So if I have:

gitRepositories {
	include('micronaut-core') {
		uri = 'https://github.com/micronaut-projects/micronaut-core.git' // hypothetical URL
		branch = 'some-branch'
	}
}
and a micronaut-core directory under micronaut-projects, then it will automatically be used instead of being cloned from the remote.
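Assuming the checkouts live under /home/me/micronaut-projects (path illustrative), the property would look like this:

```properties
# gradle.properties: map directory names under this directory
# to included Git repository names
auto.include.git.dirs=/home/me/micronaut-projects
```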

Those options make it extremely convenient to develop locally, and only push changes when ready. On CI, builds would checkout the dependents automatically, and you’d have nothing to configure.

More complex use cases

The very same mechanism can be used to create "integration" builds on CI. For example, it makes it very simple to have builds which automatically build against the latest state of master, instead of having to wait for a SNAPSHOT to be published, and, more importantly, without having to change any build file. As a bonus, it also works for transitive dependencies: for example, if you have A → B → C (A depends on B, which depends on C), then you may want to make sure that if C is changed, A still works. How do you do this with snapshots, if there’s no direct dependency between A and C? This plugin makes it very simple to test: just declare a Git repository for C and you’re done!

I need your help!

I think this plugin has the potential to dramatically change how we develop in the multi-repository world. The plugin is in very early stages, and I will need your help: reporting bugs, improving the documentation, improving testing, etc. It will also be interesting to get your user stories so that we, collectively, can improve it to support more scenarios.


A Gradle quickie: properly using dependsOn

06 October 2021

Tags: gradle micronaut

Today I’d like to share a small example of what not to do with Gradle. Some of you may already know that I recently joined the Micronaut team at Oracle, and part of my job is to improve the build experience, be it for Micronaut itself or Micronaut users. Today I’m going to focus on an example I found in the Micronaut build itself.

TL/DR: If you use dependsOn, you’re likely doing it wrong.

When should you use dependsOn?

In a nutshell, Gradle works by computing a graph of task dependencies. Say that you want to build a JAR file: you’re going to call the jar task, and Gradle is going to determine that to build the jar, it needs to compile the classes, process the resources, etc… Determining the task dependencies, that is to say what other tasks need to be executed, is done by looking at 3 different things:

  1. the task dependsOn dependencies. For example, assemble.dependsOn(jar) means that if you run assemble, then the jar task must be executed first

  2. the task transitive dependencies, in which case we’re not talking about tasks, but "publications". For example, when you need to compile project A, you need project B on the classpath, which implies running some tasks of B.

  3. and last but not least, the task inputs, that is to say, what it needs to execute its work

In practice, it’s worth noting that 2. is a subset of 3. but I added it for clarity.

Now let’s look at this snippet:

task docFilesJar(type: Jar, description: 'Package up files used for generating documentation.') {
    archiveVersion = null
    archiveFileName = "grails-doc-files.jar"
    from "src/main/template"
    doLast {
        copy {
            from docFilesJar.archivePath
            into "${buildDir}/classes/groovy/main"
        }
    }
}

jar.dependsOn docFilesJar

First, let’s realize that this snippet is years old. I mean, really old, copied from Grails, which was using early releases of Gradle. Yet, there’s something interesting in what it does, which is a typical mistake I see in all the builds I modernize.

It’s tempting, especially when you’re not used to Gradle, to think the same way as other build tools do, like Maven or Ant. You’re thinking "there’s a task, jar, which basically packages everything it finds in classes/groovy/main, so if I want to add more stuff to the jar task, let’s put more stuff in classes/groovy/main".

This is wrong!

This is wrong for different reasons, most notably:

  • when the docFilesJar task is executed, it will contribute more files to the "classes" directory, but, wait, those are not classes that we’re putting in there, right? It’s just a jar, resources. Shouldn’t we use resources/groovy/main instead? Or is it classes/groovy/resources? Or what? Well, you shouldn’t care, because it’s not your concern where the Java compile task puts its output!

  • it breaks cacheability: Gradle has a build cache, and multiple tasks contributing to the same output directory is the typical example of what would break caching. In fact, it breaks all kinds of up-to-date checking, that is to say the ability for Gradle to understand that it doesn’t need to execute a task when nothing changed.

  • it’s opaque to Gradle: the code above executes a copy in a doLast block. Nothing tells Gradle that the "classes" have additional output.

  • imagine another task which needs the classes only. Depending on when it executes, it may, or may not, include the docFilesJar that it doesn’t care about. This makes builds non-reproducible (note that this is exactly the reason why Maven builds cannot be trusted and why you need to run clean: any "goal" can write to any directory at any time, making it impossible to infer who contributed what).

  • it requires declaring an explicit dependency between the jar task and the docFilesJar task, to make sure that if we execute jar, our "docs jar" file is present

  • it doesn’t tell why there’s a dependency: is it because you want to order things, or is it because you require an artifact produced by the dependent task? Something else?

  • it’s easy to forget about those: because you may run build often, you might think that your build works, because jar is part of the task graph, and, by accident, docFilesJar is executed before it

  • it creates accidental extra work: most often a dependsOn will trigger too much work. Gradle is a smart build tool which can compute precisely what it needs to execute for each specific task. By using dependsOn, you’re using a hammer, forcing Gradle to integrate something into the graph which wasn’t necessarily needed. In short: you’re doing too much work.

  • it’s difficult to get rid of them: when you see a dependsOn, because it doesn’t tell why it’s needed, it’s often hard to get rid of such dependencies when optimizing builds

Use implicit dependencies instead!

The answer to our problem is actually simpler to reason about: reverse the logic. Instead of thinking "where should I put those things so that it’s picked up by jar", think "let’s tell the jar task that it also needs to pick up my resources".

All in all, it’s about properly declaring your task inputs.

Instead of patching up the output of another task (seriously, forget about this!), every single task must be thought of as a function which takes inputs and produces an output: it’s isolated. So, what are the inputs of our docFilesJar task? The resources we want to package. What are its outputs? The jar itself. There’s nothing about where we should put the jar; we let Gradle pick a reasonable place for us.

Then what are the inputs of the jar task itself? Well, it’s its regular inputs plus our jar. It’s easier to reason about, and as a bonus, it’s even shorter to write!

So let’s rewrite the code above to:

task docFilesJar(type: Jar, description: 'Package up files used for generating documentation.') {
    archiveVersion = null
    archiveFileName = "grails-doc-files.jar"
    from "src/main/template"
}

jar {
    from docFilesJar
}

Can you spot the difference? We got rid of the copy in the docFilesJar task; we don’t want to do this. What we want, instead, is to say "when you build the jar, also pick up this docFilesJar". And that’s what we’re doing by writing from docFilesJar. Gradle is smart enough to know that when it needs to execute the jar task, it first needs to build docFilesJar.

There are several advantages to this:

  • the dependency becomes implicit: if we don’t want to include the jar anymore, we just have to remove it from the specification of the inputs.

  • it doesn’t pollute the outputs of other tasks

  • you can execute the docFilesJar task independently of jar

All in all, it’s about isolating things from each other and reducing the risks of breaking a build accidentally!

All things lazy!

The modified code isn’t 2021 compliant. The code above works, but it has one drawback: the docFilesJar and jar tasks are going to be configured (instantiated) even if we call something that doesn’t need them. For example, imagine that you call gradle compileJava: there’s no reason to configure the jar tasks in that case because we won’t execute them.

For this purpose, to make builds faster, Gradle provides a lazy API instead:

tasks.register('docFilesJar', Jar) {
    description = 'Package up files used for generating documentation.'
    archiveVersion = null
    archiveFileName = "grails-doc-files.jar"
    from "src/main/template"
}

tasks.named('jar', Jar) {
    from docFilesJar
}


As a conclusion:

  • avoid using explicit dependsOn as much as you can

  • I tend to say that the only reasonable use case for dependsOn is lifecycle tasks (lifecycle tasks are tasks whose only goal is to "organize the build", for example build, assemble, check: they don’t do anything by themselves, they just bind a number of dependents together)

  • if you find use cases which are not lifecycle tasks and cannot be expressed by implicit task dependencies (e.g. declaring inputs instead of using dependsOn), then report it to the Gradle team


Frequently asked questions about version catalogs

11 April 2021

Tags: gradle catalog convenience

Version catalogs FAQ

Can I use a version catalog to declare plugin versions?

No. The initial implementation of version catalogs had, in TOML files, a dedicated section for plugins.
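For reference, that dedicated section looked roughly like this (a reconstruction based on the plugin used in the example below; the exact syntax may have differed):

```toml
[plugins]
jmh = { id = "me.champeau.jmh", version = "0.6.3" }
```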


However, after community feedback, and for consistency reasons, we removed this feature from the initial release. This means that, currently, you have to use the pluginManagement section of the settings file to deal with your plugin versions, and that, in particular, you cannot use the TOML file to declare plugin versions:

pluginManagement {
    plugins {
        id("me.champeau.jmh") version("0.6.3")
    }
}
It may look surprising that you can’t use version(libs.plugins.jmh) for example in the pluginManagement block, but it’s a chicken and egg problem: the pluginManagement block has to be evaluated before the catalogs are defined, because settings plugins may contribute more catalogs or enhance the existing catalogs. Therefore, the libs extension doesn’t exist when this block is evaluated.

The limitation of not being able to deal with plugin versions in catalogs will be lifted in one way or another in the future.

Can I use the version catalog in buildSrc?

Yes you can. Not only in buildSrc, but basically in any included build too. You have several options, but the easiest is to include the TOML catalog in your buildSrc/settings.gradle(.kts) file:

dependencyResolutionManagement {
    versionCatalogs {
        libs {
            from(files("../gradle/libs.versions.toml"))
        }
    }
}
But how can I use the catalog in plugins defined in buildSrc?

The solution above lets you use the catalogs in the build scripts of buildSrc itself, but what if you want to use the catalog(s) in the plugins that buildSrc defines, or in precompiled script plugins? Long story short: currently, you can only do it using a type-unsafe API.

First you need to access the version catalogs extension from your plugin/build script, for example in Groovy:

def catalogs = project.extensions.getByType(VersionCatalogsExtension)

or in Kotlin:

val catalogs = extensions.getByType<VersionCatalogsExtension>()

then you can access the version catalogs in your script, for example writing:

pluginManager.withPlugin("java") {
    val libs = catalogs.named("libs")
    dependencies.addProvider("implementation", libs.findDependency("lib").get())
}
Note that this API doesn’t provide any static accessor but is nevertheless safe, using the Optional API. There’s a reason why you cannot access type-safe accessors in plugins/precompiled script plugins; you will find more details in this issue. In a nutshell, that’s because buildSrc plugins (precompiled or not) are plugins which can be applied to any kind of project, and we don’t know what the target project catalogs will be: there’s no inherent reason why they would be the same. In the future, we will probably provide a way to say that, at your own risk, you expect the target catalog model to be the same.

Can I use version catalogs in production code?

No, you can’t. Version catalogs are only accessible to build scripts/plugins, not your production code.

Should I use a platform or a catalog?

You should probably use both, look at our docs for a complete explanation.

Why did you choose TOML and not YAML?

or XML (or pick your favorite format). The rationale is described in the design document.

My IDE is red everywhere, MISSING_DEPENDENCY_CLASS error

If you are seeing this error:

(screenshot of the MISSING_DEPENDENCY_CLASS error in IntelliJ IDEA)

upgrade to the latest IntelliJ IDEA 2021.1, which fixes this problem.

Why can’t I have nested aliases with the same prefix?

Imagine that you want to have 2 aliases, say junit and junit-jupiter, and that both represent distinct dependencies: Gradle won’t let you do this and you will have to rename your aliases to, say, junit-core and junit-jupiter. That’s because Gradle maps those aliases to accessors, that is to say libs.getJunit() and libs.getJunit().getJupiter(). The problem is that you can’t have an accessor which is both a leaf (represents a dependency notation) and a node (that is to say an intermediate node to access a real dependency). The reason we can’t do this is that we’re using lazy accessors of type Provider<MinimalExternalModuleDependency> for leaves, and that type cannot be extended to provide accessors for "children" dependencies. In other words, the type which represents a node with children provides accessors which return Provider<...> for dependencies, but a provider itself cannot have children. A potential workaround for this would be to support, in the future, an explicit call to say "I’m stopping here, that’s the dependency I need", for example:

dependencies {
    testImplementation(libs.junit.peek()) // because `get()` might be confusing as it would return a `Provider` on which you can call `get()` itself
}

For now, the team has decided to restrict what you can do by preventing aliases which have such "name clashes".
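With the renamed aliases, the TOML file would look like this (module coordinates and versions are illustrative):

```toml
[libraries]
junit-core = { module = "junit:junit", version = "4.13.2" }
junit-jupiter = { module = "org.junit.jupiter:junit-jupiter", version = "5.8.2" }
```

This maps to the accessors libs.junit.core and libs.junit.jupiter, so no accessor is both a leaf and a node.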

Why can’t I use an alias with dots directly?

You will have noticed that if you declare an alias like this:

junit-jupiter = "..."

then Gradle will generate the following accessor: libs.junit.jupiter (basically the dashes are transformed to dots). The question is, why can’t we just write:

junit.jupiter = "..."

And the reason is: tooling support. The previous declaration is actually equivalent to writing:

[libraries.junit]
jupiter = "..."

but technically, it’s undecidable where the "nesting hierarchy" stops, which would prevent tools from providing good completion (for example, telling you where you can use { module = "..." }). It also makes it harder for tooling to automatically patch the file, since it wouldn’t know where to look.

As a consequence, we’ve decided to keep the format simple and implement this mapping strategy.

Should I use commons-lang3 as an alias or commonsLang3?

Probably neither one nor the other :) By choosing commons-lang3, you’re implicitly creating a group of dependencies called commons, which will include a number of dependencies, including lang3. The question then is: does that commons group make sense? It’s rather abstract, no? Does it actually say it’s "Apache Commons"?

A better solution would therefore be to use commonsLang3 as the alias, but then you’d realize that you have encoded a version in the alias name, so why not commonsLang directly?


commonsLang = { module = "org.apache.commons:commons-lang3", version = "3.3.1" }

This means that dashes should be limited to the grouping of dependencies, so that they are organized in "folders". This can be practical when you have lots of dependencies, but it also makes them less discoverable via completion, since you’d have to know in which subtree to look. Proper guidance on what to use will be discussed later, based on your feedback and practices.
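For example, a dash-based grouping such as the following (coordinates and versions are illustrative):

```toml
[libraries]
jackson-core = { module = "com.fasterxml.jackson.core:jackson-core", version = "2.13.0" }
jackson-databind = { module = "com.fasterxml.jackson.core:jackson-databind", version = "2.13.0" }
```

yields the accessors libs.jackson.core and libs.jackson.databind, both organized under a jackson "folder".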

Should I use the settings API or the TOML file?

Gradle comes with both a settings API to declare the catalog, and a convenience TOML file. I would personally say that most people should only care about the TOML file, as it covers 80% of use cases. The settings API is great as soon as you want to implement settings plugins or, for example, if you want to use your own, existing format to declare a catalog, instead of the TOML format.
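For reference, declaring a catalog via the settings API looks roughly like this (a sketch; the exact builder methods have evolved across Gradle versions, and the alias and coordinates are illustrative):

```groovy
// settings.gradle: sketch of declaring a catalog programmatically
dependencyResolutionManagement {
    versionCatalogs {
        libs {
            // declare a version, then a library referencing it
            version('commonsLang', '3.12.0')
            library('commonsLang', 'org.apache.commons', 'commons-lang3').versionRef('commonsLang')
        }
    }
}
```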

Why can’t I use excludes or classifiers?

By design, version catalogs talk about dependency coordinates only. The choice of applying excludes is on the consumer side: for example, for a specific project, you might need to exclude a transitive dependency because you don’t use the code path which exercises this dependency, but this might not be the case for all places. Similarly, a classifier falls into the category of variant selectors (see the variant model): for the same dependency coordinates, one might want classifier X, another classifier Y, and it’s not necessarily allowed to have both in the same graph. Therefore, classifiers need to be declared on the dependency declaration site:

dependencies {
    implementation(variantOf(libs.myLib) { classifier('test-fixtures') })
}

The rationale behind this limitation is that the use of classifiers is an artifact of the poor pom.xml modeling, which doesn’t assign semantics to classifiers (we don’t know what they represent), contrary to Gradle Module Metadata. Therefore, a consumer should only care about the dependency coordinates, and the right variant (e.g. classifier) should be selected automatically by the dependency resolution engine. We want to encourage this model, rather than supporting ad hoc classifiers which will eventually require more work for all consumers.

How do I tell Gradle to use a specific artifact?

Similarly to classifiers or excludes, artifact selectors belong to the dependency declaration site. You need to write:

dependencies {
    implementation(libs.myLib) {
        artifact {
            name = 'my-lib' // note that ideally this will go away
            type = 'aar'
        }
    }
}
Where should I report bugs or feature requests?

As usual, on our issue tracker. There’s also the dedicated epic where you will find the initial specification linked, which explains a lot of the design process.

