Gradle quickie: laziness

24 May 2022

Tags: gradle laziness

Yesterday, I wrote this tweet:

[Image: screenshot of the tweet, 24 May 2022]

I got a surprisingly high number of answers, so I thought it would be a good idea to expand a bit on the topic.

Gradle introduced lazy APIs several years ago. Those APIs are mostly directed at plugin authors, but some build authors may have to deal with them too. Lazy APIs are designed to improve performance by avoiding the creation and configuration of tasks which are never invoked during a build. While lots of users wouldn't notice the difference between a build using lazy APIs and one which doesn't, in some ecosystems like Android, or in large projects, it makes a dramatic difference. In other words, while Gradle's performance is often praised, it's easy to break it by unintentionally triggering the configuration of tasks which shouldn't be configured.

Task configuration

The discussion was triggered when I was doing a code review yesterday. I saw the following block:

tasks.withType(Test) {
    testLogging {
        showStandardStreams = true
        exceptionFormat = 'full'
    }
}

This block configures logging for all test tasks of the project. At first glance, this seems appropriate, but there's a gotcha: you should use .configureEach:

tasks.withType(Test).configureEach {
    testLogging {
        showStandardStreams = true
        exceptionFormat = 'full'
    }
}

If you don’t, then all tasks of type Test will always be configured, even if you don’t call them in a build. In other words, lazy configuration is about only configuring tasks which are going to be invoked.

Unfortunately, there are no warnings about eager, or "unnecessary", configuration in a build. If you use Build Scans, you can get insights into configuration time and spot the problem, but casual users wouldn't.

Similarly, this code:

test {
    testLogging {
        showStandardStreams = true
        exceptionFormat = 'full'
    }
}

will configure the test task (not all test tasks) eagerly: even if the test task isn't executed in a build, it will be configured. Now you see the problem: this configuration pattern has been around basically forever, so it's hard to remove. To configure lazily, you have to write:

tasks.named('test') {
    testLogging {
        showStandardStreams = true
        exceptionFormat = 'full'
    }
}

Obviously, this isn’t as nice, DSL-wise. One thing you may wonder is why Gradle’s DSL default to the lazy version? In other words, why doesn’t it call the lazy version instead of the eager one?

It’s because of backwards compatiblity: because this pattern has been present since day one in Gradle, eager configuration is everywhere in older builds. If you search for configuration blocks in Stack Overflow, it’s very likely that you’ll end up copy and pasting eager configuration samples. But, as the name implies, lazy configuration has a different behavior than eager: in the lazy case, the configuration block is invoked only when the task is needed, either because it’s going to be executed, or that another task depends on its configuration to configure itself. In the eager case, configuration is executed immediately: unfortunately there are lots of builds which accidentally depend on this order of execution, so changing from eager to lazy could result in breaking changes!

What should you use?

The consequence is that there's a mix of lazy and eager APIs in Gradle, and telling apart what is going to trigger configuration and what isn't is not obvious, even for Gradle experts. Let's summarize a few patterns:

  • If you want to configure one particular task by name, you should write:

tasks.named("myTask") {
   // configure the task
}

or

tasks.named("myTask", SomeType) {
   // configure the task
}
  • If you want to configure all tasks of a particular type, you should write:

tasks.withType(SomeType).configureEach {
   // configure the task
}
  • If you want to create a new task, don’t use create, but register instead:

tasks.register("myTask", SomeType) {
    ...
}

In the DSL, the following code that you find in many tutorials would immediately create a task:

task hello {
   doLast {
       println "Hello!"
   }
}

So the correct way to do this is:

tasks.register("hello") {
    doLast {
         println "Hello!"
    }
}

Note that the return types of the two calls differ: the eager version returns a Task, while the second one returns a TaskProvider. This is one reason why upgrading plugins isn't trivial: it's a binary breaking change!
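Here's a quick sketch of the difference (the task names are made up):

def eager = tasks.create("eagerTask")   // returns a Task: created and configured immediately
def lazy = tasks.register("lazyTask")   // returns a TaskProvider: nothing is created yet

// a TaskProvider can be configured without realizing the task...
lazy.configure {
    description = "Applied only when the task is actually needed"
}
// ...but calling lazy.get() would force creation and configuration, so avoid it when you can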

Task collections and implicit dependencies

In a previous blog post I explained that the provider API is the right way to handle implicit inputs. For example, you can pass a TaskProvider directly as an element of a file collection: Gradle will automatically resolve the dependency, trigger the configuration of that task, include it in the task graph, and use its output as an input of the task you're invoking.
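As a sketch (task names and paths are made up), passing a TaskProvider to from wires both the file inputs and the task dependency:

def generate = tasks.register("generate", Copy) {
    from("src/templates")
    into(layout.buildDirectory.dir("generated"))
}

tasks.register("archive", Zip) {
    // implies a dependency on 'generate' and uses its outputs as inputs
    from(generate)
    destinationDirectory = layout.buildDirectory.dir("dist")
    archiveFileName = "generated.zip"
}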

Therefore, understanding lazy APIs means understanding when things are executed. In the earlier example, the call tasks.withType(Test) by itself does not configure anything. You can see it as a lazy predicate: it returns a live task collection, a declaration of intent which says "this models all tasks of type `Test`".

With that in mind, the following blocks of code are strictly equivalent:

tasks.withType(Test) {
   // configure
}

or

tasks.withType(Test).each {
    // configure
}

or

def testTasks = tasks.withType(Test)
testTasks.each {
    // configure
}

In other words, the last version explains the "magic" behind the traditional Gradle DSL. The first line is lazy and returns a task collection; it's the call to .each which triggers configuration of all tasks! Replace .each with .configureEach and you are now lazy!

Newer APIs like named are lazy from day one, but are not necessarily user friendly.

A Gradle puzzle

In effect, named is lazy in terms of configuration, but eager in terms of lookup: it fails if the task you're looking for doesn't exist. That's a bit odd, since in Gradle everything is now supposed to be lazy, so you can't know when a task is going to be available. As an illustration, let's explore the following script (don't write this in your own builds, this is for demonstration purposes!):

tasks.register("hello") {
   doLast {
       println "Hello,"
   }
}

tasks.named("hello") {
   doLast {
        println "World!"
   }
}

If you run gradle hello, then the output is what you expect:

> Task :hello
Hello,
World!

Now, invert the position of the two blocks:

tasks.named("hello") {
   doLast {
        println "World!"
   }
}

tasks.register("hello") {
   doLast {
       println "Hello,"
   }
}

and run again. Boom!

* Where:
Build file '/tmp/ouudfd/build.gradle' line: 1

* What went wrong:
A problem occurred evaluating root project 'ohnoes'.
> Task with name 'hello' not found in root project 'ohnoes'.

That is very unexpected: I think most people would expect, at most, that the World! and Hello, outputs would be swapped. But because named eagerly searches for a task registered with a particular name, it fails if none is found.

As a consequence, plugin authors who want to react to other plugins, or react to tasks which may or may not be present, tend to use the following API instead:

tasks.matching { it.name == 'hello' }.configureEach {
    doLast {
        println "World!"
   }
}

tasks.register("hello") {
   doLast {
       println "Hello,"
   }
}

Now let’s run our hello task:

> Task :hello
World!
Hello,

Yay! No failure anymore, and the output is in the order we expected. Problem solved, right?

Well, not so fast. You've used configureEach, so everything should be lazy, right? Sorry, nope: the matching API is an old, eager API! Actually, if you look at what the predicate operates on, it becomes obvious:

// T is a Task!
TaskCollection<T> matching(Spec<? super T> spec)

Because it works on Task instances, it needs to create and configure the tasks so that you can run an arbitrary predicate on them!
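You can verify this with a little sketch: even though we never execute the task, attaching configureEach to a matching collection forces its configuration:

tasks.register("hello") {
    println "hello was realized!"   // this is the configuration block
}

tasks.matching { it.name == 'hello' }.configureEach {
    // empty on purpose
}

Running gradle help prints "hello was realized!", because evaluating the predicate required the task instance.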

That’s why if you have to write things like this, you must guard calls to matching with a withType before, which will restrict the set of tasks which will be configured. For example:

tasks.withType(Greeter).matching { it.name == 'hello' }.configureEach {
   messages.add("World!")
}

tasks.register("hello", Greeter) {
   messages.add("Hello,")
}

Of course the example is a bit contrived, but it makes sense when you're not in control of when a task is configured, or when you don't even know whether it will ever exist.
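For completeness, here's a minimal sketch of what the hypothetical Greeter task type used above could look like (it's not a real Gradle type):

abstract class Greeter extends DefaultTask {
    @Input
    abstract ListProperty<String> getMessages()

    @TaskAction
    void greet() {
        messages.get().each { println it }
    }
}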

Unfortunately, Gradle doesn’t provide an API which is fully lazy and lenient to tasks being present or not. If you simply want to configure a task, that is not a big deal since you can simply use configureEach:

tasks.configureEach {
    if (it.name == 'hello') { ... }
}

This is fine because the configuration block will be called for each task being configured. However, this configureEach block is a configurer, not a predicate, so you can’t use it as an input to another task:

tasks.named("md5") {
    inputFiles.from(tasks.named("userguide"))
}

The code above would fail if the userguide task doesn’t exist before the md5 task is configured…​
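About the best you can do is guard the lookup, as in this sketch (it assumes the hypothetical md5 task declares an inputFiles property, and it remains ordering-sensitive, which is precisely the problem):

tasks.named("md5") {
    // only wire the input if 'userguide' is already registered at this point
    if (tasks.names.contains("userguide")) {
        inputFiles.from(tasks.named("userguide"))
    }
}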

Conclusion

In this blog post, I have explained why you should use the new lazy APIs instead of their eager counterparts. I have also shown that while they are more verbose, they make faster builds possible by avoiding the configuration of tasks which will not be executed. However, Gradle doesn't warn you when you eagerly configure tasks, and it's easy to shoot yourself in the foot. Some would blame the docs, some would blame the APIs.

As a former Gradler, I would blame neither: the docs are there, and changing the APIs to be lazy everywhere is either a binary breaking change (the return type of methods which create instead of register), or a behavior change (deferred vs immediate configuration). This makes it particularly complicated to upgrade builds without pissing off a number of users!


Astrophotography: see you on Twitch!

03 May 2022

Tags: astrophotographie twitch

We are not just developers!

My passion outside of development is astronomy. A few years ago, I took up astrophotography, and it's almost all I do now. For a developer like me, it's quite interesting to notice that when I post a photo I've taken on Twitter, it often gets more replies and likes than my professional tweets (which is sometimes vexing, lol!).

Live streams on Twitch

Edit: It's done! The experience was rewarding for me; you can find the replay on YouTube.

In the end, the same questions keep coming back:

  • What is it?

  • What equipment do you use?

  • How much does it magnify?

  • Are those the real colors?

  • How long is the exposure?

and many more!

So last year, I set myself the challenge of giving a talk at a developer conference (Devoxx) about the "miracles of software" in astrophotography. Unfortunately, the talk wasn't selected, but I kept the idea of presenting something in mind.

So, want to know how photos like this one are made?

Figure 1. The Flaming Star Nebula

Today, I'm announcing a first live stream on Twitch to talk about astrophotography! I say "first" because I think there's enough material for several before the topic is exhausted.

Fair warning: it will be unpretentious, and not as prepared as a conference talk. It will also be my very first live stream, so it will probably be full of technical problems, but you have to start somewhere!

So, mark the date: Thursday 12 May at 8pm, on my Twitch channel.

The first topic will be my photography setup: what equipment I use, and the basic principles of image acquisition. Note that it won't be a generic presentation about astrophotography, but one specific to my equipment, sprinkled with details about how things work in general.

Depending on its success, and/or on how much I manage to cover, more live streams will be scheduled.

See you soon!


Conditional dependencies with Gradle

21 March 2022

Tags: gradle dependencies

Introduction

If you ever wrote a Gradle plugin for a framework (e.g. Micronaut), or a plugin which needs to add dependencies when the user sets a particular flag, then it's likely that you've faced ordering issues.

For example, imagine that you have this DSL:

micronaut {
    useNetty = true
}

Obviously, at some point, you have to figure out whether the useNetty property is set, in order to transparently add dependencies. A naive solution is to use the good old afterEvaluate block, and many plugins do this:

afterEvaluate {
    dependencies {
        if (micronaut.useNetty.get()) {
            implementation("io.netty:netty-buffer:4.1.75.Final")
        }
    }
}

The problem is that while afterEvaluate seems to fix the problem, it's just a dirty workaround which defers the problem to a later stage: depending on which plugins are applied, and since those plugins may themselves use afterEvaluate, your block may, or may not, see the "final" configuration state.
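Here's a contrived sketch of the hazard, assuming useNetty is a Property<Boolean> on the extension: afterEvaluate blocks run in registration order, so a block reading the extension may run before another block is done mutating it:

afterEvaluate {
    // registered first, so it runs first: it still sees useNetty == false...
    println "useNetty is ${micronaut.useNetty.getOrElse(false)}"
}

afterEvaluate {
    // ...even though this block, registered later by another plugin, sets it to true
    micronaut.useNetty = true
}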

In a previous post, I introduced Gradle’s provider API. In this post, we’re going to show how to use it to properly fix this problem.

Using providers for dependencies

Let’s start with the easiest. It’s a common requirement of a plugin to provide the ability to override the version of a runtime. For example, the checkstyle plugin would, by default, use version of checkstyle by convention, but it would still let you override the version if you want to use a different one.

Micronaut provides a similar feature:

micronaut {
    version = "3.3.1"
}

The Micronaut dependencies to be added on the user classpath depend on the value of the version in our micronaut extension. Let’s see how we can implement this. Let’s create our Gradle project (we’re assuming that you have Gradle 7.4 installed):

$ mkdir conditional-deps && cd conditional-deps
$ gradle init --dsl groovy \
   --type java-library \
   --package me.champeau.demo \
   --incubating \
   --test-framework junit-jupiter

Now we’re going to create a folder for our build logic, which will contain our plugin sources:

$ mkdir -p build-logic/src/main/groovy/my/plugin

Let’s update the settings.gradle file to include that build logic:

settings.gradle
pluginManagement {
    // include our plugin
    includeBuild "build-logic"
}
rootProject.name = 'provider-dependencies'
include('lib')

For now our plugin is an empty shell, so let’s create its build.gradle file so that we can use a precompiled script plugin.

build-logic/build.gradle
plugins {
    id 'groovy-gradle-plugin'
}

Now let’s define our extension, which is simply about declaring an interface:

build-logic/src/main/groovy/my/plugin/MicronautExtension.groovy
package my.plugin

import org.gradle.api.provider.Property

interface MicronautExtension {
    Property<String> getVersion()
}

It’s now time to create our plugin: precompiled script plugins are a very easy way to create a plugin, simply by declaring a file in build-logic/src/main/groovy which name ends with .gradle:

build-logic/src/main/groovy/my.plugin.gradle
import my.plugin.MicronautExtension

def micronautExtension = extensions.create("micronaut", MicronautExtension) (1)
micronautExtension.version.convention("3.3.0")                              (2)
1 Create our extension, named "micronaut"
2 Assign a default value to the "version" property

By convention, our plugin id will be my.plugin (it’s derived from the file name). Our plugin is responsible for creating the extension, and it assigns a convention value to the version property: this is the value which is going to be used if the user doesn’t declare anything explicitly.

Then we can use the plugin in our main build, that is, in the lib project:

lib/build.gradle
plugins {
    // Apply the java-library plugin for API and implementation separation.
    id 'java-library'
    // And now apply our plugin
    id 'my.plugin'
}

micronaut {
   // empty for now
}

If we look at the lib compile classpath, it will not include any Micronaut dependency for now:

$ ./gradlew lib:dependencies --configuration compileClasspath

------------------------------------------------------------
Project ':lib'
------------------------------------------------------------

compileClasspath - Compile classpath for source set 'main'.
+--- org.apache.commons:commons-math3:3.6.1
\--- com.google.guava:guava:30.1.1-jre
     +--- com.google.guava:failureaccess:1.0.1
     +--- com.google.guava:listenablefuture:9999.0-empty-to-avoid-conflict-with-guava
     +--- com.google.code.findbugs:jsr305:3.0.2
     +--- org.checkerframework:checker-qual:3.8.0
     +--- com.google.errorprone:error_prone_annotations:2.5.1
     \--- com.google.j2objc:j2objc-annotations:1.3

Our goal is to add a dependency which is derived from the version defined in our Micronaut extension, so let’s do this. Edit our build-logic plugin:

build-logic/src/main/groovy/my.plugin.gradle
import my.plugin.MicronautExtension

def micronautExtension = extensions.create("micronaut", MicronautExtension)
micronautExtension.version.convention("3.3.0")

dependencies {
    implementation micronautExtension.version.map {
        v -> "io.micronaut:micronaut-core:$v"
    }
}

Now let’s run our dependencies report again:

$ ./gradlew lib:dependencies --configuration compileClasspath

> Task :lib:dependencies

------------------------------------------------------------
Project ':lib'
------------------------------------------------------------

compileClasspath - Compile classpath for source set 'main'.
+--- org.apache.commons:commons-math3:3.6.1
+--- io.micronaut:micronaut-core:3.3.0
|    \--- org.slf4j:slf4j-api:1.7.29
\--- com.google.guava:guava:30.1.1-jre
     +--- com.google.guava:failureaccess:1.0.1
     +--- com.google.guava:listenablefuture:9999.0-empty-to-avoid-conflict-with-guava
     +--- com.google.code.findbugs:jsr305:3.0.2
     +--- org.checkerframework:checker-qual:3.8.0
     +--- com.google.errorprone:error_prone_annotations:2.5.1
     \--- com.google.j2objc:j2objc-annotations:1.3

Victory! Now we can see our micronaut-core dependency. How did we do this?

Note that instead of using afterEvaluate, what we did is add a dependency; but instead of using the traditional dependency notation, we used a provider: the actual dependency string is computed only when it is needed.
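To make that laziness tangible, here's a standalone sketch using the Property API directly (the coordinates are made up):

def version = objects.property(String).convention("1.0")
def notation = version.map { v -> "com.example:lib:$v" }  // nothing is computed yet

version.set("2.0")
println notation.get()  // prints "com.example:lib:2.0": the map function ran only now, with the final value

Back in our build, we can check that the version can actually be configured via our extension by editing our build file: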

lib/build.gradle
micronaut {
   version = "3.3.1" // override the convention
}
$ ./gradlew lib:dependencies --configuration compileClasspath

> Task :lib:dependencies

------------------------------------------------------------
Project ':lib'
------------------------------------------------------------

compileClasspath - Compile classpath for source set 'main'.
+--- org.apache.commons:commons-math3:3.6.1
+--- io.micronaut:micronaut-core:3.3.1
|    \--- org.slf4j:slf4j-api:1.7.29
\--- com.google.guava:guava:30.1.1-jre
     +--- com.google.guava:failureaccess:1.0.1
     +--- com.google.guava:listenablefuture:9999.0-empty-to-avoid-conflict-with-guava
     +--- com.google.code.findbugs:jsr305:3.0.2
     +--- org.checkerframework:checker-qual:3.8.0
     +--- com.google.errorprone:error_prone_annotations:2.5.1
     \--- com.google.j2objc:j2objc-annotations:1.3

Maybe add, maybe not!

In the previous example, we systematically added a dependency, based on the version defined in the extension. What if we want to add a dependency only when a property is set to a particular value? For this purpose, let's define a runtime property which tells which runtime to use. Let's add this property to our extension:

build-logic/src/main/groovy/my/plugin/MicronautExtension.groovy
package my.plugin

import org.gradle.api.provider.Property

interface MicronautExtension {
    Property<String> getVersion()
    Property<String> getRuntime()
}

Now let’s update our plugin to use that property, and add a dependency based on the value of the runtime property:

build-logic/src/main/groovy/my.plugin.gradle
import my.plugin.MicronautExtension

def micronautExtension = extensions.create("micronaut", MicronautExtension)
micronautExtension.version.convention("3.3.0")

dependencies {
    implementation micronautExtension.version.map { v ->
        "io.micronaut:micronaut-core:$v"
    }

    implementation micronautExtension.runtime.map { r ->
        switch(r) {
            case 'netty':                                                   (1)
                return "io.netty:netty-buffer:4.1.75.Final"
            case 'tomcat':
                return "org.apache.tomcat.embed:tomcat-embed-core:10.0.18"  (2)
            default:
                return null                                                 (3)
        }
    }
}
1 Add a dependency if the runtime is set to netty
2 Add a dependency if the runtime is set to tomcat
3 But do nothing if the runtime isn’t set

The trick, therefore, is to return null from the provider when no dependency needs to be added. So let's first check that, without declaring anything, no dependency is added:

$ ./gradlew lib:dependencies --configuration compileClasspath

> Task :lib:dependencies

------------------------------------------------------------
Project ':lib'
------------------------------------------------------------

compileClasspath - Compile classpath for source set 'main'.
+--- org.apache.commons:commons-math3:3.6.1
+--- io.micronaut:micronaut-core:3.3.1
|    \--- org.slf4j:slf4j-api:1.7.29
\--- com.google.guava:guava:30.1.1-jre
     +--- com.google.guava:failureaccess:1.0.1
     +--- com.google.guava:listenablefuture:9999.0-empty-to-avoid-conflict-with-guava
     +--- com.google.code.findbugs:jsr305:3.0.2
     +--- org.checkerframework:checker-qual:3.8.0
     +--- com.google.errorprone:error_prone_annotations:2.5.1
     \--- com.google.j2objc:j2objc-annotations:1.3

Now let’s switch to use tomcat:

lib/build.gradle
micronaut {
   version = "3.3.1"
   runtime = "tomcat"
}
$ ./gradlew lib:dependencies --configuration compileClasspath

> Task :lib:dependencies

------------------------------------------------------------
Project ':lib'
------------------------------------------------------------

compileClasspath - Compile classpath for source set 'main'.
+--- org.apache.commons:commons-math3:3.6.1
+--- io.micronaut:micronaut-core:3.3.1
|    \--- org.slf4j:slf4j-api:1.7.29
+--- org.apache.tomcat.embed:tomcat-embed-core:10.0.18
|    \--- org.apache.tomcat:tomcat-annotations-api:10.0.18
\--- com.google.guava:guava:30.1.1-jre
     +--- com.google.guava:failureaccess:1.0.1
     +--- com.google.guava:listenablefuture:9999.0-empty-to-avoid-conflict-with-guava
     +--- com.google.code.findbugs:jsr305:3.0.2
     +--- org.checkerframework:checker-qual:3.8.0
     +--- com.google.errorprone:error_prone_annotations:2.5.1
     \--- com.google.j2objc:j2objc-annotations:1.3

Note how the dependency on Tomcat is added!

More complex use cases are supported!

We’ve shown how to add a dependency and derive the dependency notation from the version defined in our extension. We’ve then seen how we could add a dependency, or not, based on the value of an extension: either return a supported dependency notation, or null if nothing needs to be added.

Gradle actually supports more complex cases, which I'll leave as an exercise for the reader. For example, you can combine several properties to compute a single dependency notation, as in the sketch below.
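Here is one such sketch, using Provider.zip (the coordinates are made up): the zipped provider is absent unless both properties are set, so no dependency is added until the user picks a runtime:

dependencies {
    implementation micronautExtension.version.zip(micronautExtension.runtime) { v, r ->
        r == 'netty' ? "io.example:fictional-netty-integration:$v" : null
    }
}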

Conclusion

In this post, we’ve seen how to leverage Gradle’s provider API to properly implement plugins which need to add dependencies conditionally. This can either mean that they need to add dependencies which version depend on some user configuration, or even full dependency notations which depend on configuration. The interest of using the provider API again lies in the fact that it is lazy and therefore is (largely) immune to ordering issues: instead of relying on hooks like afterEvaluate which come with a number of drawbacks (reliability, ordering, interaction with other plugins), we rely on the fact that it’s only when a value is needed that it is computed. At this moment, we know that the configuration is complete, so we can guarantee that our dependencies will be correct.


Using the Micronaut Version Catalog

08 February 2022

Tags: gradle micronaut version catalog

Introduction

With the release of Gradle 7.4, Micronaut users now have an interesting option to manage their dependencies: using Gradle’s version catalogs. Indeed, for a few releases already, Micronaut has shipped its own version catalog alongside its BOM.

Let’s explore how to use it and what’s the benefit.

What is a version catalog?

In a nutshell, a version catalog lets you centralize dependency versions in a single place. In a build script, a typical dependency declaration looks like this:

dependencies {
    implementation("org.apache.slf4j:slf4j-api:1.7.25")
}

With a version catalog, the declaration looks like this:

dependencies {
    implementation(libs.slf4j)
}

And the dependency coordinates are defined in the gradle/libs.versions.toml file:

[versions]
slf4j = "1.7.25"

[libraries]
slf4j = { module = "org.slf4j:slf4j-api", version.ref = "slf4j" }

There are a couple of advantages in doing so:

  • dependency versions are centralized in this TOML file

  • the catalogs create "type safe accessors" which are completed by the IDE (although to my knowledge completion is only supported by IntelliJ IDEA with the Kotlin DSL)

You can read a more complete description about version catalogs in this blog post I wrote a few months ago.

The Micronaut version catalog

In addition, frameworks like Micronaut can publish version catalogs, which are then usable in your projects. You can think of the Micronaut version catalog as a list of dependencies to pick from: you don't have to think about which version to choose, you can simply use the "recommendation" from Micronaut, and you don't have to remember the dependency coordinates either.

Importing the Micronaut version catalog

Let’s start with a project that you can generate using the Micronaut CLI:

mn create-app catalog

(alternatively, download the project using Micronaut Launch)

Open the generated project and update the Gradle version by changing the gradle/gradle-wrapper.properties file:

distributionUrl=https\://services.gradle.org/distributions/gradle-7.4-bin.zip

Now, in order to import the Micronaut version catalog, add this to your settings.gradle file:

settings.gradle
dependencyResolutionManagement {
    repositories {
        mavenCentral()
    }
    versionCatalogs {
        create("mn") {
            from("io.micronaut:micronaut-bom:${micronautVersion}")
        }
    }
}

Here, we’re creating a new version catalog called mn. Internally, Gradle will automatically download the catalog which is published at the same GAV coordinates as its BOM as a TOML file and expose it to your build scripts.

Let’s open our build.gradle file. By default it defines the following dependencies:

dependencies {
    annotationProcessor("io.micronaut:micronaut-http-validation")
    implementation("io.micronaut:micronaut-http-client")
    implementation("io.micronaut:micronaut-jackson-databind")
    implementation("io.micronaut:micronaut-runtime")
    implementation("jakarta.annotation:jakarta.annotation-api")
    runtimeOnly("ch.qos.logback:logback-classic")
    implementation("io.micronaut:micronaut-validation")
}

Now, we can replace this with the following:

dependencies {
    annotationProcessor(mn.micronaut.http.validation)
    implementation(mn.micronaut.http.client)
    implementation(mn.micronaut.jackson.databind)
    implementation(mn.micronaut.runtime)
    implementation(mn.jakarta.annotation.api)
    runtimeOnly(mn.logback)
    implementation(mn.micronaut.validation)
}

What happened here? Basically, we replaced hardcoded dependency coordinates with references to the mn version catalog. This is particularly interesting if you are using the Kotlin DSL, as mentioned earlier, because in that case the dependency notations are type-safe: you can't make a typo in dependency coordinates, and you get completion:

[Image: IDE completion for the mn catalog accessors]

Nice!

Future work

Version catalogs will probably be enabled by default in future releases of Micronaut, which means that projects created via Micronaut Launch or the CLI tool would automatically use the catalog, so you don’t have to do the conversion described in this blog post. Stay tuned!


