
22 April 2023
Tags: astronomy astro4j solex java graalvm
I’m a software developer, and if you are following me, you may also know that I’m an amateur astrophotographer. For a long time, I’ve been fascinated by the quality of the software we have in astronomy to process images. If you are French-speaking, you can watch a presentation I gave about this topic. Naturally, I have been curious about how all these things work, but it’s actually extremely rare to find open source software, and when you do, it’s rarely written in Java. For example, both Firecapture (software to capture video streams) and Astro Pixel Processor are written in Java, but both of them are closed source, commercial software.
Last month, for my birthday, I got a Sol’Ex, an instrument which combines spectrography and software to produce amazing solar pictures in different spectral lines. To process those images, the easiest solution is to use the amazing INTI software, written in Python, but whose sources are not published, as far as I know, on either GitHub or GitLab.
Note: after announcing this project, I was notified that the sources of INTI are indeed available, under the GPL. It’s a pity they are not linked on the webpage; that would have helped a lot.
To give you an example of what you can do, here’s the first photograph I took with Sol’Ex and processed with INTI (color was added in Gimp):
To get this result, one has to combine images which look like this:
Interesting, no? At the same time, I was a bit frustrated by INTI. While it clearly does the job and is extremely easy to use, there are a few things which I didn’t like:
the first, which I mentioned, is that it’s written in Python and that the sources are not published (as far as I understand, some algorithms are not published yet). I am not surprised that Python is used, because it’s a language which is extremely popular in academia, with lots of libraries for image processing, science-oriented libraries, etc. However, the fact that it’s popular in academia also means that programs are often written by and for academics. When we’re talking about maths, that often means short variable names, cryptic function names, etc…
second, after processing, INTI pops up a lot of images as individual windows. If you want to process a new file, you have to close all of them. The problem is that I still haven’t figured out in which order you have to do this so that you can restart from the initial window which lets you select a video file! Apparently, depending on the order, it will, or will not, show the selector. And sometimes, it takes several seconds before it does so.
INTI seems to be regenerating a font cache every time I reboot. This operation takes several minutes. It’s probably an artifact of packaging the application for Windows, but still, not very user friendly.
INTI generates a number of images, but puts them alongside the videos. I like things organized (well, at least virtually, because if you looked at my desk right now, it is likely you’d feel faint), so I wish it created one directory per processed video.
When I started studying at University, back in 1998, I was planning to do astrophysics. However, I quickly forgot about this idea when I saw the amount of maths one has to master to do modern physics. Clearly, I was reaching my limits, and it was extremely complicated for me. Fortunately, I had been doing software development for years already, because I started very young, on my father’s computer. So I decided to switch to computer science, where I was reasonably successful.
However, not being able to do what I wanted to do has always been a frustration. It still is today, to the point that a lot of what I read is about this topic, but I still lack the maths.
It was time for me to confront my old demons, and answer a few questions:
am I still capable of understanding maths, in order to implement algorithms which I use every day when I do astronomy image processing with software written by others?
can I read academic papers, for example to implement a FFT (Fast Fourier Transform) algorithm, although I clearly remember that I failed to understand the principles when I was at school?
can I do this while writing something which could be useful to others, and publish it as open source software?
Astro4j is there to answer those questions. I don’t have the answers yet and time will tell if I’m successful.
One question you may have is: why Java? If you are not familiar with this language, you may have the old misconception that Java is slow. It’s not. Especially compared to Python, it’s definitely not.
This project is also a way for me to prove that you can implement "serious science" in Java. You can already find some science libraries in Java, but they tend to be impractical to use, because they don’t follow industry standards (e.g. being published on Maven Central) or are platform-dependent.
I also wanted to leverage this to learn something new. So this project:
uses Java 17 (at least for libraries, so that they can be consumed by a larger number of developers; for applications, I’m considering moving to Java 20)
uses JavaFX (OpenJFX) for the application UI
experiments with the Vector API for faster processing
As I said, my initial goal is to obtain software which can basically do what INTI does. It is not a goal to make it faster, but if I can do it, I will.
After a few evenings (and a couple of weekends ;)), I already have something which performs basic processing, that is to say it can process a SER video file and generate a reconstructed solar disk. It does not perform geometry correction, nor tilt correction, like INTI does. It doesn’t generate shifted images either (for example the Doppler images), but it works.
Since the only sources of information I had to do this were Christian Buil’s website and Valérie Desnoux’s INTI website, I basically had to implement my own algorithms from A to Z, and just "guess" how it works.
In order to do this, I had to:
implement a SER video file decoder. The library is ready and performs both decoding of SER files and demosaicing of images
on top of the decoder, I implemented a SER file player, which is still very basic at this stage, and uses JavaFX. This player can even be compiled to a native binary using GraalVM!
Here’s an example:
Then I could finally start working on the Sol’Ex video processor. As I said, I don’t know how INTI works, so this is all trial and error, in the end…
In the beginning, as I said, you have a SER video file which contains a lot of frames (for example, in my case, it’s a file from 500MB to 1GB) that we have to process in order to generate a solar disk. Each frame consists of a view of the light spectrum, centered on a particular spectral line.
For example, in the following image, we have the H-alpha spectral line:
Because of optics, you can see that the line is not horizontal: each frame is distorted. Therefore, in order to reconstruct an image, we have to deal with that distortion first. For this, we have to:
detect the spectral line in the frame, which I’m doing by implementing a simple contrast detection
perform a linear regression in order to compute a 2nd order polynomial which models the distortion
Note that before doing this, I had no idea how to do a 2nd order regression, but I searched and found that it was possible to do so using the least squares method, so that’s what I did. The result is that we can identify the line precisely with this technique:
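As an aside, here is what such a least-squares fit can look like in plain Java, using the normal equations for a 2nd order polynomial (a minimal sketch with names of my own choosing, not the actual astro4j code; it assumes the detected line is given as (x, y) sample points):

/**
 * Fits y = a*x^2 + b*x + c to sample points using ordinary least squares
 * (normal equations solved with Cramer's rule). Illustrative sketch only.
 */
public class QuadraticFit {
    public static double[] fit(double[] xs, double[] ys) {
        double s0 = xs.length, s1 = 0, s2 = 0, s3 = 0, s4 = 0;
        double t0 = 0, t1 = 0, t2 = 0;
        for (int i = 0; i < xs.length; i++) {
            double x = xs[i], x2 = x * x, y = ys[i];
            s1 += x; s2 += x2; s3 += x2 * x; s4 += x2 * x2;
            t0 += y; t1 += x * y; t2 += x2 * y;
        }
        // Normal equations for y = a*x^2 + b*x + c:
        // | s4 s3 s2 | |a|   |t2|
        // | s3 s2 s1 | |b| = |t1|
        // | s2 s1 s0 | |c|   |t0|
        double det = det3(s4, s3, s2, s3, s2, s1, s2, s1, s0);
        double a = det3(t2, s3, s2, t1, s2, s1, t0, s1, s0) / det;
        double b = det3(s4, t2, s2, s3, t1, s1, s2, t0, s0) / det;
        double c = det3(s4, s3, t2, s3, s2, t1, s2, s1, t0) / det;
        return new double[] { a, b, c };
    }

    // Determinant of a 3x3 matrix given row by row
    private static double det3(double a, double b, double c,
                               double d, double e, double f,
                               double g, double h, double i) {
        return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g);
    }
}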
In the beginning, I thought I would have to perform distortion correction in order to reconstruct the image, because I was (wrongly) assuming that, since each frame represents one line in the reconstructed image, I had to compute the average of the columns of each frame to determine the color of a single pixel in the output. I was wrong (we’ll come to that later), but I did implement a distortion correction algorithm:
When I computed the average, the resulting image was far from the quality and contrast of what I got with INTI. What a failure! So I thought that maybe I had to compute the average of the spectral line itself. I tried this, and indeed, the resulting image was much better, but still not at the quality of INTI. The last thing I did, therefore, was to pick the middle of the spectral line itself, and then, magically, I got the same level of quality as with INTI (for the raw images; as I said, I haven’t implemented any geometry or tilt correction yet).
The reason I was assuming that I had to compute an average is that it wasn’t clear to me that the absorption line would actually contain enough data to reconstruct an image. Since it is an absorption line, I assumed that the value would be 0, and therefore that nothing would come out of using the line itself. In fact, my physics was wrong: you must use the line itself.
A direct consequence is that there is actually no need to perform a distortion correction. Instead, you can just use the 2nd order polynomial that we’ve computed and "follow the line", that’s it!
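Concretely, reconstructing one row of the output image from one frame could look like the following sketch (illustrative only, with hypothetical names; it assumes the frame is a 2D array of intensities and that the fitted polynomial gives, for each column x, the y position of the middle of the absorption line):

// Illustrative sketch: build one row of the reconstructed disk from one frame,
// by sampling each column at the middle of the absorption line given by the
// fitted polynomial y = a*x^2 + b*x + c.
static float[] reconstructRow(float[][] frame, double a, double b, double c) {
    int width = frame[0].length;
    float[] row = new float[width];
    for (int x = 0; x < width; x++) {
        int y = (int) Math.round(a * x * x + b * x + c);
        y = Math.max(0, Math.min(frame.length - 1, y)); // clamp to the frame
        row[x] = frame[y][x];
    }
    return row;
}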
Now, we can generate an image, but it will be very dark. The reason is obvious: by taking the middle of the spectral line, we’re basically using dark pixels, so the dynamic range of the image is extremely low. So, in order to have something which "looks nice", you actually have to perform brightness correction.
The first algorithm I have used is simply a linear correction: we’re computing the max and min value of the image, then rescaling that so that the max value is the maximum representable (255).
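Assuming a single-channel image stored as a float array, such a linear stretch could be written like this (a minimal sketch, not the exact astro4j implementation):

// Linear brightness stretch: map [min, max] to [0, 255] (illustrative sketch).
static void linearStretch(float[] pixels) {
    float min = Float.MAX_VALUE;
    float max = -Float.MAX_VALUE;
    for (float v : pixels) {
        if (v < min) min = v;
        if (v > max) max = v;
    }
    float range = max - min;
    if (range == 0) return; // flat image, nothing to stretch
    for (int i = 0; i < pixels.length; i++) {
        pixels[i] = 255f * (pixels[i] - min) / range;
    }
}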
Here’s the result:
However, I felt that this technique wouldn’t give the best results, in particular because linear images tend to give results which are not what the eye would see: our eye behaves a bit like an "exponential" accumulator; the more photons it receives, the "brighter" we perceive the source.
So I implemented another algorithm which I had seen in PixInsight, which is called inverse hyperbolic (Arcsinh) correction:
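The general shape of that stretch is value' = asinh(stretch * value) / asinh(stretch), applied to pixel values normalized to [0, 1]. Here is a sketch of the idea (my own formulation, with an illustrative stretch parameter; not necessarily what PixInsight or JSol’Ex do exactly):

// Arcsinh (inverse hyperbolic sine) stretch on 8-bit pixel values.
// The "stretch" factor controls how aggressively shadows are brightened.
static void arcsinhStretch(float[] pixels, double stretch) {
    double norm = asinh(stretch); // so that a normalized value of 1.0 maps to 1.0
    for (int i = 0; i < pixels.length; i++) {
        double v = pixels[i] / 255.0;
        pixels[i] = (float) (255.0 * asinh(stretch * v) / norm);
    }
}

// Java has no Math.asinh, so compute it from its logarithmic definition
static double asinh(double x) {
    return Math.log(x + Math.sqrt(x * x + 1));
}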
Last, you can see that the image has lots of vertical line artifacts. This is due to the presence of dust either on the optics or the sensors. INTI performs correction of those lines, and I wanted to do something similar.
Again, I don’t know what INTI is doing, so I figured out my own technique, which uses "multipass" correction. In a nutshell, I compute the average value of each row. Then, for a particular row, I compute the average of the averages of the surrounding rows (for example, 16 rows before and after). If the average of that row is below the average of the averages(!), I consider that the row is darker than it should be, compute a correction factor and apply it.
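A single pass of this idea could look like the following sketch (illustrative only; the window size and the exact correction factor are my own choices, not necessarily what JSol’Ex does):

// One pass of banding correction (illustrative sketch).
// For each row, compare its average to the average of the averages of the
// surrounding rows; if it is darker, scale it up by the ratio.
static void correctBanding(float[] image, int width, int height, int window) {
    double[] rowAvg = new double[height];
    for (int y = 0; y < height; y++) {
        double sum = 0;
        for (int x = 0; x < width; x++) {
            sum += image[y * width + x];
        }
        rowAvg[y] = sum / width;
    }
    for (int y = 0; y < height; y++) {
        int from = Math.max(0, y - window);
        int to = Math.min(height - 1, y + window);
        double neighborhood = 0;
        for (int k = from; k <= to; k++) {
            neighborhood += rowAvg[k];
        }
        neighborhood /= (to - from + 1);
        if (rowAvg[y] > 0 && rowAvg[y] < neighborhood) {
            float correction = (float) (neighborhood / rowAvg[y]);
            for (int x = 0; x < width; x++) {
                image[y * width + x] *= correction;
            }
        }
    }
}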
The result is a corrected image:
We’re still not at the level of quality that INTI produces, but getting close!
So what’s next? I have already added some issues for things I want to fix, and in particular, I’m looking at improving the banding reduction and performing geometry correction. For both, I think I will need to use fast Fourier transforms, in order to identify the noise in one case (banding) and detect edges in the other (geometry correction).
Therefore, I started to implement FFTs, a domain I had absolutely no knowledge of. Luckily, I could ask ChatGPT to explain the concepts to me, which made it faster to implement! For now, I have only implemented the Cooley-Tukey algorithm. The issue is that this algorithm is quite slow, and requires that the input data has a length which is a power of 2. Given the size of the images we generate, it’s quite costly.
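For reference, here is what a textbook recursive radix-2 Cooley-Tukey implementation looks like (an unoptimized sketch for illustration, not the astro4j code):

// Textbook recursive radix-2 Cooley-Tukey FFT (illustrative, unoptimized sketch).
// Operates in place on the real and imaginary parts; the length must be a power of 2.
static void fft(double[] re, double[] im) {
    int n = re.length;
    if (n == 1) {
        return;
    }
    int half = n / 2;
    double[] evenRe = new double[half], evenIm = new double[half];
    double[] oddRe = new double[half], oddIm = new double[half];
    for (int i = 0; i < half; i++) {
        evenRe[i] = re[2 * i];     evenIm[i] = im[2 * i];
        oddRe[i]  = re[2 * i + 1]; oddIm[i]  = im[2 * i + 1];
    }
    fft(evenRe, evenIm); // transform of the even-indexed samples
    fft(oddRe, oddIm);   // transform of the odd-indexed samples
    for (int k = 0; k < half; k++) {
        double angle = -2 * Math.PI * k / n;
        double wr = Math.cos(angle), wi = Math.sin(angle);
        // twiddle factor times the odd part
        double tr = wr * oddRe[k] - wi * oddIm[k];
        double ti = wr * oddIm[k] + wi * oddRe[k];
        re[k] = evenRe[k] + tr;        im[k] = evenIm[k] + ti;
        re[k + half] = evenRe[k] - tr; im[k + half] = evenIm[k] - ti;
    }
}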
I took advantage of this to learn about the Vector API to leverage SIMD instructions of modern CPUs, and it indeed made things significantly faster (about twice as fast), but still not at the level of performance that I expect.
I am trying to understand the split-radix algorithm, but I’m clearly intimidated by the many equations involved… In any case, I printed some papers which I hope I’ll be able to understand.
In conclusion, in this article, I’ve introduced astro4j, an open source suite of libraries and applications written in Java for astronomy software. While the primary goal for me is to learn and improve my skills and knowledge of the maths behind astronomy software processing, it may be that it produces something useful. In any case, since it’s open source, if you want to contribute, feel free!
And you can do so in different domains, for example, I pretty much s* at UI, so if you are a JavaFX expert, I would appreciate your pull requests!
Finally, here is a video showing JSol’Ex in action:
12 March 2023
Tags: micronaut gradle version catalogs graalvm maven
This blog post discusses how the Micronaut development team makes use of a feature of Gradle, version catalogs, to improve the team’s developer productivity, reduce the risks of publishing broken releases, coordinate the releases of a large number of modules and, last but not least, provide additional features to our Gradle users.
The Micronaut Framework is a modern open-source framework for building JVM applications. It can be used to build all kinds of applications, from CLI applications to microservices or even good old monoliths. It supports deploying both to the JVM and to native executables (using GraalVM), making it particularly suitable for all kinds of environments. A key feature of the Micronaut framework is developer productivity: we do everything we can to make things faster for developers. In particular, Micronaut has a strong emphasis on easing how you test your applications, even in native mode. For this we have built a number of tools, including our Maven and Gradle plugins.
When I joined the Micronaut team almost a couple of years back, I was given the responsibility of improving the team’s own developer productivity. It was an exciting assignment, not only because I knew the team’s love of Gradle, but because I also knew that there were many things we could do to reduce the feedback time, provide more insights about failures, detect flaky tests, etc. As part of this work we have put in place a partnership with Gradle Inc, which kindly provides us with a Gradle Enterprise instance, but this is not what I want to talk about today.
Lately I was listening to an interview with Aurimas Liutikas of the AndroidX team, who was saying that he didn’t think that version catalogs were a good solution for library authors to share their version recommendations, and that BOMs are probably a better solution for this. I pinged him saying that I disagreed with this statement and offered to provide more details on why, if he was interested. This is therefore a long answer, but one which will be easier to find than a thread on social media.
Let’s start with the basics: a version catalog is, as the name implies, a catalog of versions to pick from, nothing more. That doesn’t sound too exciting, so what versions are we talking about? The versions of the libraries and plugins that you use in your build.
As an illustration, here is a version catalog, defined as a TOML file:
[versions]
javapoet = "1.13.0"
[libraries]
javapoet = { module = "com.squareup:javapoet", version.ref = "javapoet" }
Then this library can be used in a dependencies declaration block in any of the project’s build scripts, using a type-safe notation:
dependencies {
implementation(libs.javapoet) {
because("required for Java source code generation")
}
}
which is strictly equivalent to writing:
dependencies {
implementation("com.squareup:javapoet:1.13.0") {
because("required for Java source code generation")
}
}
There are many advantages to using version catalogs to declare your library versions, but most notably they provide a single, standard location where those versions are declared. It is important to understand that a catalog is simply a list of dependencies you can pick from, a bit like going to the supermarket and choosing whatever you need for your particular meal: the fact that a catalog declares libraries doesn’t mean that you have to use them. However, a catalog provides you with recommendations of libraries to pick from.
An interesting aspect of version catalogs is that they can be published, for others to consume: they are an artifact. Micronaut users can already make use of catalogs, as I have explained in a previous blog post. This makes it possible for a user who doesn’t know which version of Micronaut Data to use, to simply declare:
dependencies {
implementation mn.micronaut.data
}
People familiar with Maven BOMs can easily think that it is the same feature, but there are key differences which are described in the Gradle docs.
In the rest of this post we will now focus on how we generate those catalogs, and how they effectively help us in improving our own developer productivity.
As I said, the Micronaut framework consists of a large number of modules which live in their own Git repositories. All the projects share the same layout and the same conventions, in order to make things easier to maintain. For this purpose, we use our own collection of internal build plugins as well as a project template.
Those build plugins provide features like:
defining the default Java language level, setting up code conventions and code quality plugins
standardizing how documentation is built (using Asciidoctor)
setting up integration with Gradle Enterprise, to publish build scans, configure the build cache and predictive test selection
implementing binary compatibility checks between releases
configuring publication to Maven Central
providing a high-level model of what a Micronaut module is
The last item is particularly important: in every Micronaut project, we have different kinds of modules: libraries (which are published to Maven Central for users to consume), internal support libraries (which are not intended for external consumption), or a BOM module (which also publishes a version catalog, as we’re going to see).
Long story short: we heavily rely on conventions to reduce the maintenance costs, have consistent builds, with improved performance and higher quality standards. If you are interested in why we have such plugins, Sergio Delamo and I gave an interview about this a few months ago (alert: the thumbnail shows I have hair, this is fake news!).
Each of our projects declares a version catalog, for example:
One of the advantages of version catalogs is that they provide a centralized place for versions, which can easily be used by bots to provide pull requests for dependency upgrades. For this, we use Renovatebot, which integrates particularly well with version catalogs (GitHub’s Dependabot lags behind in terms of support). This allows us to get pull requests like this one, which are very easy to review.
Each of the Micronaut projects is now required to provide a BOM (Bill of Materials) for users. Another term for a BOM used in the Gradle ecosystem is a platform; a platform has, however, slightly different semantics in Maven and Gradle. The main goal of a BOM is to provide a list of dependencies a project works with, and, in Maven, it can be used to override the dependency versions of transitive dependencies. While in Maven a BOM will only influence the dependency resolution of the project which imports the BOM, in Gradle a platform fully participates in dependency resolution, including when a transitive dependency depends on a BOM. To simplify, a user who imports a BOM may use dependencies declared in the BOM without specifying a version: the version will be fetched from the BOM. In that regard, it looks exactly the same as a version catalog, but there are subtle differences.
For example, if a user imports a BOM, any transitive dependency matching a dependency found in the BOM will be overridden (Maven) or participate in conflict resolution (Gradle). That is not the case for a catalog: it will not influence the dependency resolution unless you explicitly add a dependency which belongs to the catalog.
That’s why Micronaut publishes both a BOM and a catalog, because they address different use cases, and they work particularly well when combined together.
In Micronaut modules, you will systematically find a project with the -bom suffix. For example, Micronaut Security will have subprojects like micronaut-security-jwt, micronaut-security-oauth2 and micronaut-security-bom.
The BOM project will aggregate dependencies used by the different modules. In order to publish a BOM file, the only thing a project has to do is to apply our convention plugin:
plugins {
id "io.micronaut.build.internal.bom"
}
Note how we don’t have to declare the coordinates of the BOM (group, artifact, version), nor that we have to declare how to publish to Maven Central, what dependencies should be included in the BOM, etc: everything is done by convention, that’s the magic of composition over inheritance.
Should we want to change how we generate the BOM, the only thing we would have to do is to update our internal convention plugin, then all projects would benefit from the change once they upgrade.
In order to determine which dependencies should be included in our BOM, we defined conventions that we use in our catalog files. In our internal terminology, when we want a dependency to be handled by the Micronaut framework, we call that a managed dependency: a dependency that is managed by Micronaut and that users shouldn’t care about in most cases: they don’t have to think about a version, we will provide one for them.
This directly translates to a convention in the version catalogs of the Micronaut projects: dependencies which are managed need to be declared with a managed- prefix in the catalog:
[versions]
...
managed-kafka = '3.4.0'
...
zipkin-brave-kafka-clients = '5.15.0'
[libraries]
...
managed-kafka-clients = { module = 'org.apache.kafka:kafka-clients', version.ref = 'managed-kafka' }
managed-kafka-streams = { module = 'org.apache.kafka:kafka-streams', version.ref = 'managed-kafka' }
...
zipkin-brave-kafka-clients = { module = 'io.zipkin.brave:brave-instrumentation-kafka-clients', version.ref = 'zipkin-brave-kafka-clients' }
Those dependencies will end up in the version catalog that we generate, but without the managed- prefix.
This means that we would generate a BOM which contains the following:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<!-- This module was also published with a richer model, Gradle metadata, -->
<!-- which should be used instead. Do not delete the following line which -->
<!-- is to indicate to Gradle or any Gradle module metadata file consumer -->
<!-- that they should prefer consuming it instead. -->
<!-- do_not_remove: published-with-gradle-metadata -->
<modelVersion>4.0.0</modelVersion>
<groupId>io.micronaut.kafka</groupId>
<artifactId>micronaut-kafka-bom</artifactId>
<version>5.0.0-SNAPSHOT</version>
<packaging>pom</packaging>
<name>Micronaut Kafka</name>
<description>Integration between Micronaut and Kafka Messaging</description>
<url>https://micronaut.io</url>
<licenses>
<license>
<name>The Apache Software License, Version 2.0</name>
<url>http://www.apache.org/licenses/LICENSE-2.0.txt</url>
<distribution>repo</distribution>
</license>
</licenses>
<scm>
<url>scm:git@github.com:micronaut-projects/micronaut-kafka.git</url>
<connection>scm:git@github.com:micronaut-projects/micronaut-kafka.git</connection>
<developerConnection>scm:git@github.com:micronaut-projects/micronaut-kafka.git</developerConnection>
</scm>
<developers>
<developer>
<id>graemerocher</id>
<name>Graeme Rocher</name>
</developer>
</developers>
<properties>
<micronaut.kafka.version>5.0.0-SNAPSHOT</micronaut.kafka.version>
<kafka.version>3.4.0</kafka.version>
</properties>
<dependencyManagement>
<dependencies>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
<version>${kafka.compat.version}</version>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-streams</artifactId>
<version>${kafka.version}</version>
</dependency>
<dependency>
<groupId>io.micronaut.kafka</groupId>
<artifactId>micronaut-kafka</artifactId>
<version>${micronaut.kafka.version}</version>
</dependency>
<dependency>
<groupId>io.micronaut.kafka</groupId>
<artifactId>micronaut-kafka-streams</artifactId>
<version>${micronaut.kafka.version}</version>
</dependency>
</dependencies>
</dependencyManagement>
</project>
Note how we automatically translated the managed-kafka property into a BOM property kafka.version, which is used in the <dependencyManagement> block. Dependencies which do not start with managed- are not included in our generated BOM.
Let’s now look at the version catalog that we generate:
#
# This file has been generated by Gradle and is intended to be consumed by Gradle
#
[metadata]
format.version = "1.1"
[versions]
kafka = "3.4.0"
kafka-compat = "3.4.0"
micronaut-kafka = "5.0.0-SNAPSHOT"
[libraries]
kafka = {group = "org.apache.kafka", name = "kafka-clients", version.ref = "kafka-compat" }
kafka-clients = {group = "org.apache.kafka", name = "kafka-clients", version.ref = "kafka" }
kafka-streams = {group = "org.apache.kafka", name = "kafka-streams", version.ref = "kafka" }
micronaut-kafka = {group = "io.micronaut.kafka", name = "micronaut-kafka", version.ref = "micronaut-kafka" }
micronaut-kafka-bom = {group = "io.micronaut.kafka", name = "micronaut-kafka-bom", version.ref = "micronaut-kafka" }
micronaut-kafka-streams = {group = "io.micronaut.kafka", name = "micronaut-kafka-streams", version.ref = "micronaut-kafka" }
Given a single input, the version catalog that we use to build our Micronaut module, our build conventions let us automatically declare which dependencies should land in the output BOM and version catalogs that we generate for that project! Unlike Maven BOMs which either have to be a parent POM or redeclare all dependencies in an independent module, in Gradle we can generate these automatically and completely decouple the output BOM from what is required to build our project.
In general, all api dependencies must be managed, so in the example above, the Micronaut Kafka build scripts would have an API dependency on kafka-clients, which we can find in the main project build script:
dependencies {
api libs.managed.kafka.clients
...
}
The benefit of generating a version catalog for a user is that there is now a Micronaut Kafka version catalog published on Maven Central, alongside the BOM file.
This catalog can be imported by a user in their settings file:
dependencyResolutionManagement {
versionCatalogs {
create("mnKafka") {
from("io.micronaut.kafka:micronaut-kafka-bom:4.5.2")
}
}
}
Then the dependency on Micronaut Kafka and its managed dependencies can be used in a build script using the mnKafka prefix:
dependencies {
implementation mnKafka.micronaut.kafka
implementation mnKafka.kafka.clients
}
A user doesn’t have to know about the dependency coordinates of Kafka clients: the IDE (at least IntelliJ IDEA) would provide completion automatically!
In Micronaut 3.x, there is a problem that we intend to fix in Micronaut 4 regarding our "main" BOM: the Micronaut core BOM is considered our "platform" BOM, in the sense that it aggregates the BOMs of various Micronaut modules. This makes it hard to release newer versions of Micronaut which, for example, only upgrade particular modules.
Therefore in Micronaut 4, we are cleanly separating the "core" BOM, from the new platform BOM. It is interesting in this blog post because it offers us the opportunity to show how we are capable of generating aggregating BOMs and aggregated catalogs.
In the platform BOM module, you can find the "input" catalog that we use, which only consists of managed- dependencies. Most of those dependencies are simply dependencies on other Micronaut BOMs: this is an "aggregating" BOM, which imports other BOMs.
This is, therefore, the only BOM that a user would effectively have to use when migrating to Micronaut 4: instead of importing all BOMs for the different Micronaut modules they use, they can simply import the Micronaut Platform BOM, which will then automatically include the BOMs of other modules which "work well together".
This allows us to decouple the releases of the framework from the releases of Micronaut core itself.
However, there is a subtlety about aggregating BOMs in Maven: they are not regular dependencies, but dependencies with the import scope. This means that we must make a difference between a "managed dependency" and an "imported BOM" in our input catalog. To do this, we have another naming convention, which is to use the boms- prefix for imported BOMs:
[versions]
...
managed-micronaut-aws = "4.0.0-SNAPSHOT"
managed-micronaut-azure = "5.0.0-SNAPSHOT"
managed-micronaut-cache = "4.0.0-SNAPSHOT"
managed-micronaut-core = "4.0.0-SNAPSHOT"
...
[libraries]
...
boms-micronaut-aws = { module = "io.micronaut.aws:micronaut-aws-bom", version.ref = "managed-micronaut-aws" }
boms-micronaut-azure = { module = "io.micronaut.azure:micronaut-azure-bom", version.ref = "managed-micronaut-azure" }
boms-micronaut-cache = { module = "io.micronaut.cache:micronaut-cache-bom", version.ref = "managed-micronaut-cache" }
boms-micronaut-core = { module = "io.micronaut:micronaut-core-bom", version.ref = "managed-micronaut-core" }
...
This results in the following BOM file:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<modelVersion>4.0.0</modelVersion>
<groupId>io.micronaut.platform</groupId>
<artifactId>micronaut-platform</artifactId>
<version>4.0.0-SNAPSHOT</version>
<packaging>pom</packaging>
<name>Micronaut Platform</name>
<description>Bill-Of-Materials (BOM) and Gradle version catalogs for Micronaut</description>
...
<properties>
...
<micronaut.aws.version>4.0.0-SNAPSHOT</micronaut.aws.version>
<micronaut.azure.version>5.0.0-SNAPSHOT</micronaut.azure.version>
<micronaut.cache.version>4.0.0-SNAPSHOT</micronaut.cache.version>
<micronaut.core.version>4.0.0-SNAPSHOT</micronaut.core.version>
...
</properties>
<dependencyManagement>
<dependencies>
...
<dependency>
<groupId>io.micronaut.aws</groupId>
<artifactId>micronaut-aws-bom</artifactId>
<version>${micronaut.aws.version}</version>
<type>pom</type>
<scope>import</scope>
</dependency>
<dependency>
<groupId>io.micronaut.azure</groupId>
<artifactId>micronaut-azure-bom</artifactId>
<version>${micronaut.azure.version}</version>
<type>pom</type>
<scope>import</scope>
</dependency>
<dependency>
<groupId>io.micronaut.cache</groupId>
<artifactId>micronaut-cache-bom</artifactId>
<version>${micronaut.cache.version}</version>
<type>pom</type>
<scope>import</scope>
</dependency>
<dependency>
<groupId>io.micronaut</groupId>
<artifactId>micronaut-core-bom</artifactId>
<version>${micronaut.core.version}</version>
<type>pom</type>
<scope>import</scope>
</dependency>
...
</dependencies>
</dependencyManagement>
</project>
A more interesting topic to discuss is what we can do with the version catalogs that we publish for users: we can inline dependency aliases from each of the imported catalogs into the platform catalog. All dependencies in the catalog files of each module are directly available in the platform catalog:
[versions]
dekorate = "1.0.3"
elasticsearch = "7.17.8"
...
micronaut-aws = "4.0.0-SNAPSHOT"
micronaut-azure = "5.0.0-SNAPSHOT"
micronaut-cache = "4.0.0-SNAPSHOT"
micronaut-core = "4.0.0-SNAPSHOT"
...
[libraries]
alexa-ask-sdk = {group = "com.amazon.alexa", name = "ask-sdk", version = "" }
alexa-ask-sdk-core = {group = "com.amazon.alexa", name = "ask-sdk-core", version = "" }
alexa-ask-sdk-lambda = {group = "com.amazon.alexa", name = "ask-sdk-lambda-support", version = "" }
aws-java-sdk-core = {group = "com.amazonaws", name = "aws-java-sdk-core", version = "" }
aws-lambda-core = {group = "com.amazonaws", name = "aws-lambda-java-core", version = "" }
aws-lambda-events = {group = "com.amazonaws", name = "aws-lambda-java-events", version = "" }
aws-serverless-core = {group = "com.amazonaws.serverless", name = "aws-serverless-java-container-core", version = "" }
awssdk-secretsmanager = {group = "software.amazon.awssdk", name = "secretsmanager", version = "" }
azure-cosmos = {group = "com.azure", name = "azure-cosmos", version = "" }
azure-functions-java-library = {group = "com.microsoft.azure.functions", name = "azure-functions-java-library", version = "" }
...
The alexa-ask-sdk alias, for example, was originally declared in the micronaut-aws module. Because we aggregate all catalogs, we can inline those aliases and make them directly available in user build scripts:
dependencyResolutionManagement {
versionCatalogs {
create("mnKafka") {
from("io.micronaut.platform:micronaut-platform:4.0.0-SNAPSHOT")
}
}
}
dependencies {
...
implementation(mn.micronaut.aws.alexa)
implementation(mn.alexa.sdk)
}
Generating a version catalog offers us a very pragmatic way to define all dependencies that users can use in their build scripts with guarantees that they work well together.
If you survived reading up to this point, you may be interested in learning how, technically, we implemented this. You can take a look at our internal build plugins, but more specifically at the BOM plugin.
In order to generate our BOM and version catalogs, we have mainly 2 inputs:
the list of subprojects which need to participate in the BOM: in a Micronaut module, we explained that we have several kinds of projects: libraries which are published, test suites, etc. Only a subset of these need to belong to the BOM, and we can determine that list automatically because each project applies a convention plugin which determines its kind. Only projects of a particular kind are included. Should exceptions be required, we have a MicronautBomExtension which allows us to configure more precisely what to include or not, via a nice DSL.
the list of dependencies, which is determined from the project’s version catalog
One issue is that while Gradle automatically provides the generated, type-safe accessors for version catalogs, there is actually no built-in model that you can access to represent the catalog itself (what an alias is, references to versions, etc.): the type-safe API represents a "realized" catalog, but not a low-level model that we can easily manipulate. This means that we had to implement our own model for this.
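To give an idea, the kind of model we are talking about can be as small as a handful of records (a simplified sketch of what such a model could look like, not the actual classes from the Micronaut build plugins):

// Simplified sketch of a version catalog model (hypothetical, not the actual
// classes used by the Micronaut build plugins).
import java.util.List;

record VersionCatalogModel(List<VersionModel> versions, List<LibraryModel> libraries) {}

// A version alias, e.g. managed-kafka = "3.4.0"
record VersionModel(String alias, String version) {}

// A library alias, e.g. managed-kafka-clients = { module = "...", version.ref = "managed-kafka" }
record LibraryModel(String alias, String group, String name, String versionRef) {
    boolean isManaged() {
        return alias.startsWith("managed-");
    }
}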
We have also seen that we can generate a single platform, aggregating all Micronaut modules for a release, that the users can import into their build scripts. Unfortunately it is not the case for the Micronaut modules themselves: for example, Micronaut Core must not depend on other Micronaut modules, but, for example, Micronaut Data can depend on Micronaut SQL and use dependencies from the Micronaut SQL catalog. Those modules cannot depend on the platform BOM, because this is the aggregating BOM, so we would create a cyclic dependency and wouldn’t be able to release any module.
To mitigate this problem, our internal build plugins expose a DSL which allows each project to declare which other modules it uses:
micronautBuild {
importMicronautCatalog() // exposes a `mn` catalog
importMicronautCatalog("micronaut-reactor") // exposes a `mnReactor` catalog
importMicronautCatalog("micronaut-rxjava2") // exposes a `mnRxjava2` catalog
...
}
While this is simple from the declaration site point of view, it is less practical from a consuming point of view, since it forces us to use different namespaces for each imported catalog:
dependencies {
...
testImplementation mn.micronaut.inject.groovy
testImplementation mnRxjava2.micronaut.rxjava2
...
}
It would have been better if we could actually merge several catalogs into a single one, but unfortunately that feature has been removed from Gradle.
I still have hope that this will eventually be implemented, because not having it creates unnecessary boilerplate in build scripts and redundancy in names (e.g. implementation mnValidation.micronaut.validation).
What I described in this article isn’t the only benefit we get from standardizing on version catalogs. For example, we have tasks which allow us to check that our generated BOM files only reference dependencies which are actually published on Maven Central, or that there are no SNAPSHOT dependencies when we perform a release. In the end, while most of the Micronaut developers had no idea what a version catalog was when I joined the team, all of them pro-actively migrated projects to use them because, I think, they immediately saw the benefits and value. It also streamlined the dependency upgrade process, which was still a bit cumbersome before, despite using Dependabot.
We now have a very pragmatic way to both use catalogs for building our own projects, and generating BOMs and version catalogs which can be used by both our Maven and Gradle users. Of course, only the Gradle users will benefit from the version catalogs, but we did that in a way which doesn’t affect our Maven users (and if you use Maven, I strongly encourage you to evaluate building Micronaut projects with Gradle instead, since the UX is much better).
I cannot end this blog post without mentioning a "problem" that we have today, which is that if you use Micronaut Launch to generate a Micronaut project, then it will not use version catalogs. We have an issue for this and pull requests are very welcome!
06 February 2023
I often say that flexibility isn’t the reason why you should select Gradle to build your projects: reliability, performance, reproducibility and testability are better reasons. There are, however, cases where its flexibility comes in handy, like last week, when a colleague of mine asked me how we could benchmark a Micronaut project using a variety of combinations of features and Java versions. For example, he wanted to compare the performance of an application with and without epoll enabled, with and without Netty’s tcnative library, with and without Loom support, building both the fat jar and the native binary, etc. Depending on the combinations, the dependencies of the project may be a little different, or the build configuration may be a little different.
It was an interesting challenge to pick up and the solution turned out to be quite elegant and very powerful.
I have tried several options before this one, which I’m going to explain below, but let’s focus on the final design (at least at the moment I write this blog post).
The matrix of artifacts to be generated can be configured in the settings.gradle file:
combinations {
dimension("tcnative") { (1)
variant("off")
variant("on")
}
dimension("epoll") { (2)
variant("off")
variant("on")
}
dimension("json") { (3)
variant("jackson")
variant("serde")
}
dimension("micronaut") { (4)
variant("3.8")
variant("4.0")
}
dimension("java") { (5)
variant("11")
variant("17")
}
exclude { (6)
// Combination of Micronaut 4 and Java 11 is invalid
it.contains("micronaut-4.0") && it.contains("java-11")
}
}
1. a dimension called tcnative is defined with 2 variants, on and off
2. another dimension called epoll also has on and off variants
3. the json dimension will let us choose 2 different serialization frameworks: Jackson or Micronaut Serde
4. we can also select the version of Micronaut we want to test
5. as well as the Java version!
6. some invalid combinations can be excluded
This generates a number of synthetic Gradle projects, that is to say "projects" in the Gradle terminology, but without actually duplicating sources and directories on disk. With the example above, we generate the following projects:
:test-case:tcnative-off:epoll-off:json-jackson:micronaut-3.8:java-11
:test-case:tcnative-off:epoll-off:json-jackson:micronaut-3.8:java-17
:test-case:tcnative-off:epoll-off:json-jackson:micronaut-4.0:java-17
:test-case:tcnative-off:epoll-off:json-serde:micronaut-3.8:java-11
:test-case:tcnative-off:epoll-off:json-serde:micronaut-3.8:java-17
:test-case:tcnative-off:epoll-off:json-serde:micronaut-4.0:java-17
:test-case:tcnative-off:epoll-on:json-jackson:micronaut-3.8:java-11
:test-case:tcnative-off:epoll-on:json-jackson:micronaut-3.8:java-17
:test-case:tcnative-off:epoll-on:json-jackson:micronaut-4.0:java-17
… and more
To build the fat jar of the "tcnative on", "epoll on", "Jackson", "Micronaut 4.0" on Java 17 combination, you can invoke:
$ ./gradlew :test-case:tcnative-on:epoll-on:json-jackson:micronaut-4.0:java-17:shadowJar
And building the native image of the "tcnative off", "epoll on", "Micronaut Serde", "Micronaut 3.8" on Java 17 combination can be done with:
$ ./gradlew :test-case:tcnative-off:epoll-on:json-serde:micronaut-3.8:java-17:nativeCompile
Cherry on the cake, all variants can be built in parallel by executing either ./gradlew shadowJar (for the fat jars) or ./gradlew nativeCompile (for the native binaries), which copies all the artifacts under the root project’s build directory so that they are easy to find in a single place.
In a typical project, say the Micronaut application we want to benchmark, you would have a build which consists of a single Micronaut application module. For example, running ./gradlew build would build that single project’s artifacts. In a multi-project build, you could have several modules, for example core and app, and running :core:build would only build the core library, while :app:build would build both core and app (assuming app depends on core). In both cases, single or multi-project builds, for a typical Gradle project there’s a real directory associated with each project (core, app, etc.) where we can find sources, resources, build scripts, etc.
For synthetic projects, we actually generate Gradle projects (aka modules) programmatically. We have a skeleton directory, called test-case-common, which defines our application sources, configuration files, etc. It also contains a build script which applies a single convention plugin, named io.micronaut.testcase. This plugin basically corresponds to our "baseline" build: it applies the Micronaut plugin, adds a number of dependencies, configures native image building, etc.
Then the "magic" is to use Gradle’s composition model for the variant aspects. For example, when we define the tcnative dimension with 2 variants, on and off, we’re modeling the fact that there are 2 possible outcomes for this dimension. In practice, enabling tcnative is just a matter of adding a single dependency at runtime:
dependencies {
runtimeOnly("io.netty:netty-tcnative-boringssl-static::linux-x86_64")
}
The dimension which handles the version of Java (both to compile and run the application) makes use of Gradle’s toolchain support:
java {
toolchain {
languageVersion.set(JavaLanguageVersion.of(17))
}
}
This can be done in a convention plugin which is named after the dimension and variant name: io.micronaut.testcase.tcnative.on. In other words, the project with path :test-case:tcnative-off:epoll-off:json-jackson:micronaut-3.8:java-11 will have a "synthetic" build script which only consists of applying the following plugins:
plugins {
id("io.micronaut.testcase") (1)
id("io.micronaut.testcase.tcnative.off") (2)
id("io.micronaut.testcase.epoll.off") (3)
id("io.micronaut.testcase.json.jackson") (4)
id("io.micronaut.testcase.micronaut.3.8") (5)
id("io.micronaut.testcase.java.11") (6)
}
1. Applies the common configuration
2. Configures tcnative off
3. Configures epoll off
4. Configures Jackson as the serialization framework
5. Configures Micronaut 3.8
6. Configures the build for Java 11
Each of these plugins can be found in our build logic. As you can see when browsing the build logic directory, there is actually one small optimization: it is not necessary to create a variant script if there’s nothing to do. For example, in practice, tcnative off doesn’t need any extra configuration, so there’s no need to write an io.micronaut.testcase.tcnative.off plugin which would be empty in any case.
In the best case, we only have to tweak the build process (for example to add dependencies or disable native image building), but in some cases we have to change the actual sources or resource files. Again, we leveraged Gradle’s flexibility to define custom conventions in our project layout. In a traditional Gradle (or Maven) project, the main sources are found in src/main/java. This is the case here, but we also support adding source directories based on the variants. For example, in this project some DTOs make use of Java records on Java 17, but those are not available in Java 11, so we need to write 2 variants of the same classes: one with records, the other with good old Java beans. This can be done by putting the Java 11 sources under src/main/variants/java-11/java, and their equivalent Java 17 sources under src/main/variants/java-17/java. This is actually generic: you can use any variant name in place of java-11; we could, for example, have a source directory for the epoll-on variant. The same behavior is available for resources (in src/main/variants/java-11/resources). This provides very good flexibility while being totally understandable and conventional.
So far, we have explained how a user interacts with this build, for example by adding a dimension and a variant or adding specific sources, but we didn’t explain how the projects are actually generated. For this purpose, we have to explain that Gradle supports multiple types of plugins. The typical plugins, which we have used so far in this blog post, the io.micronaut.testcase.xxx plugins, are project plugins, because they apply to the Project of a Gradle build. There are other types of plugins, and the one we’re interested in here is the settings plugin. Unlike project plugins, these plugins are applied to the Settings object, that is to say they would typically be applied in the settings.gradle(.kts) file. This is what we have in this project:
// ...
plugins {
id("io.micronaut.bench.variants")
}
include("load-generator-gatling")
configure<io.micronaut.bench.AppVariants> {
combinations {
dimension("tcnative") {
variant("off")
variant("on")
}
dimension("epoll") {
variant("off")
variant("on")
}
dimension("json") {
variant("jackson")
//variant("serde")
}
dimension("micronaut") {
variant("3.8")
//variant("4.0")
}
dimension("java") {
//variant("11")
variant("17")
}
exclude {
// Combination of Micronaut 4 and Java 11 is invalid
it.contains("micronaut-4.0") && it.contains("java-11")
}
}
}
The io.micronaut.bench.variants plugin is another convention plugin defined in our build logic. It doesn’t do much, except create an extension, which is what lets us configure the variants:
import io.micronaut.bench.AppVariants
val variants = extensions.create<AppVariants>("benchmarkVariants", settings)
The logic actually happens within that AppVariants class, for which you can find the sources here. This class handles both the variants extension DSL and the logic to generate the projects. The entry point is the combinations method, which takes a configuration block. Each call to dimension registers a new dimension, which is itself configured via a variant configuration block where each individual variant is declared. When we return from this call, we have built a model of dimensions of variants, for which we need to compute the cartesian product. We can check each entry that we have generated against the excludes, and if the combination is valid, we can use the Gradle APIs which are available in settings scripts to generate our synthetic projects.
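Conceptually, computing those combinations boils down to a cartesian product followed by filtering, something along these lines (a Java sketch of the idea, not the actual Kotlin code of AppVariants):

import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Conceptual sketch: compute all dimension/variant combinations, then drop the excluded ones.
// Each inner list holds the variant names of one combination, e.g. [tcnative-on, epoll-off, ...].
static List<List<String>> combinations(List<List<String>> dimensions,
                                       Predicate<List<String>> exclude) {
    List<List<String>> result = new ArrayList<>();
    result.add(List.of());
    for (List<String> variants : dimensions) {
        List<List<String>> next = new ArrayList<>();
        for (List<String> partial : result) {
            for (String variant : variants) {
                List<String> combination = new ArrayList<>(partial);
                combination.add(variant);
                next.add(combination);
            }
        }
        result = next;
    }
    result.removeIf(exclude);
    return result;
}

Each surviving combination is then turned into a Gradle project path and included via the Settings API.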
For example:
val projectPath = ":test-case:${path.replace('/', ':')}"
settings.include(projectPath)
computes the project path (with colons) and includes it, which is equivalent to writing this manually in the settings.gradle file:
include(":test-case:tcnative-off:epoll-off:json-jackson:micronaut-3.8:java-11")
include(":test-case:tcnative-off:epoll-off:json-jackson:micronaut-3.8:java-17")
include(":test-case:tcnative-off:epoll-off:json-jackson:micronaut-4.0:java-17")
If we stopped here, then we would have defined projects, but Gradle would expect the sources and build scripts for these projects to be found in test-case/tcnative-off/epoll-off/json-jackson/micronaut-3.8/java-11. This isn’t the case for us, since all projects share the same project directory (test-case-common). However, if we configured all the projects to use the same directory, things could go wrong at build time, in particular because we use parallel builds: all the projects would write their outputs to the same build directory, but as we have seen, they may have different sources, different dependencies, etc. So we need to both set the project directory to the common directory and change the build directory to a per-project specific directory. This way we make sure to reuse the same sources without having to copy everything manually, while also making sure that up-to-date checking, build caching and parallel builds work perfectly fine:
settings.project(projectPath).setProjectDir(File(settings.rootDir, "test-case-common"))
gradle.beforeProject {
if (this.path == projectPath) {
setBuildDir(File(projectDir, "build/${path}"))
}
}
Note that we have to use the gradle.beforeProject API for this: it basically provides us with the naked Project instance of our synthetic projects, before their configuration phase is triggered. The next step is to make sure that once the java plugin is applied on a project, we configure the additional source directories for each dimension. This is done via the withPlugin API, which lets us react to the application of a plugin, and the SourceSet API:
project.plugins.withId("java") {
project.extensions.findByType(JavaPluginExtension::class.java)?.let { java ->
variantNames.forEach { variantName ->
java.sourceSets.all {
this.java.srcDir("src/$name/variants/$variantName/java")
this.resources.srcDir("src/$name/variants/$variantName/resources")
}
}
}
}
Last, we need to apply our convention plugins, the plugins which correspond to a specific combination variant, to our synthetic project:
gradle.afterProject {
if (this.path == projectPath) {
variantSpecs.forEach {
val pluginId = "io.micronaut.testcase.${it.dimensionName}.${it.name}"
val plugin = File(settings.settingsDir, "build-logic/src/main/kotlin/$pluginId.gradle.kts")
if (plugin.exists()) {
plugins.apply(pluginId)
}
}
}
}
As you can see, for each variant, we basically compute the name of the plugin to apply, and if a corresponding file exists, we simply apply the plugin, that’s it!
It only takes around 100 lines of code to implement both the DSL and the logic to generate all this, which shows how much power Gradle gives us!
Of course, there are limitations to this approach. While we could handle the Java version easily, we couldn’t add a dimension we would have needed: GraalVM CE vs GraalVM EE. This is a limitation of Gradle’s toolchain support, which cannot distinguish between those 2 toolchains.
Another limitation is that this works well for a single project build, or a project like here where there’s a common application, a support library, but all modifications happen in a single project (the application). Supporting multi-project builds and variants per module would be possible in theory, but would add quite a lot of complexity.
It was also lucky that I could support both Micronaut 3 and Micronaut 4: in theory, the Gradle plugin for Micronaut 4 isn’t compatible with Micronaut 3, so I would have had to use either Micronaut 3 or Micronaut 4. However, the Micronaut 4 plugin can be used with Micronaut 3, provided some small tweaks.
Last, there is one unknown to this, which is that building synthetic projects like that makes use of APIs which are stable in Gradle, but likely to be deprecated in the future (event based APIs).
Before going to the "final" solution, I actually tried a few things (which could be spiked in a couple of hours or so). In particular, the first thing I did was to use a single project, but configure additional artifacts (e.g. jar and native binary) for each variant. While I could make it work, the implementation turned out to be more complicated, because you have to understand how each of the plugins works (Micronaut, GraalVM, the Shadow plugin) and create exotic tasks to make things work. Also, this had a number of drawbacks:
impossible to build variants in parallel (at least without the experimental configuration cache)
configuring each variant’s specific build configuration (e.g. adding dependencies) was more complicated. In particular, it was only possible to add additional runtime dependencies. If something else was needed, for example compile-time dependencies or additional resources, it wasn’t possible because a single main jar was produced.
In this blog post, we have seen how we can leverage Gradle’s flexibility to support what seemed to be a complicated use case: given a common codebase and some "small tweaks", generate a matrix of builds which are used to build different artifacts, in order to benchmark them.
The solution turned out to be quite simple to implement, and I hope pretty elegant, both in terms of user facing features (adding dimensions and configuring the build should be easy), maintenance (composition over inheritance makes it very simple to understand how things are combined) and implementation.
Many thanks to Jonas Konrad for the feature requests and for reviewing this blog post!
20 January 2023
Tags: tourainetech peugeot electrique
Yesterday, I traveled to Tours for the Touraine Tech conference, where I gave a talk about Micronaut Test Resources. Thanks again to the organizers for accepting this talk, which, judging by the feedback I got, was rather well received! But that’s not the topic of this post: I simply want to tell you about my experience with my electric car, which I used to get to the conference.
Tours is not that far from my place, about 200 km. I had therefore decided to drive there in my e-208, whose theoretical range, with its 50 kWh battery (46 kWh usable), is advertised at 340 km. I bought this car 2 years ago and I’m overall very happy with it: I live in a rural area, we have no public transport, so this car is used for all the day-to-day trips. I can charge it at home without any problem. Until now, the longest trips I had made were done without charging: round trips to Pornic, where I have family, about 160 km there and back, and that went very well, especially in summer.
Now, there is a world of difference between the theoretical range and reality, especially in winter. I was therefore quite nervous about ending up stranded before reaching Tours, so I planned my trip with the ChargeMap application (I have one of their cards, and the Peugeot application is frankly not great: it’s impossible to plan a trip as well with it).
I wanted to do the round trip within the day, which meant being able to charge upon arriving in Tours. One of the problems is that "fast" charging stations are not that numerous. Another problem: it is impossible to know whether a charger will be occupied when you get there. The 208 has a CCS combo socket which accepts charging at up to 100 kW.
So I had 2 options:
stop at a fast charger (50 kW and above) before going to the conference
or leave my car at a slow charger near the conference and come back later in the day to free up the charger
I chose the first option, because I feared the charger might be occupied when I arrived, which would have meant 10 more minutes of driving to reach the fast charger and therefore losing time. Besides, it’s not very convenient to have to leave the conference and walk 1 km (potentially in the rain) during the day.
In short, I planned to be on the safe side. Here are the conditions of the trip:
departure at 5:32 am, the whole trip in eco mode
I left with the battery charged to 100% (I know this should be avoided, but for one thing I wasn’t going to risk having to stop at a slow charger along the way, and I didn’t want to arrive with less than 10% of battery, too stressful; for another, the Peugeot software doesn’t let you stop a charge once the battery reaches a given limit, for example 80%!)
I chose a route without motorways
I drove at 80 km/h on the secondary roads (including those limited to 90 km/h, in Maine-et-Loire), and between 90 and 100 km/h on the main roads
I drove smoothly: no hard accelerations, use of B mode for braking, etc…
the outside temperature was between 0 and 3 degrees, with the heating set to 18
The ChargeMap application has a feature which lets it send the planned route to Google Maps, which I used for navigation. Everything was going very well until, a bit after Saumur, I realized that the GPS had decided to make me take the motorway! The problem was that I clearly didn’t have enough range to drive at 130 km/h. Since driving at 90 km/h on the motorway is too dangerous, I went up to 110 km/h, and needless to say that given the weather conditions (cold), my remaining range was melting like snow in the sun. So I exited a bit further on to finish the trip along the banks of the Loire, as initially planned.
In the end, I arrived at my Allego charging station, at the Casino in La Riche, at 8:08 am: 180 km in 2 hours 36 minutes. From memory (the Peugeot app records the trips, but not the consumption; it’s incredible how far behind the software is compared to the competition!), my average consumption was around 16 kWh/100 km. I plugged in and charged for 49 minutes to reach 90% battery, i.e. 29 kWh, for a bill of €31.38, not exactly cheap (€1.082 per kWh!). I stopped at 90% because charging "slows down" as you approach full charge: I would have had to stay another good half hour (or more) to reach 100%, and I wanted to get to the conference.
I therefore arrived at Polytech’Tours at 9:12 am, so 3 hours 40 minutes in total, to be compared with the 2 hours 25 minutes it would have taken with my 407 diesel, which would do the round trip without any problem and without refueling (range of about 950 km…).
For the return trip, I knew I would be cutting it very close and that I would probably need an extra stop to recharge (because of the 10% of battery missing at departure). I didn’t take the motorway on the way back, and followed the banks of the Loire instead. The weather conditions were similar, but with more rain. I kept an eye on my range: at the start I had a 100 km margin according to the car’s estimate (that is to say, following its indications, I would get home with 100 km of range left), but as the trip went on, this estimate dropped noticeably. By the time I reached Cholet (about 40 km from home), there was only a 60 km margin left, even though I had lowered the cabin temperature to 16 degrees. Once again, I was driving in eco mode, smoothly, no traffic jams, nothing. In short, the range estimate is complete nonsense and totally unrealistic (note that in summer, it is much closer to reality).
Anyway, I was also hungry, and not being much of a gambler, I stopped at a fast charger along the way, next to a pizzeria, at the SIEML station in l’Ecuyère. Since I knew that whatever the charging time, I would have more than enough to get home, I just took the time to eat. When I picked up my car: not bad, 22.7 kWh recovered in 35 minutes of charging, for €9.62, 3 times cheaper than the charge in Tours (but to be compared with the €0.14/kWh when I charge at home…).
The experiment was conclusive: I know I can do this kind of trip, with a few concessions (late arrival time because of charging at the destination, no motorways, "limited" comfort, etc.), but it’s roughly the maximum distance I can do before it becomes too painful. On the other hand, I remain very skeptical about the "real" range: here, I was closer to 200 km while doing everything I could to save energy. Even in ideal conditions, never, ever, would I reach 340 km (the best is about 300 km). The software is also far too basic compared to the competition (yes, Tesla): the mobile application lacks basic features (charge limiting, consumption history, …) and the GPS for planning a trip is frankly useless. The remaining range estimate is unrealistic and, worse, you don’t really know where you stand with the charge: the "fuel gauge" style indicator is not suited to a battery (just tell me the remaining %, it’s much clearer!). Finally, one of the big issues remains charging and the outrageous prices being charged: it is practically impossible to know how much a trip will cost you, since depending on the conditions you will have to stop, or not, and prices vary depending on the charging power, the provider, etc.
When you leave with a combustion car, you know that fuel costs about €1.9/L, give or take 25%: with an electric car, forget it. It can vary fourfold. Does that mean I don’t recommend going electric? Not at all! For a start, I prefer the comfort of electric driving to a combustion engine a hundred times over. The car is also really pleasant to drive, and the immediately available power is an undeniable advantage of electric cars.
My alternative, for this trip, would have been to take my combustion car, but that would have been the easy way out. And given the climate emergency, I chose to give up a bit of comfort, for the sake of my conscience :)
Older posts are available in the archive.