Monitoring to optimize my electricity consumption

26 November 2022

Tags: solaire fioul électricité solaredge dualsun raspberry

Last July, I had solar panels installed. This installation considerably changes our consumption habits: instead of preferring off-peak hours, it is now better to consume when local production allows it. In this post, I explain what I have set up to help us optimize our consumption, in particular a monitoring tool built with a Raspberry Pi. The goal of the project is to reduce my electricity bill by maximizing self-consumption. We will see how installing the panels changed our habits.

(photo: the energy monitor, September)

Installing solar panels

The idea had been on our minds for a long time. We used to heat with fuel oil, which had a number of drawbacks: pollution (a heavy CO₂ emitter), highly variable prices (between €800 and €1400 per 1000L depending on the season), noise (the boiler), combustion smells, etc. The oil boiler provided both our hot water and our home heating. The house being relatively well insulated, the four of us still managed to burn only about 1000L of fuel oil per year. Five years ago, an old oil boiler had broken down in the middle of winter and we had to replace it urgently; I remember it very well, because I happened to be abroad at the time for the "Gradle World Meeting", a week of work with the whole team. So we had no time to gather quotes, in particular for alternative heating. But last March, bang, another breakdown. This time the boiler itself wasn't at fault: the fuel supply line between the tank and the boiler was clogged. The boiler kept going into fault mode for lack of a clean fuel feed. We then had the choice between having the tank and the piping cleaned, for a bill of around €2k, or switching.

At the same time, my wife and I both work from home 5 days a week. Our electricity consumption is therefore significant: powering the computers and screens, cooking at lunchtime, the space heater in my office, and so on. Some of our equipment also runs regularly: not being connected to the sewer system, for example, we have a micro wastewater treatment plant, with a recirculation pump and a lift pump, whose yearly consumption is far from negligible. We also own an electric car (an e-208) which is charged at home. Finally, in summer, the pool circulation pump consumes a lot. In the end, our electricity bill is much larger than our fuel oil bill.

This latest breakdown was therefore the opportunity to revisit our project. After several quotes, we opted for 16 solar panels from the French company DualSun: 10 classic "Flash" panels and 6 hybrid electricity/hot-water panels, for a total output of 6 kWc (kilowatts peak). The hybrid panels produce electricity and preheat the hot water. This was paired with an air-to-water heat pump, an Alfea Extensa A.I from the French brand Atlantic. It is a medium-temperature (55°C) air-to-water heat pump, which lets us keep our cast-iron radiators, at the cost (still to be determined) of higher consumption during cold snaps (which don't happen often here). The heat pump is paired with a hot water tank which uses the circuit preheated by the panels.

The 6 kWc lets us be fully self-sufficient during the day in summer (and probably part of spring/autumn, to be confirmed over time), and I should be able to sell part of the surplus production back to Enedis (but for administrative reasons, my application is still pending…).

My return-on-investment calculation came out at 12 years, assuming a 5% yearly increase in electricity prices. If prices rise faster, it will pay for itself sooner, but there is no way to know whether that will happen… given the energy crisis, I tend to believe it is not a bad bet…
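
For reference, this kind of break-even estimate is easy to reproduce. Here is a minimal sketch in Python (the cost and savings figures below are placeholders, not my actual numbers):

# Hypothetical figures: replace with your own installation cost and
# first-year savings. Electricity prices are assumed to rise 5% per
# year, so each year's savings grow accordingly.
cost = 20_000.0      # installation cost, in euros (placeholder)
savings = 1_600.0    # first-year savings, in euros (placeholder)
escalation = 1.05    # 5% yearly electricity price increase

year, recovered = 0, 0.0
while recovered < cost:
    year += 1
    recovered += savings * escalation ** (year - 1)
print(f"Break-even after {year} years")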

Finally, we didn't have batteries installed: although it would be extremely interesting in my case, to recover at night the surplus produced during the day, battery prices are still far too high (around €7k for 10 kWh).

Changing our consumption habits

Our consumption habits were actually quite simple:

  • during the day, we didn't have much choice: the computers had to be powered, etc.

  • the washing machine, dishwasher, etc. ran at night to benefit from off-peak hours

Unfortunately, our off-peak consumption was fairly limited compared to the rest. Everything changed with the solar panels:

  • this summer and even in late September, when the sky is clear, my panels produce up to 5.5 kW, which far exceeds my "live" consumption

  • we charge the car during the day via the "domestic" outlet (~2 kW) when possible, and at night in "fast" charge (6 kW) when the battery is too low

  • the washing machines run during the day instead of at night

But optimizing all this is complicated, and I wanted a system that would let my wife and kids know whether, for example, "now is the time" to start a load of laundry.

So I started a little tinkering project to build a monitoring system that would show my production and consumption live, and indicate in a simple way whether we have headroom or not.

A small complication

Of course, you might think this should be simple, with all the connected applications out there. Yes and no. My installation is peculiar: the solar panels are on the roof of my house, along with the inverter and the optimizers. These devices can report production live, and they also measure instantaneous consumption, which makes it possible to measure self-consumption directly: you can know in real time whether you are producing more than you consume, or the other way around. Perfect, right? Here is an example of a graph provided by SolarEdge in summer:

(graph: SolarEdge consumption/production, August)

Consumption is in red, solar production in green, and self-consumption in blue. If, like me, you are even slightly curious, you will already notice a problem: when there is solar production, the consumption curve starts tracking the production curve. It rises when production increases and falls when it drops: that is not logical, and probably a bug somewhere at SolarEdge. I asked for an explanation through my installer (they don't understand it either), but SolarEdge never got back to them.

For comparison, here is another graph from this month:

(graph: SolarEdge consumption/production, November)

You can already see consumption spikes, corresponding to the heat pump kicking in, or to cooking appliances. You can also see that the production window is narrower, but that there is still, at times, surplus production during the day (that is not always true: as soon as it rains, production is essentially zero).

Setting aside the bogus consumption value, that's perfect, you might say! Well, yes and no. In my case, there is a catch: the consumption sensor is at the inverter, next to my electrical panel. But here, I have several electrical panels: one in the main house, one in the garage (a separate building), and another in a technical shed in the garden. When I bought this house, it was wired for three-phase power, and from my main EDF meter, in my driveway, power lines run to 3 separate buildings. The phases were particularly unbalanced, so we had the installation converted to single-phase. The point is that the sensor would have needed to be at the utility meter, not inside my house, to measure live consumption correctly. For technical reasons, that wasn't possible: so I have an imperfect measurement of my consumption, which only covers the main house.

My solution

To sum up, I currently have a tool that gives me the production (we will see below how to get it), but not the consumption. However, my electricity provider (Total Energies) offers an ATOME key, which plugs into the Linky meter and reports live how much I draw from the grid. Unfortunately, Enedis doesn't provide any API for live consumption, so there is no choice but to rent the Total Energies key… I therefore acquired one, and I now have the 2 measurements I need:

  • the live consumption given by my ATOME key: careful, this is not my total consumption, but what I need from the EDF grid on top of my own production

  • the live production given by SolarEdge

It is then enough to take the difference between the 2 to know how much headroom we have, though in practice I can never know exactly how much I am consuming in total.
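
In code, the "headroom" logic is trivial. Here is a minimal sketch (the thresholds are arbitrary, and the two inputs come from the API calls described below):

def headroom(production_w: float, grid_draw_w: float) -> float:
    # production_w comes from SolarEdge, grid_draw_w from the ATOME
    # key (grid import only). Their difference is the margin we can
    # still use without drawing more from the grid.
    return production_w - grid_draw_w

def advice(margin_w: float) -> str:
    # Arbitrary thresholds: tune them to your own appliances.
    if margin_w > 2000:
        return "GO: plenty of surplus, start the washing machine"
    if margin_w > 0:
        return "OK: small surplus, light loads only"
    return "WAIT: we are importing from the grid"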

My idea was therefore to use an old Raspberry Pi that was gathering dust in my closet, combined with an e-ink display, to show this production and consumption, along with a "grade" suggesting whether to start a load of laundry, for example.

So I bought this display, an e-ink screen whose standby consumption is zero, which, for an energy consumption monitoring tool, seemed like the bare minimum. We will see, however, that it wasn't without drawbacks.

APIs, the weak point as always

Now that we know the data is available, via the SolarEdge site for production and via the Total Energies application for the live key, I needed that data exposed through APIs that my Raspberry Pi could query.

And there, a cold shower:

  • SolarEdge does offer a developer API, but it is neither particularly well documented (you have to figure out by yourself what the returned fields mean), nor unlimited: you can only make 300 requests per day, that is, slightly less than one request every 5 minutes! This is all the more regrettable given that the information is available continuously, live, through their web interface!

  • for Total Energies, it is even worse: there is no official API at all. You have to "hack" your way in, by simulating a login from the mobile application, which reports the live consumption

In short, neither SolarEdge nor Total offers a push-style API, or an event bus you could listen to for the information. That is very disappointing, at a time when this kind of optimization is becoming critical to managing our electricity consumption properly: it is a tool for the climate!
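
For reference, here is roughly what polling the SolarEdge monitoring API looks like in Python (a sketch: the site id and API key are placeholders, and given the 300-requests/day quota you don't want to call it more often than every 5 minutes):

import requests

SITE_ID = "123456"   # placeholder: your SolarEdge site id
API_KEY = "XXXX"     # placeholder: your SolarEdge API key

def current_production_watts() -> float:
    # The documented /overview endpoint returns, among other fields,
    # the current power output of the installation.
    url = f"https://monitoringapi.solaredge.com/site/{SITE_ID}/overview"
    resp = requests.get(url, params={"api_key": API_KEY}, timeout=10)
    resp.raise_for_status()
    return resp.json()["overview"]["currentPower"]["power"]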

The other problem is that even using the official SolarEdge API, and even when I manage to fetch the Total Energies data, these APIs are unstable: they "break down" very often and return no data at all. When it works, it is perfect, but often it simply doesn't; right now, for instance, my production reads 0 when that is clearly not the case:

(photo: the energy monitor, November)

At least this tells me that right now I am drawing 1135W from the grid, which means I am consuming noticeably more (between the water heater, my son's computer running Minecraft, and my other boy's PS5).

A bit of technical detail

So how do I actually fetch this information? I adapted a Python script which, every 5 minutes, connects to these 2 APIs, retrieves the data, and triggers a refresh of the display. Now, Python is personally not my cup of tea. It feels like writing PHP again, with messy scripts and global variables everywhere. There is surely room for improvement, but for tinkering over SSH on my Raspberry Pi, it is all I have for now.
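
Stripped of the drawing code, the heart of the script looks something like this (a sketch reusing the helpers from above; current_grid_draw_watts and refresh_display are hypothetical stand-ins for the unofficial ATOME call and the screen-specific code):

import time

POLL_INTERVAL = 300  # seconds: at most one SolarEdge call every 5 minutes

def safe(fetch):
    # The APIs break down often enough that every call needs a fallback.
    try:
        return fetch()
    except Exception:
        return None

while True:
    production = safe(current_production_watts)  # SolarEdge, sketched above
    grid_draw = safe(current_grid_draw_watts)    # ATOME key (hypothetical helper)
    if production is not None and grid_draw is not None:
        refresh_display(production, grid_draw, advice(production - grid_draw))
    time.sleep(POLL_INTERVAL)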

You will find the full script here.

Among the problems, I mentioned the choice of the screen. I admit I was fairly naive: while the e-ink display is very pleasant to look at, refreshing it takes… up to 30s! Drawing on these screens works in a peculiar way: you write memory buffers, then send instructions to the screen to clear this or that zone, etc. These instructions are very slow to execute, and worse, they systematically cause a strange visual effect, with the screen flashing from white to black several times, starting to display things, then the color, etc. Not great for a "live" display, but in the end, sufficient for my use.

Conclusion

In the end, I still got a tool close to what I wanted. It has already allowed us to adapt our consumption: anyone can glance at the screen and decide to start a load of laundry if production is high, whereas before, that information was only available to me, through a mobile application. Here, the information is displayed continuously, passively, on the screen.

In autumn, for example, we were able to adapt our consumption: the heat pump wasn't running (no need for heating), so we could charge the car, both work from home, and run a load of laundry, without drawing a single kWh from the EDF grid! When the monitor showed we were starting to draw from the grid, we just had to stop charging the car (unfortunately, the MyPeugeot application is extremely limited and doesn't allow interrupting a charge, only deferring it, but that's another problem).

Since July, our electricity consumption is down 70%, which is huge. That said, weather conditions have been very favorable so far: lots of sun, and record temperatures in October and November (unfortunately for the climate…). For the past week, the heat pump has been kicking in regularly to hold 19°C, and I am seeing surprising numbers:

  • 90 kWh consumed in November to heat the water

  • and only 9 kWh for heating!

Time will tell whether switching to a heat pump for heating was a good idea or not (then again, we didn't really have a choice here). Given the surplus production we often have during the day, it would make sense to install batteries to maximize self-consumption. Unfortunately, as I said, the price is still far too high. So I will have to settle for selling my surplus (at 10 cents per kWh, a price fixed for 20 years (!!), versus roughly 15 cents when I consume). One caveat: when you sell back and your installation, like ours, exceeds 3 kWc, the income has to be declared for taxes. That makes selling back much less attractive: even though our surplus does get "consumed", just at different hours, the buy-back rate together with the tax declaration significantly reduces the profitability.


Hello, Mastodon (goodbye Twitter!)

06 November 2022

Tags: twitter mastodon

It should be no news to anyone that Elon Musk finally took control of Twitter. There were reasons to be worried about this move, but the recent events at Twitter made it worse than most of us would have thought.

TL;DR: I am moving primarily to Mastodon. You can now follow me at @melix@mastodon.xyz.

Twitter as a self-promotion tool

Lots of people have a complicated history with Twitter, and I am no different. I joined Twitter back in January 2010, and I have grown to more than 4,800 followers today; from time to time, I have suffered from the sheer negativity of the network. Last year, I was even at the center of a harassment story, because some trolls thought I was someone else.

However, Twitter has been my main communication channel for professional work: this is where I explain what I work on, announce software updates, mention events I will attend or speak at, or publish links to blog posts like this one. It is also my main feedback channel and where I do most of my technology watch.

My community of followers doesn't make me an "influencer" at large scale, like a rock star, but it has definitely had an influence on my professional career. For example, several potential employers explicitly mentioned my "influence" and social media presence as a major reason why they would like to work with me. I have no doubt that without Twitter, my career wouldn't have been what it is today. For that, I am forever grateful.

I also use my Twitter account for personal stuff. Most notably, some of you have seen my astronomy pictures. You may also have seen some political posts and points of view being expressed. While I use Twitter professionally, I have always made it clear that the opinions I express are not representative of my employer's point of view. Said differently, I see my Twitter account as a tool, but a tool which "survives" my different employers. To some extent, it is part of my digital identity.

Then came Elon Musk

I was already very critical of Elon Musk, and not just because of the infamous Starlink project, about which I have a lot to say. One can admire what he created with Tesla or SpaceX (and I do), but at the same time recognize that he is a terrible human being. It is all about business, all about money, and "freedom of speech" only when it suits his point of view. Elon makes people work as hard as they can until they burn out, or simply get fired once they have "done their job". Elon also has a "special" view of free speech, where those who pay get a right to speak, while those who don't are no longer visible. He is also the one who blocks anyone criticizing him.

As a consequence, the head of Twitter, and therefore Twitter itself, now represents everything I fight against in my personal life. This is the world I don't want for my children. This is the world which puts money before human beings, the world where power and masculinity are emphasized. Elon doesn't care that we exhaust resources, as long as part of the population can survive, be it on Mars (!).

But now, he has fired half of the company, by email, ignoring the law, because, you know, Musk does whatever he wants. It happens that I know people who got fired, and I also know folks who used to work for Twitter. I, for one, am respectful of what people produce. I am respectful of the effort it takes to build a site like Twitter, and, in general, I am someone who puts trust in other people at the top of my hierarchy of beliefs. So I find it extremely annoying, or, to be clear, disgusting, when someone who has absolutely no understanding of how such a large community was built over the years decides to lay off half of a company and change the spirit of the website, so that paying customers have more power than the others and the site better matches his political point of view. Even if the company is losing $4M a day, compared to the $44B he paid, seriously, what would it have taken to show the people who built Twitter a bit of respect and simply put together, say, a voluntary departure plan?

Elon Musk's behavior is everything I hate: someone coming in because they have power (read: money), and then ruining other people's lives just because they can, without any respect.

This puts me in a difficult situation: I need Twitter, because it is a professional tool which can't easily be replaced, and at the same time I have to go, because there is no way I am going to pay 8 dollars a month to a company that treats human beings like that, simply to get more visibility than others because I can afford it.

Hello, Mastodon!

As a FOSS supporter, I had registered on Mastodon back in 2017. My account was left more or less inactive for years because, honestly, not many people used it, so it was… annoying. It was like talking in an echo chamber. Until the rumor of Elon Musk acquiring Twitter started: we have seen more and more people joining, and I am super glad that this week alone, many other people, including from the tech industry, decided to make the move.

No more ads. No more recommendations. A clean timeline, as it should be.

It’s really refreshing to be on Mastodon, it feels like the Twitter from the early days.

However, Mastodon has a number of key differences, which make it more "complicated" for users to understand. First of all, and that is the biggest hurdle to joining, Mastodon is not a single site like Twitter: it is a federation of servers. Just like you have a provider for your email (say GMail, Hotmail, etc.), you choose your Mastodon server provider. And since it is an OSS project, you can even go as far as hosting your own instance.

Therefore, Mastodon is, by nature, distributed, which makes it completely immune to what just happened to Twitter. But it also means that there are a few things you should be aware of when you join a server:

  • content moderation is the responsibility of the server administrator: they can block you, and read your posts and private messages. They can also block other providers, which means, for example, that if you choose an instance which has a Code of Conduct, you have to follow its rules.

  • the good news is that you own the content: if you are not happy with your server's moderation rules, you can move to a different instance, and take your data with you: toots, folks you follow, followers, …

  • the cost of maintenance is distributed across the community: there are lots of free servers out there, but you can choose to contribute to the bills. You can even run your own server, for a price not much higher than what Elon wants us to pay for a blue mark

  • hashtags are much more important on Mastodon than on Twitter: there is no global search; the only things indexed are hashtags. You won't find content that isn't tagged with a hashtag.

This is the internet I remember. It is not suitable for everyone, but it is what I call open. What I also like is that it respects freedom of speech, while preserving your freedom not to see assholes (there are instances full of alt-right folks, but at least we can block the whole server and not have to suffer their nauseating posts).

Conclusion

So, for the time being, I’m transitioning to Mastodon. You can follow me at @melix@mastodon.xyz, and I strongly suggest that you do the same and find a server which suits you. I will not support a company which now represents the worst of human beings. I will not pay $8 a month to a company which shows no respect to its employees, content creators and business partners.

Because it is hard to say goodbye to a professional network just like that, I do not plan to shut down my Twitter account yet, though: I still haven't found a way to resolve this cognitive dissonance.

So what I plan on doing is moving to Mastodon first. I have already updated my display name on Twitter to link to my Mastodon account, and I suggest you do the same. I will now limit my tweets to what is strictly required for my professional career (announcements, etc.). I will also mostly reply to tweets related to moving to Mastodon, and use Mastodon for everything else, until I can completely get rid of Twitter. I am aware that I may never be able to get rid of it entirely, if too many people stay on Twitter. So be it.

Congratulations, Mr. Musk, you ruined Twitter!


Astrophotography: the follow-up!

05 October 2022

Tags: astrophotographie twitch

Hi everyone!

I had promised you a second Twitch live stream about astrophotography, this time covering the software side. I have decided to stop procrastinating: it will happen on Wednesday, October 12 at 8pm on my Twitch channel.

If you missed the first part, about astrophotography hardware, the replay is available on Youtube.

In this live stream, I will cover how, starting from the raw data acquired during the night, you can get beautiful pictures thanks to the magic of software. We will talk about flats, darks, pre-processing, stacking, … lots of complicated-sounding terms which, in the end, are not that hard to understand.

In short, time for me to get all of this ready!


Introducing Micronaut Test Resources

04 August 2022

Tags: micronaut testcontainers docker test testing

The new Micronaut 3.6 release introduces a feature I have worked on for the past couple of months, called Micronaut Test Resources. This feature, inspired by Quarkus' Dev Services, will greatly simplify testing of Micronaut applications, both on the JVM and in GraalVM native images. Let's see how.

Test resources in a nutshell

Micronaut Test Resources simplifies testing of applications which depend on external resources, by handling the provisioning and lifecycle of such resources automatically. For example, if your application requires a MySQL server, in order to test the application, you need a MySQL database to be installed and configured, which includes a database name, a username and a password. In general, those are only relevant for production, where they are fixed. During development, all you care about is having one database available.

Here are a couple of traditional solutions to this problem:

  1. document that a MySQL server is a prerequisite, and give instructions about the database to create, credentials, etc. This can be simplified by using Docker containers, but there is still manual setup involved.

  2. use a library like Testcontainers in order to simplify the setup

In general, using Testcontainers is preferred, because it integrates well with the JVM and provides an API which can be used in tests to spawn containers and interact with them. However, a better integration between Micronaut and Testcontainers can improve the developer experience in several ways:

  • simplify the container lifecycle configuration, by providing an opinionated, framework-specific default: tests shouldn't need to deal with the container lifecycle at all; we would like test container/resource management to be as transparent as possible.

  • isolate it better from your application, making it simpler to reason about dependencies (and transitive dependencies), not just for the developer, but also for tools enabling native mode: Testcontainers APIs "leak" into the test classpath, making it difficult to run tests in native mode. This problem is not specific to the Testcontainers library, though: many libraries are not yet compatible with GraalVM. Our solution makes it possible to use Testcontainers in native tests without the hassle of configuring it!

  • enable support for "development mode", that is, when you run the application locally (not the tests, the application itself), or even when several distinct projects run at once and can benefit from sharing the same running containers (for example, an MQTT client and a server may want to use the same container, even if they are separate projects living in distinct Git repositories).

The goal of Micronaut Test Resources is to achieve all of these at once:

  • zero-configuration: without adding any configuration, test resources should be spawned and the application configured to use them. Configuration is only required for advanced use cases.

  • classpath isolation: use of test resources shouldn’t leak into your application classpath, nor your test classpath

  • compatible with GraalVM native: if you build a native binary, or run tests in native mode, test resources should be available

  • easy to use: the Micronaut build plugins for Gradle and Maven should handle the complexity of figuring out the dependencies for you

  • extensible: you can implement your own test resources, in case the built-in ones do not cover your use case

  • technology agnostic: while lots of test resources use Testcontainers under the hood, you can use any other technology to create resources

In addition, Micronaut Test Resources supports advanced development patterns, which are useful in the microservices era. As an example, it is capable of sharing containers between submodules of a single build, or even between independent projects from different Git repositories! Say you have 2 projects, one built with Gradle, the other with Maven, both needing to communicate over the same message bus: Micronaut is capable of handling this use case for you, making it extremely easy to test components interacting with each other!

Given these constraints, we decided to keep Testcontainers, because the library is just perfect for the job, but to run it in an isolated process, as I am going to describe below. Note that this solution is also 100% compatible with Testcontainers Cloud, which makes container provisioning even easier!

Using Micronaut Test Resources

Enabling test resources support

Micronaut Test Resources integrates with build tools. In both Maven and Gradle, you need to enable test resources support. If you create a new project using Micronaut Launch or the Micronaut CLI, test resources will be configured for you, but if you migrate an existing application to test resources, here’s what you need to do:

If you are using Maven, you will need to upgrade to the Micronaut 3.6 parent POM and add the following property:

<properties>
   <micronaut.test.resources.enabled>true</micronaut.test.resources.enabled>
</properties>

For Gradle, you can use test resources with Micronaut 3.5+ and you simply need to use the test resources plugin:

plugins {
    id 'io.micronaut.application' version '3.5.1'
    id 'io.micronaut.test-resources' version '3.5.1'
}

Our first test resources contact

In this blog post we will write an application which makes use of Micronaut Data and connects to a MySQL server to list books. The whole application code is available on GitHub, so I’m only going to show the relevant parts for clarity.

In such an application, we typically need a repository:

@JdbcRepository(dialect = Dialect.MYSQL)
public interface BookRepository extends CrudRepository<Book, Long> {
    @Override
    List<Book> findAll();
}

And this repository makes use of the Book class:

@MappedEntity
public class Book {
    @Id
    @GeneratedValue(GeneratedValue.Type.AUTO)
    private Long id;

    private String title;

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public String getTitle() {
        return title;
    }

    public void setTitle(String title) {
        this.title = title;
    }
}

In order for Micronaut to use the database, we need to add some configuration to our application.yml file:

datasources:
  default:
    schema-generate: CREATE
    db-type: mysql

The most important thing to see is that we don’t specify any username, password or URL to connect to our database: the only thing we have to specify is the database type of our datasource. We can then write the following test:

@MicronautTest
class DemoTest {

    @Inject
    BookRepository bookRepository;

    @Test
    @DisplayName("A MySQL test container is required to run this test")
    void testItWorks() {
        Book book = new Book();
        book.setTitle("Yet Another Book " + UUID.randomUUID());
        Book saved = bookRepository.save(book);
        assertNotNull(saved.getId());
        List<Book> books = bookRepository.findAll();
        assertEquals(1, books.size());
    }

}

The test creates a new book, stores it in the database, then checks that we get the expected number of books when reading the repository. Note, again, that we didn’t have to specify any container whatsoever. In this blog post I’m using Gradle, so we can verify the behavior by running:

./gradlew test

Then you will see the following output (cleaned up for clarity of this blog post):

i.m.testresources.server.Application - A Micronaut Test Resources server is listening on port 46739, started in 128ms
i.m.t.e.TestResourcesResolverLoader - Loaded 2 test resources resolvers: io.micronaut.testresources.mysql.MySQLTestResourceProvider, io.micronaut.testresources.testcontainers.GenericTestContainerProvider
o.testcontainers.DockerClientFactory - Connected to docker:
  Server Version: 20.10.17
  API Version: 1.41
  Operating System: Linux Mint 20.3
  Total Memory: 31308 MB
🐳 [testcontainers/ryuk:0.3.3] - Creating container for image: testcontainers/ryuk:0.3.3
🐳 [testcontainers/ryuk:0.3.3] - Container testcontainers/ryuk:0.3.3 is starting: 1f5286fa728aca74a7d6d4c0eb2148a3bc81f5c028027496d7aabda7b7ed45e8
🐳 [testcontainers/ryuk:0.3.3] - Container testcontainers/ryuk:0.3.3 started in PT0.655476S
o.t.utility.RyukResourceReaper - Ryuk started - will monitor and terminate Testcontainers containers on JVM exit
🐳 [mysql:latest] - Creating container for image: mysql:latest
🐳 [mysql:latest] - Container mysql:latest is starting: d796c7a1ce10f393a4181f12967ee77ac9864f45595f97967c700f022e86ac7d
🐳 [mysql:latest] - Waiting for database connection to become available at jdbc:mysql://localhost:49209/test using query 'SELECT 1'
🐳 [mysql:latest] - Container is started (JDBC URL: jdbc:mysql://localhost:49209/test)
🐳 [mysql:latest] - Container mysql:latest started in PT7.573915S

BUILD SUCCESSFUL in 11s
7 actionable tasks: 2 executed, 5 up-to-date

What does this tell us? First, that a "Micronaut Test Resources server" was spawned, for the lifetime of the build. When the test was executed, this service was used to start a MySQL test container, which was then used during tests. We didn’t have to configure anything, test resources did it for us!

Running the application

What is also interesting is that this also works if you run the application in development mode. Using Gradle, you do this by invoking ./gradlew run (mvn mn:run with Maven): as soon as a bean requires access to the database, a container will be spawned, and automatically shut down when you stop the application.

Note
Of course, in production, there won’t be any server automatically spawned for you: Micronaut will rely on whatever you have configured, for example in an application-prod.yml file. In particular, the URL and credentials to use.

What is even nicer is that you can use this in combination with Gradle’s continuous mode!

To illustrate this, let’s create a controller for our books:

@Controller("/")
public class BookController {
    private final BookRepository bookRepository;

    public BookController(BookRepository bookRepository) {
        this.bookRepository = bookRepository;
    }

    @Get("/books")
    public List<Book> list() {
        return bookRepository.findAll();
    }

    @Get("/books/{id}")
    public Book get(Long id) {
        return bookRepository.findById(id).orElse(null);
    }

    @Delete("/books/{id}")
    public void delete(Long id) {
        bookRepository.deleteById(id);
    }
}

Now start the application in continuous mode: ./gradlew -t run

You will see that the application starts a container as expected:

INFO  io.micronaut.runtime.Micronaut - Startup completed in 9166ms. Server Running: http://localhost:8080

Notice how it took about 10 seconds to start the application, most of it spent starting the MySQL test container itself. You definitely don't want to pay this price for every change you make, and this is where the continuous mode helps. If we ask for the list of books, we'll get an empty list:

$ http :8080/books
HTTP/1.1 200 OK
Content-Type: application/json
connection: keep-alive
content-length: 2
date: Tue, 26 Jul 2022 16:59:51 GMT

[]

This is expected, but notice how we didn’t have a method to actually add a book to our store. Let’s fix this by editing the BookController.java class without stopping the server. Add the following method:

    @Get("/books/add/{title}")
    public Book add(String title) {
        Book book = new Book();
        book.setTitle(title);
        return bookRepository.save(book);
    }

Save the file and notice how Gradle instantly reloads the application, but doesn’t restart the database: it’s already there so it’s going to reuse it!

In the logs you will see something like this:

INFO  io.micronaut.runtime.Micronaut - Startup completed in 1086ms. Server Running: http://localhost:8080

This time the application started in just a second! Let’s add a book:

$ http :8080/books/add/Micronaut%20in%20action
HTTP/1.1 200 OK
Content-Type: application/json
connection: keep-alive
content-length: 38
date: Tue, 26 Jul 2022 17:03:57 GMT

{
    "id": 1,
    "title": "Micronaut in action"
}

However, if we stop the application (by hitting CTRL+C) and start again, you will see that the database will be destroyed when the application shuts down. Let’s see how we can "survive" different build invocations.

Keeping the service alive

By default, the test resources service is short-lived: it is started at the beginning of a build and shut down at the end. This means it will live as long as you have tests running or, in development mode, as long as the application is alive. However, you can make it survive the build, and reuse the containers across several independent build invocations.

To do this, you need to explicitly start the test resources service:

./gradlew startTestResourcesService

This starts the test resources service in the background: it did not start our application, nor did it run tests. This means that now, we can start our application:

./gradlew run

And, because it’s the first time the application is launched since we started the test resources service, it’s going to spawn a test container:

INFO  io.micronaut.runtime.Micronaut - Startup completed in 9211ms. Server Running: http://localhost:8080

We can add our book:

$ http :8080/books/add/Micronaut%20in%20action
HTTP/1.1 200 OK
Content-Type: application/json
connection: keep-alive
content-length: 38
date: Tue, 26 Jul 2022 17:03:57 GMT

{
    "id": 1,
    "title": "Micronaut in action"
}

The difference is that now, if we stop the application (e.g. by hitting CTRL+C) and start it again, it will reuse the container:

INFO  io.micronaut.runtime.Micronaut - Startup completed in 895ms. Server Running: http://localhost:8080

If we list our books, the database wasn’t cleaned, so we’ll get the book we created from the previous time we started the app:

$ http :8080/books
HTTP/1.1 200 OK
Content-Type: application/json
connection: keep-alive
content-length: 40
date: Tue, 26 Jul 2022 17:14:40 GMT

[
    {
        "id": 1,
        "title": "Micronaut in action"
    }
]

Nice, right? However there’s a gotcha if you do this: what happens if we run tests?

$ ./gradlew test

> Task :compileTestJava
Note: Creating bean classes for 1 type elements

> Task :test FAILED

DemoTest > A MySQL test container is required to run this test FAILED
    org.opentest4j.AssertionFailedError at DemoTest.java:28

Why is that? This is simply because our tests expect a clean database, and we had a book in it, so keep this in mind if you’re using this mode.

At some point, you will want to close all open resources. You can do this by explicitly stopping the test resources service:

./gradlew stopTestResourcesService

Now, you can run the tests again and see them pass:

$ ./gradlew test

...
INFO  🐳 [testcontainers/ryuk:0.3.3] - Creating container for image: testcontainers/ryuk:0.3.3
INFO  🐳 [testcontainers/ryuk:0.3.3] - Container testcontainers/ryuk:0.3.3 is starting: ea2aa1c7f1e66a9c7306b00443e8a6693451f3f02bd780b3e2ed7b96ed59936a
INFO  🐳 [testcontainers/ryuk:0.3.3] - Container testcontainers/ryuk:0.3.3 started in PT0.553559699S
INFO  o.t.utility.RyukResourceReaper - Ryuk started - will monitor and terminate Testcontainers containers on JVM exit
INFO  o.testcontainers.DockerClientFactory - Checking the system...
INFO  o.testcontainers.DockerClientFactory - ✔︎ Docker server version should be at least 1.6.0
INFO  🐳 [mysql:latest] - Creating container for image: mysql:latest
INFO  🐳 [mysql:latest] - Container mysql:latest is starting: 1c6437a55b8f9e5668bcec4aef27087c889b8a77ca18d2ddf58809853482a422
INFO  🐳 [mysql:latest] - Waiting for database connection to become available at jdbc:mysql://localhost:49227/test using query 'SELECT 1'
INFO  🐳 [mysql:latest] - Container is started (JDBC URL: jdbc:mysql://localhost:49227/test)
INFO  🐳 [mysql:latest] - Container mysql:latest started in PT7.469460173S

BUILD SUCCESSFUL in 11s
7 actionable tasks: 2 executed, 5 up-to-date

Native testing

Did you know that you can run your test suite in native mode? That is to say, the test suite is compiled into a native binary which runs the tests. One issue with this approach is that it is extremely complicated to make it work with Testcontainers, as it requires additional configuration. With Micronaut Test Resources, there is no such problem: you can simply invoke ./gradlew nativeTest and the tests will run properly. This works because the Testcontainers libraries do not leak into your test classpath: the process responsible for managing the lifecycle of test resources is isolated from your tests!

Under the hood

How does that work?

In a nutshell, Micronaut is capable of reacting to the absence of a configured property. For example, a datasource, in order to be created, needs the value of the datasources.default.url property. Micronaut Test Resources works by injecting those properties at runtime: when the property is read, it triggers the creation of a test resource. For example, we can start a MySQL server, then inject the resulting JDBC url as the datasources.default.url property. This means that in order for test resources to work, you need to remove configuration (note that for production, you will need to provide an additional configuration file, for example application-prod.yml, which provides the actual values).
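
For example, a production configuration file could look like this (a sketch with placeholder values: the point is simply that the properties resolved by test resources during development are provided explicitly in production):

# application-prod.yml (placeholder values)
datasources:
  default:
    url: jdbc:mysql://prod-db.example.com:3306/books
    username: books_app
    password: ${DB_PASSWORD}
    driverClassName: com.mysql.cj.jdbc.Driver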

The entity responsible for resolving missing properties is the "test resources server": a long-lived process, independent from your application, which manages the lifecycle of test resources. Because it is independent from the application, it can contain dependencies which are not required by your application, such as, typically, the Testcontainers runtime. But it may also contain additional classes, like JDBC drivers, or even your custom test resources resolver!

Because this test resources server is a separate process, it also means it can be shared by different applications, which is the reason why we can share the same containers between independent projects.

Once you understand that Micronaut Test Resources works by resolving missing properties, it becomes straightforward to configure. In particular, we offer configuration which makes it very easy to support scenarios that are not covered out of the box. For example, Micronaut Test Resources supports several JDBC or reactive databases (MySQL, PostgreSQL, MariaDB, SQL Server and Oracle XE), as well as Kafka, Neo4j, MQTT, RabbitMQ, Redis, Hashicorp Vault and Elasticsearch, but what if you need a different container?

In that case, Micronaut Test Resources offers a conventional way to create such containers, by simply adding a few configuration lines: the documentation demonstrates, for example, how to use the fakesmtp SMTP server with Micronaut Email.
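
To give a rough idea of what this looks like, a generic container can be declared with a few configuration lines along these lines (adapted from my memory of the documentation; double-check the exact property names there):

# Declares a generic test resources container (sketch)
test-resources:
  containers:
    fakesmtp:
      image-name: ghusta/fakesmtp
      hostnames:
        - smtp.host
      exposed-ports:
        - smtp.port: 25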

Custom test resources

If the configuration-based support isn't sufficient, you also have the ability to write your own test resources. If you use Gradle, which I of course recommend, this is made extremely easy by the test resources plugin, which creates an additional source set for this purpose, named testResources. With Maven, you would have to create an independent project manually to support this scenario.

As an illustration, let’s imagine that we have a bean which reads a configuration property:

@Singleton
public class Greeter {
     private final String name;

     public Greeter(@Value("${my.user.name}") String name) {
         this.name = name;
     }

     public String getGreeting() {
     	return "Hello, " + name + "!";
     }

     public void sayHello() {
         System.out.println(getGreeting());
     }
}

This bean requires the my.user.name property to be set. We could of course set it in an application-test.yml file, but for the sake of the exercise, let’s imagine that this value is dynamic and needs to be read from an external service. We will implement a custom test resources resolver for this purpose.

Let’s create the src/testResources/java/demo/MyTestResource.java file with the following contents:

package demo;

import io.micronaut.testresources.core.TestResourcesResolver;

import java.util.Collection;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Optional;

public class MyTestResource implements TestResourcesResolver {

    public static final String MY_TEST_PROPERTY = "my.user.name";

    @Override
    public List<String> getResolvableProperties(Map<String, Collection<String>> propertyEntries, Map<String, Object> testResourcesConfig) {
        return Collections.singletonList(MY_TEST_PROPERTY); // (1)
    }

    @Override
    public Optional<String> resolve(String propertyName, Map<String, Object> properties, Map<String, Object> testResourcesConfiguration) {
        if (MY_TEST_PROPERTY.equals(propertyName)) {
            return Optional.of("world");                    // (2)
        }
        return Optional.empty();
    }

}
  1. Tells that this resolver can resolve the my.user.name property

  2. Returns the value of the my.user.name property

And in order for the resolver to be discovered, we need to create the src/testResources/resources/META-INF/services/io.micronaut.testresources.core.TestResourcesResolver file with the following contents:

demo.MyTestResource

Now let’s write a test for this by adding the src/test/java/demo/GreeterTest.java file:

package demo;

import io.micronaut.context.annotation.Requires;
import io.micronaut.test.extensions.junit5.annotation.MicronautTest;
import jakarta.inject.Inject;
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.DisplayName;
import org.junit.jupiter.api.Test;

@MicronautTest
class GreeterTest {

    @Inject
    Greeter greeter;


    @Test
    @DisplayName("Says hello")
    void saysHello() {
        greeter.sayHello();
        Assertions.assertEquals("Hello, world!", greeter.getGreeting());
    }

}

Now if you run ./gradlew test, you will notice that Gradle compiles your custom test resource resolver, and when the test starts, you will see the following line:

Loaded 3 test resources resolvers: demo.MyTestResource, io.micronaut.testresources.mysql.MySQLTestResourceProvider, io.micronaut.testresources.testcontainers.GenericTestContainerProvider

So when the Greeter bean is created, it will read the value of the my.user.name property by calling your custom test resolver! Of course this is a very simple example, and I recommend that you take a look at the Micronaut Test Resources sources for more examples of implementing resolvers.

Conclusion

In this blog post, we have explored the new Micronaut Test Resources module, which will greatly simplify the development of Micronaut applications that depend on external services like databases or message queues. It works by simplifying configuration: lines which used to be required, like datasources.default.url, are now dynamically resolved. Test resources are handled in a separate process, the test resources server, which is responsible for their lifecycle. This also makes it possible to share resources (containers, databases, …) between independent builds. For advanced use cases, Micronaut Test Resources provides configuration-based resource creation.

Last but not least, Micronaut Test Resources is an extensible framework which lets you implement your own test resources in case the built-in ones miss a feature.

Special thanks to Tim Yates for his hard work on upgrading the Micronaut Guides to use test resources, and Álvaro Sanchez-Mariscal for his support on the Maven plugin!



Older posts are available in the archive.