JSol’Ex 4.0.0 is out!

11 septembre 2025

Tags: solex jsolex solar astronomy ai claude bass2000

On February 18, 2023, I went up to the Pic du Midi de Bigorre Observatory to spend a "Nuit au Sommet" (a "Night at the Summit"), an experience combining night-time observation with a tour of the scientific facilities. There I discovered, with fascination, the Bernard Lyot coronagraph and the CLIMSO experiment, run by the Observateurs Associés of the Observatoire Midi-Pyrénées.

With stars in my eyes (and one in particular, our Sun), my life as an amateur astronomer, until then focused on the deep sky, was about to shift towards observing our Sun. Back home, I started looking into solar equipment and was stunned to discover how much you have to invest to observe our star safely: full-aperture filters, etalons, ... everything came at a cost far beyond a beginner like me. Discouraging, until I stumbled by chance upon the Sol’Ex project, initiated by the legendary Christian Buil.

This very educational project lets you build your own spectroheliograph using 3D printing. Such an instrument has many advantages: modest cost, high resolution, and the ability to observe at many wavelengths, all while producing data that can be used for science!

The downside of this instrument, if it really is one, is that it does not produce an image of the Sun that can be viewed directly through an eyepiece. You need reconstruction software which, from a video recording of a "scan" of the Sun, generates one (or more) images of the Sun. This reconstruction process, quite mysterious to me at the time, was handled by a piece of software written by Valérie Desnoux, called INTI. Being a (Java) developer, I set myself a challenge: since I didn’t understand how it worked, I would try to write my own software to perform this reconstruction. JSol’Ex was born from that appetite.

JSol’Ex 4.0 and scientific collaboration

Here we are, two and a half years later, and JSol’Ex reaches version 4. In the meantime, the number of users has exploded, and my software is used not only by owners of Christian Buil’s Sol’Ex, but also with commercial spectroheliographs such as the MLAstro SHG 700 (which I now use myself). JSol’Ex has brought many innovations, such as the ability to run scripts to generate animations, perform stacking and create custom images, as well as automatic detection of flares and active regions (with annotations), Ellerman bomb detection, jagged edge correction and more.

Recently, JSol’Ex was used to produce an Atlas of Spectroheliograms, and it was mentioned in a paper precisely about the detection of Ellerman bombs. In short, JSol’Ex had become mature enough to do "real science", something which, to be honest, has always scared me a little.

This is why I had always refused to add a feature to JSol’Ex that had nevertheless been requested many times: the ability to use it to submit images to the BASS2000 database of the Paris-Meudon Observatory. There were several reasons for this. First, I did not trust my own software’s "scientific" quality: I knew it could produce "pretty pictures", but using it for scientific exploitation is a very different matter. Second, out of respect for her work, there was no question of my integrating a feature that had been developed by Valérie Desnoux (the author of INTI) and the Meudon team without her agreement.

However, as you will have gathered, times have changed: user pressure, along with my discussions with Florence Cornu, of the SOLAP project, at the Rencontres du Ciel et de l’Espace in late 2024 and then at the JASON days last June, got the better of my doubts. With her agreement and Valérie’s, I am therefore happy to announce that JSol’Ex 4 is officially supported for submitting your images to the BASS2000 database!

A step-by-step wizard

Not wanting to do things by halves, I spent a lot of time polishing the software to make this procedure as simple as possible. Since this is a professional database, used by scientists, I wanted to do things seriously. One thing that mattered to me was to guide users through the process and make them understand how important data quality is, while keeping submission simple.

bass2000
Figure 1. BASS2000 submission wizard

For example, BASS2000 has very little tolerance for image orientation problems: solar North must be off by less than 1 degree, otherwise the image will be rejected. The wizard therefore includes a tool to help check the orientation and correct small errors due, for example, to an imperfect polar alignment.

The wizard also guides the user through the submission process, asking them to carefully check all the metadata associated with the observation, and goes as far as uploading the image to the BASS2000 FTP server for validation by the teams.

I am particularly grateful to Florence Cornu for her help in getting JSol’Ex images accepted into the database, and I thank my beta testers, who patiently tested my development builds over the summer.

I will not hide that this is a form of recognition of my work, of the hundreds of hours spent on evenings and weekends developing this software which, let’s remember, is entirely open source and free.

Finally, I cannot stop without announcing a second piece of good news: not only will you be able to use JSol’Ex to submit images acquired with a Sol’Ex, but also with the MLAstro SHG 700, which becomes officially supported in the BASS2000 database!

Interface changes

For this version I also wanted to modernize the graphical interface a bit and simplify it: as the number of features grows, there comes that fateful moment every developer dreads, when the interface becomes too complex for new users and only speaks to long-time ones. I have tried to avoid this pitfall over time by turning down features that were too "niche", but without ever quite managing a completely clear interface.

interface overview
Figure 2. Overview of the interface changes

This new version therefore tries to group settings into clearer sections, while adding tooltips to guide users, old and new alike, in the spirit that the software should be as educational as possible: as much as I can, I try to pass on to you, as users, what I learn myself while developing it. A striking example of this philosophy is the feature I added which, when you click on the solar disk, shows which frame of the source file (a video containing spectra) it corresponds to.

User-assisted ellipse detection

I mentioned the Atlas of Spectroheliograms a little earlier. This atlas, produced by Pál Váradi Nagy, requires an enormous amount of data analysis and is built using JSol’Ex scripts. However, for some wavelengths, or for images that are a bit tricky to process, the software can fail to correctly detect the edges of the solar disk. This can happen in particular when images have low contrast or when internal reflections bias the detection.

To handle these complex cases which, once again, get in the way of scientific analysis, I have added the ability to help the software detect the disk edges, and thus obtain a nicely round solar image:

ellipse detection assistant
Figure 3. User-assisted ellipse detection

AI to the rescue

I am a developer and, as such, on this medium, it seemed relevant to add a section on how this version was developed.

This is the first version developed with the help of AI, in particular Claude Code. The latest innovations in agentic AI are, this time, the real revolution to come: whereas an AI that could not understand context, perform refactorings or plan development work was not particularly useful, agent-based AIs, which can analyze your code, call tools autonomously and have real interactions with you, are a game changer from my point of view.

For this version, I therefore used Claude Code (Pro plan) to help me with the refactorings I needed and to assist me with a task I am not particularly comfortable with: designing graphical interfaces.

Overall, it works rather well. Very well, even. The tool’s ability to define an implementation plan and understand requirements is quite fascinating. The generated code, on the other hand, still requires a lot of review. You could say it feels like having a (very good) intern with me at all times. As a senior developer, I am fairly quick to spot where the AI overcomplicates things, uses deprecated design patterns or ignores coding conventions. I would be terrified, for example, if the code it originally produced went to production without review. But by giving it the right directions and explaining its mistakes, you quickly get to what you want, at the quality you want.

A few examples of things that really do not work well:

  • Claude asks you to create a CLAUDE.md file containing instructions on how to build your project, how it is organized, and so on. In short, context that is systematically added to every session. Yet, in practice, Claude blithely ignores these instructions.

  • In Java, .properties files are encoded in ISO-8859-1, even in a codebase where all sources are in UTF-8. It is a historical quirk, but Claude simply does not get it. Every time it modifies my properties files (which are used for the internationalization of the interface), it systematically breaks the encoding. To avoid this, before asking it to do anything that I know involves these files, I have to tell it to "follow the guidelines of the CLAUDE file", where I gave it a technique to avoid the problem (convert the file to UTF-8, edit it, then convert it back to ISO-8859-1); a sketch of that round-trip is shown after this list.

  • Comments in the code. When you have been around for a while, as I have, it is rather unbearable to read "captain obvious" comments, the kind that say "the next line computes 1+1". Claude generates a lot of them. Too many. And this despite my CLAUDE file explicitly forbidding them.

  • Self-confidence. Claude is far too optimistic and tends to flatter the user. For example, if I explicitly lie to it ("your algorithm is wrong because XXX"), it will systematically answer "You’re right!" without "thinking" (I use the quotes deliberately), like a verbal tic! This becomes quite frustrating over time, when it does not understand a problem or needlessly overcomplicates an implementation.

  • Usage limits: on a project the size of JSol’Ex, you very quickly hit the usage limits, even on a Pro plan. While I am happy with what it gives me for the price, I am not ready to pay €200/month to raise those limits. Let’s not forget that I do this in my spare time... I have no obligation to deliver!
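
For the curious, here is a minimal sketch of the encoding round-trip I ask Claude to follow for .properties files. It is plain Java used only as an illustration; the file name is hypothetical, and step 2 is wherever the agent (or any editor) does its changes.

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class PropertiesRoundTrip {
    public static void main(String[] args) throws Exception {
        Path file = Path.of("messages_fr.properties"); // hypothetical i18n bundle

        // 1. Read the ISO-8859-1 file and rewrite it as UTF-8 so it can be edited safely
        String latin1 = Files.readString(file, StandardCharsets.ISO_8859_1);
        Files.writeString(file, latin1, StandardCharsets.UTF_8);

        // 2. ...the file is edited here as UTF-8...

        // 3. Convert it back to ISO-8859-1, the historical encoding of .properties files
        //    (characters outside Latin-1 would normally use backslash-u escapes instead)
        String utf8 = Files.readString(file, StandardCharsets.UTF_8);
        Files.writeString(file, utf8, StandardCharsets.ISO_8859_1);
    }
}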

In any case, these are advances that are becoming hard to ignore. And those who know me know how critical I am of the use of AI, of the myths about what it can do, and of its environmental impact. Nevertheless, one thing is certain: executives who think they will save time and money by firing their developers and replacing them with AI are shooting themselves in the foot: AI raises serious code quality, and therefore maintenance, problems and, with the arrival of agents, serious security problems (an autonomous tool that decides on its own whether it needs to ask your permission before running a command!). AI should be seen as a productivity aid, but one that must be supervised by experienced people. In short, we are at a turning point, and I am not yet entirely comfortable with what that implies. We have never been closer to my childhood dream of "autonomous" AIs, but maturity forces me to fear them.

But we are drifting away from the original topic, so let’s wrap up this post: say hello to JSol’Ex 4, check out the presentation video below, and don’t hesitate to contribute!

Ellerman Bombs Detection with JSol’Ex 3.2

17 mai 2025

Tags: solex jsolex solar astronomy ellerman bombs

About Ellerman Bombs

Ellerman bombs were first described by Ferdinand Ellerman back in 1917. Ellerman’s article was titled Solar Hydrogen "Bombs", and it was only later that these events became commonly referred to as "Ellerman bombs".

I came upon this term quite early in my solar imaging journey, while reading an article by Sylvain or André Rondi, and I have been obsessed with these phenomena ever since.

Ellerman bombs are small, transient, explosive events that occur in the solar atmosphere, particularly in the vicinity of sunspots. They are very short-lived compared to other solar features: from a few tens of seconds to a few minutes. The most common explanation is magnetic reconnection, which occurs when two magnetic regions of opposite polarity come into contact and reconnect, releasing energy in the form of heat and light. This is for example described in this article by González, Danilovic and Kneer.

Amateur observations of Ellerman bombs are quite rare, but Rondi described such observations using a spectroheliograph back in 2005. They are rare because they are mostly invisible at the wavelength where most amateur observations are made (the center of the H-alpha line), and because they are too small to notice. However, they are visible in the wings of the H-alpha line: this is where a spectroheliograph comes in handy, since the cropping window used to capture an image contains more than just the H-alpha line.

I had a long-standing issue open to do something about this in JSol’Ex, and I finally got around to it: all it took was some test data to try the ideas on.

Observation of an Ellerman Bomb

I do a lot of solar observations: I started in 2023 with a Sol’Ex, then recently got an SHG 700, so I have accumulated quite a bit of data, complemented by scans shared with me by other users of JSol’Ex.

I had been looking for Ellerman bombs in my data, but had never found any. This changed a couple of weeks ago: I was doing some routine work on JSol’Ex, using a capture made on April 29, 2025 at 08:32 UTC, when I noticed, by accident, a bright spot in the continuum image:

continuum

It may not be obvious at first, which is precisely why these are hard to spot, so here’s a hint:

zoom continuum

JSol’Ex offers the ability to easily generate animations of the data captured at different wavelengths, so I generated a quick animation, which shows the same image at ±2.5Å from the center of the H-alpha line:

anim ellerman

We can see the typical behavior of an Ellerman bomb: it is bright in the wings of the H-alpha line, but it vanishes when we are at the center of the line. The fine spectral dispersion of the spectroheliograph makes it possible to highlight this phenomenon very precisely.

The corresponding frame of the SER file shows the aspect of the Ellerman bomb in the spectrum:

ellerman spectrum

The shape that you can see is often referred to as the "moustache". At this stage I was pretty sure I had observed my first Ellerman bomb, and that I could implement an algorithm to detect it.

JSol’Ex 3.2 auto-detection

JSol’Ex 3.2 ships with a new feature to automatically detect Ellerman bombs in the data. Currently, it is limited to H-alpha, but it should be possible to detect these in CaII as well.

The algorithm I implemented uses statistical analysis of the spectrum to match the characteristics of the "moustache" shape, in particular:

  • a maximum of intensity around 1Å from the center of the line

  • a distance which spreads up to 5Å from the center of the line

  • a brightening which is only visible in the wings of the line

JSol’Ex will generate, for each detection, an image showing the location of the detected bombs:

ellerman location

And for each bomb, it will create an image which shows the region of the spectrum which is used to detect the bomb. This is for example what is automatically generated for the bomb described above:

spectrum detection

Algorithm details

Note
This is a description of the algorithm that I implemented in an ad hoc fashion: I am neither a mathematician nor a scientist; I am an engineer, and this algorithm was built on my "intuition" of what I thought would work. It is likely to change as new versions are released.

The algorithm is based on the following steps:

  • for each frame in the SER file, identify the "borders" of the sun

  • perform a Gaussian blur on the spectrum to reduce noise

  • within the borders, compute, for each column, the average intensity of the spectrum for the center of the line and the wings separately. The center of the line is defined as the range [-0.35Å, 0.35Å] and the wings as the range [-5Å, -0.35Å) ∪ (0.35Å, 5Å]

  • compute the maximum intensity of the wings, starting from the center of the line, and going outwards until we reach the maximum intensity (local extremum)

  • compute the average of each column average intensity for the wings (the "global average")

With Ellerman bomb scoring parameters defined below, the algorithm proceeds per column:

  • For each column index x in the spectrum image:

  • Build a neighborhood of up to 16 columns around x: Nₓ = { x + k | k ∈ ℤ, |k| ≤ 8 }, clamped to the image boundaries.

  • Compute the overall mean column intensity Ī_global = (1 / N_total) ∑_{j=1..N_total} Ī(j).

  • Exclude any columns in Nₓ whose average intensity falls below 90 % of Ī_global, since very dark columns (usually sunspots) would pull down our estimate of the local wing background and hide true brightening events. Call the remaining set Mₓ and let m = |Mₓ|.

  • If m < 1, there aren’t enough valid neighbors to form a reliable background—skip column x.

  • On the Gaussian-smoothed data, measure three key values at column x:

  • c₀ ≔ I_center(x), the mean intensity in the core region of the spectral line.

  • c_w ≔ I_wing(x), the average intensity across the two wing windows at ±1 Å.

  • c_max ≔ max_{p ∈ wing-pixels nearest ±1 Å} I(p, x), the single highest wing intensity near the expected shift.

  • Compute the local wing background r₀ = (1 / (m−1)) ∑_{j ∈ Mₓ, j≠x} I_wing(j). Using only nearby “bright enough” columns keeps the background estimate from being skewed by dark features.

  • Define a line-brightening factor B = max(1, c₀ / min(r₀, I_center,global)). Ellerman bombs boost the wings without greatly brightening the core, whereas flares brighten both.

  • Form an initial score S₀ = 1 + c_max / min(r₀, I_wing,global), where I_wing,global = (1/N_total) ∑_{j=1..N_total} I_wing(j). This compares the local wing peak to the typical wing level across the image.

  • Adjust for how many neighbors were used: S₁ = S₀ × (m / 16). Fewer valid neighbors mean less confidence, so the score is scaled down proportionally.

  • Compute the wing-to-background ratio rᵢ = c_max / r₀. If rᵢ ≤ 1.05, the wing peak is too close to the local background and the column is discarded. Otherwise, we boost the score further:

  • Raise S₁ to the power of e^{rᵢ}, giving S₂ = S₁^( e^{rᵢ} ). This makes the score grow quickly when the wing peak stands out strongly.

  • Multiply by √(c_max / c₀) to get S₃ = S₂ · √(c_max / c₀). That emphasizes cases where the wings are much brighter than the core.

  • Finally, penalize any shift away from the ideal ±1 Å wing position. If y_core and y_max are the pixel locations of line center and wing peak, compute Δλ = |y_max – y_core| × (Å / pixel), then S_final = S₃ / (1 + |1 Å – Δλ|).

  • If S_final > 12, mark column x as a candidate event. Use the value of B to decide:

  • When B < 1.5, it behaves like an Ellerman bomb (wings bright, core unchanged).

  • When B > 2, it matches a flare (both core and wings bright).

  • If 1.5 ≤ B ≤ 2, the result is ambiguous and ignored.

All thresholds (0.9× global mean, 1.05 ratio, score > 12, B cutoffs) were chosen by testing on data and visually inspecting results.
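
To make the sequence above more concrete, here is a condensed Java sketch of the per-column scoring. The data structures are illustrative (the per-column statistics are assumed to be precomputed from the smoothed spectrum), so this is not the actual JSol’Ex code, just the same logic in compact form.

import java.util.ArrayList;
import java.util.List;

// Condensed sketch of the per-column scoring described above. The per-column statistics
// are assumed to be precomputed from the Gaussian-smoothed spectrum; names are illustrative.
public class EllermanScoringSketch {

    record Candidate(int column, double score, String kind) {}

    static List<Candidate> detect(double[] center,     // mean intensity in the line core, per column
                                  double[] wing,       // mean intensity in the ±1 Å wing windows, per column
                                  double[] wingMax,    // max wing intensity near ±1 Å, per column
                                  double[] colAverage, // average column intensity, per column
                                  double[] wingShiftA  // |Δλ| between core and wing peak, in Å, per column
    ) {
        int n = center.length;
        double globalAvg = mean(colAverage);
        double globalCenter = mean(center);
        double globalWing = mean(wing);
        List<Candidate> candidates = new ArrayList<>();
        for (int x = 0; x < n; x++) {
            // neighborhood of up to 16 columns around x, keeping only "bright enough" columns
            List<Integer> neighbors = new ArrayList<>();
            for (int k = -8; k <= 8; k++) {
                int j = x + k;
                if (j >= 0 && j < n && colAverage[j] >= 0.9 * globalAvg) {
                    neighbors.add(j);
                }
            }
            // local wing background: average wing intensity of the neighbors, excluding x itself
            double r0 = 0;
            int others = 0;
            for (int j : neighbors) {
                if (j != x) {
                    r0 += wing[j];
                    others++;
                }
            }
            if (others == 0) continue; // not enough valid neighbors for a reliable background
            r0 /= others;
            double c0 = center[x];
            double cMax = wingMax[x];
            double brightening = Math.max(1, c0 / Math.min(r0, globalCenter));
            double score = 1 + cMax / Math.min(r0, globalWing);
            score *= neighbors.size() / 16.0;          // fewer valid neighbors, less confidence
            double ratio = cMax / r0;
            if (ratio <= 1.05) continue;               // wing peak too close to the local background
            score = Math.pow(score, Math.exp(ratio));  // reward peaks that stand out strongly
            score *= Math.sqrt(cMax / c0);             // emphasize wings much brighter than the core
            score /= 1 + Math.abs(1 - wingShiftA[x]);  // penalize shifts away from the ideal ±1 Å
            if (score > 12) {
                if (brightening < 1.5) candidates.add(new Candidate(x, score, "Ellerman bomb"));
                else if (brightening > 2) candidates.add(new Candidate(x, score, "flare"));
                // 1.5 <= brightening <= 2: ambiguous, ignored
            }
        }
        return candidates;
    }

    static double mean(double[] values) {
        double sum = 0;
        for (double v : values) sum += v;
        return values.length == 0 ? 0 : sum / values.length;
    }
}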

Post-filtering

There will often be cases where the same bomb is detected in multiple frames. Therefore, we need to do some merging of bombs which are spatially connected.

Finally, we apply a limit threshold: if more than 5 Ellerman bombs are detected in an image, we consider the detections to be false positives (this typically happens on saturated images, or images with too much noise). This is a bit arbitrary, but it seems to work well in practice.
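
As an illustration, here is a minimal sketch of that post-filtering step, assuming detections are simple (x, y) points and using an arbitrary merge radius; the actual implementation works on spatially connected regions.

import java.util.ArrayList;
import java.util.List;

// Minimal sketch of the post-filtering: merge detections that are spatially close,
// then reject the whole frame if too many groups remain. The merge radius is arbitrary.
public class EllermanPostFilter {

    record Detection(double x, double y) {}

    static List<List<Detection>> filter(List<Detection> detections, double mergeRadius) {
        List<List<Detection>> groups = new ArrayList<>();
        for (Detection d : detections) {
            List<Detection> target = null;
            for (List<Detection> group : groups) {
                boolean close = group.stream()
                        .anyMatch(g -> Math.hypot(g.x() - d.x(), g.y() - d.y()) <= mergeRadius);
                if (close) {
                    target = group;
                    break;
                }
            }
            if (target == null) {
                target = new ArrayList<>();
                groups.add(target);
            }
            target.add(d);
        }
        // more than 5 distinct bombs in a single image: most likely saturation or noise
        return groups.size() > 5 ? List.of() : groups;
    }
}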

Test data

I had about 1100 scans I could reuse for detection, and it successfully discovered Ellerman bomb candidates in about 10% of them. Of course this required some tuning and several runs to get the parameters right. This doesn’t mean that you have a 10% chance of finding an Ellerman bomb in your data, because my test data is biased (I often do 10 to 20 scans in a row, within a few minutes, to perform stacking, so if a bomb is detected in one image, it has decent chances of being detected in the next one). Also, I am using the term "Ellerman bomb candidate", because there is nothing better than visual confirmation to make sure that what you see is indeed an Ellerman bomb: an algorithm is not perfect, and it may fail for many reasons (noise, saturation, artifacts, etc.)

Here are a few examples of Ellerman bombs candidates detected in my data:

candidate1
candidate2
candidate3
candidate4
candidate5
candidate6

Conclusion

This blog post described my first visual Ellerman bomb detection, then how I implemented an algorithm to automatically detect Ellerman bombs in JSol’Ex 3.2. I am very happy to release this into the wild, so that this kind of discovery becomes more accessible to everyone. Of course, as I always say, you should take the detections with care, and always review the results. This is why you get both a global "map" of the detected bombs, and a detailed view of each bomb, which can be used to confirm the detection. In addition, I recommend that you create animations of the regions, which you can simply do in JSol’Ex by CTRL+clicking on the image then selecting an area around the bomb.

Finally, I’d like to thank my friends of the Astro Club de Challans, who heard me talk about Ellerman bombs detection for a while, showing them preliminary results, and who were very supportive of my work. Last but not least, thanks again to my wife for her patience, seeing me work on this (too) late at night!

Jagged Edges Correction with JSol’Ex 3.1

02 mai 2025

Tags: solex jsolex solar astronomy

I’m happy to announce the release of JSol’Ex 3.1, which ships with a long awaited feature: jagged edges correction! Let’s explore in this article what this is about.

The dreaded jagged edges

Spectroheliographs like the Sol’Ex or the Sunscan do not use a traditional imaging system like the one used in planetary imaging, where you can capture dozens to hundreds of frames per second and do so-called "lucky imaging", keeping the best frames and stacking them together to get a high resolution image.

In the case of a spectroheliograph, the image is built by scanning the solar disk in a series of "slices" of the sun: it takes several seconds and sometimes minutes (~3 minutes when you let the sun pass "naturally" through the slit) to get a full image of the sun.

In practice, this means that between each frame, each "slice" of the sun, the atmosphere will have slightly moved, causing some misalignment between the frames. This is also particularly visible when there is some wind, which can cause the telescope to shake a bit, and the image to be misaligned. Lastly, you may even have a mount which is not perfectly balanced, or which has some resonance at certain scan speeds.

As an illustration, let’s take this image captured using a Sunscan (courtesy of Oscar Canales):

original

This image shows 3 problems:

  1. the jagged edges, which cause some unpleasant "spikes" on the edges of the sun

  2. misalignment of features of the sun, particularly visible on filaments

  3. a disk which isn’t perfectly round

These issues are typical of spectroheliographs, and are the main limiting factor when it comes to achieving high resolution images. Therefore, excellent seeing conditions are a must to get high quality images. Even if you do stacking, the fact that the reference image will show spikes is often a problem.

Correcting jagged edges

Starting with release 3.1.0, JSol’Ex ships with an experimental feature to correct jagged edges. It is not perfect yet, but good enough for you to provide feedback and even improve the quality of your images.

For example, here’s the same image, but with jagged edges correction applied:

corrected

And so that it’s even easier to see the difference, here’s a blinking animation of the two images:

blink

The jagged edges are now mostly gone, the features in the sun are better aligned, and the image is much more pleasant to look at. There is still some jagging visible, the correction will never be perfect, but it is a good start.

In particular, you should be careful when applying the correction, because it could cause some artifacts in the image, in particular on prominences. As usual, with great power comes great responsibility!

How does it work?

To illustrate how the correction works, let’s imagine a perfect scan: a scan speed giving us a perfectly circular disk, no turbulence, no wind, etc.

In this case, what we would see during the scan is a spectrum whose width slowly increases, reaches a maximum, and then decreases. The pace at which the width increases and decreases is determined by the scan speed and is predictable. In particular, the left and right borders of the spectrum will follow a circular curve.

Now, let’s get back to a "real world" scan. In that case, the left and right edges will deviate slightly from the circular curve. They will also follow the path of an ellipse rather than a circle: in fact, this ellipse is already required in order to perform the geometric correction.

The idea is therefore quite simple in theory: we need to detect the left and right edges of the spectrum, then compare them to the ideal ellipse that we have computed. Pixels which deviate from this curve give us an information about the jagged edges. We can then compute a distortion map, which will be used to correct the image.

In practice, we also need to filter the samples: while the detection of edges is robust enough to provide us with a good geometric correction, it is not perfect, and it can be skewed by the presence of prominences, for example. Therefore, we perform sigma clipping on the detected edges in order to remove outliers, that is to say pixels which deviate too much from the average deviation.
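
To give an idea of what this looks like in code, here is a simplified Java sketch, assuming the detected left/right edge positions and the positions predicted by the fitted ellipse are already available for each row; the actual distortion model used by JSol’Ex is more elaborate.

// Simplified sketch of the jagged edge correction idea: a per-row horizontal shift is
// estimated from the deviation between the detected edges and the fitted ellipse, outliers
// are rejected with sigma clipping, and each row is shifted back. Illustrative only.
public class JaggedEdgeSketch {

    static double[] estimateRowShifts(double[] leftEdge, double[] rightEdge,
                                      double[] modelLeft, double[] modelRight,
                                      double sigmaFactor) {
        int rows = leftEdge.length;
        double[] shifts = new double[rows];
        for (int y = 0; y < rows; y++) {
            // average deviation of both edges from the ideal ellipse for this row
            shifts[y] = ((leftEdge[y] - modelLeft[y]) + (rightEdge[y] - modelRight[y])) / 2;
        }
        // sigma clipping: ignore rows whose deviation is too far from the average deviation
        double mean = 0;
        for (double s : shifts) mean += s;
        mean /= rows;
        double variance = 0;
        for (double s : shifts) variance += (s - mean) * (s - mean);
        double stddev = Math.sqrt(variance / rows);
        for (int y = 0; y < rows; y++) {
            if (Math.abs(shifts[y] - mean) > sigmaFactor * stddev) {
                shifts[y] = 0; // outlier (e.g. a prominence skewing the edge): leave the row as-is
            }
        }
        return shifts;
    }

    static float[][] applyShifts(float[][] image, double[] shifts) {
        int rows = image.length;
        int cols = image[0].length;
        float[][] corrected = new float[rows][cols];
        for (int y = 0; y < rows; y++) {
            for (int x = 0; x < cols; x++) {
                // shift the row back by the estimated distortion, with linear interpolation
                double src = x + shifts[y];
                int x0 = (int) Math.floor(src);
                double frac = src - x0;
                if (x0 >= 0 && x0 + 1 < cols) {
                    corrected[y][x] = (float) ((1 - frac) * image[y][x0] + frac * image[y][x0 + 1]);
                } else {
                    corrected[y][x] = image[y][Math.min(Math.max(x0, 0), cols - 1)];
                }
            }
        }
        return corrected;
    }
}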

This is also why the correction will not work properly if the image is not focused correctly: you would combine two problems in one, and the correction would not be able to detect the edges properly.

In addition, in the image above you can see that the bottommost prominence is slightly distorted, which is caused by the fact that it is far away from the 2 points which were used to compute the distortion. It may be possible to reduce such artifacts by using a smaller sigma factor (at the risk of undercorrecting edges).

Conclusion

In this blog post, I have described the new jagged edges correction feature in JSol’Ex 3.1. This solves one of the most common issues users are having with spectroheliographs, and I hope it will help you get better images. However, as usual, it’s a work in progress, so do not hesitate to provide feedback!

JSol’Ex 3.0.0 is out!

14 avril 2025

Tags: solex jsolex solar astronomy

After dozens of hours of work, I’m happy to announce the release of JSol’Ex 3.0.0! This major release is a new milestone in the development of JSol’Ex, and it brings new features and improvements that I hope you will enjoy.

A bit of history

Since its inception as an educational project for understanding how the Sol’Ex works, JSol’Ex has grown into a powerful tool for processing and analyzing images captured with the Sol’Ex. However, it became very popular over time and started to be used beyond the Sol’Ex community alone. In particular, it is now a tool of choice for many spectroheliograph owners.

I have always been keen on providing a user-friendly interface while keeping a good innovation pace. JSol’Ex was the first SHG software to offer:

  • automatic colorization of images

  • automatic detection of spectral lines

  • Doppler image, eclipse image, inverted image and orientation grid

  • automatic correction of the P angle

  • single click processing of Helium line images

  • embedded stacking

  • automatic trimming and compression of SER files

  • identifying what frame of a SER file matches a particular point of the solar disk

  • an optimal exposure calculator

  • automatic detection of redshifts

  • automatic detection and annotation of sunspots

  • automatic creation of animations of a single image taken at different wavelengths

  • a full-fledged scripting engine which allows creation of custom images, animations, etc.

  • support for home-made SHGs

  • and more!

All integrated into a single, easy to use, cross-platform application: no need for Gimp, ImPPG or Autostakkert! (but you can use them if you want to!).

For this new release, I wondered if I should change the name so that it better matches the new scope of the project, but eventually decided to keep it as it is, because it is already well known in the community, and because changing it would also mean spending a significant amount of time on that instead of on new features.

Here comes JSol’Ex 3.0.0!

In addition to performance improvements and bugfixes, this release deserves its major version number because of many significant improvements.

Improved image quality

Better line detection

The first thing you may notice is the improved image quality. The algorithm used to detect the spectral lines has been improved, which results in a better polynomial detection and therefore a more accurate image reconstruction. This will be noticeable in images which have a low signal, which is often the case in calcium.

Background removal

Next, a new background removal algorithm has been added. It is fairly common to have either internal reflections or light leaks in the optical path of a spectroheliograph. This results in images which are hard to process or not usable at all. This version of JSol’Ex is capable of removing difficult gradients. To illustrate this, here’s an image that a user with a Sunscan sent me:

sunscan ca bg

The image on the left is unprocessed and shows important internal reflections. These are completely removed in the image on the right, processed automatically with JSol’Ex.

This background removal will only be applied to the "Autostretch" image, which is the default "enhanced" image that JSol’Ex is using, but it is also available as a standalone function in ImageMath scripts.

Physical flat correction

Another common issue with SHGs is the presence of vignetting, visible on the poles of the solar disk. The vignetting issue stems from the following factors, in the order of their impact:

  • the physical size of the SHG’s optical components — including the lens diameter, grating size, and slit length

  • the telescope’s focal ratio and focal length

  • the telescope’s own intrinsic vignetting (though this is rarely a significant factor)

For prebuilt SHGs like the MLAstro SHG 700, the size of the lens and grating is typically constrained by the housing design and cost limitations. As a result, vignetting often becomes an issue when using longer focal length telescopes, especially when paired with a longer slit.

To fix this, JSol’Ex had until now the option to use an artificial flat correction: the idea was basically to model the illumination of the solar disk with a polynomial and to apply a correction to the image. This works relatively well, but it can sometimes introduce some noise, or even bias the reconstruction on low-contrast images. With even longer slits, this artificial correction is not sufficient to remove the vignetting, so JSol’Ex 3 introduces the ability to use a physical flat correction.

The idea with a physical flat correction is to take a series of 10 to 20 images of the sun, using a light diffusing device, such as tracing paper, at the entrance of the telescope. The flat should be captured with the same cropping window as the one used for the solar images, but with a longer exposure, and possibly a higher gain as well. The result is a SER file that JSol’Ex can use to create a model of the illumination of the disk, which can be used to correct the images.
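
To give a rough idea of how such a flat is applied once the illumination model has been computed, here is a simplified sketch (average the flat frames, keep only the large-scale illumination, normalize, divide); the model JSol’Ex actually builds from the SER file is more sophisticated.

import java.util.List;

// Rough sketch of flat correction: average the flat frames, keep only the large-scale
// illumination, normalize, then divide the science image by the model. This illustrates
// the principle only, not the algorithm used by JSol’Ex.
public class FlatCorrectionSketch {

    static float[][] buildFlatModel(List<float[][]> flatFrames) {
        int rows = flatFrames.get(0).length;
        int cols = flatFrames.get(0)[0].length;
        float[][] model = new float[rows][cols];
        for (float[][] frame : flatFrames) {
            for (int y = 0; y < rows; y++) {
                for (int x = 0; x < cols; x++) {
                    model[y][x] += frame[y][x] / flatFrames.size();
                }
            }
        }
        // a real implementation would apply a strong low-pass filter (e.g. a large Gaussian
        // blur) here so that only the illumination gradient remains, not the solar features
        float max = 0;
        for (float[] row : model) {
            for (float v : row) max = Math.max(max, v);
        }
        for (int y = 0; y < rows; y++) {
            for (int x = 0; x < cols; x++) {
                model[y][x] /= max; // normalize so the brightest area is 1
            }
        }
        return model;
    }

    static float[][] applyFlat(float[][] image, float[][] model) {
        int rows = image.length;
        int cols = image[0].length;
        float[][] corrected = new float[rows][cols];
        for (int y = 0; y < rows; y++) {
            for (int x = 0; x < cols; x++) {
                // avoid boosting the background where the model is nearly zero
                corrected[y][x] = model[y][x] > 0.05f ? image[y][x] / model[y][x] : image[y][x];
            }
        }
        return corrected;
    }
}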

As an illustration, here’s a series of 3 images of the Sun, taken with a prototype of a 10mm slit:

flat correction

The image on the left was produced without any correction and shows very strong vignetting. The image in the middle uses the artificial flat correction, which improves the situation but still shows some vignetting. The image on the right uses the physical flat correction, which is much better and shows no vignetting at all.

Flats can be reused between sessions, as long as you use the same cropping window and the same wavelength.

The physical flat correction can also be used on images taken with a Sol’Ex, in particular for some wavelengths like H-beta which show stronger illumination of the middle of the solar disk.

Note

Flat correction is not designed to fix transversalliums: it has to apply low pass filtering to the image to compute a good flat, which will remove the transverse lines. To correct transversalliums, use the banding correction parameters.

New Stretching Options

By default, JSol’Ex used to display images with a linear stretch applied. Starting with this version, it is possible to select which stretching algorithm to use: linear, curve-based, or no stretching at all.

stretching

Distance measurement tool

This version introduces a new tool to measure distances! This feature was suggested by Minh Nguyen from MLAstro, after seeing one of my images in Calcium H, which showed a very long filament:

open measure

This tool lets you click on waypoints to follow a path and make measurements on the disk, in which case the distances take the curvature into account, or outside the disk, for example to measure the size of prominences, in which case the distances are linear.

measurements

The measured distances are always an approximation, because it’s basically impossible to know at what height a particular feature is located, but it gives a good rough estimate.
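
For the curious, here is a back-of-the-envelope sketch of how an on-disk distance can account for curvature: back-project each point onto the solar sphere and measure the great-circle arc, falling back to a linear distance off-disk. This illustrates the principle, not necessarily the exact computation done by JSol’Ex.

// Back-of-the-envelope sketch: on-disk distances follow the great-circle arc on the solar
// sphere, off-disk distances are linear. Assumes the disk center and radius (in pixels)
// are known from the ellipse fit; this is the principle, not JSol’Ex’s exact code.
public class SolarDistanceSketch {

    static final double SOLAR_RADIUS_KM = 696_000;

    static double distanceKm(double x1, double y1, double x2, double y2,
                             double cx, double cy, double diskRadiusPx) {
        double kmPerPixel = SOLAR_RADIUS_KM / diskRadiusPx;
        double[] p1 = toSphere(x1, y1, cx, cy, diskRadiusPx);
        double[] p2 = toSphere(x2, y2, cx, cy, diskRadiusPx);
        if (p1 == null || p2 == null) {
            // at least one point is off-disk: fall back to a linear distance
            return Math.hypot(x2 - x1, y2 - y1) * kmPerPixel;
        }
        // angular separation between the two unit vectors, then arc length on the sphere
        double dot = p1[0] * p2[0] + p1[1] * p2[1] + p1[2] * p2[2];
        double angle = Math.acos(Math.min(1, Math.max(-1, dot)));
        return angle * SOLAR_RADIUS_KM;
    }

    // back-project a pixel onto the visible solar hemisphere; null if outside the disk
    static double[] toSphere(double x, double y, double cx, double cy, double r) {
        double dx = (x - cx) / r;
        double dy = (y - cy) / r;
        double d2 = dx * dx + dy * dy;
        if (d2 > 1) {
            return null;
        }
        return new double[] {dx, dy, Math.sqrt(1 - d2)};
    }
}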

New scripting features

Last but not least, this version significantly improves the scripting engine, aka ImageMath. While this feature is for more advanced users, it is an extremely powerful tool which lets you generate custom images, automatically stack images, create animations, etc.

In this version, the scripting engine has been rewritten to make it more enjoyable to use. It adds:

  • the ability to write expressions on several lines

  • the possibility to use named parameters

  • the ability to define your own functions

  • the ability to call an external web service to generate script snippets

  • the ability to import scripts into other scripts

As well as new functions. Let’s take a deeper look.

Declaring your own functions

You may have faced the situation where you wanted to apply the same operation to several images. For example, let’s imagine that you want to decorate an image with the observation details and the solar parameters.

Before, you would write something like this:

image1=draw_solar_params(draw_obs_details(some_image))
image2=draw_solar_params(draw_obs_details(some_other_image))

Now, you can define a function, let’s call it decorate, which will take an image and return the decorated image:

[fun:decorate img]
    result = draw_solar_params(draw_obs_details(img))

[outputs]
image1=decorate(some_image)
image2=decorate(some_other_image)

You can take a look at the documentation for more details.

Importing scripts

In the previous section we have seen how to define functions. It can be useful to externalize these functions in a separate file, so that they can be reused in other scripts. This is now possible with the include statement.

For example, let’s say you have a file called utils.math which contains the decorate function.

We can now import this file in our script:

[include "utils"]

[outputs]
image1=decorate(some_image)
image2=decorate(some_other_image)

This will import the utils.math file and make the decorate function available in the current script.

Named parameters

Named parameters are a new feature that allows you to pass parameters to functions by name, instead of by position. This is particularly useful for functions that take a lot of parameters, or when you want to make your code more readable.

For example, in the example above, we could have written:

[include "utils"]

[outputs]
image1=decorate(img: some_image)
image2=decorate(img: some_other_image)

The names of the parameters are documented here.

New functions

This version introduces a few new functions, which are available in the scripting engine:

  • bg_model: background sky modeling

  • a2px and px2a: conversion between pixels and Angstroms

  • wavelen: returns the wavelength of an image, based on its pixel shift, dispersion, and reference wavelength

  • remote_scriptgen: allows calling an external web service to generate a script or images

  • transition: creates a transition between two or more images

  • curve_transform: applies a transformation to the image based on a curve

  • equalize: equalizes the histograms of a series of images so that they look similar in brightness and contrast

And others have been improved:

  • find_shift: added an optional parameter for the reference wavelength

  • continuum: improved function reliability, enhancing Helium line extraction

The transition function, for example, is capable of generating intermediate frames in an animation, based on the actual difference of time between two images, offering the ability to have smooth, uniform transitions between images.

This is how my partial solar eclipse animation was created!
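
For those wondering what this looks like in practice, here is a minimal sketch of the time-weighted blending idea, assuming each image carries its acquisition timestamp; names and structure are illustrative, not JSol’Ex’s actual implementation.

import java.time.Duration;
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;

// Sketch of time-weighted transitions: intermediate frames between two images are blended
// according to where they fall in time between the two acquisitions, so that animations
// built from unevenly spaced captures progress at a uniform pace. Illustrative only.
public class TransitionSketch {

    record TimedImage(Instant time, float[][] pixels) {}

    static List<float[][]> transition(TimedImage a, TimedImage b, Duration frameStep) {
        List<float[][]> frames = new ArrayList<>();
        long totalMs = Duration.between(a.time(), b.time()).toMillis();
        if (totalMs <= 0) {
            return List.of(a.pixels(), b.pixels());
        }
        for (long t = 0; t <= totalMs; t += frameStep.toMillis()) {
            double weight = (double) t / totalMs; // 0 at the first image, 1 at the second
            frames.add(blend(a.pixels(), b.pixels(), weight));
        }
        return frames;
    }

    static float[][] blend(float[][] a, float[][] b, double w) {
        float[][] out = new float[a.length][a[0].length];
        for (int y = 0; y < a.length; y++) {
            for (int x = 0; x < a[0].length; x++) {
                out[y][x] = (float) ((1 - w) * a[y][x] + w * b[y][x]);
            }
        }
        return out;
    }
}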

Acknowledgements

I would like to thank all the users who have contributed to this release by reporting bugs, suggesting features, and testing the software. In particular, I would like to recognize the following people:

  • Minh Nguyen, MLAstro’s founder, for his help with the background removal and flat correction algorithms, as well as the new distance measurement tool and his review of this blog post

  • Yves Robin for his testing and improvement ideas

  • my wife for her patience, while I was going to bed late every night to work on this release


Older posts are available in the archive.