
17 May 2025
Tags: solex jsolex solar astronomy ellerman bombs
Ellerman bombs were first described by Ferdinand Ellerman back in 1917. Ellerman’s article was titled "Solar Hydrogen Bombs", and it’s only later that these events became commonly referred to as "Ellerman bombs".
I came upon this term while reading an article by Sylvain or André Rondi quite early in my solar imaging journey, and I have been obsessed by these phenomena ever since.
Ellerman bombs are small, transient, and explosive events that occur in the solar atmosphere, particularly in the vicinity of sunspots. They are very short-lived compared to other solar features: from a few dozen seconds to a few minutes. The most common explanation is magnetic reconnection, which occurs when two magnetic regions of opposite polarity come into contact and reconnect, releasing energy in the form of heat and light. This is, for example, described in this article from González, Danilovic and Kneer.
Amateur observations of Ellerman bombs are quite rare, but Rondi described such observations using a spectroheliograph back in 2005. They are rare because these events are mostly invisible at the wavelength where most amateur observations are made (the center of the H-alpha line), and they are too small to notice. However, they are visible in the wings of the H-alpha line: this is where a spectroheliograph comes in handy, since the cropping window used to capture an image contains more than just the H-alpha line.
I had a long-standing issue open to do something about this in JSol’Ex, and I finally got around to it: all it took was getting some test data to try the ideas out.
I do many observations of the sun: I started in 2023 with a Sol’Ex, then recently got an SHG 700, so I have accumulated quite a bit of data, complemented by scans shared with me by other users of JSol’Ex.
I had been looking for Ellerman bombs in my data, but had never found any. This changed a couple of weeks ago: I was doing some routine work on JSol’Ex using a capture from April 29, 2025 at 08:32 UTC, when I noticed, by accident, a bright spot in the continuum image:
It may not be obvious at first, which is precisely why these are hard to spot, so here’s a hint:
JSol’Ex offers the ability to easily generate animations of the data captured at different wavelengths, so I generated a quick animation, which shows the same image at ±2.5Å from the center of the H-alpha line:
We can see the typical behavior of an Ellerman bomb: it is bright in the wings of the H-alpha line, but it vanishes when we are at the center of the line. The fine spectral dispersion of the spectroheliograph makes it possible to highlight this phenomenon very precisely.
The corresponding frame of the SER file shows the aspect of the Ellerman bomb in the spectrum:
The shape that you can see is often referred to as the "moustache". At this stage I was pretty sure I had observed my first Ellerman bomb, and that I could implement an algorithm to detect it.
JSol’Ex 3.2 ships with a new feature to automatically detect Ellerman bombs in the data. Currently, it is limited to H-alpha, but it should be possible to detect these in CaII as well.
The algorithm I implemented uses statistical analysis of the spectrum to match the characteristics of the "moustache" shape, in particular:
a maximum of intensity around 1Å from the center of the line
a distance which spreads up to 5Å from the center of the line
a brightening which is only visible in the wings of the line
JSol’Ex will generate, for each detection, an image showing the location of the detected bombs:
And for each bomb, it will create an image which shows the region of the spectrum which is used to detect the bomb. This is for example what is automatically generated for the bomb described above:
Note: This is a description of an algorithm that I implemented in an ad hoc fashion. I’m neither a mathematician nor a scientist: I’m an engineer, and the algorithm was implemented using my "intuition" of what I thought would work. It is likely to change as new versions are released.
The algorithm is based on the following steps:
for each frame in the SER file, identify the "borders" of the sun
perform a Gaussian blur on the spectrum to reduce noise
within the borders, compute, for each column, the average intensity of the spectrum for the center of the line and the wings separately. The center of the line is defined as the range [-0.35Å, 0.35Å] and the wings as the range [-5Å, -0.35Å[ ∪ ]0.35Å, 5Å]
find the maximum intensity of the wings by starting from the center of the line and moving outwards until a local extremum is reached
compute the average of each column average intensity for the wings (the "global average")
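The per-column averaging can be sketched as follows, assuming the frame is a 2D NumPy array with the spectral axis along the rows and an already Gaussian-blurred signal (in JSol’Ex, the line center is actually a per-column polynomial; the constant `y_center` below is a simplification):

```python
import numpy as np

def column_profiles(frame, y_center, angstrom_per_pixel,
                    core_half_width=0.35, wing_extent=5.0):
    # frame: 2D array, rows = spectral axis, columns = spatial axis,
    # assumed already Gaussian-blurred to reduce noise
    rows = np.arange(frame.shape[0])
    shift = (rows - y_center) * angstrom_per_pixel   # pixel shift in Angstroms
    core = np.abs(shift) <= core_half_width           # [-0.35 A, 0.35 A]
    wings = (np.abs(shift) > core_half_width) & (np.abs(shift) <= wing_extent)
    i_center = frame[core, :].mean(axis=0)    # one core average per column
    i_wing = frame[wings, :].mean(axis=0)     # one wing average per column
    return i_center, i_wing
```

This gives, for every spatial column, the two quantities the detection works with: the line-core average and the wing average.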
With Ellerman bomb scoring parameters defined below, the algorithm proceeds per column:
For each column index x in the spectrum image: build a neighborhood of up to 16 columns around x, Nₓ = { x + k | k ∈ ℤ, |k| ≤ 8 }, clamped to the image boundaries.
Compute the overall mean column intensity Ī_global = (1 / N_total) ∑_{j=1..N_total} Ī(j).
Exclude any columns in Nₓ whose average intensity falls below 90% of Ī_global, since very dark columns (usually sunspots) would pull down our estimate of the local wing background and hide true brightening events. Call the remaining set Mₓ and let m = |Mₓ|.
If m < 1, there aren’t enough valid neighbors to form a reliable background: skip column x.
On the Gaussian-smoothed data, measure three key values at column x:
c₀ ≔ I_center(x), the mean intensity in the core region of the spectral line
c_w ≔ I_wing(x), the average intensity across the two wing windows at ±1 Å
c_max ≔ max_{p ∈ wing-pixels nearest ±1 Å} I(p, x), the single highest wing intensity near the expected shift
Compute the local wing background r₀ = (1 / (m−1)) ∑_{j ∈ Mₓ, j≠x} I_wing(j). Using only nearby "bright enough" columns keeps the background estimate from being skewed by dark features.
Define a line-brightening factor B = max(1, c₀ / min(r₀, I_center,global)). Ellerman bombs boost the wings without greatly brightening the core, whereas flares brighten both.
Form an initial score S₀ = 1 + c_max / min(r₀, I_wing,global), where I_wing,global = (1/N_total) ∑_{j=1..N_total} I_wing(j). This compares the local wing peak to the typical wing level across the image.
Adjust for how many neighbors were used: S₁ = S₀ × (m / 16). Fewer valid neighbors mean less confidence, so the score is scaled down proportionally.
Compute the wing-to-background ratio rᵢ = c_max / r₀. If rᵢ ≤ 1.05, the wing peak is too close to the local background and the column is discarded. Otherwise, we boost the score further: raise S₁ to the power of e^{rᵢ}, giving S₂ = S₁^(e^{rᵢ}). This makes the score grow quickly when the wing peak stands out strongly.
Multiply by √(c_max / c₀) to get S₃ = S₂ · √(c_max / c₀). That emphasizes cases where the wings are much brighter than the core.
Finally, penalize any shift away from the ideal ±1 Å wing position. If y_core and y_max are the pixel locations of the line center and the wing peak, compute Δλ = |y_max − y_core| × (Å / pixel), then S_final = S₃ / (1 + |1 Å − Δλ|).
If S_final > 12, mark column x as a candidate event. Use the value of B to decide:
when B < 1.5, it behaves like an Ellerman bomb (wings bright, core unchanged)
when B > 2, it matches a flare (both core and wings bright)
if 1.5 ≤ B ≤ 2, the result is ambiguous and ignored
All thresholds (0.9× global mean, 1.05 ratio, score > 12, B cutoffs) were chosen by testing on data and visually inspecting results.
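Put together, the per-column scoring can be sketched in plain Python (an illustration of the steps above, not JSol’Ex’s actual code; the inputs col_avg, i_center, i_wing, wing_peak and delta_lambda are assumed to have been precomputed per column):

```python
import math

def score_column(x, col_avg, i_center, i_wing, wing_peak, delta_lambda):
    """Return ("ellerman bomb" | "flare", score) or None for column x."""
    n = len(col_avg)
    g_avg = sum(col_avg) / n         # global column average
    g_center = sum(i_center) / n     # global core average
    g_wing = sum(i_wing) / n         # global wing average

    # neighborhood of up to 16 columns; dark columns (< 90% of global) excluded
    neighborhood = range(max(0, x - 8), min(n - 1, x + 8) + 1)
    m_x = [j for j in neighborhood if col_avg[j] >= 0.9 * g_avg]
    others = [j for j in m_x if j != x]
    if not others:
        return None                  # not enough valid neighbors
    r0 = sum(i_wing[j] for j in others) / len(others)  # local wing background

    c0, cmax = i_center[x], wing_peak[x]
    b = max(1.0, c0 / min(r0, g_center))     # line-brightening factor B
    s0 = 1.0 + cmax / min(r0, g_wing)        # initial score
    s1 = s0 * (len(m_x) / 16.0)              # neighbor-count confidence
    ri = cmax / r0                           # wing-to-background ratio
    if ri <= 1.05:
        return None                          # peak too close to background
    s2 = s1 ** math.exp(ri)                  # grows quickly for strong peaks
    s3 = s2 * math.sqrt(cmax / c0)           # favor wings brighter than core
    s_final = s3 / (1.0 + abs(1.0 - delta_lambda[x]))  # penalize shift from 1 A
    if s_final <= 12.0:
        return None
    if b < 1.5:
        return ("ellerman bomb", s_final)
    if b > 2.0:
        return ("flare", s_final)
    return None                              # ambiguous
```

A column whose wing peak clearly stands out from its local background while its core stays quiet ends up classified as an Ellerman bomb candidate.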
There will often be cases where the same bomb is detected in multiple frames. Therefore, we need to merge detections which are spatially connected.
Finally, we apply a limit threshold: if more than 5 Ellerman bombs are detected in a single image, we consider the detections to be false positives (this typically happens on saturated or very noisy images). This is a bit arbitrary, but it seems to work well in practice.
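The merging and limit steps can be illustrated with a simple greedy clustering (a sketch of the idea; JSol’Ex’s actual merging logic may differ):

```python
def merge_detections(points, max_gap=3, max_bombs=5):
    """points: (x, y) positions of per-frame detections.
    Returns one centroid per merged bomb, or [] if there are too many."""
    clusters = []
    for p in sorted(points):
        for cluster in clusters:
            # join the first cluster containing a spatially connected point
            if any(abs(p[0] - q[0]) <= max_gap and abs(p[1] - q[1]) <= max_gap
                   for q in cluster):
                cluster.append(p)
                break
        else:
            clusters.append([p])
    if len(clusters) > max_bombs:
        return []   # too many detections: treat them all as false positives
    return [(sum(q[0] for q in c) / len(c), sum(q[1] for q in c) / len(c))
            for c in clusters]
```

Detections from consecutive frames that land within a few pixels of each other collapse into one bomb, and a suspiciously crowded image is rejected wholesale.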
I had about 1,100 scans I could reuse for detection, and the algorithm successfully discovered Ellerman bomb candidates in about 10% of them. Of course, this required some tuning and several runs to get the parameters right. This doesn’t mean that you have a 10% chance of finding an Ellerman bomb in your data, because my test data is biased (I often do 10 to 20 scans in a row, within a few minutes, to perform stacking, so if a bomb is detected in one image, it has a decent chance of being detected in the next one). Also, I am using the term "Ellerman bomb candidate" because there’s nothing better than visual confirmation to make sure that what you see is indeed an Ellerman bomb: an algorithm is not perfect, and it may fail for many reasons (noise, saturation, artifacts, etc.)
Here are a few examples of Ellerman bombs candidates detected in my data:
This blog post described my first visual Ellerman bomb detection, then how I implemented an algorithm to automatically detect Ellerman bombs in JSol’Ex 3.2. I am very happy to release this into the wild, so that this kind of discovery becomes more accessible to everyone. Of course, as I always say, you should treat the detections with care and always review the results. This is why you get both a global "map" of the detected bombs and a detailed view of each bomb, which can be used to confirm the detection. In addition, I recommend that you create animations of the regions, which you can simply do in JSol’Ex by CTRL+clicking on the image and then selecting an area around the bomb.
Finally, I’d like to thank my friends at the Astro Club de Challans, who listened to me talk about Ellerman bomb detection for quite a while as I showed them preliminary results, and who were very supportive of my work. Last but not least, thanks again to my wife for her patience, seeing me work on this (too) late at night!
02 May 2025
Tags: solex jsolex solar astronomy
I’m happy to announce the release of JSol’Ex 3.1, which ships with a long awaited feature: jagged edges correction! Let’s explore in this article what this is about.
Spectroheliographs like the Sol’Ex or the Sunscan do not use a traditional imaging system like, for example, in planetary imaging, where you can capture dozens to hundreds of frames per second and do so-called "lucky imaging" to select the best frames and stack them together into a high-resolution image.
In the case of a spectroheliograph, the image is built by scanning the solar disk in a series of "slices" of the sun: it takes several seconds and sometimes minutes (~3 minutes when you let the sun pass "naturally" through the slit) to get a full image of the sun.
In practice, this means that between each frame, each "slice" of the sun, the atmosphere will have moved slightly, causing some misalignment between the frames. This is particularly visible when there is wind, which can cause the telescope to shake a bit and the image to be misaligned. Lastly, you may even have a mount which is not perfectly balanced, or which has some resonance at certain scan speeds.
As an illustration, let’s take this image captured using a Sunscan (courtesy of Oscar Canales):
This image shows 3 problems:
the jagged edges, which cause some unpleasant "spikes" on the edges of the sun
misalignment of features of the sun, particularly visible on filaments
a disk which isn’t perfectly round
These issues are typical of spectroheliographs, and are the main limiting factor when it comes to achieving high resolution images. Therefore, excellent seeing conditions are a must to get high quality images. Even if you do stacking, the fact that the reference image will show spikes is often a problem.
Starting with release 3.1.0, JSol’Ex ships with an experimental feature to correct jagged edges. It is not perfect yet, but good enough for you to provide feedback and even improve the quality of your images.
For example, here’s the same image, but with jagged edges correction applied:
And so that it’s even easier to see the difference, here’s a blinking animation of the two images:
The jagged edges are now mostly gone, the features on the sun are better aligned, and the image is much more pleasant to look at. There is still some jaggedness visible, and the correction will never be perfect, but it is a good start.
In particular, you should be careful when applying the correction, because it could cause artifacts in the image, in particular on prominences. As usual, with great power comes great responsibility!
To illustrate how the correction works, let’s imagine a perfect scan: a scan speed giving us a perfectly circular disk, no turbulence, no wind, etc.
In this case, what we would see during the scan is a spectrum whose width slowly increases, reaches a maximum, and then decreases. The pace at which the width changes is determined by the scan speed and is predictable. In particular, the left and right borders of the spectrum will follow a circular curve.
Now, let’s get back to a "real world" scan. In that case, the left and right edges will slightly deviate from the circular curve. Instead, they follow the path of an ellipse: in fact, this ellipse is already computed, since it is required to perform the geometric correction.
The idea is therefore quite simple in theory: we need to detect the left and right edges of the spectrum, then compare them to the ideal ellipse that we have computed. Pixels which deviate from this curve give us information about the jagged edges. We can then compute a distortion map, which is used to correct the image.
In practice, we also need to filter the samples: while edge detection is robust enough to provide a good geometric correction, it is not perfect. It can also be skewed by the presence of prominences, for example. Therefore, we perform sigma clipping on the detected edges in order to remove outliers, that is to say pixels which deviate too much from the average deviation.
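The deviation and filtering steps can be sketched like this (a conceptual illustration, assuming the left edge has already been detected on each scan line and the ideal ellipse evaluated at the same rows; JSol’Ex’s implementation is more involved):

```python
import math

def edge_shifts(detected_edges, ideal_edges, sigma_factor=2.0):
    """Per-row horizontal deviation of the detected edge from the ideal
    ellipse, with sigma clipping of outliers (e.g. prominences)."""
    dev = [d - i for d, i in zip(detected_edges, ideal_edges)]
    mean = sum(dev) / len(dev)
    std = math.sqrt(sum((d - mean) ** 2 for d in dev) / len(dev))
    # rows deviating too much from the average deviation get no shift
    return [d if abs(d - mean) <= sigma_factor * std else 0.0 for d in dev]
```

Each remaining per-row shift then feeds the distortion map: the row is translated by the opposite of its deviation so the edge lands back on the ellipse.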
This is also why the correction will not work properly if the image is not focused correctly: you would combine two problems in one, and the correction would not be able to detect the edges properly.
In addition, in the image above you can see that the bottommost prominence is slightly distorted, which is caused by the fact that it’s far away from the two points which were used to compute the distortion. It may be possible to reduce such artifacts by using a smaller sigma factor (at the risk of undercorrecting the edges).
In this blog post, I have described the new jagged edges correction feature in JSol’Ex 3.1. This solves one of the most common issues users are having with spectroheliographs, and I hope it will help you get better images. However, as usual, it’s a work in progress, so do not hesitate to provide feedback!
14 April 2025
Tags: solex jsolex solar astronomy
After dozens of hours of work, I’m happy to announce the release of JSol’Ex 3.0.0! This major release is a new milestone in the development of JSol’Ex, and it brings new features and improvements that I hope you will enjoy.
Since its inception as an educational project for understanding how the Sol’Ex works, JSol’Ex has grown into a powerful tool for processing and analyzing images captured with the Sol’Ex. However, it became very popular over time and started to be used outside the sole Sol’Ex community. In particular, it is now a tool of choice for many spectroheliograph owners.
I have always been keen on providing a user-friendly interface while keeping a good innovation pace. JSol’Ex was the first SHG software to offer:
automatic colorization of images
automatic detection of spectral lines
Doppler eclipse image, inverted image and orientation grid
automatic correction of the P angle
single click processing of Helium line images
embedded stacking
automatic trimming and compression of SER files
identifying what frame of a SER file matches a particular point of the solar disk
an optimal exposure calculator
automatic detection of redshifts
automatic detection and annotation of sunspots
automatic creation of animations of a single image taken at different wavelengths
a full-fledged scripting engine which allows creation of custom images, animations, etc.
support for home-made SHGs
and more!
All integrated into a single, easy to use, cross-platform application: no need for Gimp, ImPPG or Autostakkert! (but you can use them if you want to!).
For this new release, I wondered if I should change the name so that it better matches the new scope of the project, but eventually decided to keep it as it is, because it is already well known in the community, and because changing it would also imply a significant amount of time spent on the change rather than on new features.
In addition to performance improvements and bugfixes, this release deserves its major version number because of many significant improvements.
The first thing you may notice is the improved image quality. The algorithm that detects the spectral lines has been improved, resulting in better polynomial fitting and therefore a more accurate image reconstruction. This is most noticeable in images with low signal, which is often the case in calcium.
Next, a new background removal algorithm has been added. It is fairly common to have either internal reflections or light leaks in the optical path of a spectroheliograph. This results in images which are hard to process or not usable at all. This version of JSol’Ex is capable of removing difficult gradients. To illustrate this, here’s an image that a user with a Sunscan sent me:
The image on the left is unprocessed and shows important internal reflections. These are completely removed in the image on the right, processed automatically with JSol’Ex.
This background removal will only be applied to the "Autostretch" image, which is the default "enhanced" image that JSol’Ex is using, but it is also available as a standalone function in ImageMath scripts.
Another common issue with SHGs is the presence of vignetting, visible on the poles of the solar disk. The vignetting issue stems from the following factors, in the order of their impact:
the physical size of the SHG’s optical components — including the lens diameter, grating size, and slit length
the telescope’s focal ratio and focal length
the telescope’s own intrinsic vignetting (though this is rarely a significant factor)
For prebuilt SHGs like the MLAstro SHG 700, the size of the lens and grating is typically constrained by the housing design and cost limitations. As a result, vignetting often becomes an issue when using longer focal length telescopes, especially when paired with a longer slit.
To fix this, JSol’Ex had until now an option to use artificial flat correction: the idea was basically to model the illumination of the solar disk with a polynomial and to apply a correction to the image. This works relatively well, but it can sometimes introduce noise, or even bias the reconstruction on low-contrast images. With even longer slits, this artificial correction is not sufficient to remove the vignetting, so JSol’Ex 3 introduces the ability to use a physical flat correction.
The idea with a physical flat correction is to take a series of 10 to 20 images of the sun, using a light diffuser device at the entrance of the telescope, such as tracing paper, in order to diffuse light. The flat should be captured with the same cropping window as the one used for the solar images, but exposure will be longer, and possibly higher gain as well. The result is a SER file that JSol’Ex can use to create a model of the illumination of the disk, which can be used to correct the images.
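The principle can be sketched as follows, assuming the flat frames were already reconstructed into 2D images (JSol’Ex’s actual illumination model is more elaborate; the moving-average low pass below stands in for it):

```python
import numpy as np

def apply_flat(image, flat_frames, smooth=31):
    """Build an illumination model from flat frames and divide it out."""
    flat = np.mean(np.stack(flat_frames), axis=0)   # average the flat images
    # low-pass each row with a simple moving average, so small-scale
    # artifacts (e.g. transverse lines) don't end up in the model
    kernel = np.ones(smooth) / smooth
    model = np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), 1, flat)
    model /= model.max()                            # normalize to [0, 1]
    return image / np.maximum(model, 1e-3)          # avoid dividing by ~0
```

Dividing by the normalized model brightens the under-illuminated regions (the vignetted poles) back to the level of the disk center.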
As an illustration, here’s a series of 3 images of the Sun, taken with a prototype of a 10mm slit:
The image on the left was done without any correction and shows very strong vignetting. The image in the middle was done with the artificial flat correction, which improves the situation but still shows some vignetting. The image on the right was done with the physical flat correction, which is much better and shows no vignetting at all.
Flats can be reused between sessions, as long as you use the same cropping window and the same wavelength.
The physical flat correction can also be used on images taken with a Sol’Ex, in particular for some wavelengths like H-beta which show stronger illumination of the middle of the solar disk.
Note: Flat correction is not designed to fix transversalliums: it has to apply low-pass filtering to the image to compute a good flat, which removes the transverse lines from the model, so they cannot be corrected this way. To correct transversalliums, use the banding correction parameters.
By default, JSol’Ex used to display images with a linear stretch applied. Starting with this version, it is possible to select which stretching algorithm to use: linear, curve, or no stretching at all.
This version introduces a new tool to measure distances! This feature was suggested by Minh Nguyen from MLAstro, after seeing one of my images in Calcium H, which showed a very long filament:
This tool lets you click on waypoints to follow a path and make measurements on the disk, in which case the distances take the curvature into account, or outside the disk, for example to measure the size of prominences, in which case the distances are linear.
The measured distances are always an approximation, because it’s basically impossible to know at what height a particular feature is located, but it gives a good rough estimate.
Last but not least, this version significantly improves the scripting engine, aka ImageMath. While this feature is for more advanced users, it is an extremely powerful tool which lets you generate custom images, automatically stack images, create animations, etc.
In this version, the scripting engine has been rewritten to make it more enjoyable to use. It adds:
the ability to write expressions on several lines
the possibility to use named parameters
the ability to define your own functions
call an external web service to generate script snippets
the ability to import scripts into other scripts
As well as new functions. Let’s take a deeper look.
You may have faced the situation where you wanted to apply the same operation to several images. For example, let’s imagine that you want to decorate an image with the observation details and the solar parameters.
Before, you would write something like this:
image1=draw_solar_params(draw_obs_details(some_image))
image2=draw_solar_params(draw_obs_details(some_other_image))
Now, you can define a function, let’s call it decorate, which will take an image and return the decorated image:
[fun:decorate img]
result = draw_solar_params(draw_obs_details(img))
[outputs]
image1=decorate(some_image)
image2=decorate(some_other_image)
You can take a look at the documentation for more details.
In the previous section we have seen how to define functions.
It can be useful to externalize these functions in a separate file, so that they can be reused in other scripts.
This is now possible with the include statement.
For example, let’s say you have a file called utils.math which contains the decorate function.
We can now import this file in our script:
[include "utils"]
[outputs]
image1=decorate(some_image)
image2=decorate(some_other_image)
This will import the utils.math file and make the decorate function available in the current script.
Named parameters are a new feature that allows you to pass parameters to functions by name, instead of by position. This is particularly useful for functions that take a lot of parameters, or when you want to make your code more readable.
For example, in the example above, we could have written:
[include "utils"]
[outputs]
image1=decorate(img: some_image)
image2=decorate(img: some_other_image)
The names of the parameters are documented here.
This version introduces a few new functions, which are available in the scripting engine:
bg_model: background sky modeling
a2px and px2a: conversion between pixels and Angstroms
wavelen: returns the wavelength of an image, based on its pixel shift, dispersion, and reference wavelength
remote_scriptgen: allows calling an external web service to generate a script or images
transition: creates a transition between two or more images
curve_transform: applies a transformation to the image based on a curve
equalize: equalizes the histograms of a series of images so that they look similar in brightness and contrast
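The pixel/Ångström conversions boil down to the spectral dispersion. In plain Python (hypothetical signatures for illustration; the real ImageMath functions pick up the dispersion and reference wavelength from the processing parameters):

```python
def a2px(angstroms, dispersion):
    # dispersion in Angstroms per pixel
    return angstroms / dispersion

def px2a(pixels, dispersion):
    return pixels * dispersion

def wavelen(pixel_shift, dispersion, reference_wavelength):
    # wavelength of an image given its pixel shift from the line center
    return reference_wavelength + px2a(pixel_shift, dispersion)
```

For example, with a dispersion of 0.05 Å/pixel, an image at pixel shift 10 from the H-alpha line center sits 0.5 Å into the wing.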
And others have been improved:
find_shift: added an optional parameter for the reference wavelength
continuum: improved function reliability, enhancing Helium line extraction
The transition function, for example, is capable of generating intermediate frames in an animation based on the actual time difference between two images, offering smooth, uniform transitions between images.
This is how my partial solar eclipse animation was created!
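The idea behind transition can be sketched as a crossfade whose frame count is proportional to the real time elapsed between two captures (illustrative code, not the actual implementation; "images" here are flat lists of pixel values):

```python
def transition(img_a, img_b, seconds_between, frames_per_second=2):
    """Linear crossfade between two images; the number of intermediate
    frames scales with the real time elapsed between the captures."""
    n = max(1, round(seconds_between * frames_per_second))
    frames = []
    for i in range(1, n + 1):
        t = i / (n + 1)   # interpolation weight, endpoints excluded
        frames.append([(1 - t) * a + t * b for a, b in zip(img_a, img_b)])
    return frames
```

Because the frame count follows the capture timestamps, scans taken two minutes apart get twice as many in-between frames as scans taken one minute apart, which is what makes the resulting animation play back uniformly.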
I would like to thank all the users who have contributed to this release by reporting bugs, suggesting features, and testing the software. In particular, I would like to recognize the following people:
Minh Nguyen, MLAstro’s founder for his help with the background removal and flat correction algorithms, as well as the new distance measurement tool and review of this blog post
Yves Robin for his testing and improvement ideas
my wife for her patience, while I was going to bed late every night to work on this release
30 March 2025
Tags: solex jsolex solar astronomy
In this blog post I’m describing what is probably a world premiere (let me know if not!): capturing a partial solar eclipse using a spectroheliograph and making an animation which covers the whole event.
Edit: it turns out Olivier Aguerre did something similar, described here (in French).
On March 29, 2025, we were lucky to get a partial solar eclipse visible in France, with a maximum of about 25%. I wanted to do what was a first for me, capturing the event, so I used a TS-Optics 80mm refractor with a 560mm focal length, equipped with an MLAstro SHG 700 spectroheliograph. The Astro Club Challandais was organizing a group observation this morning, but my initial decision was not to attend. Instead, I opted to limit the risks by performing this somewhat complex setup at home, in familiar territory. Unlike the SUNSCAN, using a spectroheliograph like the SHG 700 requires more equipment: a telescope, a mount (AZ-EQ6), and in my case, a mini PC for data acquisition (running Windows) along with a laptop for remote connection to the PC.
I have conducted observations away from home before, but from experience, setting everything up—including WiFi, polar alignment, etc.—can be a bit too risky for an event like this. So, all week, I anxiously monitored the weather. Yesterday, the forecast looked grim, with thick clouds and rain. However, the gods of astronomy were merciful, blessing us with a beautiful day. The sky wasn’t entirely clear, but it was good enough for observations.
First of all, I had a specific observation protocol in mind. As you may know, a spectroheliograph doesn’t directly produce an image—it requires software to process video scans of the Sun. In the case of the Sunscan, the software is built into the device, but for a Sol’Ex-type setup, an independent software handles this task. You are probably familiar with INTI, but I have my own software: JSol’Ex.
The advantage of developing my own software is that I was able to anticipate potential issues. One major challenge with an eclipse is that the software must "recognize" the Sun’s outline, which won’t always be perfectly round—in fact, it could be quite elliptical. The software corrects the image by detecting the edges, but when the Moon moves in front, the sampling points become completely incorrect, sometimes detecting the lunar limb instead of the Sun’s!
My strategy was to start early enough to adjust settings for minimal camera tilt and, more importantly, to ensure an X/Y ratio of 1.0. With these reference scans, I could then force all subsequent scans to use the same parameters. So, I began my first scans under a beautifully clear sky! After some adjustments, I was ready: a single scan, and we were good to go!
At the same time, I activated JSol’Ex’s integrated web server and set up a tunnel so my friends could watch my observations live! I planned to perform a scan every two minutes using a Python script in SharpCap, automating the recording, scan start, stop, and rewind. JSol’Ex’s "continuous" mode processed scans in real time. Everything was going smoothly… until panic struck—clouds!
For the past three hours, the sky had been perfectly clear. Yet, just ten minutes before the eclipse began, clouds started rolling in. What bad luck! Fortunately, by spacing out the scans, I managed to capture many of them in cloud-free moments.
The eclipse began, and the first scans featuring the Moon appeared. It worked! Forcing the X/Y ratio was effective!
As the scans piled up, I encountered a new problem. While locking the X/Y ratio helped, the software still needed to calculate an ellipse to determine the Sun’s center for cropping. But things started going wrong—the software was miscalculating everything. I had anticipated this, and I already had a workaround in mind, but the necessary code wasn’t deployed on my mini PC. So, I didn’t worry too much and simply shared the raw images, which were perfectly round—because, as you recall, I had already adjusted my X/Y ratio to 1.0!
I continued scanning, though my setup wasn’t perfectly precise. My polar alignment wasn’t flawless, and I had no millimeter-accurate return-to-start positioning. As a result, I had to manually realign between each scan. While this wasn’t a major issue, it did create some intriguing scans. For those familiar with Sol’Ex, seeing "gaps" in the spectrum due to the Moon’s presence was quite unusual, making centering more difficult.
Time passed, scans continued, and finally, we reached the maximum eclipse phase!
At one point, I wondered whether we could see lunar relief in the images. Given the jagged edges typical of SHG imaging, it was hard to say for sure, but I think some of the details visible below are actual surface details.
By the end of the eclipse, I had 80 SER files, taken between 9:37 AM and 1:43 PM, totaling nearly 175 GB of data! It was time to transfer everything to my desktop PC to create an animation. This also gave me a chance to test whether my earlier workarounds for ellipse detection would function as expected. I ran batch processing, and boom—within minutes, I had this animation:
And here’s the continuum version, which was oddly compressed by YouTube:
This was just a first draft, using a beta version of my software. A few hours later I released a new version of the animation which is visible below:
In the end, the experiment was a success! I also took this opportunity to improve my software, which will benefit everyone. If you have eclipse scans, don’t discard them! Soon, you’ll be able to process them too.
The big question now is: Could this be done during a total solar eclipse, such as next year’s in Spain? Well, I feel lucky that this one was only 25% partial. Managing ellipse detection and mount realignment between scans is already quite tricky. During a total eclipse, there wouldn’t even be a reference point!
Unless one has flawless alignment, a mount capable of returning to position perfectly, and a steady scanning speed, this would be a real challenge. Honestly, it’s beyond my current expertise—it would require a lot more work.
P.S.: For French-speaking readers in the west of France (or simply if you are nearby on that date), we are organizing the Rencontres Solaires de Vendée on June 7, where we can discuss this topic!
Older posts are available in the archive.