
14 April 2025
Tags: solex jsolex solar astronomy
After dozens of hours of work, I’m happy to announce the release of JSol’Ex 3.0.0! This major release is a new milestone in the development of JSol’Ex, and it brings new features and improvements that I hope you will enjoy.
Since its inception as an educational project for understanding how the Sol’Ex works, JSol’Ex has grown into a powerful tool for processing and analyzing images captured with the Sol’Ex. It became very popular over time, however, and started being used beyond the Sol’Ex community alone. In particular, it is now a tool of choice for many spectroheliograph owners.
I have always been keen on providing a user-friendly interface while keeping a good innovation pace. JSol’Ex was the first SHG software to offer:
automatic colorization of images
automatic detection of spectral lines
Doppler, eclipse and inverted images, plus an orientation grid
automatic correction of the P angle
single click processing of Helium line images
embedded stacking
automatic trimming and compression of SER files
identifying which frame of a SER file matches a particular point of the solar disk
an optimal exposure calculator
automatic detection of redshifts
automatic detection and annotation of sunspots
automatic creation of animations of a single image taken at different wavelengths
a full-fledged scripting engine which allows creation of custom images, animations, etc.
support for home-made SHGs
and more!
All integrated into a single, easy-to-use, cross-platform application: no need for Gimp, ImPPG or Autostakkert! (though you can still use them if you want to!).
For this new release, I wondered if I should change the name so that it better matches the new scope of the project, but I eventually decided to keep it as it is: the name is already well known in the community, and changing it would mean spending a significant amount of time that wouldn’t go into new features.
In addition to performance improvements and bugfixes, this release deserves its major version number because of many significant improvements.
The first thing you may notice is the improved image quality. The algorithm that detects the spectral lines has been improved, which results in better polynomial detection and therefore more accurate image reconstruction. This is most noticeable in low-signal images, which is often the case in calcium.
Next, a new background removal algorithm has been added. It is fairly common to have either internal reflections or light leaks in the optical path of a spectroheliograph. This results in images which are hard to process or not usable at all. This version of JSol’Ex is capable of removing difficult gradients. To illustrate this, here’s an image that a user with a Sunscan sent me:
The image on the left is unprocessed and shows strong internal reflections. These are completely removed in the image on the right, processed automatically with JSol’Ex.
This background removal is only applied to the "Autostretch" image, which is the default "enhanced" image that JSol’Ex produces, but it is also available as a standalone function in ImageMath scripts.
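JSol’Ex’s actual background removal algorithm isn’t detailed here, but the general idea of modeling a smooth gradient can be sketched as fitting a low-order 2D polynomial to the off-disk pixels and subtracting the model. Everything below (function name, parameters) is illustrative, not JSol’Ex’s code:

```python
import numpy as np

def remove_background(image, disk_mask, degree=2):
    """Fit a low-order 2D polynomial to off-disk pixels and subtract it.

    image: 2D float array; disk_mask: True where the solar disk is.
    Illustrative sketch only, not JSol'Ex's actual algorithm.
    """
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Normalize coordinates to keep the fit well conditioned
    xn, yn = xs / w, ys / h
    bg = ~disk_mask
    # Design matrix of monomials x^i * y^j with i + j <= degree
    terms = [(xn ** i) * (yn ** j)
             for i in range(degree + 1)
             for j in range(degree + 1 - i)]
    A = np.stack([t[bg] for t in terms], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, image[bg], rcond=None)
    model = sum(c * t for c, t in zip(coeffs, terms))
    return np.clip(image - model, 0, None)
```

A least-squares fit on the off-disk pixels only is what keeps the solar disk itself from biasing the gradient model.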
Another common issue with SHGs is the presence of vignetting, visible on the poles of the solar disk. The vignetting issue stems from the following factors, in the order of their impact:
the physical size of the SHG’s optical components — including the lens diameter, grating size, and slit length
the telescope’s focal ratio and focal length
the telescope’s own intrinsic vignetting (though this is rarely a significant factor)
For prebuilt SHGs like the MLAstro SHG 700, the size of the lens and grating is typically constrained by the housing design and cost limitations. As a result, vignetting often becomes an issue when using longer focal length telescopes, especially when paired with a longer slit.
To fix this, JSol’Ex has so far offered an artificial flat correction: the idea was basically to model the illumination of the solar disk with a polynomial and to apply a correction to the image. This works relatively well, but it can sometimes introduce noise, or even bias the reconstruction on low-contrast images. With even longer slits, this artificial correction is not sufficient to remove the vignetting, so JSol’Ex 3 introduces the ability to use a physical flat correction.
The idea with a physical flat correction is to take a series of 10 to 20 images of the Sun, with a light diffuser such as tracing paper placed at the entrance of the telescope. The flat should be captured with the same cropping window as the one used for the solar images, but with a longer exposure, and possibly higher gain as well. The result is a SER file that JSol’Ex can use to build a model of the illumination of the disk, which is then used to correct the images.
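As a rough sketch of what such a flat model involves (not JSol’Ex’s actual implementation; the function names and the simple box blur are my own choices), one can average the flat frames, low-pass filter the result to obtain a smooth illumination model, and then divide images by it:

```python
import numpy as np

def build_flat_model(flat_frames, smooth=15):
    """Average a stack of flat images and low-pass filter the result,
    giving a smooth illumination model (illustrative sketch only)."""
    mean = np.mean(np.stack(flat_frames), axis=0)
    # Simple box blur as a low-pass filter, applied along each axis
    kernel = np.ones(smooth) / smooth
    blurred = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="same"), 1, mean)
    blurred = np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="same"), 0, blurred)
    return blurred / blurred.max()

def apply_flat(image, flat_model, eps=1e-6):
    """Divide the image by the normalized illumination model."""
    return image / np.maximum(flat_model, eps)
```

The low-pass filtering is also why this technique cannot correct transversalium lines: any fine structure in the flat is smoothed away on purpose.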
As an illustration, here’s a series of 3 images of the Sun, taken with a prototype of a 10mm slit:
The image on the left, produced without any correction, shows very strong vignetting. The image in the middle, produced with the artificial flat correction, improves the situation but still shows some vignetting. The image on the right, produced with the physical flat correction, is much better and shows no vignetting at all.
Flats can be reused between sessions, as long as you use the same cropping window and the same wavelength.
The physical flat correction can also be used on images taken with a Sol’Ex, in particular for some wavelengths like H-beta which show stronger illumination of the middle of the solar disk.
Note: flat correction is not designed to fix transversalliums: it has to apply low-pass filtering to the image to compute a good flat, which removes the transverse lines. To correct transversalliums, use the banding correction parameters.
By default, JSol’Ex used to display images with a linear stretch applied. Starting with this version, you can select which stretching algorithm to use: linear, curve, or no stretching at all.
This version introduces a new tool to measure distances! This feature was suggested by Minh Nguyen from MLAstro, after seeing one of my images in Calcium H, which showed a very long filament:
This tool lets you click on waypoints to follow a path and make measurements on the disk, in which case the distances take the curvature into account, or outside the disk, for example to measure the size of prominences, in which case the distances are linear.
The measured distances are always an approximation, because it’s basically impossible to know at what height a particular feature is located, but it gives a good rough estimate.
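For on-disk measurements, taking curvature into account can be sketched by back-projecting image coordinates onto the solar sphere and measuring the great-circle arc between the points. This is an illustrative approximation (it assumes features sit on the photosphere), not JSol’Ex’s actual code:

```python
import math

SUN_RADIUS_KM = 696_000  # mean solar radius

def on_disk_distance(p1, p2, disk_radius_px, center=(0.0, 0.0)):
    """Approximate distance in km between two points on the solar disk,
    accounting for the sphere's curvature. Coordinates are in pixels."""
    def to_unit_vector(p):
        x = (p[0] - center[0]) / disk_radius_px
        y = (p[1] - center[1]) / disk_radius_px
        # Recover the line-of-sight component on the unit sphere
        z = math.sqrt(max(0.0, 1.0 - x * x - y * y))
        return (x, y, z)
    v1, v2 = to_unit_vector(p1), to_unit_vector(p2)
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(v1, v2))))
    return SUN_RADIUS_KM * math.acos(dot)  # great-circle arc length

def off_disk_distance(p1, p2, disk_radius_px):
    """Linear distance for features outside the disk, e.g. prominences."""
    scale = SUN_RADIUS_KM / disk_radius_px
    return scale * math.hypot(p2[0] - p1[0], p2[1] - p1[1])
```

Near the limb the curved distance grows much faster than the straight-line one, which is exactly why the on-disk and off-disk cases need different treatment.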
Last but not least, this version significantly improves the scripting engine, aka ImageMath. While this feature is for more advanced users, it is an extremely powerful tool which lets you generate custom images, automatically stack images, create animations, etc.
In this version, the scripting engine has been rewritten to make it more enjoyable to use. It adds:
the ability to write expressions on several lines
the possibility to use named parameters
the ability to define your own functions
the ability to call an external web service to generate script snippets
the ability to import scripts into other scripts
It also adds new functions. Let’s take a deeper look.
You may have faced the situation where you wanted to apply the same operation to several images. For example, let’s imagine that you want to decorate an image with the observation details and the solar parameters.
Before, you would write something like this:
image1=draw_solar_params(draw_obs_details(some_image))
image2=draw_solar_params(draw_obs_details(some_other_image))
Now, you can define a function, let’s call it decorate, which will take an image and return the decorated image:
[fun:decorate img]
result = draw_solar_params(draw_obs_details(img))
[outputs]
image1=decorate(some_image)
image2=decorate(some_other_image)
You can take a look at the documentation for more details.
In the previous section we have seen how to define functions.
It can be useful to externalize these functions in a separate file, so that they can be reused in other scripts.
This is now possible with the import statement.
For example, let’s say you have a file called utils.math which contains the decorate function.
We can now import this file in our script:
[include "utils"]
[outputs]
image1=decorate(some_image)
image2=decorate(some_other_image)
This will import the utils.math file and make the decorate function available in the current script.
Named parameters are a new feature that allows you to pass parameters to functions by name, instead of by position. This is particularly useful for functions that take a lot of parameters, or when you want to make your code more readable.
For example, in the example above, we could have written:
[include "utils"]
[outputs]
image1=decorate(img: some_image)
image2=decorate(img: some_other_image)
The names of the parameters are documented here.
This version introduces a few new functions, which are available in the scripting engine:
bg_model: background sky modeling
a2px and px2a: conversion between pixels and Angstroms
wavelen: returns the wavelength of an image, based on its pixel shift, dispersion, and reference wavelength
remote_scriptgen: allows calling an external web service to generate a script or images
transition: creates a transition between two or more images
curve_transform: applies a transformation to the image based on a curve
equalize: equalizes the histograms of a series of images so that they look similar in brightness and contrast
And others have been improved:
find_shift: added an optional parameter for the reference wavelength
continuum: improved function reliability, enhancing Helium line extraction
The transition function, for example, is capable of generating intermediate frames in an animation, based on the actual time difference between two images, offering the ability to have smooth, uniform transitions between images.
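A minimal sketch of this time-weighted interpolation idea (my own names and a simple linear cross-fade, not JSol’Ex’s actual implementation):

```python
import numpy as np

def transition_frames(img_a, img_b, t_a, t_b, fps=10.0, seconds_per_hour=1.0):
    """Generate intermediate frames between two images, with the number of
    frames proportional to the real time elapsed between captures, so the
    resulting animation plays at a uniform rate.

    t_a, t_b: capture timestamps in seconds. Illustrative sketch only.
    """
    elapsed_hours = (t_b - t_a) / 3600.0
    n = max(1, round(fps * seconds_per_hour * elapsed_hours))
    frames = []
    for i in range(n):
        alpha = i / n
        # Linear cross-fade between the two source images
        frames.append((1.0 - alpha) * img_a + alpha * img_b)
    return frames
```

Because the frame count follows the real elapsed time, a 10-minute gap and a 2-minute gap between scans no longer play back at the same speed.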
This is how my partial solar eclipse animation was created!
I would like to thank all the users who have contributed to this release by reporting bugs, suggesting features, and testing the software. In particular, I would like to recognize the following people:
Minh Nguyen, MLAstro’s founder, for his help with the background removal and flat correction algorithms, as well as the new distance measurement tool and his review of this blog post
Yves Robin for his testing and improvement ideas
my wife for her patience, while I was going to bed late every night to work on this release
30 March 2025
Tags: solex jsolex solar astronomy
In this blog post I’m describing what is probably a world premiere (let me know if not!): capturing a partial solar eclipse using a spectroheliograph and making an animation which covers the whole event.
Edit: Turns out Olivier Aguerre did something similar, described here (in French).
On March 29, 2025, we were lucky to get a partial solar eclipse visible in France, with a maximum of about 25%. I wanted to do what was a first for me, capturing the event, so I used a TS-Optics 80mm refractor with a 560mm focal length, equipped with an MLAstro SHG 700 spectroheliograph. The Astro Club Challandais was organizing a group observation this morning, but my initial decision was not to attend. Instead, I opted to limit the risks by performing this somewhat complex setup at home, in familiar territory. Unlike the SUNSCAN, using a spectroheliograph like the SHG 700 requires more equipment: a telescope, a mount (AZ-EQ6), and in my case, a mini PC for data acquisition (running Windows) along with a laptop for remote connection to the PC.
I have conducted observations away from home before, but from experience, setting everything up—including WiFi, polar alignment, etc.—can be a bit too risky for an event like this. So, all week, I anxiously monitored the weather. Yesterday, the forecast looked grim, with thick clouds and rain. However, the gods of astronomy were merciful, blessing us with a beautiful day. The sky wasn’t entirely clear, but it was good enough for observations.
First of all, I had a specific observation protocol in mind. As you may know, a spectroheliograph doesn’t directly produce an image—it requires software to process video scans of the Sun. In the case of the Sunscan, the software is built into the device, but for a Sol’Ex-type setup, independent software handles this task. You are probably familiar with INTI, but I have my own software: JSol’Ex.
The advantage of developing my own software is that I was able to anticipate potential issues. One major challenge with an eclipse is that the software must "recognize" the Sun’s outline, which won’t always be perfectly round—in fact, it could be quite elliptical. The software corrects the image by detecting the edges, but when the Moon moves in front, the sampling points become completely incorrect, sometimes detecting the lunar limb instead of the Sun’s!
My strategy was to start early enough to adjust settings for minimal camera tilt and, more importantly, to ensure an X/Y ratio of 1.0. With these reference scans, I could then force all subsequent scans to use the same parameters. So, I began my first scans under a beautifully clear sky! After some adjustments, I was ready: a single scan, and we were good to go!
At the same time, I activated JSol’Ex’s integrated web server and set up a tunnel so my friends could watch my observations live! I planned to perform a scan every two minutes using a Python script in SharpCap, automating the recording, scan start, stop, and rewind. JSol’Ex’s "continuous" mode processed scans in real time. Everything was going smoothly… until panic struck—clouds!
For the past three hours, the sky had been perfectly clear. Yet, just ten minutes before the eclipse began, clouds started rolling in. What bad luck! Fortunately, by spacing out the scans, I managed to capture many of them in cloud-free moments.
The eclipse began, and the first scans featuring the Moon appeared. It worked! Forcing the X/Y ratio was effective!
As the scans piled up, I encountered a new problem. While locking the X/Y ratio helped, the software still needed to calculate an ellipse to determine the Sun’s center for cropping. But things started going wrong—the software was miscalculating everything. I had anticipated this, and I already had a workaround in mind, but the necessary code wasn’t deployed on my mini PC. So, I didn’t worry too much and simply shared the raw images, which were perfectly round—because, as you recall, I had already adjusted my X/Y ratio to 1.0!
I continued scanning, though my setup wasn’t perfectly precise. My polar alignment wasn’t flawless, and I had no millimeter-accurate return-to-start positioning. As a result, I had to manually realign between each scan. While this wasn’t a major issue, it did create some intriguing scans. For those familiar with Sol’Ex, seeing "gaps" in the spectrum due to the Moon’s presence was quite unusual, making centering more difficult.
Time passed, scans continued, and finally, we reached the maximum eclipse phase!
At one point, I wondered whether we could see lunar relief in the images. Given the jagged edges typical of SHG imaging, it was hard to say for sure, but I think some of the details visible below are actual surface details.
By the end of the eclipse, I had 80 SER files, taken between 9:37 AM and 1:43 PM, totaling nearly 175 GB of data! It was time to transfer everything to my desktop PC to create an animation. This also gave me a chance to test whether my earlier workarounds for ellipse detection would function as expected. I ran batch processing, and boom—within minutes, I had this animation:
And here’s the continuum version, which was weirdly compressed by YouTube:
This was just a first draft, using a beta version of my software. A few hours later I released a new version of the animation which is visible below:
In the end, the experiment was a success! I also took this opportunity to improve my software, which will benefit everyone. If you have eclipse scans, don’t discard them! Soon, you’ll be able to process them too.
The big question now is: Could this be done during a total solar eclipse, such as next year’s in Spain? Well, I feel lucky that this one was only 25% partial. Managing ellipse detection and mount realignment between scans is already quite tricky. During a total eclipse, there wouldn’t even be a reference point!
Unless one has flawless alignment, a mount capable of returning to position perfectly, and a steady scanning speed, this would be a real challenge. Honestly, it’s beyond my current expertise—it would require a lot more work.
P.S.: For French-speaking readers in the west of France (or simply if you are nearby at that date), we are organizing the Rencontres Solaires de Vendée on June 7, where we can discuss this topic!
11 March 2025
Tags: solex jsolex solar astronomy
For a few months, I’ve been working with Minh, from MLAstro, to improve support for the SHG 700 in JSol’Ex. While JSol’Ex was initially designed for Christian Buil’s Sol’Ex, it appears that lots of users are using it to process images acquired with different spectroheliographs. A while back, I added the ability to declare the specifications of the spectroheliograph you are using, which makes JSol’Ex compatible with a wide variety of instruments. An example of this collaboration with Minh is the experimental flat correction, which is recommended with the SHG 700, and the addition of the SHG 700 to the list of officially supported instruments, alongside the Sol’Ex and the Sunscan.
As you can see, I had been working with MLAstro for quite some time already, but I didn’t own the instrument. This has recently been "fixed" and I’m now a happy user of the SHG 700!
I must say that Minh’s design is fairly impressive: it’s a truly high-quality instrument. The aluminum housing makes it extremely robust, without any flexing, and it comes pre-collimated. In addition, the micro-focusers, which are used for the collimator lens, the camera objective and the wavelength selection wheel, are extremely pleasant to use: anyone who has struggled with the collimation of the Sol’Ex will immediately feel the magic.
Being a regular user of the Sol’Ex, I immediately felt comfortable with the SHG 700: it only took me a few minutes to set up and get my first images! Fine-tuning the focus is a breeze with the micro-focusers; it’s really fantastic to use.
My first images were showcased by MLAStro, but here are a few:
While the above images were fairly easy to produce, I was puzzled because I couldn’t get any decent image in Helium D3. This was surprising, because the sensitivity of the SHG 700 is higher, especially since there’s no need for an ND filter or an ERF, so it was curious that the only helium image I was able to produce was this:
To understand what the problem was, it is important to mention that, unlike Minh, I was using JSol’Ex’s one-click, fully automated processing, a feature which was introduced a while ago (June 2024) and worked extremely well for both Sol’Ex and Sunscan files (N.B.: the Sunscan app introduced the same feature a few days ago).
This feature only works if we can properly compute the dispersion of the spectrum, measured in Angstrom/pixel. To compute it, we need to know the specifications of the spectroheliograph: the grating density, the focal lengths of the collimator and objective lenses, the total angle, as well as the pixel size of the camera. Once we know all this, we can determine how many pixels separate the reference line that we use, for example the Sodium D2 line, from the Helium D3 line. To illustrate this, let’s say that we have a dispersion of 0.1 Å/px. The Helium line is found at 5875.62 Å, while the reference line, the Sodium D2 line, is detected at 5889.95 Å. Therefore, the Helium line is 14.33 Å away from the D2 line, which means 14.33/0.1 = 143.3 pixels away.
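The arithmetic above is straightforward to sketch (the function name is mine, not JSol’Ex’s):

```python
def pixel_offset(reference_angstroms, target_angstroms, dispersion):
    """Signed number of pixels between a target spectral line and the
    detected reference line, given the dispersion in Angstrom/pixel."""
    return (target_angstroms - reference_angstroms) / dispersion

# The example from the text: Sodium D2 as reference, Helium D3 as target,
# with a dispersion of 0.1 A/px. The sign is negative because He D3 sits
# on the blue side of D2; its magnitude is ~143.3 pixels.
offset = pixel_offset(5889.95, 5875.62, 0.1)
```

Note that the dispersion itself depends on the instrument specifications (grating density, objective focal length, pixel size), which is exactly why a wrong focal length shifts every derived pixel position.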
With both the Sol’Ex and the Sunscan, I had great success extracting this line automatically, so it was curious, to say the least, that it wouldn’t work with the SHG 700.
So I contacted Minh, and after eliminating the obvious candidates, like weak signal or incorrect exposure or gain, I decided to go the "old way" and searched for the helium line manually. To my great surprise, I discovered that when JSol’Ex told me the line should be 124.9 pixels away, I was measuring something closer to 116 pixels! I brought this to Minh, and we identified 3 possible causes:
an error in the computation of the dispersion in JSol’Ex
a different focal length of the SHG 700
a grating which wouldn’t have the expected number of lines/mm
It was also possible, but unlikely, that a combination of 2 and 3 was at play. The first thing I did was double-check my formula for computing the dispersion in JSol’Ex. I was doubtful that it could be wrong, given that it had been used successfully on different SHGs. In addition, it gave exactly the same result as Ken Harrison’s SimSpec SHG spreadsheet.
So we moved to the second option: the SHG 700 was advertised with a focal length of 75mm, yet the pixel shift I was manually measuring suggested a focal length closer to 70mm. I brought this again to Minh, who contacted the supplier of the lens, and here’s what he told me:
The lens from the first production run had a focal length of 72mm, a discrepancy I was unaware of at the time. I had sourced this lens from a supplier in China and provided them with the Zemax file for my self-designed 6-element Double Gauss 75mm lens. The first prototype was disappointing—its near-UV performance and coating were poor, and contrast was low across the field. This was largely due to my inexperience in optical design, as I had only begun learning Zemax a month prior, making this my first optical project.
I raised these concerns with the supplier, who was very accommodating and offered to help optimize the design while ensuring key parameters such as focal length, aperture, and exit pupil remained unchanged. With each revision, the lens improved—by the second and third prototypes, contrast was significantly better, sharpness in the blue end of the spectrum improved, and the field was much flatter across the FOV. The field of view closely matched the earlier lens, and because I had already tested focal length in previous iterations, I didn’t think to recheck it. Visually, the lenses appeared identical, except for a slight shift in coating hue.
The third prototype was approved for production and became the MLAstro "75mm" compound lens used in all MLAstro SHGs. However, it was only later confirmed by the supplier that the final production lens actually had a focal length of 72mm instead of 75mm. The optimizations for blue performance and field flattening had slightly shortened the focal length.
So I changed the focal length to 72mm and got this image instead:
and a stack of 5 images processed entirely in JSol’Ex:
That’s quite a difference! So it appeared that the software was correct, and it helped identify a problem in the spectrograph specifications, because a spectral line wasn’t found where it should have been! While going from 75mm to 72mm doesn’t make much of a difference optically, not using the right numbers makes a huge difference in JSol’Ex: all the computations are off, which includes pixel shifts like in this exercise, but also the measured redshifts. In addition, it would make it impossible to perform more complicated tasks like finding the ionized Fe lines when imaging the E corona. The image is still less contrasted than it should be, which may indicate that the computation is slightly off, or it may just be due to the weather conditions that day; I haven’t had the opportunity to retry.
Lastly, we can actually see fairly easily that the new focal length is a better fit, by using JSol’Ex’s "Profile" tab. In that tab, we compare the profile of the spectrum that you captured with a reference spectrum from the BASS2000 database: this is also how the software automatically determines which spectral line is observed, by comparing the profiles.
With a 75mm length and an H-alpha profile, here’s what we got:
You can see that while the H-alpha profile is found, as soon as we move towards the wings, there are slight shifts between the local minima. When we switch to a 72mm focal length, these are perfectly aligned:
The SHG 700 is a fairly impressive instrument: it’s robust, it’s a pleasure to use with its micro-focusers, and Minh is always super responsive and very patient. Its use doesn’t come without drawbacks, though. Its weight, for example, restricts it to refractors with a good focuser. The Sun’s image is also smaller than with the Sol’Ex at an equivalent focal length (see this post for an explanation). However, it produces stunning images, sometimes rivaling those produced with an etalon.
While testing it, I faced the problem that the helium line images were significantly worse than with the Sol’Ex, which didn’t quite make sense. After investigation, it turned out we had uncovered a difference in the specifications: a lens had been changed from 75mm to 72mm by the supplier, without letting MLAstro know. That’s a pity, but in the end, the problem is very easy to fix in JSol’Ex. Be sure to upgrade to JSol’Ex 2.11.2, which includes a fix to update the focal length.
01 February 2025
Tags: solex jsolex solar astronomy
The latest release of JSol’Ex as of writing this blog post, JSol’Ex 2.9, ships with a new ability: automatically detecting active regions. In this blog post, I will explain the principles and the algorithm I used, but also show its limits.
First of all, a bit of vocabulary. An active region is a region of the Sun’s atmosphere where special activity occurs, such as flares or coronal mass ejections. They don’t have to be associated with sunspots, but in general they are. In JSol’Ex, we’re essentially detecting sunspots, and the "active regions" terminology essentially comes from the fact that we can use the NOAA database to label these regions.
An instrument like Christian Buil’s Sol’Ex, called a spectroheliograph (or SHG for short), doesn’t offer any kind of "live view" like typical solar instruments such as Coronado or Lunt do. Instead, what we get is a video file, where each frame consists of a "slice" of the Sun, observed as a portion of the light spectrum:
The video above is an excerpt from a so-called "scan": the principle is to have a slice of the sun passing through a slit, and a grating is used to spread the light into a spectrum. Here, we are observing a curved dark line, which is the H-alpha line, and the "wobbling" we see around that line is an illustration of the Doppler effect. The idea of software like Valérie Desnoux’s INTI and JSol’Ex is to extract pixels from the studied line (here H-alpha) in order to reconstruct an image of the sun.
However, as you can see in that animation, there’s a lot more information available in each frame. While we are mostly interested in the central line (here H-alpha), it’s also interesting to look "above" or "below" the line, which is often referred to as "pixel shifting": in that case we’re not studying H-alpha, but a different wavelength. That’s one of the strengths of the Sol’Ex, which makes it possible to do "science" at home!
In particular, this video shows a couple interesting features:
sometimes, you see some vertical dark lines appearing: these are features of the solar atmosphere. The darker lines which spread across multiple columns are those we are mostly interested in for this blog post: they correspond to sunspots!
near the end of the video, on the right, you will see a white flash appearing inside one of these regions: it’s a solar flare!
This video is actually an excerpt from one of the many captures I’ve done with the Sol’Ex, and I captured it during an eruption, on May 5th, 2024. What you are seeing here is the massive AR3664 region, which was subject to multiple X-flares and resulted in beautiful auroras on Earth!
I’ve always thought it would be a cool feature to detect these lines automatically in JSol’Ex and provide an overlay of the detected sunspots. Here we go, it is finally implemented in JSol’Ex 2.9.0! If you select the "full processing" mode, or check the "active regions" checkbox in a custom process, JSol’Ex will automatically create an image of the sun with the detected active regions as an overlay:
Unfortunately, that day I didn’t capture a full disk, but you can see that JSol’Ex annotated the disk with the detected active regions, and it also added the labels for these regions automatically.
Let’s take another example with an image captured in Calcium K on January 17, 2025:
You will notice 2 different colors for labels:
blue labels are the ones detected by JSol’Ex
red labels are the ones coming from the NOAA database, which were not detected
Sometimes, as in the image above, you will see regions which are not detected, and others which are detected but are not in the NOAA database. The reason why not all of them are detected is that I had to choose "reasonable" detection thresholds, which work for most use cases. I plan to improve the algorithm over time to make it more robust, based on your feedback.
The algorithm is based on a simple analysis of each frame. If you open your SER file in JSol’Ex video analyzer, you will see something similar to this:
On the top, you see the original frame, with the curved H-alpha line. On the bottom, you see the corrected frame, where the H-alpha line is straightened, and the active regions are highlighted in purple.
The algorithm is based on the following steps:
For each frame:
Detect the borders of the Sun
For each column, compute the average intensity of the column, as well as its standard deviation
Compute a 3rd-order polynomial fit of the average intensity and of the standard deviation
Then, for each column, compare its average intensity and standard deviation to the polynomial fits: if both of them are below a particular threshold, consider that we have detected a candidate
The reason to only keep candidates which have both average intensity and standard deviation below a threshold is that:
sunspots are characterized by a clear vertical line, which means that the standard deviation is low (most pixels on the column have a similar value)
sunspots are darker than their surroundings, which means that the average intensity is lower
At the moment, I’m using the following thresholds (note that they may change in future releases, as I improve the accuracy):
average intensity threshold: 0.95 times the predicted value
standard deviation threshold: 0.85 times the predicted value
Once we have the results for all frames, we aggregate active regions by collecting adjacent candidates.
Finally, we filter out regions which are too small, then perform clustering of regions which are close to each other.
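The per-frame part of these steps can be sketched as follows (a simplified illustration with the thresholds quoted above, not JSol’Ex’s exact code):

```python
import numpy as np

def detect_candidates(frame, intensity_factor=0.95, stddev_factor=0.85):
    """Per-frame sunspot candidate detection, following the steps above.

    frame: 2D array (rows = spectral axis, columns = spatial axis),
    assumed already cropped to the solar borders.
    Returns the indices of candidate columns. Illustrative sketch only.
    """
    means = frame.mean(axis=0)   # average intensity per column
    stds = frame.std(axis=0)     # standard deviation per column
    cols = np.arange(frame.shape[1])
    # 3rd-order polynomial fits of both statistics across the columns
    mean_fit = np.polyval(np.polyfit(cols, means, 3), cols)
    std_fit = np.polyval(np.polyfit(cols, stds, 3), cols)
    # Candidate: both statistics fall below their predicted values
    mask = (means < intensity_factor * mean_fit) & (stds < stddev_factor * std_fit)
    return cols[mask]
```

The later aggregation, small-region filtering and clustering steps would then operate on these per-frame candidates across the whole scan.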
The whole algorithm can be found here (note that it also includes redshift detection, which was discussed in a different blog post).
This algorithm proves to work relatively well in many different wavelengths (H-alpha, calcium, magnesium …). However, there are sometimes false positives. This is for example the case when I’m scanning in H-beta, due to the long focal length of my telescope and astigmatism, and the fact that H-beta provides a lot of details.
In this case, the algorithm detects areas which are actually just noise:
Note that these are close to the north and south poles. I have also noticed that the algorithm tends to detect noise as active regions when we’re close to the limb, which is also why I’m currently filtering these out.
At this stage, I have chosen not to make the detection thresholds configurable, because I consider these internal implementation details, which may change in the future, and that I don’t want to expose to the user.
One piece of the work we haven’t explained yet is how JSol’Ex puts labels on active regions. For this, we use the NOAA Solar Region Summary database.
This provides us, for a particular date, the list of active regions with their positions on the solar disk. JSol’Ex compares these with the detected regions, and colors the labels based on whether they were detected or not.
However, the position of active regions will only be correct if:
you have properly oriented your image (north at the top, east on the left): to help you with this, use the "GONG" tab on the right of the interface to download a reference image and compare with yours
you are using an equatorial mount. If you are not, make sure to check the new "alt-az" mode and enter your GPS coordinates in the settings, so that JSol’Ex can compute the parallactic angle of the Sun at the moment of the observation and automatically correct the orientation of the image.
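The parallactic angle itself follows a standard astronomical formula; here is a sketch (the real code must also derive the hour angle and the Sun’s declination from the observation time and GPS coordinates, which is omitted here):

```python
import math

def parallactic_angle(hour_angle_deg, declination_deg, latitude_deg):
    """Parallactic angle in degrees, from the standard formula
    q = atan2(sin H, tan(phi) * cos(delta) - sin(delta) * cos(H)),
    where H is the hour angle, delta the declination and phi the
    observer's latitude. Illustrative sketch only."""
    H = math.radians(hour_angle_deg)
    d = math.radians(declination_deg)
    phi = math.radians(latitude_deg)
    q = math.atan2(math.sin(H),
                   math.tan(phi) * math.cos(d) - math.sin(d) * math.cos(H))
    return math.degrees(q)
```

On the meridian (hour angle 0) the angle is zero, and it flips sign between the east and west sides of the sky, which is why an alt-az image rotates over the course of an observation.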
The data from NOAA is cached in your local filesystem, so that we don’t have to download it every time you open a video file.
Despite having released this only a day ago, I have already received the same question multiple times: can JSol’Ex be used to annotate an existing image of the Sun, that is to say, take a JPG or PNG image and annotate it?
If you have read this blog post carefully, and if I explained things correctly, you will have understood that the answer is no: because my algorithm is based on the analysis of each frame, that information simply isn’t available in a single image.
I hope you will enjoy this new feature in JSol’Ex 2.9.0. This blog post is my attempt at explaining the algorithm, and I will be happy to answer any questions you may have about it. As always, feel free to contribute, JSol’Ex is open source!
Older posts are available in the archive.