Re: [EXTERN] Re: new link for meeting now
Hi everyone,

@ Sal: I would suggest adding it to the backend as a dedicated method of the wrapper class, with a header along the lines of "def export_brim(self, path, filepath)", and then adding the menu buttons to "show_treeview_context_menu" in the main file of the HDF5_BLS_GUI package, ideally with a bit of logic to ensure only datasets compatible with your module can be exported to brim. The question is then whether you want to make your module an independent package or join it to either Carlo's or mine. I would suggest adding it to HDF5_BLS because I'm sure there will be no fundamental limitations. I can do it if you want :) Enjoy Banyuls!

@ Carlo & Sebastian: The code in the supplementary for Brim still doesn't work. I dug into the sources and the problem comes from your use of the Enum class: you're mixing literal expressions and patterns. Your code also lacks comments; as is, it's really hard to read and fix. After modifying your library, I tried exporting a synthetic BLS image to Brim using the code you provided. I do get a .zip or .zarr file where I am supposed to, but I cannot say whether it works, since the viewer on the BioBrillouin website is unable to open it and there is no compatible software out there. I think it's important this is fixed, and I would suggest doing as I did in the supplementary, by showing an example of integrating your code into an existing process.

Next, from what I could understand from your code, you can only add 4D PSD arrays to Brim. I think this is unintuitive and could easily be improved by adding silent dimensions in your process. As I've mentioned before, this is in any case extremely limiting for a number of applications, starting with all TFP measurements and time series.

The last problem I see is specific to zarr: you end up saving a zipped file, which can easily be corrupted if the user unzips and then re-zips it.
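The "silent dimensions" idea above can be sketched in plain numpy (illustrative only, not brimfile API): a TFP time series of spectra is padded to a 4D (z, y, x, spectrum) layout with singleton axes, without copying or altering the data.

```python
import numpy as np

# Illustrative: 200 timepoints, each a 512-point spectrum (shapes are examples)
psd = np.random.rand(200, 512)

# "Silent" singleton axes pad the array to 4D; the data itself is unchanged
psd_4d = psd[np.newaxis, np.newaxis, :, :]

print(psd_4d.shape)                   # (1, 1, 200, 512)
print(np.shares_memory(psd, psd_4d))  # True: a view, not a copy
```

A writer that only accepts 4D arrays could apply this padding internally, which is all the "silent dimensions" suggestion amounts to.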
To limit the risks, I would strongly recommend adding a hash key to check file integrity, generated after each addition or modification to the file and compared against the computed hash at opening.

Best,
Pierre

Pierre Bouvet, PhD
Post-doctoral Fellow
Medical University Vienna
Department of Anatomy and Cell Biology
Währinger Straße 13, 1090 Wien, Austria
On 26/8/25, at 17:29, Sal La Cavera Iii <Salvatore.Lacaveraiii@nottingham.ac.uk> wrote:
Hi guys,
Sorry I am a little slow on communications right now because I am at a summer school that I have been helping organise in Banyuls sur Mer this week and last week.
Thanks for the initial feedback on the conversion library! Two follow-ups:
* Pierre, I've been meaning to check out where to insert the BrimConverter into HDF5_BLS based on your suggestions (Export and Actions-Export buttons), but I am currently unable to clone the HDF5_BLS repo because of the venue's wifi firewall. I can clone using my phone's hotspot, but the mobile network is really slow and is going to eat all my data. In essence it should be really easy though, right? BrimConverter() is just a 1-liner, and I imagine clicking on it should just bring up a window to select the input file and define the output file name.
* Carlo, I've recently upgraded my brimfile version just to check compatibility from the BrimConverter end. It looks like that hardcoded use_h5 flag is still in file_abstraction.py. It would probably be best to remove it, and the related if-statements that query use_h5's value, if you're going to deactivate h5 compatibility. If you ever want to bring hdf5 back, I have some updated versions of file_abstraction.py that play nicely with h5 in a generalised way, if you're curious or ever need them.
See you guys on Thursday,
Cheers,
Sal
---------------------------------------------------------------
Salvatore La Cavera III
Royal Academy of Engineering Research Fellow
Nottingham Research Fellow
Optics and Photonics Group
University of Nottingham
Email: salvatore.lacaveraiii@nottingham.ac.uk
ORCID iD: 0000-0003-0210-3102
Book a Coffee and Research chat with me! <https://outlook.office.com/bookwithme/user/6a3f960a8e89429cb6fc693c01d10119@exmail.nottingham.ac.uk?anonymous&ep=bwmEmailSignature>

From: Carlo Bevilacqua via Software <software@biobrillouin.org>
Sent: 11 August 2025 12:41
To: software@biobrillouin.org
Subject: [Software] Re: new link for meeting now
Hi Sal, thanks a lot for sending the figures and the brim_converter module. I will try to run it in the next few days and give you feedback. In the meantime, if you want to test it on more brim files, I am uploading to an S3 bucket a few example brim files from different modalities in our lab; you can find the link to them in the `Example brim files` section in the Supplementary of the paper draft <https://docs.google.com/document/d/1Qg6gwRQ8EzmyfbjBKin8atMTCTtewu1THTUlb_zE...>.
Regarding HDF5 support, I initially designed the brimfile package to easily support different file formats (that's why I introduced the abstract class `FileAbstraction`). Later, I concluded that I don't see a clear advantage of HDF5 over zarr (apart from being more mature, with more tools available), and if we want to advocate for the use of Zarr it is better not to support both in the initial phase; otherwise people would start using both and force us to keep supporting HDF5 as well. If in the future we realize that supporting HDF5 would be beneficial, we can easily do it (as you have already experienced).
Regarding the use of R2 and RMSE, you can just ignore them if they are not on Pierre's end, as they are optional. I still believe that it makes sense to keep them in the definition of the file format though, as they are commonly used metrics to estimate the goodness of fit (even though I agree that they are by no means complete or informative about the error on the fit parameters). Regarding the covariance matrix, I feel that that is the quantity that you get directly from the fit and the actual errors on the parameters can be easily computed from there by the visualization software. The only drawback is an increase of the file size, which, in any case, won't be much (it is ~10 additional numbers for each fit while a spectrum has typically several tens of points; also, if it is indeed a purely diagonal matrix, compression should work very well). Instead the advantage of storing it would be that it helps troubleshooting the fit (e.g. in case it turns out not to be diagonal).
Regarding distributing the conversion script, as you are only importing brimfile and not HDF5_BLS, I would propose to make it a submodule of brimfile (i.e. a folder `converter` inside `brimfile` so that you can import it by `import brimfile.converter`). I would include any additional dependencies you need (e.g. h5py) into an optional dependency <https://packaging.python.org/en/latest/guides/writing-pyproject-toml/#depend...> `converter`, so people can do `pip install brimfile[converter]` only if they need it. Anyways, even if you need to use it from the HDF5_BLS library, you need to install brimfile, so by making it a submodule you are not forcing to install any unnecessary dependency. If you go for this option you can write the documentation into an '__init__.py' file inside the 'brimfile/converter' folder and it will be automatically added to the online documentation of the brimfile library (I am using a github action which triggers pdoc <https://pdoc.dev/> to compile the documentation and upload it online at every push). The docstring of each function will also be automatically compiled to the documentation. If you think that is a good option, you can just make a pull request to the brimfile repository. Let me know what you think about it and if you want to discuss some technical detail.
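The optional-dependency setup Carlo describes would look roughly like this in brimfile's pyproject.toml (a sketch; the exact dependency list for the `converter` extra is an assumption):

```toml
[project]
name = "brimfile"

[project.optional-dependencies]
# Extra pulled in only by `pip install brimfile[converter]`
converter = ["h5py"]
```

A plain `pip install brimfile` then skips h5py entirely, while `pip install brimfile[converter]` installs everything the `brimfile.converter` submodule needs.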
Best, Carlo
On Fri, Aug 8, 2025 at 21:21, Pierre Bouvet via Software <software@biobrillouin.org <mailto:software@biobrillouin.org>> wrote: Hi Sal,
In v1.0 there are no Stokes- and anti-Stokes-specific attributes. To combine data from different types of peaks, my go-to solution for now is to add dimensions to the shift and linewidth arrays (if I fit 3 peaks in a mapping of 50 by 50 elements, shift and linewidth will have shape 50x50x3). I do think you have a point there, and it needs to be integrated in the next version of the format, but as is I think it's better to leave it for the future, mainly because we don't expect changes in Stokes and anti-Stokes parameters (at least with the way we're using BLS).

For unit extraction, for now it's the text between the parentheses after the last underscore. It's not super elegant, I agree, but it can accommodate almost anything and is relatively easy to use on the spreadsheet. Maybe we could think about updating that in the next version of the library?

Regarding _std datasets, I've done a full conversion to a more general "_err" dataset (linewidth_std -> linewidth_err), which I plan to upload after submission (or maybe I can directly correct for that in your code if you agree? I honestly don't remember whether I have already pushed the changes). The idea is to patch the problem you underline and allow other error estimators to be stored in the format. This being said, R2 and RMSE cannot quantify an error on both the shift and the linewidth, as they are essentially scalar values. This is why I advocate for the standard deviation as the standard error estimator, obtained from the covariance matrix estimated from the Hessian and Jacobian matrices derived during the fit. Storing the covariance matrix is also not super interesting, as the fitted parameters are independent (if they're not, it means we're essentially fitting noise). That means the covariance matrix will always be essentially diagonal, where the diagonal is the variance on the fit of each individual parameter and the covariance of any two different parameters is negligible. I think we already had this discussion a few months back, though, and the reason Carlo wanted to keep R2 and RMSE is that people have used them in the past; but here again I think it's mathematically incorrect to use them as error estimators.

For integration, I propose to add it to the sub-menu "Export -> Brim" that opens when you right-click an element of the file when opened with the GUI, and to the main menu under "Actions -> Export -> Brim". I have not implemented dragging from the GUI to the desktop (or any other file viewer), but when I get to that, I will also integrate it into the export choices.
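For concreteness, the error estimator being advocated here can be sketched with scipy (an illustrative single-Lorentzian fit on synthetic data, not anyone's actual pipeline): the per-parameter standard deviations come from the diagonal of the covariance matrix returned by the fit.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(f, shift, width, amp):
    return amp * (width / 2) ** 2 / ((f - shift) ** 2 + (width / 2) ** 2)

# Synthetic spectrum: a peak at 5 GHz with 0.3 GHz linewidth plus noise
rng = np.random.default_rng(0)
f = np.linspace(4.0, 6.0, 200)
y = lorentzian(f, 5.0, 0.3, 1.0) + rng.normal(0.0, 0.01, f.size)

popt, pcov = curve_fit(lorentzian, f, y, p0=[4.9, 0.4, 0.9])
perr = np.sqrt(np.diag(pcov))  # std of shift, linewidth, amplitude
```

Inspecting the off-diagonal terms of `pcov` directly is also what makes the covariance matrix useful for troubleshooting a fit, e.g. spotting when it is not actually diagonal.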
Best,
Pierre
Pierre Bouvet, PhD
Post-doctoral Fellow
Medical University Vienna
Department of Anatomy and Cell Biology
Währinger Straße 13, 1090 Wien, Austria
On 8/8/25, at 20:01, Sal La Cavera Iii via Software <software@biobrillouin.org <mailto:software@biobrillouin.org>> wrote:
Hi everyone,
Carlo/Sebastian, I've attached the svg file for the draft of fig 1a, the zip also contains the png output images from Blender. If you need anything, e.g., higher resolution anything, just let me know.
I think the conversion library is pretty much finished!
In the attached brim_converter.zip, the test run script is called master_BrimConverter_test.py. This allows the user (us for now) to define the input source file (either a brim or brimx file) and the desired output file, both with the correct/desired file extensions. Both of these are then passed to the BrimConverter class and the conversion mode is specified ('brim2brimX' or vice versa). I've added some user-friendly features to the run script that aren't really needed, but the main syntax to use the conversion library is in general:
file_in = ...   # string of the filepath
file_out = ...  # string of the filepath
convert_this = BrimConverter(file_in, file_out, mode='brimX2brim')
convert_this.convert()
I have tested this using Carlo's drosophila_LSBM.brim.zarr file as the test brimfile, and one of Pierre's more recent brimx files, called Measures.h5 I think. However, Pierre's required a little temporary hardcoding to reshape the PSD, since the spatial dimensions of the PSD didn't match the shift dataset.
I have tested the above fairly rigorously: for example, producing a file the brimX2brim way around, then passing that back through the brim2brimX way around (and vice versa), and so far so good.
A couple things for Carlo/Pierre:
Carlo, I see in the manuscript Google Doc that you guys are discontinuing HDF5 compatibility for brimfile. No worries there. To get everything to play nicely in the BrimConverter library I slightly modified file.py and file_abstraction.py in the following ways:
* In file_abstraction.py you had a hardcoded flag (use_h5 = False or similar) for whether h5 mode was activated, with an if statement later checking its state. Being False, this always activated the path containing Zarr etc., which prevented using file_abstraction.py for both h5 and non-h5 files (e.g. zarr) dynamically; switching it to True when h5 is wanted isn't a viable long-term solution. I fixed this by relocating the StoreType(Enum) class definition to the beginning of file_abstraction.py and adding HDF5 = 'hdf5' as one of the defined StoreTypes (see attached file in the other zip). I could then delete the use_h5 flag and the related if statements.
* The _h5File class is now H5File; it is mainly unchanged (it sits outside any if statements now), except that I needed to modify the create_attr() function definition to make sure data types and None/NaNs are handled correctly. That's also in the attached file_abstraction.py.
* I moved the Compression class into FileAbstraction.
* For file.py, the main change is not using the single line in __init__, self._file = _AbstractFile(filename, mode=mode, store_type=store_type); instead I take the store_type passed as input and load the relevant file type's class (each still defined in file_abstraction): H5File for h5 and ZarrFile for everything else, as you previously had it with your if statement flagged by use_h5.

Nothing beyond these two files needs to be modified, I'm pretty sure.
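In outline, the refactor described above amounts to dispatching on a StoreType enum instead of a use_h5 flag. This is a simplified stand-in sketch: only StoreType and the HDF5 = 'hdf5' member come from the description; the stub classes and open_store helper are hypothetical.

```python
from enum import Enum

class StoreType(Enum):
    ZARR = 'zarr'
    ZIP = 'zip'
    HDF5 = 'hdf5'   # added member, so h5 needs no separate use_h5 flag

# Stand-ins for the real backend classes in file_abstraction.py
class ZarrFile:
    def __init__(self, filename):
        self.filename = filename

class H5File:
    def __init__(self, filename):
        self.filename = filename

def open_store(filename, store_type):
    # Dispatch on the StoreType instead of branching on a hardcoded flag
    if store_type is StoreType.HDF5:
        return H5File(filename)
    return ZarrFile(filename)
```

With this shape, adding another backend means adding an enum member and a branch (or a dict lookup), rather than editing a module-level constant.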
All of the above might not be of any use anymore if you're removing hdf5 compatibility, but using the above/attached fixes I was able to get file_abstraction.py and file.py working for h5, zarr, and zips, whereas previously it wouldn't work for h5. So now I can create .brim.h5 files and everything plays nicely in all directions.
Pierre, is there scope to have a Brillouin_type attribute for both AS and S measurements? If I pass a brimfile to BrimConverter to produce a brimX, and that brim has AS and S, currently only one of those is passed to brimX because there isn't a distinction between the two in the Brillouin_type attributes list. Or do you already have a different solution for managing this? Also, do you have a clean way of extracting units (e.g. having them as attributes)? All I could find in the github guide was that you chuck them onto the ends of things like MEASURE.Exposure_(s), which makes scraping a little clunky.
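The unit-scraping convention described in the thread (unit in parentheses after the last underscore, as in MEASURE.Exposure_(s)) can be handled with one regex; the helper name here is hypothetical, not part of either library.

```python
import re

def split_unit(attr_name):
    """Split 'Name_(unit)' into (base name, unit); return (name, None)
    when no trailing unit suffix is present. The greedy '.*' ensures the
    split happens at the LAST underscore, per the convention."""
    m = re.match(r'(.*)_\(([^)]*)\)$', attr_name)
    if m:
        return m.group(1), m.group(2)
    return attr_name, None
```

For example, split_unit('MEASURE.Exposure_(s)') gives ('MEASURE.Exposure', 's'), and an attribute without a suffix passes through unchanged with unit None.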
Things to note for everyone:
* I haven't dealt with metadata yet, as that is lower priority atm. It is on my to-do list, though I reckon it can be sorted during submission/review.
* Currently the AS vs S distinction is lost when converting to brimX.
* BLT and all the _std's on Pierre's end don't map to anything on Carlo's end, and R2, RMSE, and Cov_matrix on Carlo's end don't map to anything on Pierre's end.
Do people have opinions on how to roll out the conversion library? Integrated into brimfile and HDF5_BLS libraries? Currently it's pretty straightforward to use in scripting, but perhaps we could integrate it as a button in Pierre's GUI?
I have integrated relevant explanations into the manuscript, but have not formally written documentation for the library as I'd prefer to wait for any feedback. I don't see any point in making online documentation that is separate to the brimfile and/or HDF5_BLS documentation, so would hope to port it to your guys' sites. HDF5_BLS one is easy enough b/c it's on github.
Any questions/comments/edits just let me know. Hope everyone has a nice weekend,
Best,
Sal
From: Pierre Bouvet via Software <software@biobrillouin.org>
Sent: 31 July 2025 09:25
To: software@biobrillouin.org
Subject: [Software] Re: [EXTERN] Re: new link for meeting now
Hi,
Following yesterday’s meeting, here are some screenshots of the HDF5_BLS GUI for the C pane of the figure. I’d probably use the 5th figure which is the most representative of what the GUI does.
Best,
Pierre
Pierre Bouvet, PhD
Post-doctoral Fellow
Medical University Vienna
Department of Anatomy and Cell Biology
Währinger Straße 13, 1090 Wien, Austria
On 30/7/25, at 17:19, Kareem Elsayad via Software <software@biobrillouin.org <mailto:software@biobrillouin.org>> wrote:
Hi All,
Sorry for not making meeting today.
Pierre gave me an update of all that was discussed. For the most part I agree with everything concluded.
One comment on the data for the Consensus, which I think was brought up. The labs generally did not provide spectra (except a couple that provided one or two representative spectra shown in figures and SM). We could ask people to send them now, but this would probably not go quickly (also given that it is summer). I also fear processing these may be quite a bit of work (different spectral ranges, degrees of elastic suppression, and naming strategies for e.g. temp sweeps). So, to avoid making more work than needed and to keep things in a reasonable time frame, I would suggest not pursuing this, but I'm happy to if the majority think it would add significant value.
Over the next few days I should also be able to look at the text so far.
All the best, Kareem
From: Pierre Bouvet via Software <software@biobrillouin.org>
Reply to: Pierre Bouvet <pierre.bouvet@meduniwien.ac.at>
Date: Wednesday, 30. July 2025 at 16:42
To: software@biobrillouin.org
Subject: [EXTERN] [Software] Re: new link for meeting now
Hi,
This is the website I was mentioning to scan for viruses in files: https://www.virustotal.com/gui/home/upload Fun fact: I remember first hearing of it after they unintentionally leaked thousands of private addresses from US security officials (which got me into trusting them, kind of paradoxical).
Best,
Pierre
Pierre Bouvet, PhD
Post-doctoral Fellow
Medical University Vienna
Department of Anatomy and Cell Biology
Währinger Straße 13, 1090 Wien, Austria
_______________________________________________
Software mailing list -- software@biobrillouin.org
To unsubscribe send an email to software-leave@biobrillouin.org
This message and any attachment are intended solely for the addressee and may contain confidential information. If you have received this message in error, please contact the sender and delete the email and attachment. Any views or opinions expressed by the author of this email do not necessarily reflect the views of the University of Nottingham. Email communications with the University of Nottingham may be monitored where permitted by law.
<fig1a_sal.zip><brim_converter.zip><file_abstraction.zip>
Hi all, @Sal I never bother removing the h5 section as it is not affecting the code (as it is never executed). But if you think that the code will be cleaner I can remove it. @Pierre * I was aware of this issue and I fixed it already 2 weeks ago (commit (https://github.com/prevedel-lab/brimfile/commit/1e7cdfe11d1d35444699829cb314...)). I didn't push it on PyPI yet (I usually don't do it at every commit otherwise I need to increase the version) but will do it asap. In the meanwhile you can use the test code (https://github.com/prevedel-lab/brimfile/blob/main/tests/general.py)in the github repository, which works fine. * can you elaborate more on "the BioBrillouin website is unable to open it"? Is it giving an error or just not showing anything? Could you send the file, so we can check what is the problem? * as we extensively discussed in the past, the brimfile format is not limited to 4D. I created a function (https://prevedel-lab.github.io/brimfile/brimfile/file.html#File.create_data_...) to specifically save 4D arrays as I believe this is the most common scenario in imaging (timepoints can be saved by creating multiple data groups, i.e. calling the function create_data_group multiple times). For now the possibility of saving additional dimensions in a single PSD array (e.g. for angle resolved measurements) can only be done through the function create_data_group_raw (https://prevedel-lab.github.io/brimfile/brimfile/file.html#File.create_data_...). I plan to improve this once people start to use the library and give feedback (N.B the way the data is saved in the file will not change, I will only add helper functions to facilitate reading and writing to the file). * adding a checksum to the file is a good idea, but I am not sure how to implement it technically. It might be done at the level of individual chunks (as discussed here (https://github.com/zarr-developers/zarr-python/issues/392)). 
At the level of the whole file it might be problematic, as I am not sure that given the same input, all the zipping tools would produce the same binary output (i.e. compression level, metadata, order of the entries, etc.. might differ), so just unzipping and zipping again a file will not pass the check, even though the file is valid. Additionally there is always the issue of where to store the hash: I am not sure if appending it to the end of the file will make the zip unreadable Talk to you tomorrow. Best, Carlo On Wed, Aug 27, 2025 at 09:30, Pierre Bouvet via Software wrote: Hi everyone, @ Sal I would suggest adding it to the backend with a dedicated function of the wrapper class with a header looking like: “def export_brim(self, path, filepath)” and then adding the menu buttons to the “show_treeview_context_menu” of the main file of the HDF5_BLS_GUI package, ideally with a bit of logic to ensure only datasets compatible with your module can be exported to brim. The question is then do you want to make your module an independent package or to join it to either Carlo’s or mine. I would suggest adding it to HDF5_BLS because I’m sure there will be no fundamental limitations. I can do it if you want :) Enjoy Banyuls! @ Carlo & Sebastian The code on supplementary for Brim still doesn’t work. I went to dig in the sources and the problem comes from your use of the Enum class: you’re mixing literal expressions and patterns. I think your code also lacks commenting, as is it’s really hard to read and fix. After modifying your library, I tried exporting a synthetic BLS image to Brim using the code you provided. I do get a .zip file or .zarr file where I am supposed to have it, but I cannot say if it works or not since the viewer on the BioBrillouin website is unable to open it and there are no compatible softwares out there. 
I think it’s important this is fixed, and I would suggest doing as I did in supplementary, by showing an example of integration of your code to an existing process. Next, from what I could understand from your code, you can only add 4D PSD arrays to Brim, I think this is non intuitive and could easily be improved by adding silent dimensions in your process. As I’ve already mentioned before, this is in any case extremely limiting for a number of applications, starting with all TFP measures and time-series. Last problem I see, but that’s proper to zarr: you end up saving a zipped file, which can be corrupted easily by the user if he unzips and then re-zips his file. To limit the risks, I would strongly recommend adding a hash key to check file integrity, generated after each addition or modification to the file, and compared to the computed hash key at opening. Best, Pierre Pierre Bouvet, PhD Post-doctoral Fellow Medical University Vienna Department of Anatomy and Cell Biology Wahringer Straße 13, 1090 Wien, Austria On 26/8/25, at 17:29, Sal La Cavera Iii wrote: Hi guys, Sorry I am a little slow on communications right now because I am at a summer school that I have been helping organise in Banyuls sur Mer this week and last week. Thanks for the initial feedback on the conversion library! 2 follow ups: * Pierre, I've been meaning to check out where to insert the BrimConverter to HDF5_BLS based on your suggestions (Export and Actions-Export buttons) but I am currently unable to clone the HDF5_BLS repo because of the venue's wifi firewall. I can clone using my phone's hotspot but mobile network is really slow and is going to eat all my data. In essence it should be really easy though right? BrimConverter() is just a 1-liner and I imagine clicking on it should just bring up a window to select the input file and define the output file name. * Carlo, I've recently upgraded my brimfile version just to check compatibility from the BrimConverter end. 
It looks like that use_h5 hardcoded flag is still in file_abstraction.py. Probably it would be best to remove that and the related if-statements that query use_h5's value if you're going to deactivate h5 compatibilty? If you ever did want to bring hdf5 back I have some updated versions of file_abstraction.py that play nicely with h5 in a generalised way if you ever needed/were curious. See you guys on Thursday, Cheers, Sal ---------------------------------------------------------------Salvatore La Cavera IIIRoyal Academy of Engineering Research FellowNottingham Research FellowOptics and Photonics GroupUniversity of Nottingham Email: salvatore.lacaveraiii@nottingham.ac.uk (mailto:salvatore.lacaveraiii@nottingham.ac.uk) ORCID iD: 0000-0003-0210-3102 (https://outlook.office.com/bookwithme/user/6a3f960a8e89429cb6fc693c01d10119@...) Book a Coffee and Research chat with me! ------------------------------------ From: Carlo Bevilacqua via Software Sent: 11 August 2025 12:41 To: software@biobrillouin.org (mailto:software@biobrillouin.org) Subject: [Software] Re: new link for meeting now Hi Sal,thanks a lot for sending the figures and the brim_converter module. I will try to run it in the next days and give you feedback. In the meanwhile, if you want to test it on more brim files, I am uploading on an S3 bucket a few example brim files from different modalities in our lab; you can find the link to them in the `Example brim files` section in the Supplementary of the paper draft (https://docs.google.com/document/d/1Qg6gwRQ8EzmyfbjBKin8atMTCTtewu1THTUlb_zE...). 
Regarding HDF5 support, I initially design the brimfile package to easily support different file formats (that's why I introduced the abstract class `FileAbstraction`).Later I thought that I don't see a clear advantage of HDF5 over zarr (apart from being more mature and thus more tools being available) and if we want to advocate for the use of Zarr it is better to not support both in the initial phase, otherwise people would start using both and force us to keep supporting HDF5 as well.If in the future we realize that supporting HDF5 will be beneficial, we can easily do it (as you have already experienced). Regarding the use of R2 and RMSE, you can just ignore them if they are not on Pierre's end, as they are optional. I still believe that it makes sense to keep them in the definition of the file format though, as they are commonly used metrics to estimate the goodness of fit (even though I agree that they are by no means complete or informative about the error on the fit parameters).Regarding the covariance matrix, I feel that that is the quantity that you get directly from the fit and the actual errors on the parameters can be easily computed from there by the visualization software. The only drawback is an increase of the file size, which, in any case, won't be much (it is ~10 additional numbers for each fit while a spectrum has typically several tens of points; also, if it is indeed a purely diagonal matrix, compression should work very well). Instead the advantage of storing it would be that it helps troubleshooting the fit (e.g. in case it turns out not to be diagonal). Regarding distributing the conversion script, as you are only importing brimfile and not HDF5_BLS, I would propose to make it a submodule of brimfile (i.e. a folder `converter` inside `brimfile` so that you can import it by `import brimfile.converter`). I would include any additional dependencies you need (e.g. 
h5py) into an optional dependency (https://packaging.python.org/en/latest/guides/writing-pyproject-toml/#depend...) `converter`, so people can do `pip install brimfile[converter]` only if they need it. Anyways, even if you need to use it from the HDF5_BLS library, you need to install brimfile, so by making it a submodule you are not forcing to install any unnecessary dependency.If you go for this option you can write the documentation into an '__init__.py' file inside the 'brimfile/converter' folder and it will be automatically added to the online documentation of the brimfile library (I am using a github action which triggers pdoc (https://pdoc.dev/) to compile the documentation and upload it online at every push). The docstring of each function will also be automatically compiled to the documentation. If you think that is a good option, you can just make a pull request to the brimfile repository. Let me know what you think about it and if you want to discuss some technical detail. Best,Carlo On Fri, Aug 8, 2025 at 21:21, Pierre Bouvet via Software wrote:Hi Sal, In v 1.0 there are no Stokes and anti-Stokes specific attributes, to combine data from different types of peaks my go-to solution for now is to add dimensions to the shift and linewidth array (if I fit 3 peaks in a mapping of 50 by 50 elements, shift and linewidth will have the following shape: 50x50x3). I do think you have a point there, and it needs to be integrated in the next version of the format, but as is I think it’s better to leave it for the future, mainly because we don’t expect changes in Stokes and anti-Stokes parameters (at least with the way we’re using BLS). For unit extraction, for now it’s the text that is between the parenthesis after the last underscore. It’s not super elegant I agree, but it can accommodate almost anything and is relatively easy to use on the spreadsheet. Maybe we could think about updating that on the next version of the library? 
Regarding the _std datasets, I've done a full conversion to a more general "_err" dataset (linewidth_std -> linewidth_err), which I plan to upload after submission (or maybe I can directly correct for that in your code, if you agree? I honestly don't remember whether I have already pushed the changes or not). The idea is to patch the problem you underline and to allow other error estimators to be stored in the format.

This being said, R2 and RMSE cannot quantify separate errors on the shift and the linewidth, as they are essentially scalar values. This is why I advocate for the standard deviation as the standard error estimator, obtained from the covariance matrix estimated from the Hessian and Jacobian matrices derived during the fit. Storing the covariance matrix is also not particularly interesting, since the fitted parameters are independent (if they're not, it means we're essentially fitting noise). That means the covariance matrix will always be essentially diagonal, where the diagonal is the variance of the fit of each individual parameter and the covariance of any two different parameters is negligible. I think we already had this discussion a few months back, though, and the reason Carlo wanted to keep R2 and RMSE is that people have used them in the past; but here again, I think it's mathematically incorrect to use them as error estimators.

For integration, I propose adding it to the sub-menu "Export -> Brim" that opens when you right-click an element of a file opened with the GUI, and to the main menu under "Actions -> Export -> Brim". I have not implemented dragging from the GUI to the desktop (or any other file viewer), but when I get to that, I will also integrate it into the export choices.
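As a minimal illustration of the error estimator Pierre describes (standard deviations from the diagonal of the fit covariance matrix), here is a sketch using scipy.optimize.curve_fit with a Lorentzian peak as a stand-in for an actual BLS fitting routine; all names and numbers are illustrative, not from either library:

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(f, shift, linewidth, amplitude):
    # Simple Lorentzian peak model (stand-in for a real BLS spectrum model)
    return amplitude * (linewidth / 2) ** 2 / ((f - shift) ** 2 + (linewidth / 2) ** 2)

# Synthetic spectrum: peak at 5.0 GHz, 0.3 GHz linewidth, plus noise
rng = np.random.default_rng(0)
f = np.linspace(4.0, 6.0, 200)  # frequency axis in GHz
psd = lorentzian(f, 5.0, 0.3, 1.0) + rng.normal(0.0, 0.01, f.size)

popt, pcov = curve_fit(lorentzian, f, psd, p0=[5.0, 0.5, 1.0])

# Standard deviation of each fitted parameter = sqrt of the covariance diagonal;
# these are the values that would populate the proposed "_err" datasets.
perr = np.sqrt(np.diag(pcov))
shift_err, linewidth_err, amplitude_err = perr
```

Off-diagonal terms of pcov can be inspected here as well, which is Carlo's troubleshooting argument for storing the full matrix.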
Best,
Pierre

Pierre Bouvet, PhD
Post-doctoral Fellow
Medical University Vienna
Department of Anatomy and Cell Biology
Wahringer Straße 13, 1090 Wien, Austria

On 8/8/25, at 20:01, Sal La Cavera Iii via Software wrote:

Hi everyone,

Carlo/Sebastian, I've attached the svg file for the draft of fig 1a; the zip also contains the png output images from Blender. If you need anything, e.g. higher resolution of anything, just let me know.

I think the conversion library is pretty much finished! In the attached brim_converter.zip, the test run script is called master_BrimConverter_test.py. It lets the user (us, for now) define the input source file (either a brim or brimx file) and the desired output file, both with the correct/desired file extensions. Both of these are then passed to the BrimConverter class, and the conversion mode is specified ('brim2brimX' or vice versa). I've added some user-friendly features to the run script that aren't really needed, but in general the main syntax to use the conversion library is:

    file_in = ...   # string of the input filepath
    file_out = ...  # string of the output filepath
    convert_this = BrimConverter(file_in, file_out, mode='brimX2brim')
    convert_this.convert()

I have tested this using Carlo's drosophila_LSBM.brim.zarr file as the test brimfile, and one of Pierre's more recent brimx files, called Measures.h5 I think. However, Pierre's required a little temporary hardcoding to reshape the PSD, since the spatial dimensions of the PSD didn't match the shift dataset. I have tested the above fairly rigorously: for example, producing a file the brimX2brim way around and then passing it back through the brim2brimX way around (and vice versa), and so far so good.

A couple of things for Carlo/Pierre:

Carlo, I see in the manuscript Google doc that you guys are discontinuing HDF5 compatibility for brimfile. No worries there.
To get everything to play nicely in the BrimConverter library, I slightly modified file.py and file_abstraction.py in the following ways:

* In file_abstraction.py you had a hardcoded flag for whether h5 mode was activated or not (use_h5 = False or something), with an if statement later checking the state of this flag. Being False, it always activated the path containing Zarr etc. This prevented using file_abstraction.py for both h5 and non-h5 files (e.g. zarr) dynamically, and switching to =True when h5 is wanted isn't a viable long-term solution.
* I fixed this by relocating the StoreType(Enum) class definition to the beginning of file_abstraction.py and adding HDF5 = 'hdf5' as one of the defined StoreTypes (see the attached file in the other zip). Then I could delete the use_h5 flag and the related if statements.
* The _h5File class is now H5File in file_abstraction.py and is mainly unchanged (it is outside of any if statements now), except that I needed to modify the create_attr() function definition to make sure that data types and None/NaN values are handled correctly. That's also in the attached file_abstraction.py.
* I moved the Compression class into FileAbstraction.
* For file.py, the main change is not using the single line in __init__, self._file = _AbstractFile(filename, mode=mode, store_type=store_type). Instead, I take the store_type passed as input and load the relevant file type's class (each still defined in file_abstraction): H5File for h5, and ZarrFile for everything else, as you previously had it with your if statement flagged by use_h5.
* Nothing beyond these 2 files needs to be modified, I'm pretty sure.

All of the above might not be of any use anymore if you're removing HDF5 compatibility, but using the above/attached fixes I was able to get file_abstraction.py and file.py working for h5, zarr, and zips, whereas previously it wouldn't work for h5.
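A minimal sketch of the dispatch pattern described above, in the spirit of Sal's changes; StoreType, H5File, and ZarrFile are names from the email, but the member values, `open_store` helper, and placeholder bodies are assumptions, not the actual brimfile code (which wraps h5py/zarr stores):

```python
from enum import Enum

class StoreType(Enum):
    # Defined at the top of file_abstraction.py so file.py can import it;
    # HDF5 is the new member replacing the old use_h5 flag.
    ZARR = 'zarr'
    ZIP = 'zip'
    HDF5 = 'hdf5'

class H5File:
    """Placeholder for the h5py-backed backend (formerly _h5File)."""
    def __init__(self, filename, mode='r'):
        self.filename, self.mode = filename, mode

class ZarrFile:
    """Placeholder for the zarr-backed backend (also covers zip stores)."""
    def __init__(self, filename, mode='r', store_type=StoreType.ZARR):
        self.filename, self.mode, self.store_type = filename, mode, store_type

def open_store(filename, mode='r', store_type=StoreType.ZARR):
    # Dispatch on the store type instead of a hardcoded boolean:
    # H5File for HDF5, ZarrFile for everything else.
    if store_type is StoreType.HDF5:
        return H5File(filename, mode=mode)
    return ZarrFile(filename, mode=mode, store_type=store_type)
```

The point of the enum over a boolean flag is that adding another backend later only means adding a member and a branch, with no module-level state to flip.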
So now I can create .brim.h5 files and everything plays nicely in all directions.

Pierre,

* Is there scope to have a Brillouin_type attribute for both AS and S measurements? If I pass a brimfile to BrimConverter to produce a brimX, and that brim has AS and S, currently only one of them is passed to brimX because there isn't a distinction between the two in the Brillouin_type attributes list. Or do you already have a different solution for managing this?
* Do you have a clean way of extracting units (e.g. having them as attributes)? All I could find in the GitHub guide was that you tack them onto the ends of names, like MEASURE.Exposure_(s), which makes scraping a little clunky.

Things to note for everyone:

* I haven't dealt with metadata yet, as that is lower priority at the moment. It is on my to-do list, though I reckon it can be sorted during submission/review.
* Currently the AS vs S distinction is lost when converting to brimX.
* BLT and all _std datasets on Pierre's end don't map to anything on Carlo's end, and R2, RMSE, and Cov_matrix on Carlo's end don't map to anything on Pierre's end.

Do people have opinions on how to roll out the conversion library? Integrated into the brimfile and HDF5_BLS libraries? Currently it's pretty straightforward to use in scripting, but perhaps we could integrate it as a button in Pierre's GUI? I have integrated the relevant explanations into the manuscript, but have not formally written documentation for the library, as I'd prefer to wait for feedback. I don't see any point in making online documentation separate from the brimfile and/or HDF5_BLS documentation, so I'd hope to port it to your sites; the HDF5_BLS one is easy enough because it's on GitHub. Any questions/comments/edits, just let me know.
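For reference, scraping units from Pierre's naming convention (unit in parentheses after the last underscore, e.g. MEASURE.Exposure_(s)) can be sketched with a small regex; the helper name is hypothetical, not part of either library:

```python
import re

def split_name_and_unit(attr):
    # Match a trailing "_(unit)" suffix on the attribute name,
    # e.g. "MEASURE.Exposure_(s)" -> ("MEASURE.Exposure", "s").
    m = re.fullmatch(r'(.*)_\((.*)\)', attr)
    if m:
        return m.group(1), m.group(2)
    return attr, None  # no unit encoded in the name

print(split_name_and_unit('MEASURE.Exposure_(s)'))  # -> ('MEASURE.Exposure', 's')
print(split_name_and_unit('Shift'))                 # -> ('Shift', None)
```

This is roughly what any converter has to do as long as the unit lives in the name rather than in a dedicated attribute.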
Hope everyone has a nice weekend,

Best,
Sal

---------------------------------------------------------------
Salvatore La Cavera III
Royal Academy of Engineering Research Fellow
Nottingham Research Fellow
Optics and Photonics Group
University of Nottingham
Email: salvatore.lacaveraiii@nottingham.ac.uk
ORCID iD: 0000-0003-0210-3102
Book a Coffee and Research chat with me! (https://outlook.office.com/bookwithme/user/6a3f960a8e89429cb6fc693c01d10119@...)
------------------------------------

From: Pierre Bouvet via Software
Sent: 31 July 2025 09:25
To: software@biobrillouin.org
Subject: [Software] Re: [EXTERN] Re: new link for meeting now

Hi,

Following yesterday's meeting, here are some screenshots of the HDF5_BLS GUI for pane C of the figure. I'd probably use the 5th screenshot, which is the most representative of what the GUI does.

Best,
Pierre

Pierre Bouvet, PhD
Post-doctoral Fellow
Medical University Vienna
Department of Anatomy and Cell Biology
Wahringer Straße 13, 1090 Wien, Austria

On 30/7/25, at 17:19, Kareem Elsayad via Software wrote:

Hi All,

Sorry for not making the meeting today. Pierre gave me an update on all that was discussed, and for the most part I agree with everything concluded. One comment on the data for the Consensus, which I think was brought up: the labs generally did not provide spectra (except a couple that provided one or two representative spectra shown in figures and SM). We could ask people to send them now, but this would probably not go quickly (also given that it is summer). I also fear processing them may be quite a bit of work (different spectral ranges, degrees of elastic suppression, and naming strategies for e.g. temperature sweeps). So, to avoid making more work than needed and to keep things on a reasonable time frame, I would suggest not pursuing this, but I am happy to if the majority think it would add significant value. Over the next days I should also be able to look at the text so far.
All the best,
Kareem

From: Pierre Bouvet via Software
Reply to: Pierre Bouvet
Date: Wednesday, 30. July 2025 at 16:42
To:
Subject: [EXTERN] [Software] Re: new link for meeting now

Hi,

This is the website I was mentioning for scanning files for viruses: https://www.virustotal.com/gui/home/upload

Fun fact: I remember first hearing of it after they unintentionally leaked thousands of private addresses of US security officials (which, somewhat paradoxically, is what got me to trust them).

Best,
Pierre

Pierre Bouvet, PhD
Post-doctoral Fellow
Medical University Vienna
Department of Anatomy and Cell Biology
Wahringer Straße 13, 1090 Wien, Austria

_______________________________________________
Software mailing list -- software@biobrillouin.org
To unsubscribe send an email to software-leave@biobrillouin.org

This message and any attachment are intended solely for the addressee and may contain confidential information. If you have received this message in error, please contact the sender and delete the email and attachment. Any views or opinions expressed by the author of this email do not necessarily reflect the views of the University of Nottingham. Email communications with the University of Nottingham may be monitored where permitted by law.
participants (2)
- Carlo Bevilacqua
- Pierre Bouvet