Has anyone had success in exporting a full experiment (.pxp) file to HDF5? For example, using the HDF5-xop to create the HDF5 file and then recursively creating HDF5 groups for each of Igor's data folders, and then creating data sets for Igor's variables and waves? What are potential pitfalls of such an export? Do data attributes survive the conversion? Might there be an easier/automatic way to do this?
The HDF5 xop procedures appear to provide low-level functionality for individual groups/datasets, but there don't appear to be any higher-level commands to do such an export.
I'm new to Igor and may be missing the obvious. Any insight or assistance would be greatly appreciated.
The HDF5SaveGroup operation exists. From the help file:
The HDF5SaveGroup operation saves the contents of an Igor data folder in an HDF5 file.
By default, HDF5SaveGroup saves all Igor waves, numeric variables and string variables in the specified data folder. The /L flag allows you to skip saving any of those object types.
HDF5SaveGroup writes Igor numeric and string variables as datasets with an "IGORVariable" attribute. See Saving and Reloading Igor Data for details.
If you provide the root data folder it might recursively save everything; I don't know. Or you might have to recurse through the data folders yourself, saving things as you go. Is there any reason you want to save the PXP file as HDF? Igor is very good at reading old packed experiment files (try getting the HDF library to read HDF3 files). If you want to give data to someone else, why not save a subset of waves?
Thank you very much. HDF5SaveGroup /R seems to achieve this goal perfectly.
> Is there any reason you want to save the PXP file as HDF?
We're making a large number of recordings and need to be able to bulk-process the data on Linux after the fact (e.g., analyze across 500 experiments). Exporting to HDF5 when saving an experiment seems to be the best option, even though it doubles our storage requirements (i.e., .pxp + .h5). I'm considering using igorpy as an alternative, but it's not supported by WaveMetrics and it ties us to Python (Python isn't necessarily a bad thing, but the dependency is a consideration).
Are others faced with similar analysis goals (e.g., bulk processing under Linux), and if so, does anyone know what strategies were used? Has anyone had success/failure using igorpy? I'm _very_ open to other ideas...
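For the Linux side of such a workflow, once the experiment has been exported with HDF5SaveGroup /R, any standard HDF5 library can walk the file. Here is a minimal, self-contained h5py sketch; the group/dataset names ("recordings", "sweep0") are hypothetical stand-ins for whatever layout your export actually produces, and the statistic is just an example:

```python
import os
import tempfile

import h5py
import numpy as np

# Build a tiny file mimicking the assumed layout: one HDF5 group per
# Igor data folder, one dataset per wave (names are hypothetical).
path = os.path.join(tempfile.mkdtemp(), "experiment.h5")
with h5py.File(path, "w") as f:
    grp = f.create_group("recordings")
    grp.create_dataset("sweep0", data=np.arange(5, dtype=np.float64))

# Bulk-process: walk every dataset in the file and compute a
# per-wave statistic, keyed by the dataset's path within the file.
results = {}
with h5py.File(path, "r") as f:
    def collect(name, obj):
        if isinstance(obj, h5py.Dataset):
            results[name] = float(obj[...].mean())
    f.visititems(collect)

print(results)  # {'recordings/sweep0': 2.0}
```

Looping this over 500 exported .h5 files is then ordinary Python scripting, with no Igor dependency on the analysis machine.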
What sort of bulk processing requires Linux? Surely you could do it in Igor :)
John Weeks
WaveMetrics, Inc.
support@wavemetrics.com
All of it -- I'm allergic to Windows and can't afford a Mac ;)
Seriously, I'm in the process of exploring options for using Igor and .pxp files on Mac, Linux, and Windows. Nothing is off the table.
Might you be able to suggest a way to run some arbitrary function 'foo' on 500 experiment files under Windows? We've successfully loaded a single pxp file and executed foo() on Mac using AppleScript, and I suspect AppleScript would allow us to process many files sequentially. A Windows solution still eludes me.
1) The pxp file layout is documented, so you could probably read the files directly from Linux without having to write HDF first. This is probably what igorpy does.
2) More importantly, the LoadData operation allows you to load data from other experiments. Therefore, you could create analysis code in IGOR that:
a) runs through a whole load of selected files (use Open with the /MULT flag, or IndexedFile, to select the files);
b) loads the data from those files using LoadData;
c) analyses the files using your 'foo' function;
d) produces output.
Of course, it depends on how much work you have to do as to which way you go. If I was obtaining the data in IGOR I would lean towards analysing the data in IGOR.
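A rough, untested Igor sketch of that loop, assuming a symbolic path rawData (created beforehand with NewPath) that points at the folder of .pxp files, and an existing analysis function foo(); the exact LoadData flags may need adjusting for your data:

```igor
Function ProcessAllExperiments()
	// "rawData" is a symbolic path to the folder of .pxp files, e.g.:
	// NewPath/O rawData, "C:Data:recordings:"
	String files = IndexedFile(rawData, -1, ".pxp")	// semicolon-separated list
	Variable n = ItemsInList(files)
	Variable i
	for(i = 0; i < n; i += 1)
		String fileName = StringFromList(i, files)
		NewDataFolder/O/S root:loaded			// scratch folder for this experiment
		LoadData/O/R/Q/P=rawData fileName		// pull the experiment's data in
		foo()									// your analysis function
		SetDataFolder root:
		KillDataFolder/Z root:loaded			// discard before the next file
	endfor
End
```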
It is also possible (but tricky) to do the processing using Execute/P. To learn about Execute/P, read:
DisplayHelpTopic "Operation Queue"
The easiest case would be a folder full of .pxp files, and you want to process all of them. Write a procedure file that does the processing and put it into the User Files folder, in Igor Procedures. That will make it available all the time. A function in that procedure file would use IndexedFile to get the index'th file name from the folder and do something along the lines of:
Execute/P "COMPILEPROCEDURES "
Execute/P "myProcessingFunc("+num2str(fileindex)+")"
At the end of myProcessingFunc() you would increment the index, use it to get the next file name, and construct the Execute/P commands to process the next file.
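A rough sketch of what such a chaining function might look like (untested; the symbolic path rawData is an assumption, and LOADFILE / COMPILEPROCEDURES are operation queue keywords covered in the help topic above):

```igor
Function myProcessingFunc(fileIndex)
	Variable fileIndex

	// ... analyze the data in the now-open experiment here ...

	// Queue up the next experiment file, if there is one
	String fileName = IndexedFile(rawData, fileIndex + 1, ".pxp")
	if (strlen(fileName) > 0)
		PathInfo rawData	// sets S_path to the folder's path
		Execute/P/Q "LOADFILE " + S_path + fileName
		Execute/P/Q "COMPILEPROCEDURES "
		Execute/P/Q "myProcessingFunc(" + num2str(fileIndex + 1) + ")"
	endif
End
```

The queued commands only run after Igor returns to an idle state, which is what lets each experiment finish loading and compiling before the next analysis call fires.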
This solution works on both Macintosh and Windows using the same code. You have to learn about the Operation Queue, but you don't need to learn the platform-specific scripting language.
John Weeks
WaveMetrics, Inc.
support@wavemetrics.com
Each opened experiment executes "run()" and closes Igor. The Windows batch part is:
@echo off
REM Script for automatic test execution and logging from the command line
REM Opens all experiment files in the current directory in autorun mode
set IgorPath="%PROGRAMFILES(x86)%\WaveMetrics\Igor Pro Folder\Igor.exe"
set StateFile="DO_AUTORUN.TXT"
if exist %IgorPath% goto foundIgor
echo Igor Pro could not be found in %IgorPath%, please adjust the variable IgorPath in the script
goto done
:foundIgor
echo "" > %StateFile%
for /F "tokens=*" %%f IN ('dir /b *.pxp') do (
echo Running experiment %%f
%IgorPath% /I "%%f"
)
del %StateFile%
:done
and the relevant igor part is in unit-testing-autorun.ipf in [1].
[1]: http://www.igorexchange.com/project/unitTesting
August 28, 2014 at 02:35 am - Permalink
For what it's worth, here's some code to write out HDF5, giving a working solution to the original question:
Function convert_to_hdf5(filename)
	String filename
	Variable root_id, h5_id
	SetDataFolder root:
	HDF5CreateFile /O /Z h5_id as filename
	HDF5CreateGroup /Z h5_id, "/", root_id
	HDF5SaveGroup /O /R :, root_id, "/"
	HDF5CloseGroup root_id
	HDF5CloseFile h5_id
end
August 29, 2014 at 10:50 am - Permalink