FBinRead equivalent for text files
Callisto
Is there an equivalent of

FBinRead file, tmpwave

for reading a fixed number of lines containing numbers from a text file into a wave?
LoadWave does something like that, but it does not read from the current position of an opened file. I would have to open the file separately and count the lines up to the current position.
FReadLine can read lines into a string. You can then use sscanf to extract the numeric data. Compared to LoadWave this will be really slow, of course.
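A minimal sketch of that loop, assuming the file is already open for reading as refNum and each line starts with one number (the function and wave names are illustrative):

```
Function ReadNumericLines(refNum, numLines)
	Variable refNum		// file reference from Open/R
	Variable numLines	// number of lines to read

	Make/O/D/N=(numLines) tmpwave
	String buf
	Variable i, v
	for(i = 0; i < numLines; i += 1)
		FReadLine refNum, buf
		if (strlen(buf) == 0)
			break				// end of file
		endif
		sscanf buf, "%g", v		// parse the first number on the line
		tmpwave[i] = v
	endfor
End
```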
July 4, 2014 at 02:12 am - Permalink
Right now I am using FReadLine in a loop, and yes, that is very slow for big arrays. I want to speed it up with a function similar to FBinRead.
July 4, 2014 at 04:40 am - Permalink
If you know the line number (zero-based) where you want to start loading the data you can use LoadWave with the /L flag. This is how I would do it.
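Something along these lines, where /L={nameLine, firstLine, numLines, firstCol, numCols}; the path and the numbers are placeholders for your file:

```
// Start loading at zero-based line 1234 and read 20000 lines,
// all columns, as general numeric text, without dialogs.
LoadWave/G/D/Q/L={0, 1234, 20000, 0, 0} "C:Data:bigfile.txt"
```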
One way to determine the line number of interest is to read the entire file, or the first so-many bytes, into a string variable and then search through the string variable.
Here is code that shows how to read the file into a string variable:
http://www.igorexchange.com/node/5846
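The linked snippet presumably does something like the following (the function name is illustrative):

```
Function/S ReadFileIntoString(pathStr)
	String pathStr		// full path to the file

	Variable refNum
	Open/R refNum as pathStr
	FStatus refNum		// sets V_logEOF to the file length in bytes
	String buf = PadString("", V_logEOF, 0x20)
	FBinRead refNum, buf	// reads strlen(buf) bytes into buf
	Close refNum
	return buf
End
```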
Another approach is to load the whole file into a text wave and then analyze that wave.
This all depends on factors such as how big the file is, what you are trying to get out of it, and the general organization of the file.
July 4, 2014 at 08:30 am - Permalink
I have huge text files (30+ MB) with different data blocks (each 20k+ lines) separated by text headers (and subheaders ...) which I also have to evaluate.
Loading one of these files takes several minutes when I have to load each data line in a loop. That's why I was looking for something similar to the FBinRead function.
But I guess the only option really is to use the LoadWave function with the /L flag and increase a line counter each time I read a line to get the offsets of the data blocks.
Loading the file up to the current position into a string and then doing some fast line counting will also not be efficient for these big files.
July 4, 2014 at 12:33 pm - Permalink
Whether this is practical or possible depends on the details of the file format.
July 4, 2014 at 02:42 pm - Permalink
July 5, 2014 at 12:29 am - Permalink
OK, is there a way to load the complete file into a single text wave?
It uses some kind of XML structure.
July 5, 2014 at 11:58 am - Permalink
This command should load the entire file into a text wave, with each line of the file stored in the corresponding point of the text wave:
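A command along these lines (the wave name and path are placeholders; the exact command may have differed in the original post):

```
LoadWave/J/Q/K=2/V={"","",0,0}/A=fileLines "C:Data:bigfile.txt"
```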
This command specifies no delimiter character (first "" in /V) and that all waves are to be created as text waves (/K=2).
With the text in the text wave you can then access its elements quickly. Changing elements of the text wave will be very slow, but just reading them should be quick.
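For instance, locating the header lines that start each data block could look like this (the function and wave names are illustrative):

```
// Return the zero-based index of the first element of w, at or after
// startIndex, that begins with headerStr; return -1 if not found.
Function FindHeaderLine(w, headerStr, startIndex)
	Wave/T w
	String headerStr
	Variable startIndex

	Variable i, n = numpnts(w)
	for(i = startIndex; i < n; i += 1)
		if (strsearch(w[i], headerStr, 0) == 0)
			return i
		endif
	endfor
	return -1
End
```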
How you proceed from there depends on the details of the file and what you want to do with it.
There are some projects for loading XML:
http://www.igorexchange.com/project/XMLutils
http://www.igorexchange.com/project/udStFiLrXML
I don't have any experience with them and don't know if they would be fast with very large files.
July 5, 2014 at 03:55 pm - Permalink
Thank you. This improved the loading time by about 50%.
https://github.com/Yohko/importtool/blob/master/Igor%20Procedures/impor…
July 6, 2014 at 04:10 am - Permalink
Perhaps with XML and an XSL transform, you could convert to a data structure that might load faster into Igor.
--
J. J. Weimer
Chemistry / Chemical & Materials Engineering, UAHuntsville
July 7, 2014 at 05:52 am - Permalink