I am just now introducing data folders to some old code that I want to streamline. My code has user prompts that are dumped into a bunch of string and variable waves in a subfolder, root:ProcedureWaves. This is partially legacy, so that when the user makes a mistake or wants to slightly adjust the analysis, rerunning the master function brings up the previously entered data. I want to save these user parameters for posterity, and decided it would be useful to duplicate root:ProcedureWaves and store it in a particular dataset's dedicated subfolder, i.e. root:Data:ParticularDataSet:ProcedureWaves.
This works well if the code is run only once. But if the same dataset has to be reanalyzed, I get an error since root:Data:ParticularDataSet:ProcedureWaves already exists. There doesn't appear to be an overwrite flag for DuplicateDataFolder. Any suggestions about the most expedient way forward?
Feel free to request clarification or actual code (not sure what would help at this point). My thanks to the community in advance.
One more thing: my current workaround is to kill the destination subfolder and then recreate it with DuplicateDataFolder. However, this isn't the same as overwriting, since anything using objects in the destination subfolder (e.g., a table) causes an error that stops the duplication. This is bad if I ever intend overwrites to update a table... Help!
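For concreteness, a minimal sketch of the kill-then-duplicate workaround (the function name is made up and the paths stand in for my actual folders):

Function SaveProcedureWaves()	// placeholder name, just to illustrate the workaround
	String src  = "root:ProcedureWaves"
	String dest = "root:Data:ParticularDataSet:ProcedureWaves"
	if (DataFolderExists(dest))
		KillDataFolder $dest	// errors out if anything in dest is in use, e.g. displayed in a table
	endif
	DuplicateDataFolder $src, $dest
End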
NewDataFolder/O <name of new data folder> should do what you want. If the data folder already exists, it is basically a no-op. If you want the current data folder to be the new one after the command, use NewDataFolder/O/S <name of new data folder>.
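For example, with the folder names from the question (just a sketch, assuming root:Data:ParticularDataSet already exists):

NewDataFolder/O root:Data:ParticularDataSet:ProcedureWaves	// no error if it already exists
NewDataFolder/O/S root:Data:ParticularDataSet:ProcedureWaves	// same, and also makes it the current data folder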
First, as a general rule, you should put package-related stuff in a root:Packages:MYPACKAGESTUFF folder. So, create and move your new stuff to root:Packages:ProcedureWavesPackage:Data ...
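A sketch of that layout (the package folder name is a placeholder; each level is created in turn because NewDataFolder does not create intermediate folders):

NewDataFolder/O root:Packages
NewDataFolder/O root:Packages:ProcedureWavesPackage
NewDataFolder/O root:Packages:ProcedureWavesPackage:Data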
Second, you can get past the problem message "folder exists" in two ways.
* Check for the existence of the destination data folder before you create or duplicate something to it. When it already exists, kill it. Then create or duplicate it fresh.
(ps - John beat me to suggest alternatively using /O or /O/S)
* Each time you create or duplicate a new data folder, add a unique "date+time stamp" as a suffix. For example, the "FirstDataSet" folder that you want to duplicate becomes "FirstDataSet1611021216" as its destination.
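A sketch of the second option, assuming the folder to duplicate lives at root:FirstDataSet and using a yyyymmddhhmmss-style stamp (any unique suffix works; the function name is a placeholder):

Function DuplicateWithStamp()	// placeholder name
	String stamp = ReplaceString("-", Secs2Date(DateTime, -2), "") + ReplaceString(":", Secs2Time(DateTime, 3), "")
	String dest = "root:FirstDataSet" + stamp	// e.g. root:FirstDataSet20161102121600
	DuplicateDataFolder root:FirstDataSet, $dest
End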
--
J. J. Weimer
Chemistry / Chemical & Materials Engineering, UAH
I get most of that. I don't have a problem creating a NewDataFolder, or overwriting a new one. My problem is in duplicating an existing folder into a new folder (wherever in the directory structure). DuplicateDataFolder works great, as long as the destination folder doesn't already exist. If it does exist, then DuplicateDataFolder doesn't appear to be an option.
My understanding is that if I use NewDataFolder, all I've done is create a new folder (or overwrite an existing one). This still leaves me needing to copy over all the objects in the original folder to the newly created/overwritten one. And for that I have another question that I will put in a new post.
... DuplicateDataFolder works great, as long as the destination folder doesn't already exist. If it does exist, then DuplicateDataFolder doesn't appear to be an option. ...
Then use Option 1. Check for the existence of the data folder at the location where you will duplicate. If it exists, delete it. Better still, rename it with a date+time stamp. Then, duplicate.
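A sketch of that sequence with the folder names from the question (the stamp format is one choice among many; the function name is a placeholder):

Function ArchiveAndDuplicate()	// placeholder name
	String dest = "root:Data:ParticularDataSet:ProcedureWaves"
	String stamp, archiveName
	if (DataFolderExists(dest))
		stamp = ReplaceString("-", Secs2Date(DateTime, -2), "") + ReplaceString(":", Secs2Time(DateTime, 3), "")
		archiveName = "ProcedureWaves" + stamp
		RenameDataFolder $dest, $archiveName	// the old copy stays around, so nothing in use has to be killed
	endif
	DuplicateDataFolder root:ProcedureWaves, $dest
End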
--
J. J. Weimer
Chemistry / Chemical & Materials Engineering, UAH
Got it. It isn't clear from my post, but the string ParticularDataSet already contains the date and a filename index. Basically I want subfolders that are particular to each data run, each of which contains raw data, processed data, and processing parameters, so that the exact same analysis could be easily recreated without having to reload waves and figure out exactly which parameters were used.