The downloads on this page fall into several categories:
- Experimental Design Tools
- Data Farming Tools
- Agent-based Modeling Platforms and Other Tools
- User's Manuals
Experimental Design Tools
We currently have several tools for generating experimental designs:
- Excel spreadsheets for
- NOLH designs: orthogonal LH designs for up to 7 factors and nearly orthogonal LH designs for up to 29 factors (see Cioppa and Lucas 2007)
- NOB mixed designs: nearly orthogonal and balanced designs for mixed factor types (continuous, or discrete with differing numbers of factor levels) (see Vieira Jr. et al. 2012)
- 2nd order NOLH designs: these continuous-factor designs have good properties for fitting full second-order models (main effects, quadratic effects, and two-way interactions) (see MacCalman 2013 on the theses page)
- S-NOLH designs: nearly saturated designs for up to 63 factors in 64 design points (see Hernandez, Lucas, and Carlyle 2012)
- GAMS code to generate NOLH designs (see Hernandez, Lucas, and Carlyle 2012)
- Ruby and Java software for generating very large fractional factorials that allow simultaneous estimation of all main effects and all two-way interactions (see Sanchez and Sanchez 2005)
- Java software for generating stacked square random LH designs (integer-valued)
- A flexible S-plus script for generating random LH designs
The instructions are brief, so ask for help if you need it!
Orthogonal and nearly orthogonal LH worksheet
This worksheet allows you to specify factor names, low and high levels of interest, and outputs a (nearly) orthogonal LH design in the units of the problem. Version 5 has protected cells for the designs, allows you to specify the number of decimal places of interest, and clarifies that you don't need to use the smallest possible design for a given number of factors. A brief description is available on the "readme" page.
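The rescaling such a worksheet performs can be sketched as follows. This is an illustrative Python sketch, not the worksheet's actual formulas; the factor name and range are hypothetical:

```python
# Rescale design columns coded on [-1, 1] into each factor's natural units,
# rounded to a chosen number of decimal places (as the worksheet does).

def rescale(coded, low, high, decimals=2):
    """Map a coded value in [-1, 1] to the interval [low, high]."""
    return round(low + (coded + 1) / 2 * (high - low), decimals)

# Hypothetical factor: a sensor range explored from 10 to 50 km.
column = [-1.0, -0.5, 0.0, 0.5, 1.0]
print([rescale(x, 10, 50) for x in column])  # [10.0, 20.0, 30.0, 40.0, 50.0]
```

The worksheet does the same mapping cell by cell, so you work entirely in the units of your problem.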
NOB mixed design worksheet
This single worksheet provides flexibility for constructing a 512-point design suitable for exploring the effects of up to 20 discrete k-level factors (k=2,...,11) and 100 continuous factors, for a total of 300 factors. Version 2 has reordered the discrete-valued columns for better balance and lower pairwise correlations if you use only a subset of the available columns. A brief description is available on the "readme" page.
2nd order NOLH design workbook
This workbook is a tool for making it easier to design large-scale simulation experiments with complex, high-order response behavior. The designs in this workbook nearly guarantee that all first- and second-order terms are unconfounded with one another, and they provide an excellent space-filling property that enables the detection of model bias and the presence of step functions. Version 2 has designs for up to 12 factors. A user manual, "Design Creator User Manual v2", is included in the zip file.
S-NOLH design worksheet
This workbook contains nearly saturated, nearly orthogonal designs for a variety of design points ranging from 14 to 64. The designs in this worksheet may be of particular interest for those who have larger numbers of factors. The method for constructing the S-NOLH designs can also be used to construct larger designs. Check back soon for an updated version of the spreadsheet that, like the others above, will make it easy to interactively construct a design in the natural units of your problem.
The GAMS code used to generate this worksheet is described in the next section.
NOLH design GAMS code
GAMS code to generate NOLH designs. The overall program is a heuristic to find an effective design. For any given column (instance) of the design, the program guarantees an optimal solution. However, the overall resulting design is not guaranteed optimal. The heuristic stops when the design meets a given threshold for correlation. The optimal design will have zero correlation among the design columns. There are a number of cases in which the program results in an OLH.
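The correlation criterion the heuristic targets can be checked directly for any candidate design. Here is a minimal sketch in plain Python (no external libraries); the 8-run design shown is an arbitrary orthogonal example, not GAMS output:

```python
# Compute the maximum absolute pairwise correlation among design columns.
# A design is orthogonal (e.g., an OLH) when this value is zero.
import math

def corr(x, y):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def max_pairwise_corr(columns):
    """Largest |correlation| over all pairs of design columns."""
    return max(abs(corr(columns[i], columns[j]))
               for i in range(len(columns))
               for j in range(i + 1, len(columns)))

# A 2^3 full factorial: all columns are mutually orthogonal.
cols = [[1, -1, 1, -1, 1, -1, 1, -1],
        [1, 1, -1, -1, 1, 1, -1, -1],
        [1, 1, 1, 1, -1, -1, -1, -1]]
print(max_pairwise_corr(cols))  # 0.0
```

The GAMS heuristic accepts a design once this maximum falls below the user's correlation threshold.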
The data entry for the program is a file that contains an initial design. A .txt or .csv file works best. The columns should be designated "k1" to "k**" and the rows "n1" to "n**", as the dimension of the design warrants. The upper-left-most cell of the csv should be designated "levels." The program requires the user to specify the number of factors and runs for the design.
Resolution V fractional factorials
These two programs will generate Resolution V fractional factorials for up to 120 factors. A Resolution V design guarantees that all main effects and two-way interactions will be unconfounded. Both programs produce identical designs given the same command-line arguments and options (as described below); the only difference is the implementation language. Rename the Ruby version to "scaled_rf_cubed.rb" after downloading.
To run the programs, at the command prompt type
- ruby scaled_rf_cubed.rb
- java -jar scaled_rf_cubed.jar
- Run-time options:
- --design-only (or -d): print the design without labeling
- --csv (or -c): print the design as comma separated
- Use command-line args to specify either:
- 1) the number of factors for the design; or
- 2) low and high values for all factors.
For example, specifying 3 factors prints the design

```
Run#  X1  X2  X3
1      1   1   1
2     -1   1   1
3      1  -1   1
4     -1  -1   1
5      1   1  -1
6     -1   1  -1
7      1  -1  -1
8     -1  -1  -1
```

to the console. Running the command `ruby scaled_rf_cubed.rb -d -c yes no right left up down > mydesign.csv` will produce the output

```
yes,right,up
no,right,up
yes,left,up
no,left,up
yes,right,down
no,right,down
yes,left,down
no,left,down
```

and write it to a file named mydesign.csv.
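The Resolution V property described above (all main effects and two-way interactions unconfounded) can be verified numerically for any candidate design. Here is a sketch in Python, using the 8-run full factorial from the example as an illustration:

```python
# Verify that every main-effect and two-way-interaction column pair is
# orthogonal, which is what "unconfounded" means for a +/-1 coded design.
from itertools import combinations

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# 2^3 full factorial, columns X1, X2, X3 (the 8-run design shown above).
X = [[1, -1, 1, -1, 1, -1, 1, -1],
     [1, 1, -1, -1, 1, 1, -1, -1],
     [1, 1, 1, 1, -1, -1, -1, -1]]

# Two-way interaction columns are elementwise products of main-effect columns.
interactions = [[a * b for a, b in zip(X[i], X[j])]
                for i, j in combinations(range(3), 2)]

# All pairs of columns (main effects and interactions) should be orthogonal.
all_cols = X + interactions
ok = all(dot(u, v) == 0 for u, v in combinations(all_cols, 2))
print(ok)  # True
```

For large fractional factorials the programs above construct such designs far more compactly than a full factorial, while preserving this orthogonality property.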
doe java program
The inputs (as you can see from the two sample files) are the factor name and its low and high levels. This version assumes integer-valued variables and automatically does the necessary rounding. One note: we assumed that the "low" and "high" values input by the user are strict bounds. The interval (high - low) is divided into the appropriate number of bins, and the midpoint of each bin is used (rounded to the nearest integer). This means that a range of 0-2 with 4 levels will result in a base set of values 0, 1, 1, 2 (i.e., sampling more heavily in the center).
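The binning and rounding rule described above can be sketched as follows. This is a minimal illustration of the midpoint-of-bin rule exactly as stated, not the doe3 source code:

```python
# Divide [low, high] into k equal bins, take each bin's midpoint, and round
# to the nearest integer, as the program does for integer-valued factors.

def levels(low, high, k):
    width = (high - low) / k
    return [round(low + (i + 0.5) * width) for i in range(k)]

print(levels(0, 2, 4))  # [0, 1, 1, 2]
```

This reproduces the 0, 1, 1, 2 example from the text: the bin midpoints are 0.25, 0.75, 1.25, and 1.75 before rounding, so the center of the range is sampled twice.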
The program reads inputs token by token, where a token is defined to be non-white-space surrounded by white-space. In other words, you can put all of the input on one line, one line per factor, or one line per value; it doesn't matter to this version. I've included two input files for testing: one with one value per line and one with one factor per line. You can also specify a command-line argument for the number of (independent) LH designs to generate and append to one another. This is a way of reducing multicollinearity among the columns of factor settings. For example, Eric Wolf appended four 40-run LHs to come up with his design.
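Appending independent LH designs, as in the example above, can be sketched like this. This is plain Python with random permutations, not the actual doe3 implementation; the run counts simply mirror the four-stacks-of-40 example:

```python
# Stack several independently generated random Latin hypercube designs.
import random

def random_lh(n_runs, n_factors, rng):
    """One random LH: each column is a random permutation of 1..n_runs."""
    cols = [rng.sample(range(1, n_runs + 1), n_runs) for _ in range(n_factors)]
    return list(zip(*cols))  # rows of the design

def stacked_lh(n_runs, n_factors, n_stacks, rng):
    """Append n_stacks independent LHs into one larger design."""
    design = []
    for _ in range(n_stacks):
        design.extend(random_lh(n_runs, n_factors, rng))
    return design

rng = random.Random(1)
design = stacked_lh(40, 3, 4, rng)
print(len(design))  # 160 rows: four independent 40-run LHs appended
```

Because each stack is generated independently, chance correlations among columns in one stack tend to cancel across stacks, which is why appending reduces multicollinearity.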
The "doe3.jar" file is an executable jar file that contains the software and Javadoc documentation. The files "testlh1" and "testlh2" are sample input files. You can run this from a command window using the command "java -jar doe3.jar [#] <designspecfile", where "designspecfile" is an input file containing the factors and their bounds and "[#]" is an optional number specifying how many times to replicate the design.
S-plus script for Latin Hypercubes
Here is an S-plus script that generates LH designs. The "test material" at the end shows how to generate a design, so running the script produces sample output.
Note that the output design is given in the units of the problem, rather than scaled from -1 to 1 or given as integers corresponding to the ranks.
Data Farming Tools
XStudy, Version 1.2.0
The XStudy application is a graphical user interface for generating a study.xml file. A study.xml file is an XML file that specifies how a user wants to conduct a simulation experiment. (Note: more details on what is included in the study.xml file can be found in the OldMcData documentation, downloaded separately; see below.) The file has meta-data about the study, including such elements as the name of the experimenter and a description of the study, among other things. It also includes information about the model used, the number of replications desired, initial random seeds, and the algorithm to use for generating the parameter variations, as well as which variables are to be varied. Variables are specified using the XPath specification (http://www.w3.org/TR/xpath), whose use is at the heart of the study.xml. As such, XStudy uses the XPaths of the variables within a scenario file to identify the parameters that are to be varied.
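To illustrate how an XPath expression identifies a variable inside a scenario file, here is a minimal sketch using Python's standard library. The scenario structure shown is hypothetical, not the actual MANA or study.xml schema:

```python
# Use an XPath-style expression to locate a parameter inside a scenario file.
import xml.etree.ElementTree as ET

# A made-up scenario fragment; real scenario files differ.
scenario = ET.fromstring("""
<Scenario>
  <Squad name="Red">
    <Sensor><Range>25</Range></Sensor>
  </Squad>
</Scenario>
""")

# The XPath pinpoints exactly which value the experiment should vary.
node = scenario.find("./Squad[@name='Red']/Sensor/Range")
print(node.text)  # 25
```

XStudy records such paths in the study.xml so that the data farming tools know which values in the scenario file to overwrite for each design point.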
To download the latest version, click on the link below. File size is approximately 1 MB.
To install, double-click on the downloaded jar file (this assumes you have java installed on your machine). This will start an installation wizard. If you have any questions, comments, or suggestions, please email Steve Upton.
OldMcData, Version 1.1
OldMcData - The Data Farmer (OMD) is a software application designed to do data farming runs, from running large simulation experiments on a distributed computer cluster to multiple replications of a single excursion on a single machine. For runs on a distributed computing cluster, OMD uses Condor (http://www.cs.wisc.edu/condor/), an open-source distributed computing environment, to handle the scheduling and managing of the running jobs.
OMD uses an XML formatted specification of your simulation experiment, called a study.xml file (see the OMD User's Manual for more details on this specification). The study.xml file includes information about the model, a listing of the input variables or factors in your experiment, the type of algorithm you want to use to read in or create the values for the input variables, and other administrative information such as the user's contact information. This file can be created manually using any text editor, or using the graphical front-end called XStudy (see above).
To download version 1.1, click on the link below. File size is approximately 12.9 MB.
To install, unzip omd1.1.zip into a directory of your choice, e.g., C:\omd1.1 if you are on a Windows-based machine, or Users/username/omd1.1 for a Unix-based machine. Once unzipped, take a look at the Quick Start guide, QuickStart.html, or the OMD User's Manual in the "docs" folder. If you have any questions, comments, or suggestions, please email Steve Upton.
ARTeMIS, version 0.3
ARTeMIS (Automated Red Teaming Multiobjective Innovation Seeker) is a software application, written in the Scala language, that uses evolutionary algorithms to find good solutions satisfying multiple objectives. The primary problem domain is military and defense applications. This version is configured to use the MANA (Map Aware Non-uniform Automata) agent-based model, developed by the Defence Technical Agency, New Zealand Defence Force. ARTeMIS was developed as a project of the Naval Research Program at NPS ("Exploring Potential Alternatives Using Simulation and Evolutionary Algorithms," Project Number: NPS-N16-M163-A), in collaboration with the Marine Corps Combat Development Command's (MCCDC) Operations Analysis Division (OAD).
Agent-based Modeling Platforms and other tools (Limited Access)
Several versions of the MANA and Pythagoras agent-based software are available for downloading by authorized users. Your affiliation determines which model and versions you are authorized to download.
If you are interested in downloading MANA or Pythagoras, please send email to request a username and password. Please include one of the below affiliations so we can properly handle your request.
- NPS Students and Faculty
- US Government or affiliate - for US Government purposes, meaning any activity in which the United States (U.S.) Government is a party, including cooperative agreements with international or multi-national defense organizations or sales or transfers by the U.S. Government to foreign governments or international organizations.
NOTE: We are only authorized to distribute MANA to NPS Students and Faculty; all other users can contact David Galligan of the MANA team at firstname.lastname@example.org.
NPS Students and Faculty
For NPS Students and Faculty, the following models and tools are available.
- Pythagoras Agent-based Model, version 2.1.1
- MANA Agent-based Model, version 5.0 (V)
- MANA Agent-based Model, version 4.4.4
- MANA Agent-based Model, version 4.1.1
- RSG and DOE tools, version 2.0
Please email to request to be added to the MANA or Pythagoras Sakai sites. Your NPS username and password will then give you access.
US Government and their affiliates
For government users and their affiliates, the following models and tools are available:
- STORMMiner, version 1.2, for use with STORM 2.5 and 2.6
- STORMMiner, version 1.0, for use with an earlier version of STORM
- Pythagoras Agent-based Model, version 2.1.1
- Pythagoras Agent-based Model, version 2.0.3
- JTEAM (Joint Test and Evaluation Agent-based Model), version 1.0 (alpha)
If you are interested in downloading any of the models or tools above but you do not have a username and password, please email to request a username and password that will allow you to access the appropriate Sakai website for the software. Note that NPS cannot distribute the STORM campaign analysis model, just the STORMMiner software for postprocessing and visualization.
All Other Users
For other authorized users, the following models and tools are available on request.
- Pythagoras Agent-based Model, version 1.10.5
- TheTester Agent-based Model, version 1.0.1a
- a DB tool to support Rapid Scenario Generation - Thanks! to Lloyd Brown
User's Manuals
The Planned Resource Optimization Model With Experimental Design (PROM-WED) User Manual can be found here.