In this paper, we outline and demonstrate an analytical workflow that helps researchers probe threats to response consistency and evaluate and communicate confidence in statistical models estimated on conjoint data
Disclaimer: Please carefully read the instructions and use the app at your own risk
R Shiny App written by: Jens Schüler
Factorial Designs
Factorial designs are typically used to reduce the number of decision profiles in conjoint studies
However, these designs are not always easy to come by and researchers tend to rely on old ortho-plan publications
While more modern solutions are available e.g., in R, using these requires some familiarity with the software
Therefore, we bundle two R packages into an accessible user interface:
The app provides you with a factorial design plus its correlation structure (all main effects and two-way interactions)
Two-level designs
Choose this tab if all attributes are manipulated into two conditions, e.g., high and low
Use the slider to set the number of attributes
Choose if you want a full or fractional design
In case of a fractional design, select the model:
Resolution III: If none of the manipulated attributes are to be used as a moderator
Resolution IV: If you want to use one of the manipulated attributes as a moderator
Note: A resolution V design (not covered by this app) might be required when you want to manipulate more than one moderator (check the correlation structure for confounded two-way interactions)
This information is passed to the DoE.base and FrF2 package:
Function call for full factorial designs: fac.design(nlevels = attributes)
Function call for fractional factorial designs: FrF2(nfactors = attributes, resolution = resolution)
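For users who prefer to work directly in R, the two calls above can be sketched as a short script; the number of attributes (5 here) and the requested resolution are placeholder values, not recommendations:

```r
# Sketch of the app's two-level design calls, assuming the DoE.base and
# FrF2 packages are installed; 5 attributes is an arbitrary example value
library(DoE.base)
library(FrF2)

# Full factorial: 2^5 = 32 profiles
full_design <- fac.design(nlevels = rep(2, 5))

# Smallest fractional factorial with at least resolution IV
frac_design <- FrF2(nfactors = 5, resolution = 4)

nrow(full_design)  # 32
nrow(frac_design)
```

The fractional design trades profiles for confounding: fewer decision profiles per respondent, at the cost of aliasing higher-order interactions, which is why the resolution choice above matters.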
N-level/mixed-level designs
Choose this tab if some or all attributes are manipulated into more than two conditions, e.g., low, medium, and high
Enter the number of levels for each attribute, separated by commas, e.g., 4,4,4 or 2,4,4,3 - at most 4 levels per attribute and 7 attributes are supported
Choose if you want a full or fractional design
In case of a fractional design, select a resolution III or IV design
The following procedure relies on the DoE.base package:
Function call for full factorial designs: fac.design(nlevels = attributes)
Fractional designs use the oa.design function, which searches a list of orthogonal arrays for the smallest array representing the specified number of attributes and levels
If an orthogonal array can map more attributes than specified, an automatic column selection is used to minimize the number of profiles for the main effects model (resolution III)
The corresponding function call for resolution III models: oa.design(nlevels = attributes, columns = "min3")
If one of the manipulated variables is a moderator, an iterative procedure is used (use with caution!):
Some resolution III arrays can accommodate more attributes than requested - a subset of columns could result in a resolution IV design
These arrays are screened for resolution IV results: oa.design(id, nlevels = attributes, columns = "min3")
If no resolution IV result is found, resolution IV arrays are searched analogously: show.oas(nlevels = attributes, regular = "all", GRgt3 = "all", Rgt3 = TRUE)
However, these resolution IV arrays may be larger than necessary for the given attribute and level combination
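Putting the steps above together, a minimal mixed-level session in R might look as follows; the level vector c(2, 4, 4, 3) is an illustrative example, not a recommendation:

```r
# Sketch of the mixed-level design search, assuming DoE.base is installed;
# the level vector c(2, 4, 4, 3) is an illustrative example
library(DoE.base)

# Smallest orthogonal array for the main effects model (resolution III)
design_r3 <- oa.design(nlevels = c(2, 4, 4, 3), columns = "min3")

# List candidate arrays with resolution above III for the same attributes
show.oas(nlevels = c(2, 4, 4, 3), regular = "all", GRgt3 = "all", Rgt3 = TRUE)

# Inspect confounding of main effects and two-way interactions
corrPlot(design_r3)
```

As with the app itself, inspect the correlation structure of the returned array before fielding the study.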
Note: The correlation table plot only supports resolutions up to IV (see corrPlot function of the DoE.base package)
The search can take some time - refresh the app to cancel the process
Disclaimer: Always evaluate the suitability of the generated designs BEFORE you start your experiment
Test-Retest Reliability in Metric Conjoint Experiments: A New Workflow to Evaluate Confidence in Model Results
Link to our paper published in Entrepreneurship Theory and Practice:
Link to Paper
All code and data used in this publication are available on the Open Science Framework:
Link to Repository
Our workflow should be used to evaluate manipulated variables prior to fitting the actual model - level 2 (measured) variables are not part of our workflow
This app makes our workflow accessible and easy to use with just one upload and two clicks
How To
You can upload your data as a .csv or .xlsx file - other file formats are not supported
The data must be in long format and must contain specific columns with specific values and data classes
If your data does not meet these requirements, you will receive error messages or the app will not work
Follow this procedure:
Step 1: Upload a .csv or .xlsx file
Step 2: Click the check button to verify that your uploaded data meets the requirements
Step 3: Click the workflow button to start the workflow
Step 4: Click the reset button to reset the app - this also deletes your uploaded data
Optional: Inspect your data after uploading or checking by clicking the inspect table button
Optional: Inspect data classes and types after uploading or checking your data by clicking the inspect type button
Optional: You can download a demo data set as a .csv or .xlsx file
Requirements
See the included sample datasets on how your data must be prepared
Your uploaded data must be in long format
When reading a .csv file, the delimiter is detected automatically, but we recommend using ","
Column names should be in lower case; if they are not, they are automatically converted
If your data has missing values, leave the corresponding cells empty or write NA into them
Only level 1 variables are considered, i.e., manipulated attributes and the outcome(s)
Your data must include specific columns that contain specific data types:
The "respondent" column designates the respondent id and should be numeric or character
The "round" column designates the first (1) and replication (2) round - only use 1 and 2
The "profile" column designates the respective profile numbers, e.g., 1 to x
The "dv" column contains the dependent variable. Only one dependent variable can be considered at a time
There must be at least two attributes and the attributes must be named consecutively with "att_1", "att_2" to "att_x"
"round", "profile", "dv", and all attributes, must be numeric
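To illustrate the required layout, the following R snippet builds a small hypothetical data set (two respondents, two rounds, four profiles, two attributes) and writes it to a .csv file ready for upload; every value is made up:

```r
# Hypothetical long-format data matching the app's column requirements;
# all values here are invented for illustration only
demo <- data.frame(
  respondent = rep(1:2, each = 8),                 # respondent id
  round      = rep(rep(1:2, each = 4), times = 2), # 1 = first, 2 = replication
  profile    = rep(1:4, times = 4),                # profile number
  att_1      = rep(c(1, 1, 2, 2), times = 4),      # manipulated attribute 1
  att_2      = rep(c(1, 2, 1, 2), times = 4),      # manipulated attribute 2
  dv         = c(5, 3, 6, 4, 5, 2, 6, 4,
                 7, 4, 5, 3, 6, 4, 5, 3)           # dependent variable
)

# Comma-delimited .csv, empty cells for missing values
write.csv(demo, "demo_conjoint.csv", row.names = FALSE, na = "")
```

Each respondent thus contributes one row per profile per round, with identical profiles repeated across the two rounds.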
The app checks whether your data meets these requirements
While the app tries to fix some issues, e.g., coercing data to numeric and enforcing lower case, it can't handle all eventualities
Should you run into issues, it is best to stick to the guidelines and inspect the provided demo data sets
As soon as the server ends your session, your data will be deleted automatically, but you can also delete it manually by using the reset button