Information

When to use standard versus moderator regression techniques for the most parsimonious approach?


I know how to run multiple regressions (SMR, HMR, and MMR) in SPSS, but I'm still a little vague on when it's appropriate to use each of them to achieve the best and most parsimonious result. (SPSS is the statistical software; SMR is standard multiple regression, HMR is hierarchical multiple regression, and MMR is moderated multiple regression. SMR and SPSS are further explained in this video, hierarchical multiple regression in this video, and moderated multiple regression in this blog post.)

For example, in a case where I want to see the impact of social anxiety and sense of humour on life satisfaction, I hypothesise that both social anxiety and sense of humour will be individually associated with life satisfaction.

When controlling for age, gender and anxiety, participants high in sense of humour will have higher life satisfaction.

Also, when excluding variables of age and gender, I predict that the negative association between anxiety and life satisfaction will be stronger for those with low sense of humour ratings.

I feel that for most of these questions, the answers could be found using an SMR, and the last question, with an MMR, but is there a simpler and more parsimonious way to do this? Can I gather the data for the first few questions from the MMR analysis, for example?


You write:

When controlling for age, gender and anxiety, participants high in sense of humour will have higher life satisfaction.

This is standard regression. Predictors that are control variables are treated the same as any other predictors; their inclusion in the model is mostly there to facilitate a desired interpretation of a focal predictor.

Also, when excluding variables of age and gender, I predict that the negative association between anxiety and life satisfaction will be stronger for those with low sense of humour ratings.

Here you have described a moderation hypothesis. i.e., in regression terms, you might express it like this:

lifesat ~ anxiety * humour

However, technically, you have not assigned anxiety or life satisfaction a particular status as the outcome variable. Whether you "exclude age and gender" is irrelevant to the status of the hypothesis as a moderation hypothesis.

You ask specifically:

I feel that for most of these questions, the answers could be found using an SMR, and the last question, with an MMR, but is there a simpler and more parsimonious way to do this? Can I gather the data for the first few questions from the MMR analysis, for example?

Don't be afraid to run several regression models. Ultimately, a moderator regression is simply a model with an interaction term. Adding predictors generally alters the regression coefficients for other predictors (an exception would be for orthogonal variables, such as in experiments).
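To make that concrete, here is a minimal sketch in R (the data frame dat and its column names are hypothetical, chosen to mirror the variables in the question): the "standard" model and the moderated model are just two lm() calls, and anova() compares nested models.

# Hypothetical data frame 'dat' with columns lifesat, anxiety, humour, age, gender

# Standard multiple regression: humour, controlling for anxiety, age, and gender
m_controls <- lm(lifesat ~ humour + anxiety + age + gender, data = dat)

# Moderated multiple regression: main effects plus the anxiety x humour interaction
m_main     <- lm(lifesat ~ anxiety + humour, data = dat)
m_moderate <- lm(lifesat ~ anxiety * humour, data = dat)  # expands to anxiety + humour + anxiety:humour

summary(m_controls)        # tests the "controlling for" hypothesis
summary(m_moderate)        # the anxiety:humour coefficient tests the moderation hypothesis
anova(m_main, m_moderate)  # does adding the interaction improve model fit?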


Confusing Statistical Term #4: Hierarchical Regression vs. Hierarchical Model

This one is relatively simple. Very similar names for two totally different concepts.

Hierarchical Models (aka Hierarchical Linear Models or HLM) are a type of linear regression model in which the observations fall into hierarchical, or completely nested, levels.

Hierarchical Models are a type of Multilevel Models.

So what is a hierarchical data structure, which requires a hierarchical model?

The classic example is data from children nested within schools. The dependent variable could be something like math scores, and the predictors a whole host of things measured about the child and the school.

Child-level predictors could be things like GPA, grade, and gender. School-level predictors could be things like: total enrollment, private vs. public, mean SES.

Because multiple children are measured from the same school, their measurements are not independent. Hierarchical modeling takes that into account.
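As an illustrative sketch (not the article's own example), a two-level model for children nested within schools could be fitted in R with the lme4 package; the data frame pupils and the variable names below are made up:

library(lme4)

# Hypothetical data: one row per child, with a 'school' ID column.
# The school-level random intercept accounts for the non-independence of
# children measured within the same school.
fit <- lmer(math_score ~ gpa + grade + gender +        # child-level predictors
              enrollment + private + mean_ses +        # school-level predictors
              (1 | school),                             # school-level random intercept
            data = pupils)

summary(fit)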

Hierarchical regression, in contrast, is a model-building technique that can be used with any regression model. It is the practice of building successive linear regression models, each adding more predictors.

For example, one common practice is to start by adding only demographic control variables to the model. In the next model, you can add predictors of interest, to see if they predict the DV above and beyond the effect of the controls.

You’re actually building separate but related models in each step. But SPSS has a nice function where it will compare the models, and actually test if successive models fit better than previous ones.

So hierarchical regression is really a series of regular old OLS regression models–nothing fancy, really.


Moderation in Management Research: What, Why, When, and How

Many theories in management, psychology, and other disciplines rely on moderating variables: those which affect the strength or nature of the relationship between two other variables. Despite the near-ubiquitous nature of such effects, the methods for testing and interpreting them are not always well understood. This article introduces the concept of moderation and describes how moderator effects are tested and interpreted for a series of model types, beginning with straightforward two-way interactions with Normal outcomes, moving to three-way and curvilinear interactions, and then to models with non-Normal outcomes including binary logistic regression and Poisson regression. In particular, methods of interpreting and probing these latter model types, such as simple slope analysis and slope difference tests, are described. It then gives answers to twelve frequently asked questions about testing and interpreting moderator effects.



Assumptions of Multiple Linear Regression

Multiple linear regression analysis makes several key assumptions:

There must be a linear relationship between the outcome variable and the independent variables. Scatterplots can show whether there is a linear or curvilinear relationship.

Multivariate Normality–Multiple regression assumes that the residuals are normally distributed.

No Multicollinearity—Multiple regression assumes that the independent variables are not highly correlated with each other. This assumption is tested using Variance Inflation Factor (VIF) values.


Homoscedasticity–This assumption states that the variance of the error terms is similar across the values of the independent variables. A plot of standardized residuals versus predicted values can show whether points are equally distributed across all values of the independent variables.

Multiple linear regression requires at least two independent variables, which can be nominal, ordinal, or interval/ratio level variables. A rule of thumb for the sample size is that regression analysis requires at least 20 cases per independent variable in the analysis.

Multiple Linear Regression Assumptions

First, multiple linear regression requires the relationship between the independent and dependent variables to be linear. The linearity assumption can best be tested with scatterplots. The following two examples depict a curvilinear relationship (left) and a linear relationship (right).

Second, the multiple linear regression analysis requires that the errors between observed and predicted values (i.e., the residuals of the regression) should be normally distributed. This assumption may be checked by looking at a histogram or a Q-Q-Plot. Normality can also be checked with a goodness of fit test (e.g., the Kolmogorov-Smirnov test), though this test must be conducted on the residuals themselves.

Third, multiple linear regression assumes that there is no multicollinearity in the data. Multicollinearity occurs when the independent variables are too highly correlated with each other.

Multicollinearity may be checked multiple ways:

1) Correlation matrix – When computing a matrix of Pearson’s bivariate correlations among all independent variables, the magnitude of the correlation coefficients should be less than .80.

2) Variance Inflation Factor (VIF) – The VIFs of the linear regression indicate the degree to which the variances of the regression estimates are inflated due to multicollinearity. VIF values higher than 10 indicate that multicollinearity is a problem.

If multicollinearity is found in the data, one possible solution is to center the data. To center the data, subtract the mean score from each observation for each independent variable. However, the simplest solution is to identify the variables causing multicollinearity issues (i.e., through correlations or VIF values) and to remove those variables from the regression.
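As a rough illustration of the two checks above in R (assuming a hypothetical data frame dat with predictors x1, x2, x3, and the car package for VIFs):

library(car)  # provides vif()

model <- lm(y ~ x1 + x2 + x3, data = dat)   # hypothetical model

cor(dat[, c("x1", "x2", "x3")])   # 1) correlation matrix: look for |r| >= .80
vif(model)                        # 2) VIFs: values above 10 suggest multicollinearity

dat$x1_centered <- dat$x1 - mean(dat$x1)    # centering: subtract the mean from each observation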

The last assumption of multiple linear regression is homoscedasticity. A scatterplot of residuals versus predicted values is a good way to check for homoscedasticity. There should be no clear pattern in the distribution; if there is a cone-shaped pattern, the data are heteroscedastic.

If the data are heteroscedastic, a non-linear data transformation or addition of a quadratic term might fix the problem.
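A minimal sketch of these residual checks in R, assuming a fitted lm object called model and the lmtest package for a formal Breusch-Pagan test:

library(lmtest)  # provides bptest()

# Residuals vs predicted values: a cone or fan shape suggests heteroscedasticity
plot(fitted(model), rstandard(model),
     xlab = "Predicted values", ylab = "Standardized residuals")
abline(h = 0, lty = 2)

# Normality of residuals
hist(resid(model))
qqnorm(resid(model)); qqline(resid(model))

# Formal test of constant error variance (Breusch-Pagan)
bptest(model)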




Abstract

We used a novel meta-regression analysis approach to examine the effectiveness of psychological skills training and behavioral interventions in sport assessed using single-case experimental designs (SCEDs). One hundred and twenty-one papers met the inclusion criteria applied to eight database searches and key sport psychology journals. Seventy-one studies reported sufficient detail for effect sizes to be calculated for the effects of psychological skills training on psychological, behavioral, and performance variables. The unconditional mean effect size for the weighted (Δ = 2.40) and unweighted (Δ = 2.83) models suggested large improvements in psychological, behavioral, and performance outcomes associated with implementing cognitive-behavioral psychological skills training and behavioral interventions with a SCED. However, meta-regression analysis revealed important heterogeneities and sources of bias within this literature. First, studies using a group-based approach reported lower effect sizes compared to studies using single-case approaches. Second, the single-case studies (over 90 per cent of the effect sizes) revealed upwardly biased effect sizes arising from: (i) positive publication bias, such that studies using lower numbers of baseline observations reported larger effects, while studies using larger numbers of baseline observations reported smaller, but still substantial, effects; (ii) not adopting a multiple-baseline design; and (iii) not establishing procedural reliability. We recommend that future researchers using SCEDs should consider these methodological issues.


Introduction

Mediation refers to the covariance relationships among three variables: an independent variable (1), an assumed mediating variable (2), and a dependent variable (3). Mediation analysis investigates whether the mediating variable accounts for a significant amount of the shared variance between the independent and the dependent variables–the mediator changes in regard to the independent variable, in turn, affecting the dependent one [1], [2]. On the other hand, moderation refers to the examination of the statistical interaction between independent variables in predicting a dependent variable [1], [3]. In contrast to the mediator, the moderator is not expected to be correlated with both the independent and the dependent variable–Baron and Kenny [1] actually recommend that it is best if the moderator is not correlated with the independent variable and if the moderator is relatively stable, like a demographic variable (e.g., gender, socio-economic status) or a personality trait (e.g., affectivity).

Although both types of analysis lead to different conclusions [3] and the distinction between the statistical procedures is part of the current literature [2], there is still confusion about the use of moderation and mediation analyses using data pertaining to the prediction of depression. There are, for example, contradictions among studies that investigate the mediating and moderating effects of anxiety, stress, self-esteem, and affect on depression. Depression, anxiety, and stress are suggested to influence individuals' social relations and activities, work, and studies, as well as to compromise decision-making and coping strategies [4], [5], [6]. Successfully coping with anxiety, depressiveness, and stressful situations may contribute to high levels of self-esteem and self-confidence, in addition to increasing well-being and psychological and physical health [6]. Thus, it is important to disentangle how these variables are related to each other. However, while some researchers perform mediation analysis with some of the variables mentioned here, other researchers conduct moderation analysis with the same variables. Seldom are both moderation and mediation performed on the same dataset. Before disentangling mediation and moderation effects on depression in the current literature, we briefly present the methodology behind the analysis performed in this study.

Mediation and moderation

Baron and Kenny [1] postulated several criteria for the analysis of a mediating effect: there must be a significant correlation between the independent and the dependent variable; the independent variable must be significantly associated with the mediator; the mediator must predict the dependent variable even when the independent variable is controlled for; and the correlation between the independent and the dependent variable must be eliminated or reduced when the mediator is controlled for. All the criteria are then tested using the Sobel test, which shows whether the indirect effects are significant or not [1], [7]. A complete mediating effect occurs when the correlation between the independent and the dependent variable is eliminated when the mediator is controlled for [8]. Analyses of mediation can, for example, help researchers to move beyond answering whether high levels of stress lead to high levels of depression. With mediation analysis researchers might instead answer how stress is related to depression.

In contrast to mediation, moderation investigates the unique conditions under which two variables are related [3]. The third variable here, the moderator, is not an intermediate variable in the causal sequence from the independent to the dependent variable. For the analysis of moderation effects, the relation between the independent and dependent variable must be different at different levels of the moderator [3]. Moderators are included in the statistical analysis as an interaction term [1]. When analyzing moderating effects, the variables should first be centered (i.e., rescaled so that the mean becomes 0 and the standard deviation becomes 1) in order to avoid problems with multicollinearity [8]. Moderating effects can be calculated using multiple hierarchical linear regressions whereby main effects are entered in the first step and interactions in the second step [1]. Analysis of moderation, for example, helps researchers to answer when or under which conditions stress is related to depression.
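As a sketch of what such a moderation analysis might look like in R (the data frame and variable names are placeholders, not those of the present study):

# Center the predictors before forming the interaction term
dat$stress_c <- as.numeric(scale(dat$stress, center = TRUE, scale = FALSE))
dat$esteem_c <- as.numeric(scale(dat$selfesteem, center = TRUE, scale = FALSE))

# Step 1: main effects only; Step 2: add the interaction (the moderation term)
m1 <- lm(depression ~ stress_c + esteem_c, data = dat)
m2 <- lm(depression ~ stress_c * esteem_c, data = dat)

summary(m2)    # the stress_c:esteem_c coefficient is the moderation effect
anova(m1, m2)  # does the interaction improve model fit?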

Mediation and moderation effects on depression

Cognitive vulnerability models suggest that maladaptive self-schema mirroring helplessness and low self-esteem explain the development and maintenance of depression (for a review see [9]). These cognitive vulnerability factors become activated by negative life events or negative moods [10] and are suggested to interact with environmental stressors to increase risk for depression and other emotional disorders [11], [10]. In this line of thinking, the experience of stress, low self-esteem, and negative emotions can cause depression, but also be used to explain how (i.e., mediation) and under which conditions (i.e., moderation) specific variables influence depression.

Using mediational analyses to investigate how cognitive therapy interventions reduced depression, researchers have shown that the intervention reduced anxiety, which in turn was responsible for 91% of the reduction in depression [12]. In the same study, reductions in depression produced by the intervention accounted for only 6% of the reduction in anxiety. Thus, anxiety seems to affect depression more than depression affects anxiety and, together with stress, is both a cause of and a powerful mediator influencing depression (see also [13]). Indeed, there are positive relationships between depression, anxiety, and stress in different cultures [14]. Moreover, while some studies show that stress (independent variable) increases anxiety (mediator), which in turn increases depression (dependent variable) [14], other studies show that stress (moderator) interacts with maladaptive self-schemata (independent variable) to increase depression (dependent variable) [15], [16].

The present study

In order to illustrate how mediation and moderation can be used to address different research questions, we first focus our attention on anxiety and stress as mediators of different variables that have previously been shown to be related to depression. Second, we use all the variables to determine which of them moderate effects on depression.

The specific aims of the present study were:

  1. To investigate if anxiety mediated the effect of stress, self-esteem, and affect on depression.
  2. To investigate if stress mediated the effects of anxiety, self-esteem, and affect on depression.
  3. To examine moderation effects between anxiety, stress, self-esteem, and affect on depression.

Introduction to Mediation Analysis

Let’s say previous studies have suggested that higher grades predict higher happiness: X (grades) → Y (happiness). (This research example is made up for illustration purposes. Please don’t consider it a scientific statement.)

I think, however, grades are not the real reason that happiness increases. I hypothesize that good grades boost one’s self-esteem and then high self-esteem boosts one’s happiness: X (grades) → M (self-esteem) → Y (happiness).

This is a typical case of mediation analysis. Self-esteem is a mediator that explains the underlying mechanism of the relationship between grades (IV) and happiness (DV).

How to analyze mediation effects?

Before we start, please keep in mind that, as any other regression analysis, mediation analysis does not imply causal relationships unless it is based on experimental design.

To analyze mediation:
1. Follow Baron & Kenny’s steps
2. Use either the Sobel test or bootstrapping for significance testing.

The following shows the basic steps for mediation analysis suggested by Baron & Kenny (1986). A mediation analysis comprises three sets of regression: X → Y, X → M, and X + M → Y. This post will show examples using R, but you can use any statistical software. They are just three regression analyses!
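Using the grades/self-esteem/happiness example, the three regressions might be sketched in R as follows (the data frame dat and its column names are hypothetical):

# Step 1: total effect of X on Y           (coefficient b1)
fit_total    <- lm(happiness ~ grades, data = dat)

# Step 2: effect of X on the mediator M    (coefficient b2)
fit_mediator <- lm(selfesteem ~ grades, data = dat)

# Step 3: Y regressed on both X and M      (b4 for X, b3 for M)
fit_outcome  <- lm(happiness ~ grades + selfesteem, data = dat)

summary(fit_total); summary(fit_mediator); summary(fit_outcome)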

Is \(b_1\) significant? We want X to affect Y. If there is no relationship between X and Y, there is nothing to mediate.

Although this is what Baron and Kenny originally suggested, this step is controversial. Even if we don’t find a significant association between X and Y, we could move forward to the next step if we have a good theoretical background about their relationship. See Shrout & Bolger (2002) for details.

Is \(b_2\) significant? We want X to affect M. If X and M have no relationship, M is just a third variable that may or may not be associated with Y. A mediation makes sense only if X affects M.

Is \(b_4\) non-significant or smaller than before? We want M to affect Y, but X to no longer affect Y (or X to still affect Y but with a smaller magnitude). If a mediation effect exists, the effect of X on Y will disappear (or at least weaken) when M is included in the regression. The effect of X on Y goes through M.

If the effect of X on Y completely disappears, M fully mediates between X and Y (full mediation). If the effect of X on Y still exists, but in a smaller magnitude, M partially mediates between X and Y (partial mediation). The example shows a full mediation, yet a full mediation rarely happens in practice.

Once we find these relationships, we want to see if this mediation effect is statistically significant (different from zero or not). There are two main approaches to doing so: the Sobel test (Sobel, 1982) and bootstrapping (Preacher & Hayes, 2004). In R, you can use sobel() in the 'multilevel' package for the Sobel test and mediate() in the 'mediation' package for bootstrapping. Because bootstrapping has been strongly recommended in recent years (although the Sobel test was widely used before), I'll show only the bootstrapping method in this example.

mediate() takes two model objects as input (X → M and X + M → Y), and we need to specify which variable is the IV (treatment) and which is the mediator (mediator). For bootstrapping, set boot = TRUE and sims to at least 500. After running it, look for ACME (Average Causal Mediation Effects) in the results and see if it's different from zero. For details of mediate(), please refer to Tingley, Yamamoto, Hirose, Keele, & Imai (2014).
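Continuing the hypothetical sketch above, a bootstrapped test of the indirect effect could look like this (model and variable names carried over from the earlier sketch):

library(mediation)

med_out <- mediate(fit_mediator, fit_outcome,
                   treat = "grades", mediator = "selfesteem",
                   boot = TRUE, sims = 500)

summary(med_out)  # look for ACME (the indirect effect) and its confidence interval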

Note that the Total Effect in the summary (0.3961) is \(b_1\) in the first step: the total effect of X on Y (without M). The direct effect (ADE, 0.0396) is \(b_4\) in the third step: the direct effect of X on Y after taking into account the mediation (indirect) effect of M. Finally, the mediation effect (ACME) is the total effect minus the direct effect (\(b_1 - b_4\), or 0.3961 - 0.0396 = 0.3565), which equals the product of the coefficient of X in the second step and the coefficient of M in the last step (\(b_2 \times b_3\), or 0.56102 * 0.6355 = 0.3565). The goal of mediation analysis is to obtain this indirect effect and see if it is statistically significant.

By the way, we don’t have to follow all three steps as Baron and Kenny suggested. We could simply run two regressions (X → M and X + M → Y) and test its significance using the two models. However, the suggested steps help you understand how it works!

Mediation analysis is not limited to linear regression; we can use logistic regression, polynomial regression, and more. Also, we can add more variables and relationships, for example moderated mediation or mediated moderation. However, if your model is very complex and cannot be expressed as a small set of regressions, you might want to consider structural equation modeling instead.

To sum up, here’s a flowchart for mediation analysis!

  • Baron, R. M., & Kenny, D. A. (1986). The moderator–mediator variable distinction in social psychological research: Conceptual, strategic, and statistical considerations. Journal of Personality and Social Psychology, 51, 1173-1182.
  • Shrout, P. E., & Bolger, N. (2002). Mediation in experimental and nonexperimental studies: new procedures and recommendations. Psychological Methods, 7, 422-445.
  • Tingley, D., Yamamoto, T., Hirose, K., Keele, L., & Imai, K. (2014). mediation: R package for causal mediation analysis. Journal of Statistical Software, 59(5), 1-38.



Bommae Kim
Statistical Consulting Associate
University of Virginia Library
April 18, 2016 (published)
July 12, 2016 (typos in flowchart corrected)


What are multilevel models and why should I use them?

Many kinds of data, including observational data collected in the human and biological sciences, have a hierarchical or clustered structure. For example, children with the same parents tend to be more alike in their physical and mental characteristics than individuals chosen at random from the population at large. Individuals may be further nested within geographical areas or institutions such as schools or employers. Multilevel data structures also arise in longitudinal studies where an individual's responses over time are correlated with each other.

Multilevel models recognise the existence of such data hierarchies by allowing for residual components at each level in the hierarchy. For example, a two-level model which allows for grouping of child outcomes within schools would include residuals at the child and school level. Thus the residual variance is partitioned into a between-school component (the variance of the school-level residuals) and a within-school component (the variance of the child-level residuals). The school residuals, often called 'school effects', represent unobserved school characteristics that affect child outcomes. It is these unobserved variables which lead to correlation between outcomes for children from the same school.

Multilevel models can also be fitted to non-hierarchical structures. For instance, children might be nested within a cross-classification of neighbourhoods of residence and schools.
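In the formula notation of R's lme4 package, the nested and cross-classified cases differ only in the random-effects terms; the sketch below uses made-up variable names:

library(lme4)

# Two-level hierarchy: children nested within schools
m_nested  <- lmer(outcome ~ 1 + (1 | school), data = children)

# Cross-classification: children belong to both a school and a neighbourhood
m_crossed <- lmer(outcome ~ 1 + (1 | school) + (1 | neighbourhood), data = children)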


Use of Regression Analysis in Business

Regression analysis can be very helpful for business and below we have discussed some of the main uses.

Predictive Analytics

It helps in determining future risks and opportunities, and it is the most common application of regression analysis in business.

Increase Efficiency

Regression can help you optimize business processes, because it supports data-driven decisions that remove guesswork, corporate politics, and hunches from decision making.

Support Decisions

Nowadays businesses are overloaded with financial, purchasing, and other company data, and it is difficult to extract useful information from it all. Regression analysis can help turn that raw data into actionable information.


Hierarchical Linear Regression

Hierarchical regression is a way to show whether variables of interest explain a statistically significant amount of variance in your dependent variable (DV) after accounting for all other variables. It is a framework for model comparison rather than a statistical method. In this framework, you build several regression models by adding variables to a previous model at each step; later models always include the smaller models from previous steps. In many cases, our interest is in determining whether newly added variables show a significant improvement in \(R^2\) (the proportion of variance in the DV explained by the model).

Let’s say we’re interested in the relationships of social interaction and happiness. In this line of research, the number of friends has been a known predictor in addition to demographic characteristics. However, we’d like to investigate if the number of pets could be an important predictor for happiness.

The first model (Model 1) typically includes demographic information such as age, gender, ethnicity, and education. In the next step (Model 2), we could add known important variables in this line of research. Here we would replicate previous research in this subject matter. In the following step (Model 3), we could add the variables that we’re interested in.

Model 1: Happiness = Intercept + Age + Gender (\(R^2\) = .029)
Model 2: Happiness = Intercept + Age + Gender + # of friends (\(R^2\) = .131)
Model 3: Happiness = Intercept + Age + Gender + # of friends + # of pets (\(R^2\) = .197, \(\Delta R^2\) = .066)

Our interest is whether Model 3 explains the DV better than Model 2. If the difference in \(R^2\) between Models 2 and 3 is statistically significant, we can say the added variables in Model 3 explain the DV above and beyond the variables in Model 2. In this example, we'd like to know whether the increase in \(R^2\) of .066 (.197 – .131 = .066) is statistically significant. If so, we can say that the number of pets explains an additional 6.6% of the variance in happiness, and that this increase is statistically significant.

There are many different ways to examine research questions using hierarchical regression. We can add multiple variables at each step. We can have only two models or more than three models depending on research questions. We can run regressions on multiple different DVs and compare the results for each DV.

Depending on statistical software, we can run hierarchical regression with one click (SPSS) or do it manually step-by-step (R). Regardless, it’s good to understand how this works conceptually.

  1. Build sequential (nested) regression models by adding variables at each step.
  2. Run ANOVAs (to compute \(R^2\)) and regressions (to obtain coefficients).
  3. Compare sums of squares between models from the ANOVA results:
    1. Compute the difference in sum of squares (\(SS\)) at each step.
    2. Find the corresponding F-statistics and p-values for the \(SS\) differences.
    3. Compute the increase in \(R^2\) from each \(SS\) difference: \(\Delta R^2 = \frac{SS_{\text{Difference}}}{SS_{\text{Total}}}\)

In R, we can find sums of squares and the corresponding F-statistics and p-values using anova(). When we use anova() with a single model, it shows the analysis of variance for each variable. However, when we use anova() with multiple models, it performs model comparisons. Either way, to use anova(), we need to run the linear regressions first.

After the regressions are run (obtaining lm objects), anova() is run with the lm objects. When we regress the DV on an intercept without predictors (m0 in this example), the anova() results show the Total \(SS\).
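The commands behind such results might look like the following sketch (the data frame dat and the exact variable names are assumptions, chosen to mirror the happiness example above):

m0 <- lm(happiness ~ 1,                             data = dat)  # intercept only
m1 <- lm(happiness ~ age + gender,                  data = dat)  # Model 1
m2 <- lm(happiness ~ age + gender + friends,        data = dat)  # Model 2
m3 <- lm(happiness ~ age + gender + friends + pets, data = dat)  # Model 3

anova(m0)              # ANOVA of the intercept-only model shows the Total SS
anova(m0, m1, m2, m3)  # sequential model comparisons with F-tests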

Total \(SS\) is 240.84. We will use this value to compute the \(R^2\)s later. Next, compare the \(SS\) of the three models that we have built.

Model 0: \(SS_{\text{Residual}}\) = 240.84 (no predictors)
Model 1: \(SS_{\text{Residual}}\) = 233.97 (after adding age and gender)
Model 2: \(SS_{\text{Residual}}\) = 209.27, \(SS_{\text{Difference}}\) = 24.696, \(F\)(1, 96) = 12.1293, \(p\) = 0.0007521 (after adding friends)
Model 3: \(SS_{\text{Residual}}\) = 193.42, \(SS_{\text{Difference}}\) = 15.846, \(F\)(1, 95) = 7.7828, \(p\) = 0.0063739 (after adding pets)

By adding friends, the model accounts for an additional \(SS\) of 24.696, and this was a statistically significant change according to the corresponding F-statistic and p-value. The \(R^2\) increased by .103 (24.6957 / 240.84 = 0.1025399) in Model 2. By adding pets, the model accounts for an additional \(SS\) of 15.846, which was again statistically significant. The \(R^2\) increased by .066 (15.8461 / 240.84 = 0.06579513) in Model 3.

summary() of an lm object shows the coefficients of the variables in each model.
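For instance, continuing the sketch above, the coefficient table and the \(R^2\) of each model can be pulled from summary():

summary(m3)            # coefficient table for the full model

summary(m1)$r.squared  # R-squared for Model 1
summary(m2)$r.squared  # R-squared for Model 2
summary(m3)$r.squared  # R-squared for Model 3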

Aside from the coefficients of the variables, let's take a look at the \(R^2\)s of Models 1, 2, and 3, which are 0.02855, 0.1311, and 0.1969 respectively. The \(R^2\) changes computed using the anova() results correspond to the differences in \(R^2\)s in the lm() results for each model: 0.1311 – 0.02855 = 0.10255 for Model 2 and 0.1969 – 0.1311 = 0.0658 for Model 3 (with rounding errors). Although we can compute \(R^2\) differences between models using lm() results, lm() results don't provide the F-statistics and p-values corresponding to an increase in \(R^2\). And it's important to remember that adding variables always increases \(R^2\), whether or not it actually explains additional variation in the DV. That's why it's crucial to perform F-tests rather than just relying on the difference in \(R^2\) between models.

What to report as the results?

It is common to report the coefficients of all variables in each model and the differences in \(R^2\) between models. In research articles, the results are typically presented in tables as below. Note that the second example (Lankau & Scandura, 2002) had multiple DVs and ran hierarchical regressions for each DV.



Bommae Kim
Statistical Consulting Associate
University of Virginia Library
May 20, 2016