Model Diagnostics Overview
Why Model Diagnostics Matter
Model diagnostics are essential for ensuring that your Marketing Mix Model is reliable and accurate and that it delivers trustworthy insights for business decisions. Running diagnostic tests validates the statistical assumptions underlying your model and identifies potential issues that could compromise your results.
Purpose: Ensure models meet minimum quality thresholds before use in optimization and forecasting.
What Diagnostics Test
Model diagnostics examine whether your regression model satisfies key statistical assumptions:
Statistical Validity: Tests confirm that your model's coefficients, confidence intervals, and p-values are trustworthy and that the model structure is appropriate for your data.
Data Quality: Diagnostics identify outliers, influential observations, and data issues that might distort your results.
Model Reliability: Tests ensure predictions are accurate and the model generalizes well beyond the training data.
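The checks described below all operate on a fitted regression's residuals and fitted values. As a point of reference, here is a minimal, illustrative sketch using the open-source statsmodels library, with hypothetical column and file names (sales, tv_spend, search_spend, mmm_data.csv); it produces those quantities and is not MixModeler's internal implementation.

```python
# Minimal illustrative sketch: fit an OLS regression so its residuals and
# fitted values are available for the diagnostic checks described below.
# Column names (sales, tv_spend, search_spend) and the file name are
# hypothetical placeholders, not MixModeler fields.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("mmm_data.csv")                       # hypothetical weekly data
X = sm.add_constant(df[["tv_spend", "search_spend"]])  # predictors + intercept
model = sm.OLS(df["sales"], X).fit()

residuals = model.resid        # used by normality and autocorrelation checks
fitted = model.fittedvalues    # used by heteroscedasticity and fit checks
print(model.summary())
```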
Available Diagnostic Tests
MixModeler provides six diagnostic test categories, summarized below; an illustrative code sketch of equivalent statistical checks follows the list:
Residual Normality
- What it checks: Whether model errors follow a normal distribution
- Why it matters: Validates confidence intervals and hypothesis tests

Autocorrelation
- What it checks: Whether residuals are correlated over time
- Why it matters: Ensures standard errors are reliable

Heteroscedasticity
- What it checks: Whether error variance is constant
- Why it matters: Confirms prediction intervals are accurate

Multicollinearity
- What it checks: Whether predictor variables are highly correlated
- Why it matters: Ensures coefficient estimates are stable

Influential Points
- What it checks: Whether specific observations disproportionately affect results
- Why it matters: Identifies data quality issues and outliers

Actual vs Predicted
- What it checks: How well the model fits the data
- Why it matters: Evaluates overall prediction accuracy
Test Result Indicators
MixModeler uses a clear visual system to communicate test results:
Green Checkmark (✓): Test passed - no issues detected
Red Warning (⚠): Test failed - potential issue found
P-values: Lower values indicate stronger evidence of issues
- * p < 0.10 (marginally significant)
- ** p < 0.05 (significant)
- *** p < 0.01 (highly significant)
Test Statistics: Higher values generally indicate more severe issues (varies by test)
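As a small, hypothetical illustration, the mapping from a test's p-value to these markers can be written as a simple helper:

```python
# Hypothetical helper mirroring the significance markers above: maps a
# test p-value to the *, **, *** labels used in the result summaries.
def significance_marker(p_value: float) -> str:
    if p_value < 0.01:
        return "***"   # highly significant evidence of an issue
    if p_value < 0.05:
        return "**"    # significant
    if p_value < 0.10:
        return "*"     # marginally significant
    return ""          # no flag: little evidence of an issue

print(significance_marker(0.03))  # -> "**"
```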
Typical Workflow
Follow this workflow to run comprehensive model diagnostics:
- Select Model: Choose your fitted model from the dropdown in Model Diagnostics 
- Continue to Diagnostics: Click the button to see available test options 
- Select Tests: Choose which tests to run (all are selected by default) 
- Run Tests: Click "Run Selected Tests" to execute diagnostics 
- Review Summary: Check the summary cards for each test, each showing a pass/fail indicator (a rough programmatic analogue of this review is sketched after these steps) 
- View Details: Click "View Details" on any test for in-depth analysis and visualizations 
- Download Reports: Generate PDF reports for detailed documentation 
- Address Issues: Fix any problems identified before using the model for business decisions 
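For teams that also sanity-check models outside the UI, here is a rough, self-contained analogue of the summary-review step: given each test's p-value (placeholder numbers below), apply the conventional 0.05 cutoff and print a pass/fail line per test. It is purely illustrative and does not call MixModeler's API.

```python
# Rough analogue of the summary-card review: placeholder p-values per test,
# flagged at the conventional 0.05 cutoff.
test_pvalues = {
    "Residual Normality": 0.42,   # placeholder values for illustration only
    "Heteroscedasticity": 0.03,
    "Influential Points": 0.18,
}

for name, p in test_pvalues.items():
    status = "passed (✓)" if p >= 0.05 else "flagged (⚠)"
    print(f"{name:<20} p = {p:.2f}  ->  {status}")
```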
When to Run Diagnostics
After Every Model Fit: Always run diagnostics when you create a new model or modify an existing one.
Before Business Use: Never use a model for optimization, forecasting, or budget allocation without first validating it through diagnostics.
After Adding Variables: Rerun diagnostics whenever you add or remove variables from your model.
Periodic Checks: Revalidate models periodically, especially if using them over extended time periods.
What to Do When Tests Fail
Not all test failures are critical. Here's how to prioritize; a sketch expressing these thresholds as simple checks follows the lists:
Critical Issues (Address Immediately):
- Multicollinearity with VIF > 10 
- Many influential points or outliers 
- R² < 0.5 (poor model fit) 
Moderate Issues (Monitor Carefully):
- Autocorrelation (Durbin-Watson outside 1.5-2.5 range) 
- Mild heteroscedasticity 
- Multicollinearity with VIF 5-10 
Minor Issues (Often Acceptable in MMM):
- Slight non-normality of residuals 
- A few isolated outliers 
- Mild violations in large datasets 
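As noted above, this triage can be expressed as simple threshold checks. The sketch below is illustrative; the function name and inputs (maximum VIF, Durbin-Watson statistic, R²) are hypothetical and would normally come from your own diagnostic run.

```python
# Illustrative triage of diagnostic results using the thresholds listed above.
def triage(max_vif: float, durbin_watson: float, r_squared: float) -> list[str]:
    issues = []
    if max_vif > 10:
        issues.append("CRITICAL: severe multicollinearity (VIF > 10)")
    elif max_vif > 5:
        issues.append("MODERATE: multicollinearity (VIF 5-10)")
    if r_squared < 0.5:
        issues.append("CRITICAL: poor model fit (R² < 0.5)")
    if not 1.5 <= durbin_watson <= 2.5:
        issues.append("MODERATE: autocorrelation (DW outside 1.5-2.5)")
    return issues

# Placeholder inputs for illustration
print(triage(max_vif=7.2, durbin_watson=1.3, r_squared=0.68))
```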
Interpretation Guidelines
Model Quality Benchmarks (a short sketch computing the underlying fit metrics follows this list):
- R² > 0.7: Good fit 
- R² 0.5-0.7: Moderate fit, room for improvement 
- R² < 0.5: Needs significant work 
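To make these benchmarks concrete, here is a minimal sketch computing R², together with the companion RMSE and MAE metrics covered under Actual vs Predicted, from placeholder actual and predicted values.

```python
# Minimal sketch: compute R², RMSE, and MAE from actual vs predicted values.
# The arrays are placeholder data for illustration.
import numpy as np

actual = np.array([120.0, 135.0, 150.0, 160.0, 155.0])
predicted = np.array([118.0, 140.0, 148.0, 158.0, 150.0])

resid = actual - predicted
ss_res = np.sum(resid ** 2)
ss_tot = np.sum((actual - actual.mean()) ** 2)

r_squared = 1 - ss_res / ss_tot
rmse = np.sqrt(np.mean(resid ** 2))
mae = np.mean(np.abs(resid))

print(f"R² = {r_squared:.2f}, RMSE = {rmse:.2f}, MAE = {mae:.2f}")
```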
Business Context Matters: Some statistical violations may be acceptable depending on your use case and the practical significance of the results. Focus on business validity alongside statistical validity.
Diagnostic Tools Available: MixModeler provides interactive visualizations, statistical test results, and downloadable PDF reports to help you interpret and communicate diagnostic findings.
Next Steps
Explore the following sections for detailed information on each diagnostic test:
- Residual Normality: Learn about the Jarque-Bera and Shapiro-Wilk tests and Q-Q plots 
- Autocorrelation: Understand Durbin-Watson and Breusch-Godfrey tests 
- Heteroscedasticity: Explore Breusch-Pagan and White tests 
- Multicollinearity: Master VIF interpretation and correlation analysis 
- Influential Points: Identify outliers using Cook's Distance and leverage 
- Actual vs Predicted: Evaluate model fit with R², RMSE, and MAE metrics 