Pre-Testing Variables

Evaluate Variable Performance Before Model Addition

What is Pre-Testing?

Pre-testing evaluates how variables would perform in your model WITHOUT actually adding them. It answers: "If I add this variable, what happens?"

Key benefit: Risk-free evaluation of candidates

How Pre-Testing Works

The Process

  1. Select variables to test

  2. System temporarily adds each to model

  3. Runs regression with variable included

  4. Calculates performance metrics

  5. Removes variable (model unchanged)

  6. Returns results for comparison

Result: Performance preview without commitment
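
Conceptually, each pre-test is one extra regression fit that is thrown away afterwards. The sketch below is a minimal illustration of that loop, not the platform's actual implementation; it assumes a pandas DataFrame df holding the dependent variable, the baseline predictors, and the candidate columns, and uses statsmodels for the OLS fit.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

def pretest(df, y_col, baseline_cols, candidates):
    """Fit the baseline plus one candidate at a time, record metrics, discard the fit."""
    y = df[y_col]
    base_r2 = sm.OLS(y, sm.add_constant(df[baseline_cols])).fit().rsquared
    rows = []
    for var in candidates:
        X = sm.add_constant(df[baseline_cols + [var]])
        fit = sm.OLS(y, X).fit()
        rows.append({
            "variable": var,
            "coefficient": fit.params[var],
            "t_stat": fit.tvalues[var],
            "p_value": fit.pvalues[var],
            "vif": variance_inflation_factor(X.values, X.columns.get_loc(var)),
            "r2_increase": fit.rsquared - base_r2,
        })
    # The baseline model itself is never modified; only the metrics survive.
    return pd.DataFrame(rows).sort_values("t_stat", ascending=False)
```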

Pre-Testing Workflow

Step 1: Preparation

Start from a stable baseline model:

  • Working model with core variables

  • Passing diagnostics

  • Reasonable R²

Identify candidates:

  • Variables you're considering

  • New marketing channels

  • Control factors

  • Transformations to test

Step 2: Selection

In Variable Testing interface:

  1. Select baseline model

  2. Check variables to pre-test

  3. Configure adstock if applicable

  4. Click "Test Variables"

Step 3: Analysis

Review results table:

  • Sorted by T-statistic

  • Shows coefficient, significance, VIF

  • R² increase for each

Identify winners:

  • T-stat > 2.0

  • P-value < 0.05

  • Correct sign

  • VIF < 10
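
Assuming the results arrive as a table shaped like the one returned by the earlier sketch (columns t_stat, p_value, vif, r2_increase; the names are illustrative), the statistical filter takes a few lines of pandas. The sign check is applied separately against the business expectations described further down the page.

```python
import pandas as pd

def identify_winners(results: pd.DataFrame) -> pd.DataFrame:
    """Keep candidates that clear the statistical thresholds, best first."""
    keep = (
        (results["t_stat"] > 2.0)
        & (results["p_value"] < 0.05)
        & (results["vif"] < 10)
        & (results["r2_increase"] > 0.01)
    )
    return results[keep].sort_values("t_stat", ascending=False)
```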

Step 4: Decision

Add winners to model:

  • Navigate to Model Builder

  • Add top 2-3 performers

  • Check diagnostics

  • Validate results

What to Look For

Statistical Criteria

Significance:

  • T-statistic > 2.0

  • P-value < 0.05

  • Significant at the 95% confidence level

Model Improvement:

  • R² increase > 0.01 (one percentage point)

  • Meaningful contribution

  • Worth the complexity

No Multicollinearity:

  • VIF < 10

  • Not redundant

  • Adds unique information
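
To reproduce the VIF check outside the tool, statsmodels provides variance_inflation_factor; a brief sketch, taking the candidate model's predictors as an ordinary DataFrame:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

def vif_table(predictors: pd.DataFrame) -> pd.Series:
    """VIF per predictor; values above 10 flag problematic multicollinearity."""
    X = sm.add_constant(predictors)
    return pd.Series({
        col: variance_inflation_factor(X.values, X.columns.get_loc(col))
        for col in predictors.columns
    })
```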

Business Criteria

Correct Sign:

  • Marketing spend: Positive

  • Price: Usually negative

  • Matches expectations

Reasonable Magnitude:

  • Coefficient makes business sense

  • Neither too large nor too small

  • Validated against domain knowledge

Relevance:

  • Variable will remain available going forward

  • Data quality acceptable

  • Actionable insight
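
The sign check is straightforward to automate once the expectations are written down. A small sketch; the sign map is purely illustrative and should come from your own category knowledge:

```python
# Illustrative expectations only; replace with your own channel and control list.
EXPECTED_SIGNS = {
    "TV_Spend": +1,      # marketing spend should lift sales
    "Radio_Spend": +1,
    "Price": -1,         # higher price usually depresses volume
}

def sign_is_correct(variable: str, coefficient: float) -> bool:
    """True when the estimated coefficient points in the expected direction."""
    return coefficient * EXPECTED_SIGNS.get(variable, +1) > 0
```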

Pre-Testing Strategies

Compare Similar Variables

Scenario: You have both TV_GRPs and TV_Spend

Test both:

  • See which performs better

  • Check VIF (the two are likely correlated)

  • Choose one, not both

Decision: Keep the one with the higher T-stat and lower VIF

Find Optimal Transformation

Test multiple versions:

  • Raw variable

  • With adstock

  • With saturation curve

  • With both

Decision: Use best-performing transformation
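
The exact transformation formulas are tool-specific. Purely to illustrate how the candidate versions are generated before pre-testing, the sketch below assumes a geometric adstock and an arctangent saturation curve, both common choices in media modelling:

```python
import numpy as np

def geometric_adstock(x: np.ndarray, rate: float) -> np.ndarray:
    """Carry over a fraction `rate` of the previous period's adstocked value."""
    out = np.zeros_like(x, dtype=float)
    carry = 0.0
    for t, value in enumerate(x):
        carry = value + rate * carry
        out[t] = carry
    return out

def atan_saturation(x: np.ndarray, half_saturation: float) -> np.ndarray:
    """Diminishing-returns curve; `half_saturation` sets where it bends."""
    return np.arctan(x / half_saturation)

def candidate_versions(x: np.ndarray, rate: float = 0.7, half_sat: float = 1.0) -> dict:
    """Raw, adstocked, saturated, and adstocked-then-saturated versions of one series."""
    adstocked = geometric_adstock(x, rate)
    return {
        "raw": x,
        "adstock": adstocked,
        "saturation": atan_saturation(x, half_sat),
        "adstock_saturation": atan_saturation(adstocked, half_sat),
    }
```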

Screen Many Candidates

Test 10-20 variables:

  • Rank by performance

  • Identify top 3-5

  • Add only winners

Efficiency: A single pre-testing pass replaces rounds of trial-and-error

Category Evaluation

Test all from one category:

  • All digital channels

  • All control variables

  • All seasonality indicators

Decision: Add top performers from each category

Common Pre-Testing Scenarios

New Variable Evaluation

Question: Should I add Radio_Spend?

Pre-test:

  • Radio_Spend with multiple adstock rates

  • Check significance and R² increase

Decision criteria:

  • If t > 2.0 and R² improves by around +0.02: Add

  • If t < 1.5 and R² improves by only +0.001: Skip
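
One way to run that sweep outside the interface, assuming an illustrative DataFrame with Sales and Radio_Spend columns and a geometric adstock (not necessarily the tool's exact formula):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def adstock(x: np.ndarray, rate: float) -> np.ndarray:
    out, carry = np.zeros(len(x)), 0.0
    for t, value in enumerate(x):
        carry = value + rate * carry
        out[t] = carry
    return out

def sweep_radio_adstock(df, baseline_cols, rates=(0.0, 0.3, 0.5, 0.7, 0.9)):
    """Pre-test Radio_Spend at several adstock rates against the same baseline."""
    y = df["Sales"]
    base_r2 = sm.OLS(y, sm.add_constant(df[baseline_cols])).fit().rsquared
    rows = []
    for rate in rates:
        X = df[baseline_cols].copy()
        X["Radio_adstock"] = adstock(df["Radio_Spend"].to_numpy(), rate)
        fit = sm.OLS(y, sm.add_constant(X)).fit()
        rows.append({
            "adstock_rate": rate,
            "t_stat": fit.tvalues["Radio_adstock"],
            "r2_increase": fit.rsquared - base_r2,
        })
    return pd.DataFrame(rows).sort_values("t_stat", ascending=False)
```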

Transformation Comparison

Question: Raw or transformed?

Test:

  • TV_Spend (raw)

  • TV_Spend_adstock_70

  • TV_Spend_adstock_70|ICP_ATAN

Compare:

  • Statistical significance

  • R² improvement

  • Business interpretability

Choose: Best combination of fit and interpretability

Multicollinearity Resolution

Problem: Multiple correlated variables

Pre-test all:

  • Check individual VIF

  • Compare T-statistics

  • Review R² contributions

Solution: Keep best, drop others
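
When several candidates measure the same underlying activity, a pairwise-correlation check alongside the pre-test metrics usually settles which one to keep. A minimal sketch, assuming a results table shaped like the earlier one:

```python
import pandas as pd

def keep_best_uncorrelated(df: pd.DataFrame, candidates: list,
                           results: pd.DataFrame, threshold: float = 0.8) -> list:
    """Walk candidates from best T-stat down, skipping any that are highly
    correlated with a variable already kept."""
    corr = df[candidates].corr().abs()
    ranked = results.set_index("variable").loc[candidates].sort_values(
        "t_stat", ascending=False)
    kept = []
    for var in ranked.index:
        if all(corr.loc[var, other] < threshold for other in kept):
            kept.append(var)
    return kept
```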

Interpreting Pre-Test Results

Good Candidate

✅ T-stat: 3.5 (highly significant)

✅ P-value: 0.001 (< 0.05)

✅ VIF: 2.3 (< 5, excellent)

✅ R² increase: +0.03 (meaningful)

✅ Coefficient: +450 (positive, reasonable)

Action: Strong candidate, add to model

Marginal Candidate

⚠️ T-stat: 2.1 (barely significant)

⚠️ P-value: 0.04 (< 0.05 but close)

⚠️ VIF: 7.8 (< 10 but moderate)

⚠️ R² increase: +0.008 (small)

⚠️ Coefficient: +120 (positive)

Action: Consider if theoretically important, otherwise skip

Poor Candidate

❌ T-stat: 0.8 (not significant)

❌ P-value: 0.42 (> 0.05)

❌ VIF: 15 (> 10, multicollinear)

❌ R² increase: +0.002 (negligible)

❌ Coefficient: -30 (wrong sign)

Action: Do not add
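
The three profiles above translate into a simple triage rule. The sketch below reuses the thresholds quoted in this section; they are working conventions, not universal constants:

```python
def triage(t_stat: float, p_value: float, vif: float,
           r2_increase: float, sign_ok: bool) -> str:
    """Classify a pre-tested candidate as 'add', 'consider', or 'skip'."""
    significant = sign_ok and t_stat > 2.0 and p_value < 0.05
    if significant and vif < 5 and r2_increase >= 0.01:
        return "add"        # clear winner
    if significant and vif < 10:
        return "consider"   # marginal: add only if theoretically important
    return "skip"
```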

Best Practices

Pre-Testing Principles

Test before committing:

  • Never add without pre-testing

  • Systematic > trial-and-error

  • Data-driven decisions

Test in context:

  • Use realistic baseline model

  • Test against actual model you'll use

  • Consider existing variables

Test multiple options:

  • Don't stop at first decent result

  • Compare alternatives

  • Find truly best option

What NOT to Do

❌ Add variable because "it might help"

❌ Skip pre-testing to save time

❌ Add all variables with t > 1.5

❌ Ignore VIF warnings

❌ Add variables with wrong signs

✅ Pre-test systematically

✅ Use statistical AND business criteria

✅ Add only clear winners

✅ Check multicollinearity

✅ Validate coefficient signs

Integration with Workflow

Phase 1: Baseline Model

Build core model without pre-testing:

  • Obvious variables (main media channels)

  • Add directly in Model Builder

  • Establish foundation

Phase 2: Expansion via Pre-Testing

Use Variable Testing to grow model:

  • Pre-test 10-20 candidates

  • Identify top 3-5

  • Add winners to Model Builder

  • Validate with diagnostics

Phase 3: Optimization

Fine-tune using pre-testing:

  • Test transformed versions

  • Optimize adstock rates

  • Compare specifications

  • Select best configuration

Phase 4: Final Validation

After pre-testing and addition:

  • Run full diagnostics

  • Check decomposition

  • Validate business logic

  • Export final model

Key Takeaways

  • Pre-testing evaluates variables without changing model

  • Enables risk-free comparison of many candidates

  • Look for t > 2.0, p < 0.05, VIF < 10, and an R² increase of at least 0.01

  • Use to compare transformations and optimize adstock

  • Test systematically before adding to model

  • Add only variables that pass both statistical and business criteria

  • Combine pre-testing with Model Builder for efficient development
