Performance Optimization

Overview

Optimizing your MixModeler workflow saves time and enables faster iteration. This guide covers strategies to maximize efficiency from data preparation through final insights delivery.

Hardware Optimization

For Best Performance:

  • RAM: 16 GB minimum, 32 GB recommended

  • CPU: Modern quad-core (3.0+ GHz)

  • GPU: Dedicated GPU (NVIDIA/AMD) for 10-100x speedup

  • Browser: Chrome or Edge (latest version)

  • Internet: 5+ Mbps for initial load (then works offline)

Budget Option (Still Works):

  • RAM: 8 GB

  • CPU: Dual-core (2.5+ GHz)

  • GPU: Integrated graphics (WebAssembly/WASM acceleration only)

  • Browser: Any modern browser

  • Internet: 1+ Mbps

GPU Acceleration

Enable GPU Processing:

  1. Use Chrome or Edge (best GPU support)

  2. Update graphics drivers

  3. Enable hardware acceleration in browser settings

  4. Verify GPU badge appears in MixModeler toolbar

Expected Speedups:

  • Small models (30 vars): 5-10x faster

  • Medium models (100 vars): 15-30x faster

  • Large models (300+ vars): 30-100x faster

If No GPU:

  • WASM acceleration still provides 5-10x speedup

  • All features work, just slower

  • Consider a cloud GPU instance for large models

Browser Settings

Chrome/Edge Optimization:

  1. Enable Hardware Acceleration:

    • Settings → System → Use hardware acceleration

    • Toggle ON

    • Restart browser

  2. Allow Sufficient Memory:

    • Close unnecessary tabs

    • Disable memory-intensive extensions

    • Restart browser periodically

  3. Enable WebGPU (if available):

    • chrome://flags → Search "WebGPU"

    • Enable "Unsafe WebGPU"

    • Restart browser

Data Preparation Optimization

Minimize Pre-Processing

In Excel:

  • Keep one clean sheet (delete others)

  • Remove unnecessary columns before upload

  • Pre-calculate any complex formulas

  • Save as .xlsx (not .xls or .xlsm)

Benefits:

  • Faster upload

  • Reduced file size

  • Cleaner data validation

Optimal Data Size

Sweet Spot:

  • Variables: 20-100 (most projects)

  • Observations: 52-156 weeks

  • File size: <5 MB

If Larger Dataset:

  • Remove unused variables

  • Aggregate to weekly if data is daily (see the sketch after this list)

  • Focus on relevant time period

  • Split into multiple projects if needed
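If your source data is daily, aggregating to weekly before upload takes one line in pandas. A minimal sketch, assuming an .xlsx file with a date column and numeric spend/KPI columns (file and column names are hypothetical):

```python
import pandas as pd

# Hypothetical daily export: a "date" column plus numeric spend/KPI columns
df = pd.read_excel("daily_data.xlsx", parse_dates=["date"]).set_index("date")

# Sum spends into weekly buckets (weeks ending Sunday); use .mean() instead
# for rate-like variables such as price or distribution
weekly = df.resample("W-SUN").sum()

weekly.to_excel("weekly_data.xlsx")  # smaller file, faster upload
```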

Performance Impact:

Dataset Size       | Upload Time | Model Fitting
30 vars × 52 obs   | <1 sec      | 0.5-1 sec
100 vars × 104 obs | 1-2 sec     | 1-3 sec
300 vars × 260 obs | 3-5 sec     | 5-15 sec

Workflow Efficiency

Start Simple, Scale Up

Phase 1: Quick Baseline (30 minutes)

  1. Upload data

  2. Select top 5-10 variables

  3. Build simple OLS model

  4. Quick diagnostic check

Phase 2: Refinement (2-3 hours)

  1. Add control variables

  2. Test transformations

  3. Run full diagnostics

  4. Compare 3-5 model specifications

Phase 3: Finalization (1-2 hours)

  1. Optimize best model

  2. Generate decomposition

  3. Create reports

  4. Prepare presentation

Benefits:

  • Early feedback on data quality

  • Catch issues before deep work

  • Iterative improvement

  • Faster to useful insights

Use OLS Before Bayesian

Speed Comparison:

  • OLS model: 0.5-3 seconds

  • Bayesian model: 60-600 seconds (1-10 minutes)

Strategy:

  1. Build and refine with OLS (fast iteration)

  2. Run diagnostics on OLS version

  3. Optimize transformations with OLS

  4. Only switch to Bayesian when OLS model finalized

  5. Run Bayesian once for final uncertainty quantification

Time Saved: Hours during model development
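For intuition about why OLS iteration is so cheap: the model is a single call in any stats library. A minimal sketch with statsmodels (file and column names are hypothetical):

```python
import pandas as pd
import statsmodels.api as sm

df = pd.read_excel("weekly_data.xlsx")                 # hypothetical file
X = sm.add_constant(df[["tv_spend", "search_spend"]])  # hypothetical columns
model = sm.OLS(df["sales"], X).fit()                   # sub-second at this size

print(model.rsquared_adj)   # quick fit check
print(model.params)         # sanity-check coefficient signs
```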

Batch Operations

Variable Workshop:

  • Select multiple variables (checkbox)

  • Apply same transformation to all

  • Set group assignments in bulk

  • Faster than one-by-one

Variable Testing:

  • Test multiple adstock rates simultaneously (see the sketch below)

  • Run all diagnostic tests at once

  • Compare multiple models side-by-side
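Conceptually, testing several adstock rates at once is a grid search over the geometric adstock transform, where each period carries forward a fraction of the previous period's accumulated effect. A minimal sketch (toy data and rate grid are illustrative, not MixModeler internals):

```python
import numpy as np

def geometric_adstock(spend: np.ndarray, rate: float) -> np.ndarray:
    """Each period keeps `rate` of the previous period's accumulated effect."""
    out = np.empty_like(spend, dtype=float)
    carry = 0.0
    for t, x in enumerate(spend):
        carry = x + rate * carry
        out[t] = carry
    return out

spend = np.array([100.0, 0, 0, 50, 0, 0])   # toy weekly spend
for rate in (0.1, 0.3, 0.5, 0.7):           # candidate decay rates
    transformed = geometric_adstock(spend, rate)
    # In practice: fit/correlate each against the KPI and keep the best rate
    print(rate, transformed.round(1))
```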

Model Library:

  • Clone models instead of rebuilding

  • Export multiple models at once

  • Batch delete old iterations

Model Building Efficiency

Reuse and Clone

Clone Existing Models:

  1. Find similar past model in library

  2. Click "Clone"

  3. Modify variables/settings

  4. Fit new model

Benefits:

  • Preserves transformations

  • Copies decomposition groups

  • Saves variable selection time

  • Maintains best practices

Use Cases:

  • Test different time periods

  • Add or remove a few variables

  • Compare OLS vs Bayesian

  • Try different adstock rates

Variable Testing First

Before Adding to Model:

  1. Go to Variable Testing

  2. Test variable significance

  3. Optimize adstock rate

  4. Check VIF with existing variables
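The VIF check in step 4 is the standard variance inflation factor. A minimal sketch with statsmodels, using hypothetical names for the existing variables and the candidate:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_excel("weekly_data.xlsx")                 # hypothetical file
cols = ["tv_spend", "search_spend", "new_candidate"]   # existing vars + candidate
X = sm.add_constant(df[cols])

vifs = {col: variance_inflation_factor(X.values, i)
        for i, col in enumerate(X.columns) if col != "const"}
print(vifs)  # keep the candidate only if its VIF stays below ~5
```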

Add to Model Only If:

  • Significant in testing

  • Low VIF (<5)

  • Makes business sense

  • Improves model fit

Saves Time:

  • Avoid adding useless variables

  • Reduce model iterations

  • Faster to final model

Incremental Building

One Variable at a Time:

  • Add variable

  • Check impact on R², coefficients

  • Run quick diagnostics

  • Keep or discard

  • Document decision
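As a sketch, this loop amounts to forward selection on adjusted R² (file and column names are hypothetical; in practice you would also check coefficient signs and diagnostics before keeping a variable):

```python
import pandas as pd
import statsmodels.api as sm

df = pd.read_excel("weekly_data.xlsx")    # hypothetical file
y = df["sales"]
kept = ["tv_spend"]                       # baseline model
candidates = ["search_spend", "promo_flag", "radio_spend"]

for var in candidates:
    base = sm.OLS(y, sm.add_constant(df[kept])).fit()
    trial = sm.OLS(y, sm.add_constant(df[kept + [var]])).fit()
    if trial.rsquared_adj > base.rsquared_adj:
        kept.append(var)                  # keep and document the decision
    print(var, round(base.rsquared_adj, 3), "->", round(trial.rsquared_adj, 3))
```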

Faster Than:

  • Adding 10 variables at once

  • Debugging which ones caused problems

  • Removing one-by-one to find culprit

Diagnostic Shortcuts

Quick Check (30 seconds):

  • Run only VIF test

  • Check R² and Adjusted R²

  • Review coefficient signs

Full Validation (2-3 minutes):

  • Run all diagnostic tests

  • Only on serious model candidates

  • Not every iteration

When to Skip Diagnostics:

  • Very exploratory models

  • Testing single variable impact

  • Preliminary comparisons

Always Run Before:

  • Final model selection

  • Presenting to stakeholders

  • Making business decisions

Bayesian Model Optimization

Fast Inference Mode

For Rapid Exploration:

  • Enable "Fast Inference" toggle

  • Uses SVI (stochastic variational inference) instead of MCMC

  • 10-20x faster than standard Bayesian

  • Approximate but good for iteration

Workflow:

  1. Explore with Fast Inference

  2. Compare multiple specifications

  3. Select best model

  4. Run full MCMC on final choice only

Time Saved: 80-90% during exploration
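Fast Inference itself is just a toggle; for intuition only, here is what an SVI-style shortcut looks like in PyMC, where variational fitting replaces MCMC sampling on a toy regression (an illustration, not MixModeler's actual implementation):

```python
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
X = rng.normal(size=(104, 3))   # toy design: 104 weeks, 3 variables
y = X @ np.array([0.5, 0.2, -0.1]) + rng.normal(scale=0.3, size=104)

with pm.Model():
    beta = pm.Normal("beta", 0, 1, shape=3)
    sigma = pm.HalfNormal("sigma", 1)
    pm.Normal("y", mu=pm.math.dot(X, beta), sigma=sigma, observed=y)

    approx = pm.fit(n=20_000, method="advi")   # SVI: seconds, not minutes
    idata = approx.sample(1_000)               # approximate posterior draws
```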

MCMC Settings Optimization

For Quick Testing:

  • Chains: 2 (instead of 4)

  • Draws: 1,000 (instead of 2,000)

  • Tune: 500 (instead of 1,000)

Runtime: 50-60% faster

For Final Model:

  • Chains: 4

  • Draws: 2,000-3,000

  • Tune: 1,000-1,500

Check Convergence: Always verify R-hat <1.01
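In PyMC terms, the quick-versus-final trade-off maps directly onto sampler arguments, and the R-hat check comes from ArviZ. A minimal sketch on a toy model:

```python
import arviz as az
import numpy as np
import pymc as pm

y_obs = np.random.default_rng(1).normal(0.3, 1.0, size=104)   # toy weekly KPI

with pm.Model():
    mu = pm.Normal("mu", 0, 1)
    pm.Normal("y", mu=mu, sigma=1, observed=y_obs)

    idata_quick = pm.sample(chains=2, draws=1_000, tune=500)    # quick testing
    idata_final = pm.sample(chains=4, draws=2_000, tune=1_000)  # final model

# Every parameter's R-hat should sit below 1.01 before you trust the results
print(az.summary(idata_final)["r_hat"].max())
```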

Prior Optimization

Start with Defaults:

  • Weakly informative priors

  • Let data drive estimates

  • Fast convergence

Use Informative Priors When:

  • Small dataset (n <75)

  • Convergence issues

  • Strong prior knowledge

Avoid:

  • Over-specifying priors unnecessarily

  • Testing many prior configurations

Use priors strategically, not exhaustively.
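A minimal sketch of the default-versus-informative distinction in PyMC (the numbers are purely illustrative, not recommendations):

```python
import pymc as pm

with pm.Model():
    # Weakly informative default: wide enough that the data dominates
    pm.Normal("beta_search", mu=0, sigma=1)

    # Informative prior: e.g. prior studies suggest a small positive TV effect
    pm.Normal("beta_tv", mu=0.05, sigma=0.02)
```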

Large Dataset Strategies

Variable Reduction

If 200+ Variables:

Strategy 1: Pre-screen

  • Calculate correlation with KPI

  • Keep top 50 by correlation

  • Add back theoretically important variables
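A pre-screen like this takes a few lines of pandas. A sketch with hypothetical file, KPI, and must-keep names:

```python
import pandas as pd

df = pd.read_excel("weekly_data.xlsx")   # hypothetical file
y = df["sales"]                          # the KPI
X = df.select_dtypes("number").drop(columns=["sales"])

# Rank variables by absolute correlation with the KPI and keep the top 50
keep = X.corrwith(y).abs().nlargest(50).index.tolist()

# Add back theoretically important variables the screen may have dropped
keep = sorted(set(keep) | {"brand_tv_spend"})   # hypothetical must-keep
```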

Strategy 2: Group and Aggregate

  • Combine similar channels

  • Create category totals

  • Reduce from 200 to 30-50 key variables

Strategy 3: Staged Modeling

  • Model by category (Digital, TV, Print separately)

  • Identify top performers in each

  • Build combined model with winners

Benefits: 5-10x faster processing

Chunking Time Periods

For Very Long Time Series (300+ weeks):

Approach 1: Rolling Window

  • Model most recent 104 weeks

  • Update quarterly with new data

  • Drop oldest data

Approach 2: Split Sample

  • Build separate models per year

  • Compare coefficients over time

  • Identify structural changes

Approach 3: Monthly Aggregation

  • Aggregate weekly to monthly

  • Reduces observations by 75%

  • Loses some granularity but is much faster
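Both the rolling-window and monthly-aggregation approaches are one-liners in pandas. A sketch assuming a weekly .xlsx file with a date column (names are hypothetical):

```python
import pandas as pd

df = pd.read_excel("long_series.xlsx", parse_dates=["date"]).set_index("date")

# Approach 1 (rolling window): keep only the most recent 104 weeks
recent = df.sort_index().iloc[-104:]

# Approach 3 (monthly aggregation): roughly 75% fewer observations
monthly = df.resample("MS").sum()   # sum spends; use .mean() for rates/prices
```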

Memory Management

If Running Out of Memory:

  1. Close Other Applications

    • Especially browser tabs

    • Memory-intensive programs

    • Background processes

  2. Restart Browser

    • Clears memory leaks

    • Fresh start

    • Do every few hours of heavy use

  3. Reduce Model Size

    • Fewer variables

    • Shorter time period

    • Simplify transformations

  4. Use Faster Hardware

    • More RAM (16-32 GB)

    • Faster CPU

    • SSD instead of HDD

Reporting Optimization

Template Creation

Build Once, Reuse:

Excel Template:

  1. Export best model

  2. Add executive summary sheet

  3. Format beautifully

  4. Save as template

  5. Reuse for future models (replace data)

PowerPoint Template:

  1. Create slide deck structure

  2. Add placeholder charts

  3. Save as template

  4. Update with new model results

Benefits: 70-80% less time on reporting

Automated Exports

Batch Export:

  1. Export model to Excel (includes all data)

  2. Generate decomposition (included in export)

  3. Create PDF diagnostics for key tests

  4. Package together for stakeholders

One-Click Package:

  • Model specifications

  • Diagnostic validation

  • Attribution insights

  • Ready to share

Visualization Reuse

Save Chart Configurations:

  • Screenshot favorite views

  • Document chart settings

  • Recreate quickly in new models

Export Charts:

  • Right-click → Save Image

  • Insert into presentations

  • No need to recreate externally

Collaboration Efficiency

Model Sharing

Export-Import Workflow:

Analyst A:

  1. Build model

  2. Export to Excel

  3. Share file with Analyst B

Analyst B:

  1. Upload same data

  2. Import model from Excel

  3. Modify and iterate

  4. Export and share back

Use Cases:

  • Team collaboration

  • Peer review

  • Client deliverables

  • Version control

Documentation Standards

Consistent Naming:

ProjectName_ModelType_Version_Date.xlsx

Examples:

  • Q4_Campaign_OLS_v1_20250104.xlsx

  • Annual_Bayesian_Final_20250115.xlsx

Benefits:

  • Easy to find files

  • Clear version history

  • Team alignment
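If you script your exports, the naming convention is easy to generate consistently. A hypothetical helper (not a built-in MixModeler feature):

```python
from datetime import date

def export_name(project: str, model_type: str, version: str) -> str:
    """Build ProjectName_ModelType_Version_Date.xlsx filenames."""
    return f"{project}_{model_type}_{version}_{date.today():%Y%m%d}.xlsx"

print(export_name("Q4_Campaign", "OLS", "v1"))  # Q4_Campaign_OLS_v1_<today>.xlsx
```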

Model Notes:

  • Include README in exports

  • Document key decisions

  • List any data issues

  • Note model limitations

Performance Monitoring

Track Your Speed

Benchmark Standard Operations:

Task                   | Target Time | Your Time
Data upload (100 vars) | <3 sec      | ___
OLS model fit          | <2 sec      | ___
Full diagnostics       | <5 sec      | ___
Bayesian MCMC          | <5 min      | ___
Decomposition          | <3 sec      | ___

If Slower Than Target:

  • Check that GPU acceleration is active

  • Close other applications

  • Restart browser

  • Consider hardware upgrade

Identify Bottlenecks

Common Slowdowns:

  1. Data Upload (>5 sec):

    • File too large

    • Too many unnecessary variables

    • Network issues

  2. Model Fitting (>10 sec OLS):

    • Too many variables

    • Large dataset

    • No acceleration active

  3. Diagnostics (>10 sec):

    • Complex tests

    • Large models

    • CPU bottleneck

  4. Bayesian (>15 min):

    • Too many draws

    • Convergence issues

    • Settings too conservative

Solutions: See respective sections above

Workflow Automation

Keyboard Shortcuts

Learn Common Actions:

  • Ctrl/Cmd + S: Save model

  • Ctrl/Cmd + C/V: Copy/paste

  • Click + Drag: Select multiple variables

  • Shift + Click: Select range

Saves: Seconds per action, hours over a project

Browser Bookmarks

Bookmark Key Pages:

  • Data Upload

  • Model Builder

  • Model Library

  • Variable Workshop

Quick Navigation: Click a bookmark instead of navigating through menus

Workspace Organization

Single Monitor:

  • Use browser tabs efficiently

  • Keep MixModeler in dedicated window

  • Excel/PowerPoint in separate window

Dual Monitor:

  • MixModeler on primary monitor

  • Excel/analysis on secondary monitor

  • Drag-and-drop between screens

Best Practices Summary

Hardware:

  • Use GPU acceleration when available

  • 16+ GB RAM for large models

  • Keep browser updated

  • Restart periodically

Workflow:

  • Start simple, add complexity

  • Use OLS before Bayesian

  • Clone models to save time

  • Test variables before adding

Data:

  • Clean data thoroughly upfront

  • Minimize file size

  • Remove unnecessary variables

  • Use optimal granularity

Modeling:

  • Incremental building (one variable at a time)

  • Quick diagnostics for iterations

  • Full validation for finals

  • Fast Inference for Bayesian exploration

Reporting:

  • Create templates

  • Batch export

  • Reuse visualizations

  • Document consistently

Time Allocation (for a typical project):

  • Data prep: 30% of time

  • Model building: 40% of time

  • Validation: 15% of time

  • Reporting: 15% of time

Optimize Each Phase: Following these practices can reduce total project time by 40-60%


Congratulations! You've completed the Best Practices section. Apply these techniques to build better models faster and deliver more impactful insights to stakeholders.
