Performance Optimization
Overview
Optimizing your MixModeler workflow saves time and enables faster iteration. This guide covers strategies to maximize efficiency from data preparation through final insights delivery.
Hardware Optimization
Recommended Setup
For Best Performance:
- RAM: 16 GB minimum, 32 GB recommended 
- CPU: Modern quad-core (3.0+ GHz) 
- GPU: Dedicated GPU (NVIDIA/AMD) for 10-100x speedup 
- Browser: Chrome or Edge (latest version) 
- Internet: 5+ Mbps for initial load (then works offline) 
Budget Option (Still Works):
- RAM: 8 GB 
- CPU: Dual-core (2.5+ GHz) 
- GPU: Integrated graphics (WASM-only acceleration) 
- Browser: Any modern browser 
- Internet: 1+ Mbps 
GPU Acceleration
Enable GPU Processing:
- Use Chrome or Edge (best GPU support) 
- Update graphics drivers 
- Enable hardware acceleration in browser settings 
- Verify GPU badge appears in MixModeler toolbar 
Expected Speedups:
- Small models (30 vars): 5-10x faster 
- Medium models (100 vars): 15-30x faster 
- Large models (300+ vars): 30-100x faster 
If No GPU:
- WASM acceleration still provides 5-10x speedup 
- All features work, just slower 
- Consider cloud GPU instance for large models 
Browser Settings
Chrome/Edge Optimization:
- Enable Hardware Acceleration:
  - Settings → System → Use hardware acceleration
  - Toggle ON
  - Restart browser
- Allow Sufficient Memory:
  - Close unnecessary tabs
  - Disable memory-intensive extensions
  - Restart browser periodically
- Enable WebGPU (if available):
  - Go to chrome://flags and search "WebGPU"
  - Enable "Unsafe WebGPU"
  - Restart browser
Data Preparation Optimization
Minimize Pre-Processing
In Excel:
- Keep one clean sheet (delete others) 
- Remove unnecessary columns before upload 
- Pre-calculate any complex formulas 
- Save as .xlsx (not .xls or .xlsm) 
Benefits:
- Faster upload 
- Reduced file size 
- Cleaner data validation 
Optimal Data Size
Sweet Spot:
- Variables: 20-100 (most projects) 
- Observations: 52-156 weeks 
- File size: <5 MB 
If Larger Dataset:
- Remove unused variables 
- Aggregate to weekly if data is daily (see the pandas sketch below) 
- Focus on relevant time period 
- Split into multiple projects if needed 
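If your raw data is daily or carries extra columns, the same cleanup can be scripted before upload. A minimal pandas sketch is below; the file and column names are purely illustrative.

```python
import pandas as pd

# Hypothetical daily export with a date column plus spend/KPI columns.
df = pd.read_excel("daily_marketing_data.xlsx")                       # illustrative file name
df = df.drop(columns=["internal_notes", "raw_id"], errors="ignore")   # remove unused columns

# Aggregate daily rows to weekly totals.
df["date"] = pd.to_datetime(df["date"])
weekly = (
    df.set_index("date")
      .resample("W-MON")            # weeks starting Monday; match your reporting convention
      .sum(numeric_only=True)
      .reset_index()
)

# Keep only the relevant time period and save as .xlsx for upload.
weekly = weekly[weekly["date"] >= "2023-01-01"]
weekly.to_excel("weekly_marketing_data.xlsx", index=False, sheet_name="Data")
```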
Performance Impact:
| Dataset size | Typical upload | Typical OLS fit |
| --- | --- | --- |
| 30 vars × 52 obs | <1 sec | 0.5-1 sec |
| 100 vars × 104 obs | 1-2 sec | 1-3 sec |
| 300 vars × 260 obs | 3-5 sec | 5-15 sec |
Workflow Efficiency
Start Simple, Scale Up
Phase 1: Quick Baseline (30 minutes)
- Upload data 
- Select top 5-10 variables 
- Build simple OLS model 
- Quick diagnostic check 
Phase 2: Refinement (2-3 hours)
- Add control variables 
- Test transformations 
- Run full diagnostics 
- Compare 3-5 model specifications 
Phase 3: Finalization (1-2 hours)
- Optimize best model 
- Generate decomposition 
- Create reports 
- Prepare presentation 
Benefits:
- Early feedback on data quality 
- Catch issues before deep work 
- Iterative improvement 
- Faster to useful insights 
Use OLS Before Bayesian
Speed Comparison:
- OLS model: 0.5-3 seconds 
- Bayesian model: 60-600 seconds (2-10 minutes) 
Strategy:
- Build and refine with OLS (fast iteration) 
- Run diagnostics on OLS version 
- Optimize transformations with OLS 
- Only switch to Bayesian when OLS model finalized 
- Run Bayesian once for final uncertainty quantification 
Time Saved: Hours during model development
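The same fast-iteration idea, sketched outside MixModeler with statsmodels (file and column names are hypothetical): an OLS refit returns almost instantly, so every specification tweak gets immediate feedback before any Bayesian run.

```python
import pandas as pd
import statsmodels.api as sm

df = pd.read_excel("weekly_marketing_data.xlsx")     # illustrative file name

# Hypothetical KPI and media columns; OLS fits in well under a second.
y = df["sales"]
X = sm.add_constant(df[["tv_spend", "search_spend", "social_spend"]])

ols_fit = sm.OLS(y, X).fit()
print(ols_fit.summary())   # R², coefficients, and signs for a quick sanity check
```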
Batch Operations
Variable Workshop:
- Select multiple variables (checkbox) 
- Apply same transformation to all 
- Set group assignments in bulk 
- Faster than one-by-one 
Variable Testing:
- Test multiple adstock rates simultaneously (sketched in code after this list) 
- Run all diagnostic tests at once 
- Compare multiple models side-by-side 
Model Library:
- Clone models instead of rebuilding 
- Export multiple models at once 
- Batch delete old iterations 
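For the adstock testing mentioned above, here is a minimal Python sketch that tries several decay rates in one pass using a simple geometric adstock and an OLS fit. MixModeler's own adstock implementation and search may differ; file and column names are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def geometric_adstock(spend: np.ndarray, decay: float) -> np.ndarray:
    """Carry a fraction `decay` of each period's effect into the next period."""
    out = np.zeros_like(spend, dtype=float)
    carry = 0.0
    for t, x in enumerate(spend):
        carry = x + decay * carry
        out[t] = carry
    return out

df = pd.read_excel("weekly_marketing_data.xlsx")     # illustrative file name
y = df["sales"]                                      # hypothetical KPI column

# Try several decay rates at once and keep the best-fitting one.
fits = {}
for decay in [0.0, 0.3, 0.5, 0.7, 0.9]:
    adstocked = pd.Series(geometric_adstock(df["tv_spend"].to_numpy(), decay), name="tv_adstock")
    fits[decay] = sm.OLS(y, sm.add_constant(adstocked)).fit().rsquared

best = max(fits, key=fits.get)
print(f"best decay: {best}, R² = {fits[best]:.3f}")
```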
Model Building Efficiency
Reuse and Clone
Clone Existing Models:
- Find similar past model in library 
- Click "Clone" 
- Modify variables/settings 
- Fit new model 
Benefits:
- Preserves transformations 
- Copies decomposition groups 
- Saves variable selection time 
- Maintains best practices 
Use Cases:
- Test different time periods 
- Add/remove few variables 
- Compare OLS vs Bayesian 
- Try different adstock rates 
Variable Testing First
Before Adding to Model:
- Go to Variable Testing 
- Test variable significance 
- Optimize adstock rate 
- Check VIF with existing variables (see the sketch at the end of this section) 
Add to Model Only If:
- Significant in testing 
- Low VIF (<5) 
- Makes business sense 
- Improves model fit 
Saves Time:
- Avoid adding useless variables 
- Reduce model iterations 
- Faster to final model 
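A minimal sketch of the VIF pre-check with statsmodels, using hypothetical column names; keep a candidate only if its VIF comes in below about 5.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_excel("weekly_marketing_data.xlsx")            # illustrative file name
candidates = ["tv_spend", "search_spend", "social_spend"]   # hypothetical columns

X = sm.add_constant(df[candidates])
vif = pd.Series(
    [variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
    index=X.columns,
)
print(vif.drop("const"))   # flag anything above ~5 before adding it to the model
```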
Incremental Building
One Variable at a Time:
- Add variable 
- Check impact on R², coefficients 
- Run quick diagnostics 
- Keep or discard 
- Document decision 
Faster Than:
- Adding 10 variables at once 
- Debugging which ones caused problems 
- Removing one-by-one to find culprit 
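A sketch of the same one-variable-at-a-time loop in Python with statsmodels (variable names hypothetical); each step prints adjusted R², significance, and sign so the keep/discard decision is documented as you go.

```python
import pandas as pd
import statsmodels.api as sm

df = pd.read_excel("weekly_marketing_data.xlsx")     # illustrative file name
y = df["sales"]                                      # hypothetical KPI column
candidates = ["tv_spend", "search_spend", "social_spend", "promo_flag"]

kept = []
for var in candidates:
    trial = kept + [var]
    fit = sm.OLS(y, sm.add_constant(df[trial])).fit()
    sign = "+" if fit.params[var] > 0 else "-"
    print(f"{var}: adj. R² = {fit.rsquared_adj:.3f}, p = {fit.pvalues[var]:.3f}, sign = {sign}")
    if fit.pvalues[var] < 0.10:                      # keep only if significant and sensible
        kept.append(var)

print("final variable set:", kept)
```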
Diagnostic Shortcuts
Quick Check (30 seconds):
- Run only VIF test 
- Check R² and Adjusted R² 
- Review coefficient signs 
Full Validation (2-3 minutes):
- Run all diagnostic tests 
- Only on serious model candidates 
- Not every iteration 
When to Skip Diagnostics:
- Very exploratory models 
- Testing single variable impact 
- Preliminary comparisons 
Always Run Before:
- Final model selection 
- Presenting to stakeholders 
- Making business decisions 
Bayesian Model Optimization
Fast Inference Mode
For Rapid Exploration:
- Enable "Fast Inference" toggle 
- Uses SVI instead of MCMC 
- 10-20x faster than standard Bayesian 
- Approximate but good for iteration 
Workflow:
- Explore with Fast Inference 
- Compare multiple specifications 
- Select best model 
- Run full MCMC on final choice only 
Time Saved: 80-90% during exploration
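MixModeler describes Fast Inference as SVI rather than MCMC. For intuition, a PyMC-style sketch of that trade-off on toy data is below; this is an analogy for how variational inference trades accuracy for speed, not the tool's internal implementation.

```python
import numpy as np
import pymc as pm

# Toy data standing in for 104 weeks of KPI and three media variables (all hypothetical).
rng = np.random.default_rng(0)
X = rng.normal(size=(104, 3))
y = X @ np.array([0.8, 0.3, 0.1]) + rng.normal(scale=0.5, size=104)

with pm.Model():
    beta = pm.Normal("beta", 0, 1, shape=3)
    sigma = pm.HalfNormal("sigma", 1)
    pm.Normal("obs", mu=pm.math.dot(X, beta), sigma=sigma, observed=y)

    # Variational inference runs in seconds: approximate, but fine for comparing specifications.
    approx = pm.fit(n=20_000, method="advi")
    fast_draws = approx.sample(1_000)
```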
MCMC Settings Optimization
For Quick Testing:
- Chains: 2 (instead of 4) 
- Draws: 1,000 (instead of 2,000) 
- Tune: 500 (instead of 1,000) 
Runtime: 50-60% faster
For Final Model:
- Chains: 4 
- Draws: 2,000-3,000 
- Tune: 1,000-1,500 
Check Convergence: Always verify R-hat <1.01
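In PyMC-style code, the quick-versus-final settings above map roughly to the calls below (toy model for illustration); the last line is what "verify R-hat < 1.01" looks like in practice.

```python
import arviz as az
import numpy as np
import pymc as pm

rng = np.random.default_rng(1)
X = rng.normal(size=(104, 3))
y = X @ np.array([0.8, 0.3, 0.1]) + rng.normal(scale=0.5, size=104)

with pm.Model():
    beta = pm.Normal("beta", 0, 1, shape=3)
    sigma = pm.HalfNormal("sigma", 1)
    pm.Normal("obs", mu=pm.math.dot(X, beta), sigma=sigma, observed=y)

    quick_trace = pm.sample(draws=1_000, tune=500, chains=2)    # quick testing: ~half the runtime
    final_trace = pm.sample(draws=2_000, tune=1_000, chains=4)  # final model: full settings

# Convergence check: every parameter's R-hat should be below 1.01.
print(az.summary(final_trace, kind="diagnostics")["r_hat"].max())
```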
Prior Optimization
Start with Defaults:
- Weakly informative priors 
- Let data drive estimates 
- Fast convergence 
Use Informative Priors When:
- Small dataset (n <75) 
- Convergence issues 
- Strong prior knowledge 
Avoid:
- Over-specifying priors unnecessarily 
- Testing many prior configurations 
Use priors strategically, not exhaustively.
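For intuition, a tiny PyMC-style contrast between a weakly informative default and an informative prior (the numbers are illustrative, not recommendations):

```python
import pymc as pm

with pm.Model():
    # Weakly informative default: lets the data drive the estimate.
    beta_tv = pm.Normal("beta_tv", mu=0, sigma=1)

    # Informative prior for a small dataset: encodes a belief that the TV effect
    # is positive and moderate, which usually speeds up convergence.
    beta_tv_informed = pm.Normal("beta_tv_informed", mu=0.5, sigma=0.2)
```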
Large Dataset Strategies
Variable Reduction
If 200+ Variables:
Strategy 1: Pre-screen (pandas sketch at the end of this section)
- Calculate correlation with KPI 
- Keep top 50 by correlation 
- Add back theoretically important variables 
Strategy 2: Group and Aggregate
- Combine similar channels 
- Create category totals 
- Reduce from 200 to 30-50 key variables 
Strategy 3: Staged Modeling
- Model by category (Digital, TV, Print separately) 
- Identify top performers in each 
- Build combined model with winners 
Benefits: 5-10x faster processing
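Strategy 1's pre-screen takes a few lines of pandas before upload; the file, KPI, and variable names below are hypothetical.

```python
import pandas as pd

df = pd.read_excel("weekly_marketing_data.xlsx")     # illustrative file name
kpi = "sales"                                        # hypothetical KPI column

# Rank candidate drivers by absolute correlation with the KPI and keep the top 50.
numeric = df.select_dtypes("number")
correlations = (
    numeric.drop(columns=[kpi])
           .corrwith(numeric[kpi])
           .abs()
           .sort_values(ascending=False)
)
shortlist = set(correlations.head(50).index)

# Add back variables that matter for business reasons even if weakly correlated.
shortlist |= {"brand_tv_spend"}                      # hypothetical must-keep variable
print(sorted(shortlist))
```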
Chunking Time Periods
For Very Long Time Series (300+ weeks):
Approach 1: Rolling Window
- Model most recent 104 weeks 
- Update quarterly with new data 
- Drop oldest data 
Approach 2: Split Sample
- Build separate models per year 
- Compare coefficients over time 
- Identify structural changes 
Approach 3: Monthly Aggregation
- Aggregate weekly to monthly 
- Reduces observations by 75% 
- Loses some granularity but much faster 
Memory Management
If Running Out of Memory:
- Close Other Applications:
  - Especially browser tabs
  - Memory-intensive programs
  - Background processes
- Restart Browser:
  - Clears memory leaks
  - Fresh start
  - Do every few hours of heavy use
- Reduce Model Size:
  - Fewer variables
  - Shorter time period
  - Simplify transformations
- Use Faster Hardware:
  - More RAM (16-32 GB)
  - Faster CPU
  - SSD instead of HDD
Reporting Optimization
Template Creation
Build Once, Reuse:
Excel Template:
- Export best model 
- Add executive summary sheet 
- Format beautifully 
- Save as template 
- Reuse for future models (replace data) 
PowerPoint Template:
- Create slide deck structure 
- Add placeholder charts 
- Save as template 
- Update with new model results 
Benefits: 70-80% less time on reporting
Automated Exports
Batch Export:
- Export model to Excel (includes all data) 
- Generate decomposition (included in export) 
- Create PDF diagnostics for key tests 
- Package together for stakeholders 
One-Click Package:
- Model specifications 
- Diagnostic validation 
- Attribution insights 
- Ready to share 
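If you assemble the stakeholder package outside MixModeler, a pandas sketch for combining exported tables into one workbook is below; the CSV names are placeholders for whatever files you export.

```python
import pandas as pd

# Hypothetical tables exported from MixModeler (names illustrative).
coefficients = pd.read_csv("model_coefficients.csv")
decomposition = pd.read_csv("decomposition.csv")
diagnostics = pd.read_csv("diagnostics.csv")

# Package everything into a single stakeholder-ready workbook.
with pd.ExcelWriter("Q4_Campaign_OLS_v2_package.xlsx") as writer:
    coefficients.to_excel(writer, sheet_name="Coefficients", index=False)
    decomposition.to_excel(writer, sheet_name="Decomposition", index=False)
    diagnostics.to_excel(writer, sheet_name="Diagnostics", index=False)
```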
Visualization Reuse
Save Chart Configurations:
- Screenshot favorite views 
- Document chart settings 
- Recreate quickly in new models 
Export Charts:
- Right-click → Save Image 
- Insert into presentations 
- No need to recreate externally 
Collaboration Efficiency
Model Sharing
Export-Import Workflow:
Analyst A:
- Build model 
- Export to Excel 
- Share file with Analyst B 
Analyst B:
- Upload same data 
- Import model from Excel 
- Modify and iterate 
- Export and share back 
Use Cases:
- Team collaboration 
- Peer review 
- Client deliverables 
- Version control 
Documentation Standards
Consistent Naming:
ProjectName_ModelType_Version_Date.xlsx
Examples:
Q4_Campaign_OLS_v1_20250104.xlsx
Annual_Bayesian_Final_20250115.xlsx
Benefits:
- Easy to find files 
- Clear version history 
- Team alignment 
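A trivial helper for generating names in this convention (values are illustrative):

```python
from datetime import date

project, model_type, version = "Q4_Campaign", "OLS", "v2"   # hypothetical values
filename = f"{project}_{model_type}_{version}_{date.today():%Y%m%d}.xlsx"
print(filename)   # e.g. Q4_Campaign_OLS_v2_20250104.xlsx
```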
Model Notes:
- Include README in exports 
- Document key decisions 
- List any data issues 
- Note model limitations 
Performance Monitoring
Track Your Speed
Benchmark Standard Operations:
| Operation | Target | Your time |
| --- | --- | --- |
| Data upload (100 vars) | <3 sec | ___ |
| OLS model fit | <2 sec | ___ |
| Full diagnostics | <5 sec | ___ |
| Bayesian MCMC | <5 min | ___ |
| Decomposition | <3 sec | ___ |
If Slower Than Target:
- Check GPU acceleration active 
- Close other applications 
- Restart browser 
- Consider hardware upgrade 
Identify Bottlenecks
Common Slowdowns:
- Data Upload (>5 sec):
  - File too large
  - Too many unnecessary variables
  - Network issues
- Model Fitting (>10 sec OLS):
  - Too many variables
  - Large dataset
  - No acceleration active
- Diagnostics (>10 sec):
  - Complex tests
  - Large models
  - CPU bottleneck
- Bayesian (>15 min):
  - Too many draws
  - Convergence issues
  - Settings too conservative
Solutions: See respective sections above
Workflow Automation
Keyboard Shortcuts
Learn Common Actions:
- Ctrl/Cmd + S: Save model 
- Ctrl/Cmd + C/V: Copy/paste 
- Click + Drag: Select multiple variables 
- Shift + Click: Select range 
Saves: Seconds per action, hours over project
Browser Bookmarks
Bookmark Key Pages:
- Data Upload 
- Model Builder 
- Model Library 
- Variable Workshop 
Quick Navigation: Click bookmark instead of menu navigation
Workspace Organization
Single Monitor:
- Use browser tabs efficiently 
- Keep MixModeler in dedicated window 
- Excel/PowerPoint in separate window 
Dual Monitor:
- MixModeler on primary monitor 
- Excel/analysis on secondary monitor 
- Drag-and-drop between screens 
Best Practices Summary
Hardware:
- Use GPU acceleration when available 
- 16+ GB RAM for large models 
- Keep browser updated 
- Restart periodically 
Workflow:
- Start simple, add complexity 
- Use OLS before Bayesian 
- Clone models to save time 
- Test variables before adding 
Data:
- Clean data thoroughly upfront 
- Minimize file size 
- Remove unnecessary variables 
- Use optimal granularity 
Modeling:
- Incremental building (one variable at a time) 
- Quick diagnostics for iterations 
- Full validation for finals 
- Fast Inference for Bayesian exploration 
Reporting:
- Create templates 
- Batch export 
- Reuse visualizations 
- Document consistently 
Time Allocation (for typical project):
- Data prep: 30% of time 
- Model building: 40% of time 
- Validation: 15% of time 
- Reporting: 15% of time 
Optimize Each Phase: Following these practices can reduce total project time by 40-60%
Congratulations! You've completed the Best Practices section. Apply these techniques to build better models faster and deliver more impactful insights to stakeholders.