Performance Monitoring
Overview
Performance monitoring in MixModeler provides real-time visibility into how efficiently your models are running and which acceleration methods are being used. Understanding performance metrics helps you optimize your workflow, identify bottlenecks, and make informed decisions about hardware upgrades or model simplification.
Performance Indicators
Acceleration Badges
Located in the top-right toolbar, acceleration badges show active performance enhancements:
🚀 WASM Badge (Always Present):
Indicates WebAssembly acceleration is active
Hover to see WASM engine version
Green color confirms optimal performance
🖥️ GPU Badge (When Available):
Indicates GPU acceleration is active
Hover to see GPU model and utilization
Click for detailed GPU performance metrics
Green = active, Yellow = limited, Absent = unavailable
Operation Timing Display
After major operations, MixModeler displays performance information:
Bottom-Right Notification:
Information Shown:
Operation completed (checkmark indicates success)
Duration in seconds
Acceleration method used (WASM, WebGPU, or CPU)
Interpretation:
<1 second: Excellent performance
1-5 seconds: Good performance
5-15 seconds: Acceptable for complex operations
>15 seconds: Consider optimization
Console Performance Logs
For detailed analysis, the browser console shows comprehensive timing logs for each operation.
Access Console:
Chrome/Edge: F12 or Ctrl+Shift+J
Firefox: F12 or Ctrl+Shift+K
Safari: Cmd+Option+C
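If you want to capture comparable timings yourself, the standard browser Performance API is enough. The sketch below is a generic illustration; the `runOperation` callback and the log format are assumptions, not MixModeler's internal logging code.

```typescript
// Generic timing wrapper using the standard Performance API.
// `runOperation` and the log format are illustrative placeholders,
// not MixModeler's internal logging code.
async function timeOperation<T>(
  label: string,
  runOperation: () => Promise<T>
): Promise<T> {
  const start = performance.now();
  const result = await runOperation();
  const durationMs = performance.now() - start;
  console.log(`[perf] ${label}: ${(durationMs / 1000).toFixed(2)}s`);
  return result;
}

// Hypothetical usage:
// await timeOperation("Model Fit (OLS)", () => fitModel(selectedVariables));
```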
Performance Metrics
Operation Duration
Time from operation start to completion.
What It Measures: Total wall-clock time including all processing steps
Benchmarks by Operation:
Operation | Small | Medium | Large
Data Upload | <0.5s | 0.5-2s | 2-5s
Model Fitting (OLS) | 0.3-0.7s | 0.7-2s | 2-8s
Model Fitting (Bayesian) | 60-120s | 120-300s | 300-600s
Diagnostics Suite | 0.3-0.6s | 0.6-1.5s | 1.5-4s
Correlation Matrix | 0.2-0.5s | 0.5-1.2s | 1.2-3s
Variable Testing | 0.5-1.5s | 1.5-4s | 4-10s
Decomposition Analysis | 0.4-0.8s | 0.8-2s | 2-5s
Small: <30 vars, <100 obs | Medium: 30-100 vars, 100-300 obs | Large: 100+ vars, 300+ obs
Acceleration Method
Which technology powered the operation.
Methods:
WebGPU: Highest performance, GPU-accelerated
WASM: Good performance, always available
CPU/JavaScript: Baseline, fallback method
Priority: MixModeler tries WebGPU first, then WASM, then CPU
Typical Distribution:
With GPU: 70% WebGPU, 25% WASM, 5% CPU
Without GPU: 0% WebGPU, 85% WASM, 15% CPU
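For reference, the priority order above can be probed with standard browser checks. The sketch below is a minimal illustration of such a fallback chain, not MixModeler's actual dispatch logic.

```typescript
// Detect the best available acceleration method, probing in priority order:
// WebGPU first, then WebAssembly, then plain JavaScript as the CPU baseline.
type AccelerationMethod = "WebGPU" | "WASM" | "CPU";

async function detectAcceleration(): Promise<AccelerationMethod> {
  // WebGPU: navigator.gpu exists only in supporting browsers, and
  // requestAdapter() resolves to null if no usable GPU is found.
  if ("gpu" in navigator) {
    const adapter = await (navigator as any).gpu.requestAdapter();
    if (adapter !== null) return "WebGPU";
  }
  // WebAssembly is available in all modern browsers.
  if (typeof WebAssembly === "object") return "WASM";
  // Fallback: plain JavaScript on the CPU.
  return "CPU";
}

// detectAcceleration().then((method) => console.log(`Using ${method}`));
```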
Speedup Factor
How much faster accelerated methods are compared to baseline.
Calculation: Baseline Time / Accelerated Time
Example:
CPU time (estimated): 4.2s
WASM time (actual): 0.7s
Speedup: 6x faster
Typical Speedups:
WASM vs CPU: 5-10x
GPU vs CPU: 15-60x (when available)
GPU vs WASM: 3-8x (when both available)
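As a quick worked check, the example above is just the ratio:

```typescript
// Speedup = baseline time / accelerated time (values from the example above).
const cpuTimeSeconds = 4.2;  // estimated CPU/JavaScript baseline
const wasmTimeSeconds = 0.7; // measured WASM time
const speedup = cpuTimeSeconds / wasmTimeSeconds;
console.log(`${speedup.toFixed(1)}x faster`); // "6.0x faster"
```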
Memory Usage
RAM consumed during operations (particularly relevant for Bayesian modeling).
Display: Shows peak memory during operation
Benchmarks:
Small OLS Model: 50-150 MB
Medium OLS Model: 150-400 MB
Large OLS Model: 400-1000 MB
Bayesian MCMC (4 chains, 2000 draws): 500-2000 MB
Concern Thresholds:
<1 GB: No concerns
1-2 GB: Monitor if multiple tabs open
2-4 GB: Close other applications
>4 GB: Consider reducing model complexity
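If you want to spot-check memory from the browser console, Chromium-based browsers expose the non-standard `performance.memory` property (unavailable in Firefox and Safari). This is a general browser feature, shown here only as a rough cross-check against the thresholds above:

```typescript
// Chromium-only, non-standard API: performance.memory reports JS heap usage.
// A rough cross-check against the concern thresholds above.
const memory = (performance as any).memory;
if (memory) {
  const usedMB = memory.usedJSHeapSize / (1024 * 1024);
  const limitMB = memory.jsHeapSizeLimit / (1024 * 1024);
  console.log(`JS heap: ${usedMB.toFixed(0)} MB used of ${limitMB.toFixed(0)} MB limit`);
} else {
  console.log("performance.memory is not available in this browser");
}
```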
Detailed Performance View
Accessing Performance Dashboard
Click acceleration badge (WASM or GPU) in toolbar
Select "Performance Details" from dropdown
View comprehensive performance breakdown
Alternative Access: Settings → Performance → View Detailed Metrics
Dashboard Components
Recent Operations Table:
Operation | Duration | Method | Speedup | Time
Model Fit | 0.68s | WASM | 6.2x | 14:23:45
Diagnostics | 0.41s | WASM | 7.8x | 14:23:46
Correlation | 0.15s | WebGPU | 23.1x | 14:24:02
Variable Test | 1.24s | WASM | 6.8x | 14:24:15
Session Summary:
Total operations: 47
Total time saved: 3 minutes 24 seconds
Average speedup: 8.3x
Primary method: WASM (68%), WebGPU (32%)
GPU Utilization (when available):
Current usage: 34%
Peak usage: 78%
Average usage: 42%
Total GPU time: 8.3 seconds
System Information:
Browser: Chrome 120.0
WASM version: 1.0
GPU: NVIDIA RTX 3060 (detected)
Available RAM: 14.2 GB / 16 GB
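Most of these values can be approximated from standard browser APIs if you want to confirm what the dashboard reports; a minimal sketch, noting that `navigator.deviceMemory` is Chromium-only and capped at 8 GB, so it is an approximation rather than the exact installed RAM:

```typescript
// Collect rough system information from standard browser APIs.
const systemInfo = {
  userAgent: navigator.userAgent,              // browser and version string
  logicalCores: navigator.hardwareConcurrency, // CPU threads available
  approxRamGB: (navigator as any).deviceMemory ?? "n/a", // Chromium-only, capped at 8
  webgpuSupported: "gpu" in navigator,
};
console.table(systemInfo);
```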
Performance Optimization
Identifying Bottlenecks
Slow Data Upload (>5s for medium dataset):
Possible Cause: Large file, slow disk, browser cache issues
Check: File size, number of variables
Solution: Reduce variables, clear cache, use faster storage
Slow Model Fitting (>10s for OLS):
Possible Cause: Too many variables, multicollinearity, no acceleration
Check: Variable count, acceleration badge status
Solution: Remove correlated variables, enable GPU, simplify model
Slow Diagnostics (>5s):
Possible Cause: Many diagnostic tests, large dataset
Check: Number of tests enabled, dataset size
Solution: Run only essential tests initially
Slow Bayesian MCMC (>10 minutes):
Possible Cause: Too many draws, poor convergence, complex model
Check: MCMC settings, convergence diagnostics
Solution: Use Fast Inference mode, reduce draws initially, simplify model
Optimization Strategies
For Faster Iterations:
Start with a subset of variables (10-20)
Use OLS before Bayesian
Enable Fast Inference for Bayesian exploration
Reduce diagnostic frequency during development
Leverage GPU if available
For Large Datasets:
Ensure GPU acceleration active
Close unnecessary browser tabs
Process in batches if needed
Use standardized variables (improves numerical stability)
Consider data reduction techniques
For Bayesian Models:
Use Fast Inference (SVI) for initial exploration
Start with fewer chains (2) and draws (1000)
Increase gradually only if convergence is poor
Monitor memory usage
Switch to full MCMC only for final model
For Memory-Constrained Systems:
Close other applications
Use single browser tab
Reduce Bayesian chains and draws
Process variables in groups
Clear browser cache regularly
Benchmarking Your System
Running a Standard Benchmark
To understand your system's baseline performance:
Load the demo dataset (50 variables, 104 observations)
Build model with all variables
Run OLS model
Note timing displayed
Run full diagnostics suite
Note timing displayed
Generate correlation matrix
Note timing displayed
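To keep the timings you note above comparable across runs, you can record them in a small table from the browser console; a minimal sketch with illustrative values:

```typescript
// Record benchmark timings (in seconds) and print them as a table
// for easy comparison against the reference ranges below.
const benchmark: Record<string, number> = {};

function recordTiming(operation: string, seconds: number): void {
  benchmark[operation] = seconds;
}

// Fill in the values shown in MixModeler's timing notifications (illustrative):
recordTiming("OLS model fit", 0.68);
recordTiming("Diagnostics suite", 0.41);
recordTiming("Correlation matrix", 0.15);

console.table(benchmark);
const total = Object.values(benchmark).reduce((a, b) => a + b, 0);
console.log(`Total workflow: ${total.toFixed(2)}s`);
```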
Interpreting Benchmark Results
High Performance (with GPU):
Model fitting: <0.5s
Diagnostics: <0.3s
Correlation: <0.1s
Total workflow: <1.5s
Good Performance (WASM only):
Model fitting: 0.5-1.0s
Diagnostics: 0.3-0.6s
Correlation: 0.2-0.4s
Total workflow: 1.5-3s
Adequate Performance (older hardware):
Model fitting: 1.0-2.0s
Diagnostics: 0.6-1.2s
Correlation: 0.4-0.8s
Total workflow: 3-5s
Slow Performance (needs upgrade):
Model fitting: >2s
Diagnostics: >1.2s
Correlation: >0.8s
Total workflow: >5s
Comparing to Reference Systems
Budget Laptop (Intel i3, 8GB RAM, integrated graphics):
WASM only
Model fitting: ~1.5s
Full workflow: ~4s
Mid-Range Laptop (Intel i5/AMD Ryzen 5, 16GB RAM, no dedicated GPU):
WASM only
Model fitting: ~0.8s
Full workflow: ~2.5s
Gaming Laptop (Intel i7/AMD Ryzen 7, 16GB RAM, NVIDIA GTX 1660):
WASM + GPU
Model fitting: ~0.3s
Full workflow: ~0.8s
Workstation (Intel i9/AMD Ryzen 9, 32GB RAM, NVIDIA RTX 3070):
WASM + GPU
Model fitting: ~0.2s
Full workflow: ~0.5s
Mac M1/M2 (8-16GB unified memory):
WASM + GPU (Metal)
Model fitting: ~0.4s
Full workflow: ~1.0s
Performance Troubleshooting
Diagnosis Flowchart
Is WASM badge present?
No → Browser issue, try update/reinstall
Yes → Proceed
Are operations taking >5s for medium models?
No → Performance is normal
Yes → Proceed
Is GPU badge present?
No → GPU unavailable, expect WASM-only speeds
Yes → GPU should be helping, proceed
Check console logs - are operations using GPU?
Yes → GPU active, may need better GPU
No → GPU fallback occurring, investigate why
Common causes of GPU fallback:
Insufficient VRAM
GPU busy with other tasks
Driver compatibility issues
Operation too small for GPU benefit
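To confirm from the console whether the browser can obtain a usable GPU adapter at all, you can run the standard WebGPU probe below. It checks browser-level availability and reported limits only; it does not show MixModeler's internal dispatch decisions.

```typescript
// Console check: can the browser obtain a WebGPU adapter, and what limits
// does it report? Helps distinguish "GPU unavailable" from "GPU too limited".
async function checkWebGpu(): Promise<void> {
  if (!("gpu" in navigator)) {
    console.log("WebGPU not supported by this browser - expect WASM-only speeds");
    return;
  }
  const adapter = await (navigator as any).gpu.requestAdapter();
  if (adapter === null) {
    console.log("WebGPU supported, but no usable adapter (driver or hardware issue)");
    return;
  }
  // Report a few limits relevant to large numerical workloads.
  console.log("WebGPU adapter acquired");
  console.log("maxBufferSize:", adapter.limits.maxBufferSize);
  console.log("maxStorageBufferBindingSize:", adapter.limits.maxStorageBufferBindingSize);
}

// checkWebGpu();
```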
Quick Performance Fixes
Fix 1: Clear Browser Cache
Often resolves slow loading and acceleration issues
Chrome: Ctrl+Shift+Delete → Clear cached images and files
Restart browser after clearing
Fix 2: Close Other Tabs
Each tab consumes memory and may use GPU
Close unused tabs before intensive operations
Particularly important for Bayesian modeling
Fix 3: Update Graphics Drivers
Outdated drivers limit GPU performance
Visit GPU manufacturer website (NVIDIA, AMD, Intel)
Download and install latest drivers
Restart computer after installation
Fix 4: Enable Hardware Acceleration
Chrome: Settings → System → Use hardware acceleration when available
Ensure toggle is ON
Restart browser
Fix 5: Restart Browser
Memory leaks can slow performance over time
Restart browser every few hours during heavy usage
Particularly important during long modeling sessions
Export Performance Data
For Technical Support
If experiencing persistent performance issues:
Open Performance Dashboard
Click "Export Performance Report"
Saves JSON file with:
Operation timings
Acceleration methods used
System information
Error logs (if any)
Send to: support@mixmodeler.com with description of issues
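For reference, the exported report is JSON along these lines; the field names below are a hypothetical illustration of the categories listed above, not the actual export schema.

```typescript
// Hypothetical shape of an exported performance report.
// Field names are illustrative assumptions, not MixModeler's actual schema.
interface PerformanceReport {
  exportedAt: string;                 // ISO timestamp of the export
  system: {
    browser: string;                  // e.g. "Chrome 120.0"
    wasmVersion: string;
    gpu: string | null;               // null when no GPU was detected
    availableRamGB: number;
  };
  operations: Array<{
    name: string;                     // e.g. "Model Fit", "Diagnostics"
    durationSeconds: number;
    method: "WebGPU" | "WASM" | "CPU";
    speedup: number;
    timestamp: string;
  }>;
  errors: string[];                   // error log entries, if any
}
```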
For Internal Documentation
Track performance over time for capacity planning:
Export performance data monthly
Compare trends in operation timing
Identify if hardware upgrades needed
Document baseline vs current performance
Best Practices
Monitor Periodically: Glance at performance indicators occasionally, not constantly
Set Expectations: Know your system's baseline from benchmarking
Optimize Strategically: Focus optimization on actual bottlenecks, not all operations
Document Baselines: Record initial benchmark results for future comparison
Report Anomalies: If performance suddenly degrades, investigate immediately
Plan Upgrades: Use performance data to justify hardware investments
Educate Stakeholders: Share typical timing expectations to set realistic project timelines
Performance Impact on Workflow
Development Phase
With Good Performance (GPU + WASM):
Rapid iteration (5-10 models per hour)
Immediate feedback on changes
Encourages experimentation
Reduces fatigue and errors
With Poor Performance (CPU only):
Slow iteration (1-2 models per hour)
Waiting reduces focus
Discourages exploration
More likely to settle for suboptimal models
Time Multiplier: Good performance can make analysts 3-5x more productive
Production Phase
Impact on Deliverables:
Faster final model validation
Quick scenario analysis for stakeholders
Responsive to last-minute changes
Professional, efficient client interactions
Impact on Quality:
More time for thorough testing
Better explored model space
Higher confidence in results
More robust final recommendations
Next Steps: Explore Large Dataset Handling to optimize performance for models with hundreds of variables, or return to Advanced Features overview.