Cloning & Versioning

Safe Model Experimentation Through Version Control

Overview

Cloning creates an exact copy of an existing model, allowing you to experiment with different specifications while preserving your original working model. This is essential for iterative model development, testing hypotheses, and maintaining version history.

Purpose: Create model copies for safe experimentation and version management

Access: Model Library → Select model → Click "Clone" button

What is Model Cloning?

What Gets Cloned

When you clone a model, MixModeler creates an exact duplicate including:

  ✅ All variables: Every independent variable in the model

  ✅ Variable transformations: Adstock rates, curve parameters

  ✅ Model specification: Which variables are included

  ✅ KPI selection: Same dependent variable

  ✅ Observation filters: If you filtered time periods

  ✅ Model type: OLS or Bayesian setting

What Doesn't Get Cloned

  ❌ Model name: You must provide a new unique name

  ❌ Export path: Clone starts with an empty export path

  ❌ Last modified date: Clone gets the current timestamp

  ❌ Regression results: Re-run automatically after cloning
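
To make the copy-versus-reset rules concrete, here is a minimal Python sketch. The ModelSpec structure and clone_model helper are hypothetical illustrations of the behavior described above, not MixModeler's actual internals.

from copy import deepcopy
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

# Hypothetical structure, for illustration only.
@dataclass
class ModelSpec:
    name: str
    kpi: str                       # dependent variable
    variables: list                # independent variables
    transformations: dict          # adstock rates, curve parameters
    observation_filter: Optional[str]
    model_type: str                # "OLS" or "Bayesian"
    export_path: str = ""
    last_modified: datetime = field(default_factory=datetime.now)

def clone_model(original: ModelSpec, new_name: str) -> ModelSpec:
    clone = deepcopy(original)            # spec, variables, transformations copied
    clone.name = new_name                 # a fresh unique name is required
    clone.export_path = ""                # export path is not carried over
    clone.last_modified = datetime.now()  # clone gets the current timestamp
    return clone  # regression results are recomputed after cloning, not copied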

Clone vs. Create New

Clone when:

  • Starting from existing working model

  • Testing variable additions/removals

  • Experimenting with transformations

  • Creating alternative specifications

  • Preserving model versions

Create new when:

  • Different KPI needed

  • Completely different modeling approach

  • Starting from scratch

  • No similar model exists

How to Clone a Model

Step 1: Select Model to Clone

In Model Library table:

  1. Find the model you want to clone

  2. Optionally select its checkbox (not required for cloning)

  3. Locate the Clone button in the Actions column

Step 2: Click Clone Button

Click the 📋 Clone button for the model.

A dialog appears prompting for the new model name.

Step 3: Enter New Model Name

Default suggestion: OriginalModelName_clone

Recommended naming:

  • Add version number: Sales_Model_v2

  • Add description: Sales_Model_With_Radio

  • Add date: Sales_Model_2024_12_15

  • Add experiment type: Sales_Model_Curves

Name requirements:

  • Must be unique (cannot duplicate existing model)

  • Can use letters, numbers, underscores, hyphens

  • Avoid special characters
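
If you want to sanity-check a candidate name before typing it into the dialog, the rules above are easy to encode. This validate_clone_name helper is a hypothetical Python sketch; MixModeler applies its own validation in the clone dialog.

import re
from typing import Optional

def validate_clone_name(name: str, existing_names: set) -> Optional[str]:
    """Return an error message, or None if the name is acceptable."""
    if name in existing_names:
        return "Name already exists: add a version number or suffix"
    if not re.fullmatch(r"[A-Za-z0-9_-]+", name):
        return "Use only letters, numbers, underscores, and hyphens"
    return None

existing = {"Sales_Model_v1", "Sales_Model_v2"}
print(validate_clone_name("Sales_Model_v2", existing))   # duplicate name
print(validate_clone_name("Sales Model v3!", existing))  # disallowed characters
print(validate_clone_name("Sales_Model_v3", existing))   # None -> acceptable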

Step 4: Confirm Clone

Click OK or Clone to create the copy.

What happens:

  1. New model created with all specifications

  2. Model appears in Model Library table

  3. Regression automatically re-run

  4. Ready for modifications in Model Builder

Cloning Strategies

Version Control Approach

Linear versioning:

Sales_Model_v1  →  Sales_Model_v2  →  Sales_Model_v3

Process:

  1. Build v1 with core variables

  2. Clone to v2, add more variables

  3. Clone to v3, optimize transformations

  4. Keep best performing version

  5. Delete inferior versions

Benefits:

  • Clear progression

  • Easy to track changes

  • Can revert if needed

Experimental Branches

Branching for different approaches:

Sales_Model_Base
    ├── Sales_Model_Linear (no transformations)
    ├── Sales_Model_Curves (with saturation curves)
    └── Sales_Model_Bayesian (Bayesian inference)

Process:

  1. Create stable base model

  2. Clone for each experiment

  3. Test different approaches independently

  4. Compare results

  5. Select best approach

Benefits:

  • Parallel experimentation

  • Safe testing environment

  • Easy comparison

Feature Testing

Clone to test specific additions:

Sales_Model_Main
    ├── Sales_Model_Plus_Radio (test Radio addition)
    ├── Sales_Model_Plus_Outdoor (test Outdoor addition)
    └── Sales_Model_Plus_Weather (test Weather addition)

Process:

  1. Clone base model

  2. Add single new feature to each clone

  3. Compare each to base (see the sketch after this list)

  4. Keep features that improve model

  5. Merge into final model

Benefits:

  • Isolate variable effects

  • Clear attribution of improvements

  • Systematic testing
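
The compare-each-to-base pattern can be rehearsed offline. The following Python sketch uses synthetic data (every number is invented) and plain least squares to measure the adjusted-R² gain from each candidate channel; it illustrates the testing logic, not MixModeler's estimator.

import numpy as np

def adjusted_r2(y, X):
    """Ordinary least squares fit; returns adjusted R-squared."""
    X1 = np.column_stack([np.ones(len(y)), X])     # add intercept
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    r2 = 1 - (y - X1 @ beta).var() / y.var()
    n, k = X1.shape                                # k includes the intercept
    return 1 - (1 - r2) * (n - 1) / (n - k)

rng = np.random.default_rng(0)
n = 104                                            # two years of weekly data
base = rng.gamma(2.0, 10.0, size=(n, 2))           # e.g. TV and Digital spend
candidates = {"Radio": rng.gamma(2.0, 5.0, n),
              "Outdoor": rng.gamma(2.0, 5.0, n)}
# Synthetic KPI: Radio truly matters, Outdoor does not.
y = 50 + base @ np.array([0.8, 1.2]) + 0.6 * candidates["Radio"] \
    + rng.normal(0, 10, n)

baseline = adjusted_r2(y, base)
for name, x in candidates.items():
    gain = adjusted_r2(y, np.column_stack([base, x])) - baseline
    print(f"{name}: adjusted R² change = {gain:+.3f}")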

Best Practices

Naming Conventions

Version-based naming:

  • ModelName_v1, ModelName_v2, ModelName_v3

  • Clear progression

  • Easy sorting

Description-based naming:

  • ModelName_Baseline

  • ModelName_With_Curves

  • ModelName_Digital_Focus

  • Describes key difference

Date-based naming:

  • ModelName_2024_12_15

  • ModelName_Q4_2024

  • Timestamps experiments

Hybrid approach (recommended):

  • Sales_Model_v2_Curves_2024_12

  • Combines version, description, and date

  • Maximum clarity

When to Clone

Before major changes:

  • Adding/removing multiple variables

  • Changing transformation strategy

  • Switching from OLS to Bayesian

  • Major specification changes

For experimentation:

  • Testing hypotheses

  • Trying alternative approaches

  • Sensitivity analysis

  • What-if scenarios

For documentation:

  • Preserving milestone versions

  • Creating reference models

  • Maintaining audit trail

Don't clone for:

  • Minor tweaks (just modify original)

  • Single variable addition (compare first)

  • Fixing obvious errors

  • Temporary testing (use Model Builder)

Managing Clones

Keep it clean:

  • Delete failed experiments after comparison

  • Maintain 2-3 active versions maximum

  • Archive important milestones by exporting

  • Use consistent naming for easy identification

Documentation:

  • Note why you cloned in a separate document

  • Track key differences between versions

  • Record which version is "production"

  • Document decision rationale

Lifecycle management:

Create → Experiment → Compare → Decide → Delete/Keep

Typical model lifecycle:

  1. Clone for experiment

  2. Make modifications

  3. Compare to original

  4. Keep if better, delete if worse

  5. Repeat

Common Cloning Workflows

Workflow 1: Incremental Improvement

Goal: Systematically improve model quality

Steps:

  1. Start: Sales_Model_v1 (R² = 65%)

  2. Clone: Sales_Model_v2

  3. Add: Saturation curves to v2

  4. Compare: v2 R² = 72%, keep v2

  5. Clone v2: Sales_Model_v3

  6. Add: Additional control variables to v3

  7. Compare: v3 R² = 75%, keep v3

  8. Delete: v1 and v2 (superseded)

  9. Result: Final model = v3

Workflow 2: A/B Testing Specifications

Goal: Test two different approaches simultaneously

Steps:

  1. Start: Sales_Model_Base

  2. Clone: Sales_Model_A (linear)

  3. Clone: Sales_Model_B (curves)

  4. Develop A: Keep variables linear

  5. Develop B: Apply saturation curves

  6. Compare: A vs B in model comparison (see the sketch after these steps)

  7. Decision: B performs better (R² +7%)

  8. Keep: Sales_Model_B

  9. Delete: A and Base

  10. Rename B: Sales_Model_Final
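
The A-versus-B comparison can also be illustrated with synthetic data. The sketch below contrasts a linear spend term (Model A) with a negative-exponential saturation transform (Model B), one common curve form; MixModeler's curve parameterization may differ, and every number here is invented.

import numpy as np

def r2(y, X):
    """R-squared of an ordinary least squares fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return 1 - (y - X1 @ beta).var() / y.var()

rng = np.random.default_rng(1)
spend = rng.gamma(2.0, 20.0, size=200)
# Synthetic KPI generated from a saturating response.
y = 30 + 80 * (1 - np.exp(-spend / 40)) + rng.normal(0, 5, 200)

X_a = spend                        # Model A: linear spend
X_b = 1 - np.exp(-spend / 40)      # Model B: saturation curve
print(f"A (linear): R² = {r2(y, X_a):.3f}")
print(f"B (curves): R² = {r2(y, X_b):.3f}")   # expect B to win here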

Workflow 3: Seasonal Analysis

Goal: Compare seasonal vs full-year models

Steps:

  1. Start: Sales_Full_Year

  2. Clone: Sales_Q4_Only

  3. Filter Q4: In cloned model, filter observations to Q4

  4. Compare: Coefficients between full-year and Q4

  5. Analyze: Seasonal effects on channels

  6. Keep both: For different purposes

  7. Use Full_Year: For annual planning

  8. Use Q4_Only: For holiday campaign optimization

Workflow 4: Transformation Testing

Goal: Find optimal transformation approach

Steps:

  1. Start: Model_Raw (no transformations)

  2. Clone: Model_Adstock_Only

  3. Clone: Model_Curves_Only

  4. Clone: Model_Both

  5. Develop: Each clone with specified transformations

  6. Compare: All four versions

  7. Result: Model_Both has highest R² and best diagnostics

  8. Keep: Model_Both

  9. Delete: Other three versions

  10. Rename: Model_Both → Model_Final

Version History Tracking

Why Track Versions

Benefits:

  • Understand model evolution

  • Justify final specification

  • Replicate analysis

  • Communicate changes to stakeholders

  • Learn from failed experiments

What to Document

For each version:

  1. Version name and number

  2. Creation date

  3. Changes from previous version:

    • Variables added/removed

    • Transformations applied

    • Filters changed

  4. Performance metrics:

    • Key coefficients

    • Diagnostic results

  5. Decision:

    • Keep or delete

    • Rationale

Documentation Example

Model Version History: Sales Analysis

v1 - 2024-12-01
- Base model with TV, Digital, Seasonality
- R² = 65%
- All variables significant
- Decision: Good start, but missing channels

v2 - 2024-12-05 (cloned from v1)
- Added: Radio, Print
- R² = 68%
- Radio significant (t=3.2), Print not (t=1.1)
- Decision: Keep Radio, remove Print

v3 - 2024-12-10 (cloned from v2)
- Applied saturation curves to TV and Digital
- R² = 75%
- Curves significant, better diagnostics
- Decision: Keep as final model

Final: v3 renamed to Sales_Model_Final
- Exported: C:\Projects\MMM\Sales_Model_Final.xlsx
- Production model for 2024 planning
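
If you also want a machine-readable changelog alongside the prose history, a minimal Python sketch might append one JSON line per version. The file name and schema here are hypothetical:

import json
from datetime import date

entry = {
    "version": "v3",
    "date": date(2024, 12, 10).isoformat(),
    "cloned_from": "v2",
    "changes": ["Applied saturation curves to TV and Digital"],
    "metrics": {"r2": 0.75},
    "decision": "Keep as final model",
}
with open("model_changelog.jsonl", "a") as f:
    f.write(json.dumps(entry) + "\n")   # append-only audit trail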

Advanced Cloning Techniques

Cloning for Scenario Planning

Use case: Testing different budget allocation scenarios

Approach:

  1. Clone base model

  2. Filter to future period (if available)

  3. Modify variables to reflect scenario

  4. Run decomposition (see the sketch below)

  5. Compare scenario outputs

Note: This is an advanced use case that goes beyond typical cloning
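
As a rough illustration of the decomposition step, a linear model attributes coefficient × value to each channel. The sketch below uses invented coefficients and ignores adstock and saturation for brevity:

# Illustrative coefficients and spends; not from a real model.
beta = {"TV": 0.9, "Digital": 1.4, "Radio": 0.5}
spend = {"TV": 100.0, "Digital": 80.0, "Radio": 40.0}

def contributions(spend, beta):
    """Linear decomposition: each channel contributes coefficient * value."""
    return {ch: beta[ch] * spend[ch] for ch in beta}

base = contributions(spend, beta)
scenario = dict(spend, TV=spend["TV"] * 1.2)   # scenario: +20% TV budget
for ch, c in contributions(scenario, beta).items():
    print(f"{ch}: {base[ch]:.1f} -> {c:.1f}")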

Cloning for Sensitivity Analysis

Use case: Understanding parameter sensitivity

Approach:

  1. Clone model multiple times

  2. Vary one parameter systematically

  3. Compare results across clones

  4. Identify sensitive vs. robust parameters

Example:

  • Base model with adstock=70

  • Clone 1: adstock=60

  • Clone 2: adstock=80

  • Compare coefficient stability
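
This comparison can be rehearsed with a common geometric-adstock definition, where the carried-over effect decays by a fixed rate each period (adstock=70 above would correspond to a rate of 0.70). All data below is synthetic, and MixModeler's exact transformation may differ.

import numpy as np

def adstock(x, rate):
    """Geometric adstock: effect carries over, decaying by `rate` each period."""
    out = np.empty_like(x, dtype=float)
    carry = 0.0
    for t, v in enumerate(x):
        carry = v + rate * carry
        out[t] = carry
    return out

rng = np.random.default_rng(2)
spend = rng.gamma(2.0, 15.0, size=156)                       # three years, weekly
y = 20 + 0.7 * adstock(spend, 0.70) + rng.normal(0, 8, 156)  # true rate = 0.70

for rate in (0.60, 0.70, 0.80):    # mirrors the 60 / 70 / 80 clones above
    X = np.column_stack([np.ones(len(y)), adstock(spend, rate)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(f"adstock={rate:.2f}: coefficient = {beta[1]:.3f}")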

Cloning for Collaboration

Use case: Multiple analysts working on same project

Approach:

  1. Create base model (team agreed specification)

  2. Each analyst clones with their name

  3. Analysts experiment independently

  4. Team meeting to compare and select best

  5. Merge best features into final model

Example naming:

  • Sales_Model_Analyst_John

  • Sales_Model_Analyst_Sarah

  • Sales_Model_Team_Final

Troubleshooting

"Clone name already exists"

Cause: The name you entered is already used by another model

Solution:

  • Add version number or distinguishing suffix

  • Check Model Library for existing names

  • Use more descriptive name

"Clone failed"

Possible causes:

  • Original model has errors

  • Data issue in backend

  • Session timeout

Solutions:

  • Refresh page and try again

  • Verify original model works (run regression)

  • Check whether you can create a new model (this tests data access)

  • Contact support if the problem persists

Cloned model shows different R²

Cause: This is normal; the regression is re-run when a model is cloned

Explanation:

  • Small numerical differences possible

  • Random seed differences in Bayesian estimation

  • Should be <0.1% different

Solution:

  • If difference >1%, investigate

  • Otherwise, ignore minor variation
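
Reading the thresholds above as percentage points of R², a quick check might look like this (hypothetical Python helper, for illustration only):

def r2_drift(original_r2, clone_r2):
    """Flag R² drift beyond one percentage point between original and clone."""
    diff = abs(clone_r2 - original_r2)
    if diff > 0.01:
        return f"Investigate: R² differs by {diff:.4f}"
    return f"OK: difference of {diff:.4f} is within normal variation"

print(r2_drift(0.750, 0.7498))   # within tolerance
print(r2_drift(0.750, 0.730))    # investigate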

Too many clones cluttering library

Cause: Not cleaning up experimental clones

Solution:

  • Delete failed experiments immediately after comparison

  • Keep only 2-3 active versions

  • Export important versions before deleting

  • Use descriptive names to identify keepers

Key Takeaways

  • Cloning creates exact copy for safe experimentation

  • Always clone before major changes to preserve working model

  • Use clear naming conventions for version tracking

  • Keep 2-3 active versions maximum, delete others

  • Compare clones to original to validate improvements

  • Document what changed and why for each version

  • Delete failed experiments after comparison

  • Export milestone versions before deletion for archival
