Food Delivery Promotion Effectiveness Dashboard

A comprehensive framework for designing, analyzing, and optimizing promotion strategies

🎯

Business Objectives

Objective weighting: Order Volume (35%), Revenue (30%), Retention (20%), AOV (15%)

Main Objectives

  • Increase order volume or frequency
  • Increase average revenue per user (ARPU)
  • Improve customer retention or reduce churn

Example Hypothesis

"Offering a specific promotion (e.g., 20% off or free delivery) will increase total orders and revenue by at least X% and improve repeat purchase rates within 30 days."

📝

Promotion Testing Framework

1. Define Objectives: Set clear business goals and a testable hypothesis

2. Choose Metrics: Select primary and secondary metrics

3. Design Experiment: Create an A/B test with proper randomization

4. Implement: Set up the data pipeline and tracking

5. Analyze Results: Calculate statistical significance and ROI

6. Iterate & Scale: Deploy the winning promotion or refine and retest

🏆

Expected Outcomes

Data-Driven Decisions

Make promotion decisions based on statistical evidence rather than intuition

Cost-Effective Strategies

Identify which promotions generate positive ROI and which cost more than they're worth

Optimized Customer Experience

Deliver the right promotions to the right customers at the right time

Continuous Improvement

Establish a framework for ongoing testing and refinement

📊

Primary Metrics

Key Metrics Explained

Conversion Rate (CR)

Definition: Percentage of users who place an order after seeing the promotion

Why it matters: Captures direct effectiveness (did the promotion lead to an immediate transaction?)

Average Order Value (AOV)

Definition: Mean spend per order in the promotion period

Why it matters: You want to make sure the promotion doesn't simply decrease margins by encouraging smaller orders

Revenue per User (RPU)

Definition: Additional revenue generated among users exposed to the promotion versus a control group

Why it matters: Shows net lift in revenue, factoring in the discount cost

Important: Net AOV Calculation

For Test group (with promotion):

Net Order Value = Gross Order Value - Discount Amount

Net AOV (Test):

Net AOV = Total Net Revenue / Number of Orders

A promotion can boost gross AOV but yield a negative overall return if the discount eats into profit more than the additional revenue gained.

Secondary Metrics

  • New Customer Acquisition Rate
  • User Engagement (app opens, browse time)
  • Average Items Per Order
  • Promotion Redemption Rate
📊

Average Order Value (AOV) Comparison

AOV Comparison Bar Chart

Shows monthly comparison between:

Control: $25.00
Test (Gross): $28.00
Test (Net): $22.40

This chart shows why it's crucial to compare net AOV (after subtracting promotion costs) rather than gross AOV alone: the test group's AOV looks higher before accounting for the discount ($28.00 vs. $25.00) but is lower after it ($22.40 vs. $25.00).

📈

Conversion Rate Comparison

Conversion Rate Line Chart

Month-over-month comparison:

Control: ~5.3%
Test: ~8.1%

Conversion rate is one of the most common metrics in food delivery promotion testing. It directly shows whether the promotion persuades more people to place an order.

🔬

A/B Testing Methodology

Before Experiment

Both groups should show comparable baseline AOV (~$25) before the promotion launches, confirming the randomization produced unbiased groups

Control Group

No promotion (baseline)

Baseline AOV

$25.00

Gross = Net (no discount)

Test Group

Receives 20% promotion

After Promotion

$22.40

$28 gross - $5.60 discount

Key Elements of A/B Test Design

  • Randomization:

    Randomly split users into control and test groups to ensure there are no systematic differences between the groups.

  • Sample Size Determination:

    Use power analysis to determine how many users you need to detect the expected effect size with confidence.

  • Segmentation (Optional):

    If different user segments may respond differently, consider stratified randomization.

  • Time Period:

    Run the experiment long enough to account for user ordering cycles and business seasonality (typically 1-4 weeks).
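Randomization is typically implemented as a deterministic hash of the user ID, so the same user always lands in the same group across sessions. A minimal sketch (the salt name and 50/50 split are illustrative):

```python
import hashlib

def assign_group(user_id: str, salt: str = "promo_test_1") -> str:
    """Deterministic 50/50 assignment: same user ID always gets the same group."""
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    return "test" if int(digest, 16) % 2 == 0 else "control"

groups = [assign_group(f"user_{i}") for i in range(10_000)]
print(round(groups.count("test") / len(groups), 2))  # close to 0.50
```

Using a distinct salt per experiment keeps assignments independent across concurrent tests.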

Sample Size Formula:

n = 2 × (σ/δ)² × (z_α + z_β)²

Where:

  • n = sample size per group
  • σ = standard deviation
  • δ = minimum detectable effect
  • z_α = z-score for the significance level (1.96 for a two-sided α = 0.05)
  • z_β = z-score for the desired power (0.84 for 80% power, i.e., β = 0.20)
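The formula translates directly into code using the standard library's normal quantiles; the σ and δ inputs below echo the worked AOV example later in this document:

```python
import math
from statistics import NormalDist

def sample_size_per_group(sigma, delta, alpha=0.05, power=0.80):
    """n = 2 * (sigma/delta)^2 * (z_alpha + z_beta)^2, rounded up."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for two-sided alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # 0.84 for 80% power
    return math.ceil(2 * (sigma / delta) ** 2 * (z_alpha + z_beta) ** 2)

# Orders needed per group to detect a $2.60 shift in AOV with sigma ~= $10:
print(sample_size_per_group(sigma=10, delta=2.60))  # 233
```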

Implementation Components:

  • Feature flags / Experimentation platform
  • Data collection mechanisms
  • ETL & Data Lake
  • Real-time monitoring dashboard
💾

Data Collection Pipeline

📊

User Interactions

Track who sees the promotion, clicks, browses

➡️
💾

Data Warehouse

Store in Snowflake, BigQuery with experiment tags

➡️
📈

Analytics & BI

Transform data, calculate metrics, visualize

🛠️

Modern Implementation Tools

Feature Flags / Experimentation Platform

Use LaunchDarkly, Optimizely, or custom systems to control who sees promotions

Data Collection & Warehouse

Store user interactions and orders in real-time (e.g., Snowflake, BigQuery)

Log: user ID, timestamp, promotion flag, order details, amount, discount, etc.

ETL Pipeline

Use Airflow, dbt to extract experiment data, transform into consistent schema

Visualization

Tableau, Looker, Metabase for daily metrics monitoring

📉

Statistical Testing Framework

Step-by-Step Statistical Analysis

1. Define Hypotheses

   H₀ (Null): The promotion has no effect

   H₁ (Alternative): The promotion affects the metric

2. Gather Data

   For each group (test vs. control), collect:

     • Sample mean (x̄)
     • Sample variance (s²)
     • Sample size (n)

3. Calculate the Test Statistic

   t = (x̄_test - x̄_control) / √(s²_test/n_test + s²_control/n_control)

4. Calculate the p-value

   Determine the probability of observing a difference this large by chance alone.

5. Make a Decision

   If p < 0.05, reject H₀ and conclude the promotion had an effect.

6. Calculate Confidence Intervals

   CI = (x̄_test - x̄_control) ± t_α/2 × √(s²_test/n_test + s²_control/n_control)

Net AOV Calculation Example

Step 1: Define Your Variables

For Test Group (with promotion):

GOV (Gross Order Value): The order total before any discount is applied

DA (Discount Amount): The cost of the promotion to the company

NOV (Net Order Value): What the company actually earns (NOV = GOV - DA)

For Control Group (no promotion):

Order Value (OV): Since no discount is applied, Gross = Net

Step 2: Calculate Total Values

For Test Group:

TotalNetRevenue_test = Σ(NetSpend_i) for all users i in test group

For Control Group:

TotalRevenue_control = Σ(OrderValue_i) for all users i in control group

Step 3: Calculate AOV Values

For Test Group:

NetAOV_test = TotalNetRevenue_test / NumberOfOrders_test

For Control Group:

AOV_control = TotalRevenue_control / NumberOfOrders_control
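Steps 1-3 can be sketched in a few lines; the per-order records here are hypothetical:

```python
# Hypothetical per-order records for the test group (20% discount applied)
test_orders = [
    {"gross": 28.00, "discount": 5.60},
    {"gross": 30.00, "discount": 6.00},
    {"gross": 26.00, "discount": 5.20},
]

# NOV = GOV - DA, summed over orders, then divided by the order count
total_net_revenue = sum(o["gross"] - o["discount"] for o in test_orders)
net_aov = total_net_revenue / len(test_orders)
print(round(net_aov, 2))  # 22.4
```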

Step 4: Applied Example with Data

Control Group:

  • AOV (x̄_control) = $25
  • Standard deviation (s_control) ≈ $10
  • n_control = 2,000 orders

Test Group:

  • Gross order value = $28.00
  • Discount per order = $5.60 (20% of $28.00)
  • Net AOV (x̄_test) = $22.40 ($28.00 - $5.60)
  • Standard deviation (s_test) ≈ $11
  • n_test = 2,100 orders

Step 5: Significance Testing

Difference of Means:

ΔAOV = x̄_test - x̄_control = $22.40 - $25.00 = -$2.60

T-Statistic Calculation:

t = (x̄_test - x̄_control) / √(s²_test/n_test + s²_control/n_control)

t = (-2.60) / √((11²/2100) + (10²/2000))

t = -2.60 / √(0.0576 + 0.05) = -2.60 / √0.1076 = -2.60 / 0.328 = -7.9

Result:

p-value < 0.001 (well below the 0.05 threshold; with |t| ≈ 7.9 the p-value is vanishingly small)

Interpretation: The promotion increased gross AOV but decreased net AOV after accounting for discount costs. This difference is statistically significant (p < 0.05), meaning the promotion is actually hurting net revenue per order.
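The calculation above can be reproduced in a few lines using the normal approximation, which is reasonable at these sample sizes; the figures are the example's:

```python
import math
from statistics import NormalDist

def two_sample_t(mean_t, s_t, n_t, mean_c, s_c, n_c):
    """t-statistic and two-sided p-value for a difference in means
    (unequal variances; normal approximation, fine for n in the thousands)."""
    se = math.sqrt(s_t**2 / n_t + s_c**2 / n_c)
    t = (mean_t - mean_c) / se
    p = 2 * (1 - NormalDist().cdf(abs(t)))
    return t, p

t, p = two_sample_t(22.40, 11, 2100, 25.00, 10, 2000)
print(round(t, 1), p < 0.001)  # -7.9 True
```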

Lift Calculation Formulas

Conversion Lift = (CR_test - CR_control) / CR_control
Example: (8.1% - 5.3%) / 5.3% = 52.8% lift
Revenue Lift = (Rev_test - Rev_control) / Rev_control
Make sure to use net revenue after discounts
ROI = (Additional Revenue - Cost of Promotion) / Cost of Promotion
Positive ROI indicates profit, negative indicates loss
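The three formulas as code; the conversion rates come from the chart above, while the incremental revenue figure is hypothetical (the promotion cost equals the example's 2,100 orders × $5.60 discount):

```python
def lift(test, control):
    """Relative lift of test over control (works for CR or net revenue)."""
    return (test - control) / control

def roi(incremental_net_revenue, promo_cost):
    """Return on promotion spend; positive means profit, negative means loss."""
    return (incremental_net_revenue - promo_cost) / promo_cost

print(round(lift(0.081, 0.053) * 100, 1))   # 52.8 (% conversion lift)
print(round(roi(15_000, 2_100 * 5.60), 2))  # ROI on hypothetical $15,000 incremental net revenue
```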
📊

Statistical Significance

Understanding p-values

A p-value below 0.05 means that, if the promotion truly had no effect, a difference at least this large between test and control would occur by chance less than 5% of the time.

Practical Significance

Statistical significance doesn't always equal business relevance. Even tiny differences can be statistically significant with large sample sizes.

Common Mistakes

  • Stopping the experiment as soon as results look significant (peeking, a form of p-hacking)
  • Running too many tests without correction (multiple testing problem)
  • Ignoring practical significance
  • Not accounting for seasonality or other external factors
🔍

Decision Framework

When to Deploy a Promotion

  • Statistically significant positive impact on primary metrics
  • Positive ROI calculation
  • No significant negative impact on secondary metrics
  • Consistent results across key user segments

When to Iterate

  • Mixed results (some metrics improve, others decline)
  • Positive for some segments but not others
  • Positive but small effect size

When to Reject

  • Negative impact on primary metrics
  • Negative ROI
  • Significant negative impact on any critical secondary metrics
🧠

Promotion Testing Knowledge Graph

Promotion Testing
Business Objectives
Key Metrics
Experiment Design
Statistical Analysis
Customer Retention
Net AOV
A/B Testing
t-test

This knowledge graph visualizes the key components and their relationships in promotion effectiveness testing.

🔄

Key Relationships in Promotion Testing

Business Objectives → Metrics

Your business goals directly determine which metrics are most important to track. For example, if customer retention is the goal, then repeat purchase rate becomes a primary metric.

Metrics → Experiment Design

The metrics you choose influence how you design your experiment, including sample size calculations and test duration.

Experiment Design → Statistical Analysis

The way you design your experiment determines the appropriate statistical methods to use for analysis.

Statistical Analysis → Decision Making

The results of your statistical analysis (including both statistical and practical significance) guide your ultimate business decision about whether to deploy a promotion.

Integrated Approach

An effective promotion testing framework integrates all these components in a coherent, end-to-end process that aligns with business goals while maintaining statistical rigor.

Machine Learning Personalization

Instead of one-size-fits-all promotions, use ML to deliver personalized offers:

Predicted Response Models

Train ML models to predict which users have the highest propensity to respond to different promotion types.

Segmentation

Group users based on behavioral patterns and tailor promotions to each segment.

Contextual Factors

Factor in time of day, day of week, weather, and other context to optimize timing.

🎰

Multi-Armed Bandit

Instead of static A/B testing, use adaptive algorithms:

How It Works

Continuously allocate more traffic to better-performing promotions in real-time, minimizing opportunity cost.

Thompson Sampling

Balance exploration (trying different promotions) and exploitation (using the best-performing ones).

Advantages

More efficient than traditional A/B testing, especially in high-traffic environments with multiple variants.
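A minimal Thompson sampling loop with Beta posteriors over conversion rates; the success/failure counts per variant are hypothetical:

```python
import random

def pick_variant(stats):
    """stats: list of (successes, failures) per promotion variant.
    Sample each variant's conversion rate from its Beta posterior and
    show the variant whose sampled rate is highest."""
    samples = [random.betavariate(s + 1, f + 1) for s, f in stats]
    return samples.index(max(samples))

random.seed(42)
stats = [(50, 950), (80, 920), (55, 945)]  # variant 1 converts at ~8% vs ~5%
picks = [pick_variant(stats) for _ in range(1_000)]
print(picks.count(1))  # variant 1 receives the large majority of traffic
```

In production the counts update after every impression, so traffic shifts toward winners automatically while weaker variants still get occasional exploration.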

📈

Long-Term Impact Analysis

Looking beyond immediate conversion:

Customer Lifetime Value (CLV)

Measure how promotions affect the long-term value of customers, not just immediate purchases.

Cohort Analysis

Track users who redeemed promotions vs. those who didn't over extended periods (30/60/90 days).

Survival Analysis

Use statistical techniques to analyze time until customer churn between promotion groups.
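A toy cohort calculation; the dates are hypothetical, and "retained at N days" is simplified to "placed an order at least N days after joining the cohort":

```python
from datetime import date

cohort_start = date(2024, 1, 1)  # day this cohort redeemed the promotion
last_order = [date(2024, 1, 20), date(2024, 2, 15), date(2024, 4, 2), date(2024, 1, 5)]

def retained(days):
    """Share of the cohort whose last order is at least `days` after cohort start."""
    return sum((d - cohort_start).days >= days for d in last_order) / len(last_order)

print(retained(30), retained(90))  # 0.5 0.25
```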

🌎

Geo-Level Experimentation

When user-level randomization is not feasible:

Region-Based Testing

Assign different geographic regions to test and control conditions.

Difference-in-Differences

Statistical technique that accounts for pre-existing differences between regions and time-based trends.

Matched Markets

Select test and control regions with similar historical performance to increase comparability.
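Difference-in-differences reduces to simple arithmetic once period averages are in hand; the revenue figures below are hypothetical:

```python
# Hypothetical average weekly revenue per region, before and during the promotion
pre_test, post_test = 100_000, 118_000  # test region
pre_ctrl, post_ctrl = 95_000, 103_000   # control region

# DiD strips out the shared time trend captured by the control region
did = (post_test - pre_test) - (post_ctrl - pre_ctrl)
print(did)  # 10000: promotion-attributable revenue lift
```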

🔄

Continuous Testing Framework

Build Experimentation Culture

  • Create hypothesis library
  • Document learnings from each test
  • Promote data-driven decisions
  • Celebrate insights, not just "wins"

Automation & Infrastructure

  • Build experimentation platform
  • Automate analysis pipelines
  • Create reusable templates
  • Set up monitoring dashboards

Progressive Refinement

  • Start with broad tests
  • Segment and drill down
  • Personalize at scale
  • Iterate on successful concepts