Things to consider for Fractional Revenue Attribution

Revenue attribution is one of the hottest topics in marketing these days. The proliferation of online media requires reshuffling marketing spend across many more spend categories. Traditional funnel-engineering work is useful, but it is static and does not address two key issues:

1) The transient nature of marketing spend effectiveness, which comes and goes with changing keywords, banners, and offers.

2) It does not address the problem in a customer-centric manner (after all, orders are placed by customers who clicked on a keyword or received a catalog, not by the keyword or the catalog itself).

The new marketing spend effectiveness paradigm involves understanding the causal relationship between marketing and sales at a transactional level, using statistical methods to fractionally attribute revenue across touches. There are five elements at play (a toy sketch follows the list):

  1. Order of events: what sequence of actions leads to sales transactions
  2. Combined effects: what are the joint effects of marketing touches
  3. Frequency: how many touches are required to convert a prospect into a buyer
  4. Time decay: how the effect of marketing on sales decays as time passes
  5. Effectiveness: how the relative efficacy of each vehicle differs (e.g., a banner view does not have the same effectiveness as a 52-page catalog)
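
To make these elements concrete, here is a deliberately simple, hypothetical sketch of fractional attribution in Python. It touches only a few of the elements above (time decay, per-vehicle effectiveness, and, implicitly through the timestamps, the order and frequency of touches), and every vehicle, weight, half-life, and dollar amount in it is a made-up assumption for illustration, not a description of any production methodology.

    # A toy fractional-attribution sketch -- illustrative only; all weights,
    # dates, and amounts are made-up assumptions.
    from collections import defaultdict
    from datetime import datetime

    # Hypothetical touch history for one customer: (vehicle, timestamp)
    touches = [
        ("catalog",      datetime(2011, 3, 1)),
        ("banner_view",  datetime(2011, 3, 10)),
        ("paid_keyword", datetime(2011, 3, 14)),
    ]
    order_date = datetime(2011, 3, 15)
    order_revenue = 120.00

    # Element 5 (Effectiveness): assumed relative efficacy of each vehicle.
    vehicle_weight = {"catalog": 1.0, "banner_view": 0.2, "paid_keyword": 0.6}

    # Element 4 (Time decay): assume the marketing effect halves every 7 days.
    half_life_days = 7.0

    def touch_score(vehicle, ts):
        """Weight a touch by its vehicle's efficacy, decayed by its age at order time."""
        days_before_order = (order_date - ts).days
        decay = 0.5 ** (days_before_order / half_life_days)
        return vehicle_weight[vehicle] * decay

    scores = [(vehicle, touch_score(vehicle, ts)) for vehicle, ts in touches]
    total_score = sum(score for _, score in scores)

    # Fractional attribution: split the order's revenue across the touches
    # in proportion to their scores.
    attributed = defaultdict(float)
    for vehicle, score in scores:
        attributed[vehicle] += order_revenue * score / total_score

    for vehicle, credit in attributed.items():
        print(f"{vehicle:>12}: ${credit:.2f}")

In a real model the weights and decay rates would be estimated statistically from transaction data rather than asserted up front, which is precisely the part that is hard to do well.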

The way this problem is expressed in mathematical terms and solved is quite sophisticated, and I cannot get into it here since it is our core IP at Agilone.

Once an attribution can be made, the next issue is how to measure the effects of overspending, which I will get into in the next post. The inherent problem in fractional attribution is accounting for the fact that increasing marketing spend on one vehicle will most likely (and not necessarily causally) reduce the effectiveness of the other existing spend elements.

Simpson's Paradox: A fundamental but common mistake when analyzing multiple A/B tests

Most marketers test various concepts through simple A/B testing or more advanced structured testing approaches (fractional factorials, Taguchi methods, etc.). A common mistake many analysts make is calculating the lift on the overall results rather than on the individual tests. When the test and control groups have similar proportions across segments, the two approaches lead to the same answer; when they do not, the answers differ and can even lead to contradictory conclusions, as outlined below.

Suppose we have two groups within our file (there could be more, but for simplicity we'll stick with two). These two groups could be anything; for example:

  • Male versus female
  • People who bought X versus people who didn't
  • People who are highly responsive to marketing versus people who are not
  • High-value customers versus low-value customers

Suppose we’re trying to examine the effectiveness and lift of a specific campaign on these groups. 

Examine the calculation below:

Segment             | Test Marketed | Test Resp. | Test RR | Ctrl Marketed | Ctrl Resp. | Ctrl RR | Lift | Incr. Responders
Group 1 (X)         |          1000 |         60 |    6.0% |           200 |          4 |    2.0% | 4.0% |               40
Group 2 (Y)         |           500 |         10 |    2.0% |           400 |          4 |    1.0% | 1.0% |                5
Group 1 + 2 (sum)   |               |            |         |               |            |         |      |               45
Total (X+Y)         |          1500 |         70 |    4.7% |           600 |          8 |    1.3% | 3.3% |               50

(Lift = Test response rate - Control response rate; Incremental Responders = Lift x Test Marketed.)


If you follow the above example, the incremental customers coming in from each group add up to 40 + 5 = 45. However, when we pool the test and control groups into a single group of 1,500 test and 600 control subjects, the incremental customers come out to 50.
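
If you want to check the arithmetic yourself, here is a short Python sketch that reproduces both calculations using only the marketed and responder counts from the table above:

    # Reproduce the two calculations from the table (figures taken as given).
    groups = {
        "Group 1": {"test_n": 1000, "test_resp": 60, "ctrl_n": 200, "ctrl_resp": 4},
        "Group 2": {"test_n": 500,  "test_resp": 10, "ctrl_n": 400, "ctrl_resp": 4},
    }

    # Per-group: lift = test response rate - control response rate,
    # incremental responders = lift * number marketed in the test cell.
    per_group_total = 0.0
    for name, g in groups.items():
        lift = g["test_resp"] / g["test_n"] - g["ctrl_resp"] / g["ctrl_n"]
        incremental = lift * g["test_n"]
        per_group_total += incremental
        print(f"{name}: lift = {lift:.1%}, incremental responders = {incremental:.0f}")

    # Pooled: collapse both groups into a single test cell and a single control cell.
    test_n = sum(g["test_n"] for g in groups.values())        # 1500
    test_resp = sum(g["test_resp"] for g in groups.values())  # 70
    ctrl_n = sum(g["ctrl_n"] for g in groups.values())        # 600
    ctrl_resp = sum(g["ctrl_resp"] for g in groups.values())  # 8
    pooled_total = (test_resp / test_n - ctrl_resp / ctrl_n) * test_n

    print(f"Sum of per-group incremental responders: {per_group_total:.0f}")  # 45
    print(f"Pooled incremental responders:           {pooled_total:.0f}")     # 50

Both numbers come straight from the same raw counts; only the level of aggregation differs.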

How come? Why are the two results different? Which one is correct? Let me know what you think at omer.artun@agilone.com. The answer will come in a few days…


Ömer Artun, Ph.D.,

Managing Director | Agilone