When I send out weekly performance summaries to my clients, I often focus on just a few key takeaways. For instance:
Campaign A is generating leads at $5/lead while Campaign B is coming in at $15/lead. I’ve shifted most of the budget from Campaign B to Campaign A, but started an A/B test on Campaign B’s landing page to see whether its performance can be improved.
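The cost-per-lead comparison behind that summary is simple arithmetic: total spend divided by leads generated. A minimal sketch in Python, using assumed spend and lead counts chosen only to reproduce the $5 and $15 figures from the example:

```python
# Hypothetical spend and lead counts -- the $5/lead and $15/lead results
# are from the example above; the underlying figures are assumptions.
campaigns = {
    "Campaign A": {"spend": 500.0, "leads": 100},
    "Campaign B": {"spend": 450.0, "leads": 30},
}

for name, c in campaigns.items():
    cpl = c["spend"] / c["leads"]  # cost per lead
    print(f"{name}: ${cpl:.2f}/lead")
```

With numbers like these, a 3x gap in cost per lead is what justifies both the budget shift and the decision to test Campaign B's landing page rather than cut it entirely.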
These reports focus on what happened and what is about to happen. What’s missing in these emails, and in discussions around measurement in general, is what didn’t happen. In other words, what mistakes did we avoid because we had data pointing us in another direction?