In my last post, I compared AEM vs SKAN vs Adjust’s own modelling. A lot of people reacted to it, so thank you!
One of the pieces of advice I got on the post was from Matej 🦩 Lancaric. Let me copy-paste the comment for you:
“Add CPP to the AEM campaign and then benchmark against Appstore connect. Plus take 2-3 days to measure performance. Check your 8-14.10 dates on Friday and share the numbers. Should be closer”
So… here I am again 😁😂
I couldn’t set up a CPP in these 2 days, but luckily the AEM campaign I shared is the only one I’m running for iOS at the moment, so I can still look at the app-referrer traffic in ASC and assume that all the installs and purchases coming from that source were generated by my campaign.
I will share the differences again, following the same Excel as in the last post but with updated numbers.
👉Analyzed period: 08.10-14.10
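For anyone who wants to replicate the comparison outside of Excel, here is a minimal sketch of the kind of check I’m describing: put the AEM-modeled figures next to the ASC app-referrer figures for the same date window and compute the relative difference. All numbers and names below are placeholders I made up for illustration, not my actual data.

```python
# Hypothetical sketch: AEM-modeled numbers vs. ASC app-referrer numbers
# for the same date window. All figures are placeholders, not real data.

def discrepancy_pct(aem_value: float, asc_value: float) -> float:
    """Relative difference of the AEM figure vs. the ASC benchmark, in %."""
    return (aem_value - asc_value) / asc_value * 100

# Made-up example values for the 08.10-14.10 window
metrics = {
    "installs":  {"aem": 420, "asc_app_referrer": 400},
    "purchases": {"aem": 35,  "asc_app_referrer": 33},
}

for name, vals in metrics.items():
    diff = discrepancy_pct(vals["aem"], vals["asc_app_referrer"])
    print(f"{name}: AEM={vals['aem']} vs ASC={vals['asc_app_referrer']} -> {diff:+.1f}%")
```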
𝐂𝐎𝐍𝐂𝐋𝐔𝐒𝐈𝐎𝐍𝐒
🟢Matej was f*** right hahaha 👏 👏 👏 (not a surprise for me tbh 😂 but for the few people who don’t know him, you must follow him!). The smallest discrepancy was between AEM and ASC. I guess that with a CPP the discrepancies would be even lower, so I recommend testing this as a more direct way of tracking the real effectiveness of iOS campaigns.
🟢I checked the “Sales” figure in ASC and compared it to the purchase value modeled by AEM, and the difference is just $19!!!! 🤯🤯
I will keep monitoring the 3 scenarios, but honestly I was not expecting AEM to match ASC this closely. I always call out both the positives and the negatives of the ad networks, and in this case I have to admit that Meta is ahead of the iOS/SKAN game with this.