The 11 Sins of Mobile Marketing: Lessons from Growth Audits

Peter Fodor

As the mobile game industry matures, many companies struggle to keep up with the latest trends and adapt through innovative strategies. As a solution, mobile growth audits like GamePlan™ have become an increasingly popular method for identifying gaps in businesses’ growth plans and unlocking their maximum scalability potential.

GamePlan™ is a 360° growth diagnostic for mobile games. Through it, we analyze your past UA efforts, ASO performance, competitive landscape, tech stack, and monetization strategies. We then use these insights to build a growth strategy and an actionable growth backlog. Read about how GamePlan™ helped Etermax’s Trivia Crack in this case study.

In this article, we will present 11 key learnings we’ve gained from conducting over 30 mobile growth audits at AppAgent. 

The 11 Sins of Mobile Marketing

  1. Wrath: The Absence of an “Integrated” Growth Model 
  2. Stagnation: Lacking a Clear UA Creative Testing Methodology 
  3. Sand-bagging: Brand Keeps Creative Testing Stuck in the Mud 
  4. Gluttony: Failure to Take Advantage of Pricing Optimization & Discount Flow 
  5. Envy: Missing a Proper ASO Methodology 
  6. Neglect: Failure to Reap the App Store’s Low-Hanging Fruit 
  7. Sloth: Struggles with iOS Evaluation and SKAN 
  8. Greed: Lacking the Methodology to Predict User LTVs 
  9. Over-simplification: The Use of Only a Basic Prediction Event 
  10. Ignorance: Not Understanding User Behavior 
  11. Pride: Not Capitalizing on New Opportunities

1. Wrath: The Absence of an “Integrated” Growth Model

“Sir, the good news is that we have finally created the Growth Model. The bad news is that we have no idea how to use it!”

Do you have a system that can easily answer questions like:

  • “What would be the revenue impact if we increased the store conversion rate by 5%?”
  • “If the CPM dropped from $15 to $7, would we still break even before D90?”
  • “If we adjusted our subscription offerings and shifted 50% of users from Monthly to Yearly, would we save enough cash to triple our UA budget when the high season begins?”

If you don’t have such a system, don’t worry, you’re not alone.

While there may be a few exceptions, we rarely come across a company that has a properly integrated Growth Model. Such a model would enable individuals to understand the funnel metrics in context and comprehend how certain growth drivers impact overall performance, ultimately leading to better business decisions.

Teams often work with disconnected components of a Growth Model, where each initiative has its own set of KPIs (e.g., increase the store conversion rate by 5%). Without the full context, these KPIs tend to dominate the focus of their owners, restricting their perspective and occasionally leading to a limited (and sometimes incorrect) interpretation of success.

On the other hand, being able to understand and model various what-if scenarios throughout the whole funnel (ASO, paid UA, monetization) greatly empowers teams to prioritize what truly matters.

To address this, businesses should strive to implement a system that:

  • Can model different “what-if” scenarios to assess the impact of metrics on crucial KPIs (revenue, margin, cLTV, payback curves, cash flow).
  • Allows teams to understand the entire UA funnel in context, as well as the level of their potential influence.
  • Keeps the tooling simple. Spreadsheets with easy-to-adjust variables are sufficient for most cases, even when ingesting raw data from your game. More advanced solutions include BI tools such as Looker or Tableau.
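
To make this concrete, below is a minimal Python sketch of the first what-if question above (what happens to revenue if the store conversion rate improves by 5%?). All funnel numbers are invented for the example, and the same multiplicative logic fits comfortably in a spreadsheet:

```python
def projected_revenue(impressions, ad_ctr, store_cvr, ltv):
    """Revenue from a simple paid UA funnel:
    impressions -> clicks -> installs -> revenue."""
    installs = impressions * ad_ctr * store_cvr
    return installs * ltv

# Illustrative baseline: 1M impressions, 2% ad CTR, 30% store CVR, $4 LTV
baseline = projected_revenue(1_000_000, 0.02, 0.30, 4.0)
# Same funnel with a 5% relative uplift in store conversion rate
uplift = projected_revenue(1_000_000, 0.02, 0.30 * 1.05, 4.0)

print(f"Baseline revenue: ${baseline:,.0f}")
print(f"With +5% store CVR: ${uplift:,.0f} ({(uplift / baseline - 1):.0%} more)")
```

Because the funnel is multiplicative, a 5% relative gain in store CVR flows straight through to a 5% revenue gain; the model's real value appears once retention curves, cash flow, and seasonality are layered on top.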

Pro Tip: Given the notable decline in VC funding, many companies rely solely on their own revenue to support paid UA. Therefore, it is crucial, now more than ever, to prioritize modeling, understanding, and effectively managing cash flow.

2. Stagnation: Lacking a Clear UA Creative Testing Methodology

“People told me that cats work best. At least on TikTok.”

I’m sure we can all agree that it’s tough to find new ad creatives that outperform previous “winners.” It can often feel like a game of luck, but there can be a method to the madness (check the Creative Testing Guide we’ve published).

In reality, we still come across teams that lack a proper methodology for testing creatives. Even when teams do have a methodology in place, they often fail to follow up on the results. There is a lack of effort to dissect the findings of creative testing, and the learnings are typically limited to simplistic conclusions of “works” or “does not work,” without delving into the reasons why. It’s not difficult to understand that in such an environment, learning and improvement become practically impossible.

To address this, businesses should:

  • Establish a reliable creative testing methodology. Creative testing is all about discovering the “recipe” for ads that truly resonate with your audience. It’s also about being prepared to pivot when your winning creative starts to show signs of fatigue, ensuring you’re always one step ahead in driving performance.
  • Implement a creative strategy and a plan of creative directions to be tested. This strategy should be regularly refined based on the testing results as well as ongoing competitive research.
  • Introduce “Creative Deconstruction” meetings. These involve analyzing the hook, core, and end parts of different groups of creatives to identify patterns and gather insights. The first group includes your own ads, the second focuses on your closest competitors, and the third examines winning ads in your category—and often beyond—to seek inspiration. This process can be streamlined by using AI creative tagging tools, which handle initial classification and provide valuable starting points for the creative strategist.
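To illustrate the evaluation step, here is a small sketch of a two-proportion z-test, a common way to check whether a challenger creative's install rate genuinely beats the control rather than winning by luck. The impression and install counts are made up for the example:

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test: does creative B's conversion rate differ from A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal-tail p-value
    return z, p_value

# Control: 300 installs / 50k impressions; challenger: 380 / 50k
z, p = two_proportion_z(300, 50_000, 380, 50_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # small p => the lift is unlikely to be noise
```

Note that significance only tells you the difference is real; the “Creative Deconstruction” work described above is what tells you why it happened.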

Pro Tip:  Enhance your creative testing approach with our Creative Testing Guide. This comprehensive resource is the result of years of experience and insights gathered from UA managers at renowned companies like Scopely, Miniclip, Applovin, Small Giant Games, Pixel Federation, and many others.

3. Sand-bagging: Brand Keeps Creative Testing Stuck in the Mud

“Quit lookin’ on ‘em. Those ain’t no brand colors of ours.“

The eternal struggle between performance and brand appears to be as ancient as the battle between good and evil. However, like all things, achieving a balance is essential.

Attempting to ensure that every ad being tested aligns perfectly with every brand-related detail is an enormous undertaking that consumes significant time and effort. From our experience, we find this task to be mostly futile, yet regrettably common.

To address this, businesses should:

  • Test first, stress second. By allocating a small budget, you can test out ideas without being overly concerned about the brand aspect, since exposure is typically limited. Only when the ideas prove successful should you circle back and allow the brand team to fine-tune them.

Pro Tip: Think of the brand as a set of boundaries that define the game’s personality – it’s easier to say what is “over the edge” and then let everything else pass unrestricted. Drop the obsession over details and give your team space for experimentation to maximize the ROI of performance campaigns.

4. Gluttony: Failure to Take Advantage of Pricing Optimization & Discount Flow

“If only I pushed the yearly subs a bit more… “

The fundamentals of Player Relationship Management (PRM, more broadly referred to as CRM) are often overlooked or neglected altogether.

While CRM can be a complex realm with endless possibilities for personalization and intricate workflows, sometimes even the basic elements are not executed effectively. Essential components like “discount flows,” “win-back scenarios,” or simple yet impactful in-app and push messages are frequently missed. Implementing these elements properly can have a substantial positive impact on revenue and customer retention.

To address this, businesses should:

  • Prioritize the essentials. In gaming, focusing on UA, monetization, retention, and A/B testing is critical for sustained growth. Even simple actions, like optimizing ad creatives or refining in-game purchases, can lead to significant improvements. By consistently testing and iterating on each of these areas, you’ll see measurable gains in both short-term performance and long-term player loyalty.
  • Measure the impact. Before implementing any changes, ensure that you have a clear understanding of the potential effects they may have and how to measure them.
  • Explore available options. Keep an eye out for new tools like Assetario, which can dynamically suggest personalized discounts to users who are less likely to pay the full price, and adjust prices for those who show potential to pay more. These straightforward tactics can generate additional revenue of 20% or more, all without the need for SDK integration.
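A quick expected-value sketch shows why tools of this kind can pay off. The price point and conversion probabilities below are entirely hypothetical, purely to illustrate the mechanics:

```python
def expected_revenue_per_user(price, purchase_prob):
    """Expected revenue from offering a price to a segment with an
    estimated probability of purchasing at that price."""
    return price * purchase_prob

# Hypothetical low-propensity segment: 2% convert at full price,
# 4% convert when offered a 30% discount
full_price = expected_revenue_per_user(9.99, 0.02)
discounted = expected_revenue_per_user(9.99 * 0.70, 0.04)
print(f"Full price: ${full_price:.3f}/user vs. discounted: ${discounted:.3f}/user")
```

In this toy example, the discount yields 40% more expected revenue per user. The real work is estimating those purchase probabilities per segment, which is exactly what personalization tools attempt to do.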

Pro tip: It’s important not to overlook elements like personalization in your communication strategy. Neglecting personalization can lead to generic messaging that feels disconnected from the user’s experience.

5. Envy: Missing a Proper ASO Methodology

“ASO Methodo…mehot… lodo.. Go… dology… logo.. Ho… “

Although very few companies neglect ASO (if anything, many tend to overinvest in it), we often observe a lack of a systematic approach. Instead of building structured hypotheses (we use the 4-Step framework by Meclabs Institute at AppAgent) and maintaining a backlog of experiments to learn from, we often see the “throw spaghetti at the wall and see what sticks” approach.

This holds especially true for Conversion Rate Optimization (CRO), as many marketers lack a proper understanding of how to approach it. As a result, they often end up wasting time and resources testing insignificant changes (like swapping colors and tweaking layouts) whose impact is nearly impossible to measure.

To address this, businesses should:

  • Calculate the business value associated with potential CRO uplifts before hiring an agency and potentially wasting resources. 
  • Appoint an ASO manager (or agency) who possesses a comprehensive understanding of testing and evaluation methodologies, including sample-size determination and both frequentist and Bayesian evaluation.
  • Adopt a recurring framework and develop a long-term plan. Several frameworks are available, each of which significantly surpasses having no framework at all.
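As a concrete example of the sample-size part, here is a standard normal-approximation calculation (roughly 95% confidence and 80% power). The 30% baseline CVR and 5% relative lift are illustrative assumptions:

```python
from math import ceil

def sample_size_per_arm(base_cvr, relative_mde, z_alpha=1.96, z_beta=0.84):
    """Visitors needed per variant to detect a relative lift `relative_mde`
    over `base_cvr` (two-sided test, normal approximation)."""
    p1 = base_cvr
    p2 = base_cvr * (1 + relative_mde)
    delta = p2 - p1
    p_bar = (p1 + p2) / 2
    return ceil((z_alpha + z_beta) ** 2 * 2 * p_bar * (1 - p_bar) / delta ** 2)

# Detecting a 5% relative lift on a 30% store CVR (i.e. 30% -> 31.5%)
print(sample_size_per_arm(0.30, 0.05))  # roughly 15k visitors per variant
```

The point isn’t the exact number; it’s that testing a change you can’t detect at your traffic level wastes a slot in the experiment backlog.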

Pro Tip: As with every tool and tactic, there’s a time and place for everything. In our eyes, this is especially true with ASO, which should be a priority when launching a new app (the initial setup and CRO) and then revisited when scaling paid user acquisition. At that point, increasing the store conversion rate by linking top-performing ads with your listing can dramatically improve your ROI.

6. Neglect: Failure to Reap the App Store’s Low-Hanging Fruit

“That low hanging fruit is hanging pretty high for my standards!“

The three most accessible and effective ASO activities, commonly referred to as “low-hanging fruit,” include localization, addressing reviews, and leveraging the simple yet powerful “Mexican Hack” to exponentially increase indexed keywords. 

To address this, businesses should:

  • Localize their store listings. This simple and labor-efficient task can yield significant improvements in conversion rate (CVR), consequently boosting rankings and downloads in specific countries.
  • Reply to reviews. This is a great opportunity to utilize brand language!
  • Implement the “Mexican Hack” strategy. Interestingly, Mexican localization is indexed by the US App Store as well. Unless you have a substantial presence in Mexico, leveraging the Mexican localization can be a clever tactic to add more English keywords and ensure their indexing. 

Pro Tip: The first two activities can be greatly simplified by using AI tools like ChatGPT. AppFollow, for example, has already implemented auto-responses in their UI, so you don’t even have to leave the ASO tool’s interface to address positive and negative reviews in both app stores.

7. Sloth: Struggles with iOS Evaluation and SKAN

“If a machine, a terminator, can learn the value of human life, maybe we can, too.” — Tim Apple

User acquisition on iOS is closely tied to Apple’s App Tracking Transparency (ATT) framework. However, teams almost always lack a complete understanding of its intricacies, limitations, and how to effectively incorporate it into their specific use cases.

To address this, businesses should:

  • Know their enemy well. There are plenty of resources to read and study (AppsFlyer, Adjust, and Singular each publish a SKAN guide), so assign one team member to fully grasp the concept and understand its limitations to the best of their ability.
  • Evaluate Blended Traffic/Revenue:
    • When to use it: Ideal for initial tests or when launching in new markets to establish a baseline for performance trends.
    • How to implement: Use MMP dashboards to analyze aggregate data or App Store Connect revenue to identify patterns without relying solely on SKAN postbacks.
  • Leverage Custom Product Pages (CPP)
    • When to use it: Best for running campaigns in verticals like gaming, where creative performance and audience alignment are crucial.
    • How to implement: Create and test different CPPs in App Store Connect. Pair them with tailored ad creatives to improve conversion rates and attribution clarity.
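For the blended evaluation, the arithmetic can be as simple as the sketch below. The spend, install, and organic-baseline figures are invented, and estimating the organic baseline (e.g., from pre-campaign periods) is the genuinely hard part:

```python
def blended_metrics(spend, store_installs, store_revenue, organic_baseline):
    """Blended CPA/ROAS from aggregate store data, for when SKAN postbacks
    are too sparse: subtract an estimated organic baseline to approximate
    paid installs."""
    paid_installs = max(store_installs - organic_baseline, 0)
    cpa = spend / paid_installs if paid_installs else float("inf")
    roas = store_revenue / spend if spend else 0.0
    return {"paid_installs": paid_installs,
            "blended_cpa": round(cpa, 2),
            "blended_roas": round(roas, 2)}

print(blended_metrics(spend=10_000, store_installs=8_000,
                      store_revenue=12_500, organic_baseline=3_000))
```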

8. Greed: Lacking the Methodology to Predict User LTVs

“Maybe I should have just told them it was 20 USD..”

Despite the fact that everyone (now even your Grandma) knows that Customer Lifetime Value (cLTV) is an essential element of any sensible marketing operation, calculating and predicting this metric properly is still a challenge for many.

Teams are aware that Lifetime Value (LTV) and cLTV differ greatly from segment to segment, but we rarely see long-term curves with relevant breakdowns (platform, country, channel, and subscription type) that can be observed across different timeframes for proper back-testing and analysis.

Shaky methodology leads to inaccurate predictions and bad decisions – fix it!

To address this, businesses should:

  • Always break down LTV and cLTV by platform, country, UA channel, and timeframe.
  • Invest in implementing a robust system that utilizes raw subscription data and enables additional analysis and enhanced accuracy.
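One common lightweight approach (a sketch, not the only valid method) is to fit a power curve to each segment's observed cumulative LTV and extrapolate it forward. The sample points below are invented for a single hypothetical segment:

```python
from math import exp, log

def fit_power_ltv(days, ltvs):
    """Fit LTV(d) = a * d^b by least squares on log-log data and return
    a function that extrapolates the cumulative LTV curve."""
    xs, ys = [log(d) for d in days], [log(v) for v in ltvs]
    n = len(xs)
    x_bar, y_bar = sum(xs) / n, sum(ys) / n
    b = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
         / sum((x - x_bar) ** 2 for x in xs))
    a = exp(y_bar - b * x_bar)
    return lambda d: a * d ** b

# One segment's observed cumulative LTV (e.g., iOS / US / one UA channel)
days = [1, 3, 7, 14, 30]
ltv = [0.40, 0.70, 1.05, 1.45, 2.00]
predict = fit_power_ltv(days, ltv)
print(f"Predicted D90 LTV: ${predict(90):.2f}")
```

Fitting the curve per breakdown (platform, country, channel, subscription type) and re-checking predictions against realized revenue at later timeframes is what turns this from a toy into a methodology.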

9. Over-simplification: The Use of Only a Basic Prediction Event

“My balls say you shall optimize for three trials and three AI fingers..“

Although the basic rules for selecting an optimization event are well known and used by just about everyone (the event has to happen frequently enough, and fairly soon after the first launch), the focus tends to end there.

Despite the fact that such an event provides a critically important signal to the optimization algorithm, not much attention is given to its definition and testing.

To address this, businesses should:

  • Consider more events. Two different products can attract very different audiences (think significantly different price tiers). Optimizing for just a single event (e.g., a purchase) will average these out and thus greatly dilute the power of optimization. Consider testing these additional events to capture more accurate user signals:
    • Level 5 Reached on Day 1 (D1)
      • It shows high engagement, signaling a player’s potential for long-term retention and higher LTV.
    • Three Rewarded Video Ads Watched in the First Session.
      • It indicates strong monetization potential and helps the algorithm target users who engage with ads.
    • Joined a Clan Within 72 Hours After First Open
      • It reflects social engagement, suggesting a higher likelihood of retention and organic growth through referrals.
  • Enrich “basic” events with engagement factors. For example, not all players are equally engaged, as many tend to drop off after their first session. You might prefer to optimize for players who show stronger engagement signals. For example, rather than just tracking tutorial completion, consider optimizing for players who complete the tutorial and reach the first 3 levels or watch rewarded ads and participate in day 0 events. By optimizing for mixed events like these, you capture a broader and more accurate picture of engagement, which is key to finding users who will stick around and are potential future purchasers.

Pro Tip: Move beyond the basics and unleash a data analyst to perform a deeper review of user segments and correlations between their behaviors. Don’t be afraid to test and experiment!

10. Ignorance: Not Understanding User Behavior

“Excuse me user, do you have time to talk about our lord and savior Product Analytics?“

Product managers should aim for easy access to in-app insights and user behavior. This goes beyond implementing a good analytics tool—it requires a solid understanding of how to implement analytics effectively, including defining metrics and events accurately. 

Some businesses: 

  1. Lack an adequate tracking architecture for product data. Sometimes, documentation remains accessible only to developers, the metric names are misleading and not fully understood by the product team, or there are tools implemented that only provide basic insights.
  2. Lack clear definitions of the key questions to ask and the metrics to answer them. Surprisingly often, we get asked to help define these by the people who are responsible for growth priorities and should know the product by heart.

It’s no surprise that gaps in these areas quickly lead to skewed insights, ambiguity, and guesswork. In our experience, there’s a clear link between “data competency” and “success”.

To address this, businesses should:

  • Maintain proper analytics documentation that is easily accessible to the product and marketing teams.
  • Less experienced organizations should assign an “insights architect” who acts as a bridge between the data team and the product and marketing teams. Such a person helps the product / marketing teams define their data requirements, translates them into a more technical language, and makes sure everything is implemented with functional UX.

11. Pride: Not Capitalizing on New Opportunities

“If she finds out I played Candy Crush instead.. “

We frequently see teams focus on implementing tactics and features that only have a chance to provide small improvements. Meanwhile, they’re overlooking bigger opportunities that are emerging.

For example, imagine the amazing performance of UGC ads when they first started trending, or the value of jumping into a new user acquisition (UA) channel at the right time.

With the landscape changing as fast as it does and given the cutthroat competition, it’s essential to adopt new tactics as early as possible in order to reap the greatest benefits. But even if you know what to do, actually doing it requires the ability to quickly reroute the ship. This will naturally disrupt already running initiatives and may cause A LOT of friction.

Smaller teams are usually limited in this aspect by a lack of time and expertise (it’s important to keep a finger on the pulse of the industry, but have fun doing that with a task list that’s two pages long!). 

On the other hand, larger teams often fall victim to the “stop the tanker” syndrome*, where changing direction becomes a slow and arduous process due to internal bureaucracy.

To address this, businesses should:

  • Establish growth models to understand the possible impact of each initiative on the business. When the potential effects are known, it becomes easier to prioritize and align the team toward implementing necessary changes.
  • Implement a regular check and reprioritization of the Growth Backlog (e.g., quarterly).
  • Consider an external audit from time to time. 

*The braking distance of an oil tanker can be as much as 10 km.

Final Thoughts

After reflecting on the GamePlan™ diagnostics we conducted over the past year across multiple games and verticals, it’s clear that, despite each game being unique, several patterns and overlaps persist.

One consistent observation has been the ongoing challenges and “sins” that continue to hinder success, many of which have evolved, while others have remained stagnant over time.

One area that consistently stands out is the critical role of creatives in driving success or failure. Creative optimization remains both a persistent challenge and a massive opportunity. Many games still overlook the potential of leveraging AI to scale their creative production processes. AI allows faster pivots when creatives fatigue and ensures a constant supply of high-performing creatives tailored to user preferences.

While the complexity of SKAdNetwork adds to the mix, the fundamental hurdles – poor retention and conversion rate optimization – have existed since the early days. By prioritizing AI solutions and focusing on creative performance, teams can address these issues and unlock new paths to scaling their games.
