Mobile App Performance Optimization

Led a company-wide initiative to dramatically improve mobile startup performance across Intuit’s flagship apps, with a primary focus on TurboTax Mobile and QuickBooks Mobile. The goal was to reduce cold start times to meet rising user expectations and reverse growing dissatisfaction reflected in churn, Product Recommender Scores (PRS), and App Store ratings. The effort required aligning multiple engineering, observability, and product teams around a shared performance strategy, while embedding long-term habits into the development lifecycle.

Scope: Company-wide startup performance uplift across TurboTax, QuickBooks, and flagship Intuit mobile apps

Impact: 75%+ cold start reduction, 50% drop in complaints, uplift in retention and PRS

The Problem: Cold Start Times Were Eroding User Confidence

As we shifted focus toward improving Product Recommender Scores (PRS), a core metric for user trust and satisfaction, persistent performance issues began to surface across the mobile experience. Cold start times were a major contributor.

  • 49% of users expect apps to launch in under 2 seconds, but TurboTax Mobile averaged over 8 seconds.

  • On Android, the 95th percentile cold start time peaked at 55 seconds.

  • Performance degradation was tied to declining app store ratings and increased user frustration.
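Percentile figures like the p95 number above come from aggregating per-session telemetry rather than averages. As a minimal illustration (the sample values are made up, not real Intuit telemetry), a nearest-rank percentile over recorded cold start samples can be computed like this:

```java
import java.util.Arrays;

public class ColdStartPercentiles {
    // Nearest-rank percentile over cold start samples (milliseconds).
    static long percentile(long[] samplesMs, double p) {
        long[] sorted = samplesMs.clone();
        Arrays.sort(sorted);
        int rank = (int) Math.ceil(p / 100.0 * sorted.length); // nearest-rank method
        return sorted[Math.max(0, rank - 1)];
    }

    public static void main(String[] args) {
        // Hypothetical per-session cold start measurements in ms.
        long[] samples = {1800, 2100, 5400, 8200, 9500, 12000, 31000, 55000};
        System.out.println("p50=" + percentile(samples, 50) + "ms");
        System.out.println("p95=" + percentile(samples, 95) + "ms");
    }
}
```

Tracking tail percentiles alongside the median is what surfaces the worst-case experiences (slow devices, poor networks) that averages hide.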

Despite ongoing investments in new features, poor performance was degrading user experience, impacting retention, and eroding trust. Years of fragmented tooling, platform sprawl, and legacy dependencies had introduced hidden bottlenecks that were now directly affecting business-critical metrics like PRS.

🔍 Root Cause Analysis

I led the platform engineering team in conducting deep dives into architectural bottlenecks and startup flows. I also collaborated with:

  • Observability leads to identify instrumentation gaps and trace critical startup delays

  • Product teams across TurboTax and QuickBooks to map real-world startup behavior and align on high-impact user flows

Together, we uncovered four key issues:

  1. Feature Bloat: Years of additive feature development had overloaded the startup path with unnecessary services and logic

  2. Lack of Instrumentation: Cold vs. warm start behavior was not well captured, masking critical path delays

  3. Inconsistent Startup Behavior: Race conditions and blocking dependencies caused varied performance across devices, OS versions, and network types

  4. Tooling Silos: Different teams used different tools and metrics, making it difficult to track regressions and act early

These findings helped shift the conversation from vague complaints of “slowness” to a focused, data-driven performance improvement plan.
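The instrumentation gap in point 2 above can be closed with simple phase-level tracing, so cold and warm starts can be compared phase by phase instead of as one opaque number. A minimal, platform-agnostic sketch (class and phase names are illustrative, not the actual instrumentation):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Records timestamps for named startup phases so delays can be
// attributed to a specific segment of the critical path.
public class StartupTracer {
    private final Map<String, Long> marks = new LinkedHashMap<>();

    public void mark(String phase) {
        marks.put(phase, System.nanoTime());
    }

    // Elapsed time between two recorded phases, in milliseconds.
    public long elapsedMs(String from, String to) {
        return (marks.get(to) - marks.get(from)) / 1_000_000;
    }

    public static void main(String[] args) throws InterruptedException {
        StartupTracer tracer = new StartupTracer();
        tracer.mark("process_start");
        Thread.sleep(30);                  // stand-in for framework + DI init
        tracer.mark("first_frame");        // roughly Time to Initial Display
        Thread.sleep(20);                  // stand-in for content load
        tracer.mark("fully_drawn");        // roughly Time to Full Display
        System.out.println("TTID ~" + tracer.elapsedMs("process_start", "first_frame") + "ms");
        System.out.println("TTFD ~" + tracer.elapsedMs("process_start", "fully_drawn") + "ms");
    }
}
```

Emitting these per-phase spans to the observability pipeline is what turns "the app is slow" into "dependency injection blocks the first frame for 1.2s".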

🛠 Strategy & Execution

1. Cross-Functional Alignment

  • Defined performance KPIs linked to business outcomes, including cold start time, memory usage, and I/O load
  • Socialized the direct impact of performance on PRS and App Store ratings with leadership and product partners
  • Established a unified roadmap across platform, observability, and product teams

2. Deep Dive with Engineering

Worked hands-on with platform engineers and observability leads to:

  • Use system traces and startup profiling to map critical path delays
  • Apply targeted fixes such as lazy initialization, async loading, and deferred service startup
  • Integrate micro-benchmarking and enforce startup checks directly in pull request workflows
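Two of the fixes above can be shown in miniature: lazy initialization keeps heavy services off the critical path until first use, and deferred startup moves non-blocking work onto a background executor after the first frame. This is a hedged sketch of the pattern, not the production code; the "analytics" service is a hypothetical stand-in for any heavy SDK:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.function.Supplier;

public class DeferredStartup {
    // Lazy holder: the expensive service is built only on first access,
    // with double-checked locking for thread safety.
    static class Lazy<T> {
        private final Supplier<T> factory;
        private volatile T value;
        Lazy(Supplier<T> factory) { this.factory = factory; }
        T get() {
            if (value == null) {
                synchronized (this) {
                    if (value == null) value = factory.get();
                }
            }
            return value;
        }
    }

    public static void main(String[] args) throws Exception {
        Lazy<String> analytics = new Lazy<>(() -> "analytics-client"); // heavy SDK stand-in
        ExecutorService background = Executors.newSingleThreadExecutor();

        // Critical path: render the first frame without touching analytics at all.
        System.out.println("first frame rendered");

        // Deferred: warm up non-critical services after startup completes.
        Future<String> warmup = background.submit(() -> analytics.get());
        System.out.println("deferred init -> " + warmup.get());
        background.shutdown();
    }
}
```

The point of the pattern is ordering: nothing a user cannot see or tap should run before the first frame.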

3. Upgraded Tooling & Observability

  • Built dashboards to track Time to Initial Display (TTID) and Time to Full Display (TTFD) across platforms, devices, and app versions
  • Introduced CI alerts and summaries into GitHub pull requests and Slack for real-time monitoring
  • Ensured teams could proactively detect and fix issues without waiting for post-release metrics
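The pull-request alerts described above reduce to a budget comparison: a PR fails its check when a startup metric regresses beyond a tolerated percentage of the main-branch baseline. A sketch of such a gate, with thresholds and metric values made up for illustration:

```java
// A pull-request performance gate in miniature: flag a change when a
// startup metric regresses beyond the tolerated fraction of baseline.
public class StartupBudgetGate {
    // Returns true if the candidate stays within (1 + tolerance) of
    // the baseline, e.g. tolerance = 0.05 allows a 5% regression.
    static boolean withinBudget(long baselineMs, long candidateMs, double tolerance) {
        return candidateMs <= baselineMs * (1.0 + tolerance);
    }

    public static void main(String[] args) {
        long baselineTtidMs = 1900;   // hypothetical main-branch TTID
        long prTtidMs = 2150;         // hypothetical PR-branch TTID

        if (withinBudget(baselineTtidMs, prTtidMs, 0.05)) {
            System.out.println("TTID check passed");
        } else {
            System.out.println("TTID regression: " + prTtidMs
                    + "ms vs baseline " + baselineTtidMs + "ms");
        }
    }
}
```

Wiring this result into the PR status and a Slack summary is what lets regressions be caught before release rather than in post-release dashboards.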

4. Developer Enablement at Scale

  • Authored a reusable playbook and starter templates to apply fixes across apps consistently

  • Embedded performance into the SDLC through checklists and review standards

Outcome Metrics

  • 75%+ reduction in cold start times across flagship apps
  • 50% drop in performance-related complaints
  • Measurable uplift in retention and Product Recommender Scores (PRS)

🎯 Why This Matters

This wasn’t just about speeding up one app. It was about changing how we think about performance across the organization.

Rather than applying one-off fixes in isolation, we built tooling and frameworks that could scale across all mobile apps, not just the flagship experiences. Performance became a product requirement, not an enhancement. It was built into the development lifecycle, enforced through automated checks, and made accessible to every team through self-serve tools.

This shift ensured that even lower-resourced or ancillary apps could meet the same high standards. We began treating performance as a baseline expectation, similar to accessibility or security. It became a core part of how we build for trust and deliver consistently great experiences.

💡 Key Learnings

Developer Empowerment Drives Adoption

When performance tools were embedded into existing developer workflows, adoption rose naturally.

Tying Metrics to Outcomes Secures Buy-In

Demonstrating the direct impact of performance on PRS and App Store ratings helped align executives and teams.

Performance is a Culture Shift

Lasting improvement required changing how teams thought about and prioritized performance across the product lifecycle.