how do i: define my most difficult product experience

Back in September I explored one of my favorite product experiences, which was a pretty fun post to work on.  More recently, I’ve been asked several times what my most difficult product experience was, how it turned out, and what I learned from it.  Much like defining a favorite, choosing the most challenging or difficult one was a tricky proposition, since each one presented its own unique circumstances.

In trying to define criteria for this request, my approach was to identify products that posed multiple, sustained challenges simultaneously, creating genuine conflict with the team’s ability to execute at its best.  It wasn’t about a product failure, a methodology failure, or a team failure, but rather the circumstances that surrounded the entire product through its full lifecycle (so, not just an individual project such as a new feature build or an optimization).

The “winner” was a native mobile music player application I worked on.  Upper management required functionality to be standardized so the international product would have both internal and end-user continuity, but the following factors affected the team’s ability to accommodate the mandate:

Regional / market segment differences

  • Each market had different legal requirements, which affected authentication, payment, and usage flows.
  • App content and licensing agreements with partners also differed from market to market, affecting notification and messaging flows, CMS design, feature functionality, and more.
  • Differences in languages also affected CMS design and front-end design.
  • End-user preferences and expectations are dictated in part by social and cultural norms, and the work of identifying the similarities across markets was not included in the timelines or budget allocations.
  • Competitive sets varied greatly, which affected feature benchmarking and marketing-position strategy, with markets competing over what the core competency of the app should be.

Technological differences

  • Platform differences (Android, iOS, RIM) and handset/carrier variance meant feature functionality was handled in different ways.
  • Distribution channels (Android Market, Apple App Store, BlackBerry) have different acceptance requirements and different ways of handling payments, advertising, usage notifications, and other flows.
  • The existing API infrastructure and its integration with the web-based sister product were not adequately taken into account by management in decision making.

Team dynamics

  • Too many cooks in the kitchen: the build involved the mobile product team, the web product team, and the API team, plus active participation from regional marketing teams, the developers, and upper management.  Stakeholders also included finance, legal, and biz dev.
  • The development team was split into too many segments, including web, iOS, Android, billing, API core, and front-end, and comprised both in-house shared and dedicated resources as well as third-party vendors.
  • Quality testing didn’t include dedicated, team-independent QA resources, and the resources that were available were spread out internationally as well.
  • Communication was difficult: the build team included people speaking five different native languages, spread across three continents and at least six time zones.  Taking the additional stakeholders into account made it even more complex.
  • The process placed too much emphasis on acquisition needs and not enough on customer retention, due to the team’s makeup and individual members’ priorities overriding the mobile product team’s recommendations.

Although the mobile team made every effort to create a functional core, the initial budgets and timelines were not conducive to such an effort.  While we were able to maintain much of the defined scope successfully, the overall performance of the product was not viewed as favorably.

Post-launch failures led to the discontinuation of the app in several markets due to:

Marketing unable to position the value proposition against user needs

  • End-user feedback through customer care and via social media was negative.
  • Acquisitions struggled in some markets, with low gross adds and high COA (cost of acquisition).
  • Uninstalls (churn) were very high in some markets (a quick sketch of both metrics follows this list).
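
For context on those two metrics, here is a minimal sketch of the arithmetic; the figures are hypothetical placeholders, not the app’s actual numbers:

```python
# Hypothetical figures for illustration only -- not the app's actual numbers.
marketing_spend = 120_000.00    # total acquisition spend for the period (USD)
gross_adds = 8_000              # new installs/activations in the period
installed_base_start = 50_000   # active installs at the start of the period
uninstalls = 9_500              # uninstalls during the period

coa = marketing_spend / gross_adds              # cost of acquisition per user
churn_rate = uninstalls / installed_base_start  # uninstall (churn) rate

print(f"COA: ${coa:.2f} per gross add")  # -> COA: $15.00 per gross add
print(f"Churn: {churn_rate:.1%}")        # -> Churn: 19.0%
```

When COA climbs while churn stays that high, each acquired user costs more than they return before uninstalling, which is exactly the margin squeeze finance ran into below.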

Biz dev unable to keep licensing partners satisfied

  • As agreements came up for review, some were renegotiated unfavorably due to low mobile adoption rates in some markets.
  • Other agreements were terminated or allowed to expire due to dissatisfaction with the feature sets in some markets.

Finance unable to secure continued funding

  • Low profit margins as a result of high customer costs and expensive licensing fees.
  • Unsatisfactory KPI numbers demonstrating a lack of consumer uptake.

Post-launch mobile product team changes, resulting in incremental churn-reduction success in retained markets

  • Increased integration of customer care feedback into the feature review process.
  • Instituted A/B testing (a sketch of the kind of check involved follows this list).
  • Changed metrics tracking and the resulting KPIs for improved decision making.
  • When available, instituted a partner review process through biz dev.
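
On the A/B testing point, the decisions ultimately reduce to a significance check between two variants.  Here is a minimal sketch of a two-proportion z-test on retention, with entirely hypothetical sample sizes and rates:

```python
from math import sqrt

def two_proportion_z(success_a: int, n_a: int, success_b: int, n_b: int) -> float:
    """Two-proportion z-test: is variant B's rate significantly different from A's?"""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)          # pooled proportion
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error
    return (p_b - p_a) / se                                 # z-score

# Hypothetical retention experiment: users still active 30 days after install.
z = two_proportion_z(success_a=410, n_a=1000,   # control: 41.0% retained
                     success_b=455, n_b=1000)   # variant: 45.5% retained
print(f"z = {z:.2f}")  # -> z = 2.03; |z| > 1.96 is significant at the 95% level
```

The function name and numbers are mine for illustration; the point is simply that each post-launch change could be judged against measured retention rather than stakeholder opinion.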