
Tuesday 1 January 2013

A review of 2012 and a look into the future

Happy New Year! I've done the traditional review and predictions thing here for the past few years, and it's that time again. This time around, it's really difficult for me to see the big trends, having been heads-down in start-up building for most of the year. On one hand, that's of course a problem; if I can't describe what's going on around us, how can I know where to head? Yet, most startup decisions really have very little to do with this level of thinking -- once a rough vision is in place, it's more about finding the right people to execute that vision with and not a lot about what are other people doing. So, I haven't really spent enough time putting these thoughts in order, but it'd be a shame to skip this chance.

On the recap side: I predicted that the Euro crisis would continue to play out, that governments would try to regulate the Internet, that Facebook would continue to dominate but the "gatekeep net content" stuff would fail, that we'd see a big rise in entrepreneurship (would it be safe to call 2012 the year of Kickstarter?) and that we'd see a completely new class of social games, and I'm very happy to see good friends at Supercell emerge as the early leader there. Well, I was pretty vague a year ago, so it's easy to claim I'm a good prognosticator :). I can't make a call on the data-driven medical stuff, not having really followed developments there, though I suppose at least 23andMe's massive new funding counts. From earlier predictions, motion controls are now on high-end televisions, though the applications are still pretty raw.

Then there's the personal recap, which is far too close for me to summarize well. 2012 has been a year of change, learning and growth. The chronological highlights: ending a good period at Sulake, using all that I learned there to help several very cool startup teams accelerate on their path to success, helping my spouse get her own startup moving, founding another startup with a great team and, most importantly, witnessing and guiding our daughter learn new stuff every day over her second year.

What's in the future? I remain especially bullish on two very large, very disruptive trends: the custom healthcare I already wrote about earlier, and custom manufacturing (whether 3D printed or otherwise). For sure, 3D printing is advancing really fast right now, and it's reasonable to expect some devices to move out of hobbyist-tinkerer labs and prototype studios into regular homes and offices. However, it's not just 3D printing but all kinds of digitally driven manufacturing, from custom shoes and jeans to customer-specified patterns or designs on everything. With laptop vinyl skins, tableware and lampshades done, what's next?

While these deliver value in different ways, they're driven by the same trends powered by digital technology and data. Computing is no longer just computing. Ultimately, we're only a few short years away from Neal Stephenson's Diamond Age. Okay, perhaps not the nanotech, but most other stuff for sure.

Looking at my past predictions, I've been far more focused on the pure computing stuff before. On that note, we're still in the middle of the platform disruption. Though touch computing has clearly taken a leading position in application development, we're still missing a capable standard platform. iOS is capable but proprietary, HTML5 is still not fully here, Android is grabbing market share but at a massive fragmentation cost, and so on. I haven't seen this many new languages and frameworks pop up all over the place since the early 90's. What's going to be the Windows 95 and Visual Basic of this era?

Wednesday 12 December 2012

A marriage of NoSQL, reporting and analytics

Earlier today, I posted a tweet that fired off a chat covering various database-related topics, each worth a blog post of their own, some of which I've written about here before:

One response in particular stood out as something I want to cover in a bit more detail than will fit in a tweet:

While it's fair to say I don't think MongoDB's query syntax is pretty in the best of circumstances, I do agree that at times it can be a very good fit as the online database solution, given the right kind of other tools your dev team is used to (say, a JavaScript-heavy HTML5 + Node.js environment) and an application context where objects are only semi-structured. However, as I alluded to in the original tweet and expounded on in its follow-ups, it's an absolute nightmare to try to use MongoDB as the source for any kind of reporting, and most applications need to provide reporting at some point. When you get there, you will have three choices:

  1. Drive yourselves crazy by trying to report from MongoDB, using Mongo's own query tools.
  2. Push off reporting to a 3rd party service (which can be a very, very good idea, but difficult to retrofit to contain all of your original data, too).
  3. Replicate the structured part of your database to another DBMS where you can do SQL or something very SQL-like, including reasonably accessible aggregations and joins.

The third option will unfortunately come with the cost of having to maintain two systems and making sure that all data and changes are replicated. If you do decide to go that route, please do yourself a favor and pick a system designed for reporting, instead of an OLTP system that can do reporting when pushed to. Yes, the latter category includes both Postgres and MySQL: both quite capable as OLTP systems, but you already decided to handle OLTP with MongoDB, didn't you?
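As a minimal sketch of that third option, here's what replicating the structured part of a Mongo-style collection into an SQL engine can look like. The documents and fields below are invented for illustration, and SQLite stands in for a real reporting database; a production setup would read from MongoDB with pymongo and replicate changes continuously rather than copying a snapshot:

```python
import sqlite3

# Hypothetical "orders" documents, standing in for a MongoDB collection.
orders = [
    {"_id": 1, "customer": "a", "total": 19.90, "items": [{"sku": "x"}]},
    {"_id": 2, "customer": "b", "total": 5.00,  "items": [{"sku": "y"}]},
    {"_id": 3, "customer": "a", "total": 12.50, "items": [{"sku": "x"}, {"sku": "z"}]},
]

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")

# Replicate only the structured part; the semi-structured "items" can stay in Mongo.
db.executemany(
    "INSERT INTO orders (id, customer, total) VALUES (?, ?, ?)",
    [(d["_id"], d["customer"], d["total"]) for d in orders],
)

# Reporting is now a plain SQL aggregate instead of Mongo query gymnastics.
for customer, revenue in db.execute(
    "SELECT customer, SUM(total) FROM orders GROUP BY customer ORDER BY customer"
):
    print(customer, revenue)
```

The point isn't SQLite itself; it's that once the structured fields live in a relational engine, every reporting question becomes a one-line aggregate.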

Most reporting tasks are much better managed using a columnar, analytics-oriented database engine optimized for aggregations. Many have emerged in the last half-decade or so: Vertica, Greenplum, Infobright, ParAccel, and so on. It used to be that choosing one might be either complicated or expensive (though I'm on record saying Infobright's open source version is quite usable), but since last week's Amazon conference and its announcements, there's a new player on the field: Amazon Redshift, apparently built on top of ParAccel and priced at $1000/TB/year. Though I've yet to have a chance to participate in its beta program and put it through its paces, I think it's pretty safe to say it's as big a tectonic shift in the reporting database market as the original Elastic Compute Cloud was in hosting, or bigger. Frankly, you'd be crazy not to use it.

Now, reporting is reporting, and many analytical questions businesses need to solve today really can't be expressed in any sort of database query language. My own start-up is working on a few of those problems, providing cloud-based predictive tools to decide how to serve customers before there's hard data on what kind of customers they are. We back this with a wide array of in-memory and on-disk tools which I hope to describe in more detail at a later stage. From a practical "what should you do" point of view, though: unless you're also working on an analytics solution, leave those questions to someone who's focused on that, turn to SaaS services and spend your own time on your business instead.

Tuesday 6 November 2012

Do yourself a favor when looking for an Internet connection: stay away from Elisa Viihde

This post is a local public service announcement for my Finnish friends. I'm sorry if this comes off as whining - I just want to document this publicly, and, as you might notice, I'm a bit annoyed. Although I tried to do some research into this before subscribing to my Internet connection, and even asked my Facebook connections about it, I never realized ahead of time what a piece of crap Elisa Viihde is as a service. So, as a warning to others: learn from my mistake, don't touch this thing with a ten-foot pole.

Our apartment building has what apparently is a fiber optic link to Elisa's network, so their service (under either the Elisa or Saunalahti brand) is the default choice, unless I want to depend on 3G only (which, given the number of Wifi devices in our household, would not be very reasonable). Since Saunalahti seems to max out at 50/10 Mbps, and Elisa Viihde was advertised as 100 Mbps, plus it comes with a cloud DVR solution that got a few not-negative references, I went the latter path. The package is a Cisco VDSL2/Ethernet router with an 802.11n Wifi base station and an Elisa-branded set-top box manufactured by Pace, plus of course the actual net connection, delivered over Ethernet (despite the fact our connection panel actually has fiber links coming up to the apartment, too).

The net is not 100 Mbps, it's 100/10. Might be a minor detail, but it's well hidden in their brochures. Okay, still presumably faster than the next alternative. It's more expensive, too, by €10/mo.

So, what does that buy in terms of TV services?

Simply, the worst set-top box I've ever had the displeasure of using. The piece of crap spends 1-2 minutes simply turning on if it's been off for more than a couple of hours. It never responds to remote button presses in less than a second, and frequently takes 5-10 seconds to do so. If you mistakenly think it didn't see the remote and press again, it turns out it buffered all that, and does several things in sequence. Irksome as hell. However, it doesn't stop there. It also crashes or freezes randomly, and does not recover without pulling the plug on its power.

Apart from the set-top box, you might expect the service itself to be pretty good. I mean, it does look good on paper: iPhone, iPad, Android apps for controlling the recording schedules, a web site for doing the same, ability to view recorded shows on a PC or with an iPad in addition to the living room (via the above box, when it works), 5 TB of online storage ("2500 hours", in standard def DVB, at least), and a few other bits and pieces.

However, you cannot watch a recording while it's still broadcasting, or in fact until about 10-15 minutes after the scheduled broadcast time. Forget about timeshifting, in other words. Or pausing TV. Oh yeah, you could pause, by plugging a USB stick or portable hard drive into the box. Er, what's that 5 TB of online storage for? Besides, if I want to timeshift onto a local device, my Samsung TV has a much more tolerable UI and response time and is also capable of that same thing. I don't want that. I mean, this is what the online service is supposed to do for me, right?

Oh, and sometimes the recordings aren't actually recorded. They'll show up in the browser, "this or that, 60 minutes", but when you try to watch it, it'll be 50 seconds of the ending titles of the show before, and that's it -- not what you actually wanted to see. Now, I may be spoiled by the rich recording capabilities of my previous solution, a PC running MythTV, but this is just ridiculously bad. No DVR is this stupid. Keep your old DVR box. It's probably prettier and/or easier to hide than the round white blob, anyway.

Oh, but there's the cloud video-on-demand rental, too! Never mind the price, typically 6€ per movie (making two movies a month more expensive than the just-launched Netflix service). Or that the selection, while perhaps slightly better on the movie catalog side than that of Netflix Finland, is still not great (nor does it have any TV series back catalog, like Netflix does). Or even the fact that its search sucks, and the browser shows every movie twice (once as standard def, again as HD), so you're scrolling twice as much as you should. No, by far the most aggravating part is that if you happen to watch movies, oh, I don't know, in the evenings after your kids go to bed, it probably will not get through the viewing without sputtering to a halt.

You might think I'm exaggerating, but I'm not. They gave some promotional vouchers for the movie rentals in the subscription welcome package, so we've tried it a few times. EVERY time, at approximately 22:30 (so somewhere between halfway and the end, if you begin around or somewhat after 21:00, as you might), first the audio drops out, then the video starts breaking up, and then the movie just halts. After three such experiences and following some guidance from Elisa's tech support (such as bypassing the comes-with-the-package router and plugging your Ethernet uplink straight into the set-top box), I've learned that the problem will go away 15-20 minutes later, but in between, don't even try to continue the movie. Hey, movies are designed to be watched in two parts, right? That's why they have an intermission in the theaters, too? Er... they don't?

The same problem affects all IP-delivered HD channels, too. While I can't completely rule out the possibility that it's the set-top box which just decides to crap out every evening at 22:30, my own experience building and operating Internet services leads me to suspect the fine folks running Elisa's data centers (hey, guys, long time no talk) simply have decided to run their backup scripts at prime time, have no monitoring for service quality, and overload their backbone capacity. Genius.

Top this off with customer service that responds to questions, complaints and technical support requests with a 3-4 day delay, and you have what I would call a perfect package. Welcome to the market, Netflix. I'll forgive your lack of recent titles in the local catalog. At least your service is integrated directly into my TV (a Samsung Smart TV, as noted previously), is also available on all the same mobile devices and shows me what I chose to watch in hi-def without forcing me into tech troubleshooting mode in the middle of every movie. You're SO going to win Finland.

Now, to find a way to cancel this subscription...

Saturday 1 September 2012

Make Amazon EC2 control go faster

Do you run enough EC2 systems to care about the time it takes to start one or check its status, but not enough to justify an account at Scalr or RightScale to manage them? Do you care about automating instance management? Are you working in a team where several people need to provision or access servers? If so, consider this quickstart to a better way of setting up your cluster. Apologies for the acronym soup; blame Amazon Web Services for that.

  1. Turn on IAM for your AWS account, so that you can create an account for every team member separately. While you're there, I'd also recommend you turn on Multi-Factor Authentication (MFA) for each account. You can use Google Authenticator (Android or iPhone) to provide the MFA tokens, even if you're not securing your Google account with it. Thanks, Tuomo, for pointing that out; I had thought MFA depended on keyfob tokens.
  2. Don't leave IAM yet. Go to the Roles tab and create a new Role (I call mine InstanceManager) with a Power User Access policy template.
  3. Move on to the EC2 management console and create a new instance. It has to be a new one; you can't associate this awesome power with anything you already have. For practice, use the Quick Launch Wizard -- I'll go through this step by step.
  4. Name your instance. I call mine Master. Let's assume you already have a Key Pair you know how to use with EC2. Choose that.
  5. Choose the Amazon Linux AMI 2012.03 as your launch config. Hey, it's a decent OS, and if you like Ubuntu better, you can repeat this with your favorite AMI later once you know how it works.
  6. Choose Edit details on the next page of the wizard. We'll do changes in several places.
  7. Make the Type t1.micro, you don't want to do much more than manage other instances on this one so it doesn't need a lot of oomph. I would recommend turning on Termination Protection to avoid a silly mistake later on.
  8. Tag it as you wish.
  9. If you're using Security Groups (warmly recommended!), choose one which has access to your other servers.
  10. Here's the important bit: on the Advanced Details tab, choose the IAM role you created earlier (InstanceManager, if you followed my naming).
  11. No need to change anything in the Storage config. Click Save details, then Launch.
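For reference, the Power User Access template chosen in step 2 boils down to a policy that allows everything except IAM itself. Below is a sketch of roughly what that document looks like; treat it as illustrative rather than the exact JSON AWS generates, and verify against the actual template in your own console:

```python
import json

# Approximate shape of the "Power User Access" policy template: allow all
# actions except those in the iam:* namespace, on all resources.
power_user_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "NotAction": "iam:*", "Resource": "*"}
    ],
}

print(json.dumps(power_user_policy, indent=2))
```

The NotAction clause is what keeps the InstanceManager box from being able to mint new credentials for itself, which is exactly the property you want for a shared management host.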

The instance will come up like any other; you probably know how that works. If you're used to something other than Amazon Linux, note that this one expects you to log in as ec2-user, and you can sudo from there to root. Set up your own account and secure the box to the best of your ability, since this one holds the keys to everything you're running on EC2.

Now, why did we do all this?

  1. Log in. With a regular account, no root, no keys copied from anywhere else.
  2. Type ec2-describe-instances to the shell.
  3. Witness a) fast response b) with all your instances listed. a) comes from running inside the AWS data center, and b) is the IAM Role magic.
  4. Rejoice how your teammates will not need to manage their own access secrets. You did secure the master account and SSH to this box, right?
  5. Try to launch something else. Yup, it all works.

Setting up the IAM Role and associating one with an instance through the command line is somewhat more involved, so this is much easier to do from the web console as above. The IAM docs do tell you how, but I wasted an hour or two getting my head wrapped around why the console talked of roles while the API and command line needed profiles (answer: the console hides the profiles beneath defaults). If you wish to have your own applications manage pieces of the AWS infrastructure, and hate the hoops you have to jump through to pass the required access keys around, IAM Roles are what you're looking for, and you'll want to read up on the API in a bit more detail. Now you've been introduced.

Wednesday 30 May 2012

Evolution of database access - are we making a full cycle?

10gen's new massive funding round ($42M, great job guys) for MongoDB development, monitoring and support services has again caused a cascade of database-related posts in my Twitter feed and elsewhere. Many of these seem a bit one-sided to me; I think it would be good to look at where we came from before we decide where the future will take us.

SQL is built from the concepts of set theory. It's great when you're accessing aggregates, intersections, subsets, and so forth. It was developed for a real need: earlier databases forced a key-by-key API on people who were not necessarily as interested in individual records as they were in collections of those records. SQL also supports manipulation of individual records, but that's not at all where its powerful features reside.

Early databases stuck an SQL interface on record-based storage systems, which then continued to evolve to support various kinds of indexing and partitioning strategies for higher performance. These turned out to be much more reliable and easier to manage than most contemporary alternatives (such as files, horror), so applications began to be built on top of a database accessed via SQL. Many such applications were really using the database for storage of individually managed records, thus using only the least-powerful features of SQL.

Along came ORM tools, designed to hide the boring SQL bits from the application developer, whose interest was in manipulating and displaying individual records. Sadly, since ORM lives between the application and the database, changes on either side would still need to be manually reflected on the other - which is why "SQL makes developers unhappy, and MongoDB does not". The lesson here is very simple: if your application manages individual records, such as users, products, orders, etc, develop it with technologies which make record manipulation easy, and allow easy evolution of the record schema. MongoDB is great at this. It also scales well and is pretty easy to manage. It's not the only one of its kind, but it's good at this in a way most row-based SQL databases (MySQL, PostgreSQL, Oracle, MS SQL, etc) will never be.

But SQL is still great at set theory. Reporting, analytics, etc. still need aggregates more than record-by-record access to data. MongoDB is dreadful as a back-end for complex analytics (a sub-standard MapReduce interface is not a solution). Its storage model is designed for real-time access and memory-resident objects, and is thus really suboptimal for truly large scale data storage. Any data whose primary use case is aggregate reporting or complex analytics, such as event metrics (like what we do at my own start-up), needs something else in addition to records. Columnar engines with SQL query frontends are a much better fit. They'll compress massive data sets into much smaller storage, thus improving aggregate performance over terabytes, scale query execution over cores and nodes (thanks to dealing with much larger, and thus easier-to-split, data sets at once), and retain a business analyst interface that is much friendlier than an object API.
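To see why columnar engines compress and aggregate so well, consider a toy sketch: values within a single column repeat heavily, so even naive run-length encoding collapses them, and some aggregates can be answered from the runs alone without touching individual rows. The data here is made up, and real engines use far more sophisticated encodings:

```python
from itertools import groupby

# One column of a 10,000-row table. Within a column, values cluster and
# repeat, which is the property column stores exploit.
country_column = ["FI"] * 5000 + ["SE"] * 3000 + ["DE"] * 2000

# Naive run-length encoding: (value, run length) pairs.
rle = [(value, len(list(run))) for value, run in groupby(country_column)]
print(rle)  # [('FI', 5000), ('SE', 3000), ('DE', 2000)]

# An aggregate like SELECT country, COUNT(*) ... GROUP BY country can now be
# answered from three runs instead of 10,000 rows.
counts = {value: length for value, length in rle}
```

A row store interleaves every column of every record, so no comparable run structure survives on disk; that difference is most of the terabyte-scale performance gap.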

I do agree on one part of the NoSQL idea: for record-by-record application models, SQL is an unnecessary middle-man. Just don't forget there are other models, too.

Thursday 1 March 2012

Increase engagement with social analytics

Last week I discussed segmentation as a method for identifying and differentiating customers for their specific service needs. Whether used for a young cohort's introductory-period service, special treatment for high-value segments, or to identify the group on a transition path to high value and help accelerate that process, segmentation is a very versatile tool for business and product optimization. It can be approached with many techniques, and I'll go into more implementation details on those later. But first, an introduction to the next topic after segmentation: social metrics.

While social behavior is historically not a strong feature of many products in either the gaming space or the wider scope of freemium products, your customers and users are people, and thus they will have social interactions with others that you can benefit from. If you can capture any of that activity in your product measurement, it can serve as a very valuable basis for in-depth analytics. Today, I will focus on those products and services whose audience can interact among each other; that is, where there is some sort of easily measured, directly connected community.

Any such product will probably have user segments such as:

  • new users who would benefit from seeing good examples of effective use of the product, guidance on the first steps, or some other introduction beyond what the product can do automatically or what your sales or support staff can scale to
  • enthusiasts who would like nothing better than to help the first group
  • direct revenue contributors who either have a lot of disposable income, or otherwise find your service so valuable to them that they'll be happy to buy a lot of premium features or content
  • people who, though they're not top customers themselves, find innovative ways to use premium features for extra value
  • people who are widely appreciated by the community for their contributions, "have good karma"
  • people whose influence within the community is on the whole negative due to disruptive behavior

and many, many others. Two of these groups are easy to identify simply based on their own history; I'm sure you'll recognize which two. The other four are determined largely by their interaction with the rest of the community and other users' reactions to their activities. How do you find them? This is a rapidly evolving field of analytics with a constantly growing pool of theoretical approaches and practical tools, and it can look daunting at first. The good news: there are many practical tools already, and while theoretical background helps, the first steps aren't too hard to take.

You'll need to develop some simple way to identify interaction. The traditional way to begin is to define a "buddy list" of some sort, similar to the Facebook friend network, Twitter following, or a simple email address book. However, I find a more "casual" approach of quantifying interactions works better for analytics. Enumerate comments, time in the same "space", exposure to the same content, common play time, or whatever works for your product. At the simplest level, this will be a list of "user A, user B, scale of interaction" stored somewhere in your logs or a metrics database. This is already a very good baseline. With the addition of time/calendar data, you'll be able to measure the ebb and flow of social activities, but even that isn't strictly necessary.
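Building that "user A, user B, scale of interaction" list can be a few lines of code over your event logs. Here's a minimal sketch that counts co-occurrences in a shared space; the event shape and names are invented for illustration, and "space" could equally be a comment thread, a game room, or a piece of content:

```python
from collections import Counter
from itertools import combinations

# Hypothetical event log: (space, user) pairs, e.g. users active in the
# same room or thread.
events = [
    ("room1", "alice"), ("room1", "bob"), ("room1", "carol"),
    ("room2", "alice"), ("room2", "bob"),
    ("room3", "bob"),   ("room3", "dave"),
]

# Group users by the space they shared.
by_space = {}
for space, user in events:
    by_space.setdefault(space, set()).add(user)

# Count each co-occurring pair once per shared space: this Counter is the
# "user A, user B, scale of interaction" edge list.
interactions = Counter()
for users in by_space.values():
    for a, b in combinations(sorted(users), 2):
        interactions[(a, b)] += 1

print(interactions.most_common(3))
```

Dumped to a table or log, this edge list is exactly the input the graph tools and metrics below work on.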

Up to a data set of about 100k users and half a million connections or so, you'll be able to do a lot of analysis just on your laptop. Grab such a data dump and a tool called Gephi, and you're just minutes away from fun stuff like visualizing whether connections are uniformly distributed or clustered into smaller, relatively separate groups (I bet you'll find the latter; social networks practically always have this "small world" property). This alone, even though it isn't an ongoing, easily comparable metric, will be very informative for your product design and community interaction.

In terms of metrics and connected actions, here's a high-level overview of some of the simpler things to implement:

  • highly connected users are a great seed for new features or content, because they can spread messages fast and giving them early access will make them more engaged. While in theory you'd want to reach people "in between" clusters, the top connected people are an easy, surprisingly well functioning substitute.
  • those same people with a large number of connections are also critical hubs in the community, and you should protect them well, jumping in fast if they have problems. This is independent of their individual LTV, because they may well be the connection between high-value customers.
  • high clustering coefficient will indicate a robust network, so you should aim to build one and increase that metric. Try introducing less-connected (including new) people to existing clusters, not simply to random other users. A cluster, of course, is a set of people who all have connections to most others in the cluster (i.e., a high local clustering coefficient).
  • Once someone already has a reasonable number of semi-stable relationships (such as, 4-8 people they've interacted with more than once or twice), it's time to start introducing more variance, such as connecting them to someone who's distant in the existing graph. Most of these introductions are unlikely to stick, but the ones that do will improve the entire community a great deal.
  • if you can quantify the importance of the connections, e.g. by measuring the time or number of interactions, you can further identify the top influencers apart from the overall most connected people.
  • finally, when you combine these basic social graph metrics to the other user lifetime data I discussed previously, you'll get a whole new view into how to find crucial user segments and predict their future behavior. This merged analysis will give you measurable improvement far faster than burying yourself into advanced theories of social models, so take the low-hanging fruits first.
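A couple of the graph metrics above are simple enough to sketch in plain Python. The toy graph below is made up; in practice the edges would come from the interaction list discussed earlier:

```python
from itertools import combinations

# Tiny undirected graph from "user A, user B" pairs; names are placeholders.
edges = [("a", "b"), ("a", "c"), ("b", "c"), ("c", "d"), ("d", "e")]

adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

def local_clustering(node):
    """Fraction of a node's neighbor pairs that are themselves connected."""
    neighbors = adj[node]
    if len(neighbors) < 2:
        return 0.0
    links = sum(1 for x, y in combinations(neighbors, 2) if y in adj[x])
    return links / (len(neighbors) * (len(neighbors) - 1) / 2)

# Most-connected users: the "critical hubs" worth protecting.
hubs = sorted(adj, key=lambda n: len(adj[n]), reverse=True)
print(hubs[0], local_clustering("a"))
```

Averaging `local_clustering` over all nodes gives the network-wide clustering coefficient mentioned above, which you can track over time as you introduce people to clusters.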

That's it for yet another introductory post. Time for feedback: what other analytics areas would you like to see high-level explanations about, or would you rather see this series dive into the implementation details on some particular area? Do let me know, either via comments here, or by a tweet.

Tuesday 21 February 2012

There's no such thing as an average free-to-player

A quick recap: in part 1 of this series, I outlined a couple of basic ways to define a customer acquisition funnel and explained how it falls short when measuring freemium products, in particular free-to-play. In part 2, I continued with two alternative measurement models for free-to-play and focused on Lifetime Value (LTV) as a key metric. A core feature emerged: the spread of possible LTV through the audience is immense, ranging from 0 to, depending on the product, hundreds, perhaps even thousands of euros.

This finding isn't limited to just money spending, but is seen over all kinds of behavior, and is well documented for social participation as the 90-9-1 rule. From a measurement point of view, one of the most overlooked aspects is how it destroys averages as a representative tool. At the risk of stating the obvious, an example below.

When 90% of a freemium product's players choose the free version and 10% choose to pay, the average LTV is obviously just 1/10th of the average spending of the paid customers. However, when there's not just a variety of price points but in fact a scale of spending connected to consumption (or if we're valuing something other than spending, such as time), the top 1% is likely to spend 10x or more than the next 9%. Say the 90% spend nothing, the next 9% spend one unit each, and the top 1% spend ten units each: simple math will show you that the top 1% is more valuable than the rest of the audience in total.
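That math is quick to check. A sketch, assuming per 100 users that the 90% spend nothing, the next 9% spend one unit each, and the top 1% spend ten units (the units are arbitrary):

```python
# Per-100-user spending under the assumed distribution: 90 free players,
# 9 paying 1 unit each, and 1 "whale" paying 10 units.
spending = [0] * 90 + [1] * 9 + [10] * 1

average = sum(spending) / len(spending)
print(average)  # 0.19

# The single top spender contributes more than the other 99 combined.
assert max(spending) > sum(spending) - max(spending)
```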

The average? 0.19. Now, can you identify the group that is represented by "average spending of 0.19" in the above example? Of course you can't; there is no such group. Averages work fine when what you're measuring follows some approximation of a normal distribution (e.g., heights of people), but they break down with other kinds of quantities. Crucially, they break down on behavioral and social metrics. Philip Ball's book Critical Mass goes to some length into the history of these measurements, if you're interested.

Instead of measuring an average, you should identify your critical threshold levels. Those might be the actions or value separating the 90% and 9% player groups, and equally, the 9% and 1%. Alternatively, you might already have a good idea of your per-user costs and how much a customer needs to spend to be profitable. Measure how many of your audience are above that level. Identify and name them, if you can. Certainly try to track them over time so you can address them individually. This goes deeper than simply "managing whales", to use the casino term. Yes, the top 1% are valuable and important to special-case, but it's equally if not more important to determine the right strategies for developing more paying customers from the 90% majority.

This is why it's important to measure everything. If you only measure payment, the 90% majority will be invisible to your metrics, and it's usually very hard to identify ahead of time which other measurements are important for identifying the activities that lead to spending. Instrumenting your systems to collect events on all kinds of activities on a per-user basis (rather than just system-level aggregates) enables a data mining approach to the problem. Collect the events, aggregate them across time for each player (computing additional metrics, when appropriate), and then identify which pre-purchase activities separate those players who convert to paying from those in the same cohort who do not. There are several strategies for this, from decision trees to Bayesian filtering to all kinds of dimensionality reduction algorithms. The tools are already pretty approachable, even in open source, whether as GUIs like Weka, in R, or with Big Data solutions like Apache Mahout, which works on top of Hadoop.
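As a toy version of that "which pre-purchase activities separate converters" step, here's a crude per-feature mean comparison. All the activity names and numbers are made up, and ranking features by difference of group means is a deliberately simple stand-in for the decision-tree or Bayesian approaches mentioned above:

```python
# Hypothetical per-player aggregates: counts of pre-purchase activities,
# plus whether the player later converted to paying.
players = [
    {"chats": 12, "friends": 5, "sessions": 20, "converted": True},
    {"chats": 8,  "friends": 4, "sessions": 15, "converted": True},
    {"chats": 1,  "friends": 0, "sessions": 3,  "converted": False},
    {"chats": 2,  "friends": 1, "sessions": 5,  "converted": False},
    {"chats": 0,  "friends": 0, "sessions": 2,  "converted": False},
]

features = ["chats", "friends", "sessions"]

def group_mean(converted, feature):
    values = [p[feature] for p in players if p["converted"] == converted]
    return sum(values) / len(values)

# Rank features by how strongly their group averages differ: a first hint
# at which activities precede conversion.
separation = {f: group_mean(True, f) - group_mean(False, f) for f in features}
ranked = sorted(separation, key=separation.get, reverse=True)
print(ranked)
```

On real data you'd normalize the features and use a proper classifier, but even this crude pass tells you which instrumented events deserve a closer look.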

Essentially, this approach will surface a customer acquisition funnel akin to what I described earlier, but built from the raw measurement data. It will probably reveal things about your product and its audience you would not have identified otherwise, and allow you to optimize the product for higher conversions. The next step in this direction is to replace the "is a customer" criterion above with the measured per-player LTV value. Now, instead of a funnel, you will reveal a number of correlations between types of engagement and purchase behavior, and will be able to further optimize for high LTV. Good results depend on having a rich profile of players across their lifetimes. A daily summary of all the various activities, in a wide table with a column for each activity and a row per player per day, is a great source for this analysis.
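That per-player-per-day wide table is straightforward to build from raw events. A minimal sketch, with invented event names and a plain-Python pivot standing in for whatever your metrics pipeline uses:

```python
from collections import defaultdict

# Raw events: (player, day, activity). We pivot them into one wide row per
# player per day, with a column per activity type.
events = [
    ("p1", "2012-02-20", "login"), ("p1", "2012-02-20", "chat"),
    ("p1", "2012-02-20", "chat"),  ("p2", "2012-02-20", "login"),
    ("p1", "2012-02-21", "purchase"),
]

activities = sorted({a for _, _, a in events})  # stable column order
counts = defaultdict(lambda: defaultdict(int))
for player, day, activity in events:
    counts[(player, day)][activity] += 1

rows = [
    (player, day) + tuple(day_counts[a] for a in activities)
    for (player, day), day_counts in sorted(counts.items())
]
print(("player", "day") + tuple(activities))
for row in rows:
    print(row)
```

Each row is then ready to feed into whichever mining tool you picked, with the conversion flag or per-player LTV joined on as the target column.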

Friday 17 February 2012

A personal note on past, present and future...

Before I continue the series on metrics for free-to-play, I want to take an opportunity to comment on this week. I promise, I'll get back to the lessons soon.

I was finally able to announce this Wednesday that my well over 8 years at Sulake and Habbo Hotel are over. It's a time I will always remember fondly, for many reasons. By far the longest period I've ever worked with one business, it's been challenging, educational, fun, exciting - and, yes, at times also exhausting. I've been lucky to enjoy great workplaces, excellent colleagues and wonderful businesses since starting my professional career in the latter half of the 90s, but Sulake has been by far the most interesting of them yet. Indeed, most times it didn't feel like work.

Still, I'm itching for something different. I'm not quite sure what yet, but I couldn't focus on figuring that out while inside Sulake, with all the questions to explore and the day-to-day problems to solve that any business has distracting from this personal exploration. Nor would it have been fair to the team there if I hadn't given my energy to those things. Finally, this means our whole family is suddenly free of attachments and we can now consider opportunities anywhere in the world.

It turned out that the day after I handed in the office keys and revoked my passwords, some rumors started flying about regarding Sulake. While I don't know the content of the internal discussion any better than anyone else, what it looks like is that the company is doubling down on putting its focus on product development. That's absolutely the right thing to do - the teen market deserves to be served with great products, and the core creativity values of Sulake and Habbo can be explored in ways Habbo Hotel has not yet done. We have not yet seen in public what precisely the plan is, but I expect to see both changes to the existing products as well as something new, probably combining online and mobile platforms in interesting ways. My leaving the company is completely unrelated to this development. Let's wait for the dust to settle and try not to jump to conclusions on behalf of the team.

But back to me. Getting involved in a small way in the emerging Helsinki start-up scene through AaltoVG, Startup Sauna and advisory work has strengthened my feeling that while all sizes and stages of business are interesting to optimize, there's something uniquely exhilarating about the beginning. I love teams small enough that the members can really take the time to understand each other - even when they're not in agreement. Still, I thrive in change, be it early or later. It takes a special kind of courage to keep changing things even after pieces start to work, but losing that ability kills any business. So, I'm looking first of all for that exciting team to be part of and the awesome vision for a different world to create, never mind whether the current idea of how to get there really works or not. I have some ideas I'm brewing, but few great things happen alone, so the team takes priority.

That's the future. For the present, I'm happy to get an opportunity to spend more time with my 14 month old daughter (she's being her usual bubbly self here as I write this), catch up on reading, meet people, exchange thoughts and hopefully also do more writing here. I want to thank everyone who has already reached out, whether with best wishes or with ideas or even offers of help. It's been great to hear from all of you, and I'm looking forward to all the conversations I'm sure will follow. My calendar is filling up with more meetings than I'm used to! The requests for help have also been welcome - while I'm not ready to make any long-term commitments yet, I'd be happy to try to help out in the short term with any problems my expertise might be applicable to. Do let me know. You'll find various ways of contacting me on this blog, if you don't already have my info.

Friday 10 February 2012

Developing metrics for free-to-play products

In my previous post, I outlined a few ways in which a "sales funnel" KPI model changes between different businesses, and argued that it really doesn't serve a free-to-play business well. Today, I'll summarize a few ways in which a free-to-play model can be measured effectively.

Free-to-play is a games industry term, but the model is a bit more general. In effect, it is a model where a free product or service exists not only as a trial step on the way to converting a paying customer, but can serve both the user and the business without a direct customer relationship, for example by increasing the scale of the service or making more content available. From a revenue standpoint, a free-to-play service is structured to sell small add-ons or premium services to users on a repeat basis - in the games space, typically in individual transactions ranging from a few cents to a couple of dollars in value.

As I wrote in the previous article, it's this repeated small transaction feature which makes conversion funnels of limited value to free-to-play models. Profitable business depends on building customer value over a longer lifetime (LTV), and thus retention and repeat purchase become more important attributes and measurements. Here is where things become interesting, and common methodologies diverge between platforms.

Facebook games have standardized on measuring the number and growth of daily active users (DAU), engagement rate (measured as the % of monthly users active on an average day, ie DAU/MAU), and the average daily revenue per user (ARPDAU). These are good metrics, primarily because they are very simple to define, measure and compare. However, they also have significant weaknesses. DAU/MAU is hard to interpret, as it is pushed up by high retention but down by high growth, yet both are desirable. Digital Chocolate's Trip Hawkins has written numerous posts about this; I recommend reading them. ARPDAU, on the other hand, hides a subtle but crucially important fact about the business - because there is no standard price point, LTV will range from zero to possibly very high values, and an average value will bear no reflection on either the median or the mode. This is, of course, the Long Tail-like Pareto distribution in action. Why does this matter? Well, because without understanding the impact of the extreme ends of the LTV range on the total, investments will be impossible to target and the implications of changes impossible to predict, as Soren Johnson describes in an anecdote about Battlefield Heroes ("Trust the Community?").
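
The three standard metrics really are simple to compute; here is the arithmetic on an invented three-day activity log:

```python
# Toy data: day -> set of active user ids, plus revenue per day (all invented).
daily_active = {
    1: {"a", "b", "c"},
    2: {"a", "b"},
    3: {"a", "c", "d"},
}
daily_revenue = {1: 1.50, 2: 0.40, 3: 2.10}

mau = len(set().union(*daily_active.values()))        # unique users in the period
total_dau = sum(len(users) for users in daily_active.values())
avg_dau = total_dau / len(daily_active)

engagement = avg_dau / mau                            # DAU/MAU
arpdau = sum(daily_revenue.values()) / total_dau      # revenue per daily active
```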

Another way of structuring the metrics is to look at measured cohort lifetimes, sizes and lifetime values. Typically, cohorts will be defined by their registration/install/join date. This practice is very instructive and permits in-depth analysis and conclusions on performance improvement: are the people who first joined our service or installed our product last week more or less likely to stay active and turn into paying users than those who joined four weeks ago? Did our changes to the product help? Assuming you trust that later cohorts will behave similarly to earlier ones, you can also use the earlier cohorts' overall and long-term performance to predict the future performance of currently new users. The weakness of this model is the rapidly increasing number of metrics, as every performance indicator is repeated for every cohort. Aggregation becomes crucial. Should you aggregate all data older than a few months into one bucket? Does your business exhibit seasonality, so that you should compare this New Year cohort to the one last year, rather than to the one from December? In addition, we have not yet done anything here to address the fallacy of averages.
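
A sketch of a per-cohort summary, with invented join-week cohorts and members:

```python
# Toy cohort table: join week -> list of (user, weeks_active, ever_paid).
cohorts = {
    "2012-W01": [("a", 6, True), ("b", 1, False), ("c", 4, False)],
    "2012-W05": [("d", 3, True), ("e", 2, True), ("f", 1, False)],
}

def cohort_summary(members):
    size = len(members)
    return {
        "size": size,
        "avg_weeks_active": sum(w for _, w, _ in members) / size,
        "conversion": sum(1 for _, _, paid in members if paid) / size,
    }

summaries = {week: cohort_summary(m) for week, m in cohorts.items()}

# Did the later cohort convert better than the earlier one?
improved = summaries["2012-W05"]["conversion"] > summaries["2012-W01"]["conversion"]
```

Note how every indicator (size, retention, conversion) is repeated per cohort - that's the metric explosion the paragraph above warns about.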

The averaging problem can be tackled to some degree by further splitting cohorts by some feature other than the join date, such as the source through which they arrived, their geographic location, or some demographic data we may have on them. This will let us discover that, say, French gamers spend more money than those from Mexico, or that Facebook users are less likely to buy business services than those from LinkedIn. This information comes at the further cost of ballooning the number of metrics, and will ultimately require automating significant parts of the comparison analysis, sorting data into top-and-bottom percentiles, and highlighting changes in cohort behavior.
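
For illustration, a sketch splitting invented users by acquisition source and country (the names and spend figures are made up):

```python
from collections import defaultdict

# Hypothetical users tagged with join source, country and lifetime spend.
users = [
    ("u1", "facebook", "FR", 9.0), ("u2", "facebook", "MX", 0.0),
    ("u3", "search",   "FR", 4.0), ("u4", "search",   "MX", 1.0),
    ("u5", "facebook", "FR", 6.0),
]

# Group spends by (source, country) segment.
segments = defaultdict(list)
for _uid, source, country, spend in users:
    segments[(source, country)].append(spend)

avg_spend = {seg: sum(v) / len(v) for seg, v in segments.items()}
```

Each new splitting dimension multiplies the number of segments, which is exactly why the comparison work eventually has to be automated.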

Up until now, all the metrics discussed have been simple aggregations of per-user data into predefined break-down segments. While I've introduced a few metrics which can take some practice to learn to read, the implementations of these measurements are relatively trivial - only the comparison automation and highlight summaries might require non-trivial development work. Engineering investments may already be fairly substantial, depending on the traffic numbers and the amount of collected data, but the work is fairly straightforward. In the next installment, I will discuss what happens when we start to break the aggregates down using something other than pre-determined cohort features.

Tuesday 7 February 2012

Metrics, funnels, KPIs - a comparative introduction

I know, I know - startup metrics have been preached about for years by luminaries like Mark Suster, Dave McClure, Eric Ries and many, many others. The field is full of great companies and tools like KISSmetrics, ChartBeat, Google Analytics, to name but three (and do a great disservice to many others). Companies like Facebook and Zynga collect and analyze oodles (that's a technical term) of data on their traffic, customers and products, and have built multi-billion dollar businesses on metrics. Surely everything is done already, and everyone not only knows that metrics matter, but also how to select the right metrics and implement a robust metrics process? There's nothing to see here, move along... or is there?

Metrics depend on your business as much as your business depends on them. No, more, in fact. It is possible (though hard) to build a decent, if not awesome business purely on intuition, but it is not possible to define metrics without understanding the business. Applying the wrong metrics is a disaster waiting to happen. In fact, in some ways this makes building a robust metrics platform more difficult than building the product it's supposed to measure. Metrics can't exist ahead of the product, but are needed from the beginning. Sure, with experience you will learn to pick plausible candidates for KPIs, and may even have tools ready for applying them to new products, but details change, and sometimes, with those details, the quality of the metrics changes dramatically. This is obviously true between industries like retail vs entertainment, but it's also true between companies working in the same industry.

This is a big part of why metrics aren't a solution to lack of direction. They can be a part of a solution in that well-chosen metrics will make progress or lack of it obvious, and may even provide clear, actionable targets for developing the business. Someone still needs to have an idea of what to do, and that insight feeds back into all parts: product, operations, measurement. I've never liked the phrase "metrics driven business" for this reason. Metrics don't do any driving. They're the instrumentation to tell you whether you're still on a road or what your speed is. You still have to decide whether that's the right road to be on, and whether you should be moving faster, or perhaps at times slower.

What to do, then? Well, understanding the differences helps. Let's start with a commonly applied metrics model, the sales funnel.

In a business-to-business, face to face sales driven business, a traditional funnel may begin with identifying potential customer opportunity, then measuring the number of contacts, leads, proposals, negotiations, orders, deliveries and invoices. A well managed business will focus on qualified customers and look for repeat transactions, as the cost per opportunity will likely be lower, and the revenue per order may be higher, leading to greater profitability. They will also look at the success rate between the steps of the funnel, trying to improve the probability of developing an opportunity into a first order.
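
For illustration, the step-to-step success rates of such a funnel can be computed from stage counts like these (the numbers are invented):

```python
# Illustrative counts per stage of a B2B sales pipeline.
funnel = [
    ("opportunities", 200),
    ("contacts", 120),
    ("leads", 60),
    ("proposals", 30),
    ("orders", 12),
]

# Success rate between consecutive steps, and end-to-end conversion.
step_rates = {
    f"{a}->{b}": nb / na
    for (a, na), (b, nb) in zip(funnel, funnel[1:])
}
overall = funnel[-1][1] / funnel[0][1]
```

Improving any single step rate lifts the end-to-end conversion, which is why funnel-managed businesses obsess over the weakest transition.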

This model is often adapted to retail: advertising, foot traffic, product presentation, purchase decision. For some businesses that's it - others will need to manage the delivery of the product, and may see further opportunities in service, cross-sales, or otherwise. Online retail businesses measure every step in much greater detail, simply because it is easier to do so. Large retail chains emulate that measurement with very sophisticated foot traffic measurement systems. But even in its simplest form, while the shape is similar, the steps of the funnel are very different.

Online businesses have developed a variety of business models, among which two large categories are very common: advertising and freemium.

The advertising-funded two-sided market model is really two different funnels: a visibility - traffic - engaged traffic - repeat traffic page view model, and a more traditional sales funnel for the advertisers, though even that one has, through automation, been made to look more like an online retail model than what the advertising industry is used to. This model is further enhanced by traffic segmentation and intent analysis, allowing targeted advertising and a real-time direct marketing product, whose sales funnel bears even less resemblance to the one I described at the beginning.

Freemium isn't even one business model: a B2B service with a tiered product offering and a free time- or feature-limited trial may ultimately use the traditional sales model, only with the opportunity-to-prospect part fully automated. Often it's entirely automated, to the point that a customer never needs to (or perhaps even wants to, assuming a simple enough product) talk to a sales rep. Still, the basic structure holds: some of the prospective leads turn into customers, and carry the business forward. The free service, be it for trial only or for the starter-level segment, is a marketing cost and a leads qualifier, enabling a smaller sales force.

On the other hand, the free-plus-microtransactions model, one which we pioneered with Habbo Hotel, and has since been used to great success by many, including Zynga, can certainly be described as a funnel, but to measure it with one requires significant violence to many details. The most important of these is that because individual transactions are typically of very low value, building a profitable business on top of a model which aims for, and measures one sale per customer is practically impossible. This class of business doesn't just benefit from repeat customers, it requires them. Hence, a free-plus (or, as it is called in the games industry, free-to-play) business model must replace counting a "new customer" metric or measuring individual transaction value with the measurement of customer lifetime value. Not just measuring it on average, but trying to predict it individually - both to try to develop 0- or low-value users (oh, how I hate the word) to higher value by giving them better value or experience, and by identifying the high value customers to serve and pamper them to the best of the company's ability (within reason and profitability, of course). And once you switch the way you value revenue, you really need to switch the way you measure things pre-revenue.
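
To see why averaging alone misleads here, consider a toy LTV distribution (numbers invented) shaped like the heavily skewed ones these businesses actually see:

```python
from statistics import mean, median

# Hypothetical per-user lifetime spends in a microtransaction business:
# most users pay nothing, a few pay a little, one pays a lot.
ltv = [0.0] * 90 + [4.0] * 9 + [250.0]

avg_ltv = mean(ltv)    # pulled up by the single big spender
med_ltv = median(ltv)  # the typical user pays nothing at all
```

The mean looks like a healthy business, yet it describes no actual user - which is why per-user LTV prediction, not the average, has to drive the metrics.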

Funnels change. There are business models where funnels really can't provide the most instructive KPIs, even though they still may be conceptually helpful in describing the business. As this post is getting long, more on the details of KPIs of free-to-play in the next episode.
