Fishpool

Monday 29 April 2013

Analytics infrastructure of tomorrow

If you happen to be interested in the technologies that enable advanced business analytics, like I am, the last year has been an interesting one. A lot is happening at every level of the tech stack, from raw infrastructure to cloud platforms to functional applications.

As Hadoop has really caught on and is now a building block even for conservative corporations, several of its weaknesses are beginning to be tackled. From my point of view, the most severe has been the terrible processing latency of the batch- and filesystem-oriented MapReduce approach compared to solutions designed around streaming data. That's now being addressed by several projects: Storm provides a framework for dealing with incoming data, Impala makes querying stored data more processing-efficient, and finally, Parquet is coming together to make the storage itself more space- and I/O-efficient. With these in place, Hadoop will move from its original strength in unstructured data processing to a compelling solution for dealing with massive amounts of mostly-structured events.

Those technologies are a bear to integrate and, in their normal mode, require investment in hardware. If you'd prefer a more flexible start to building a solution, Amazon Web Services has introduced a lot of interesting stuff, too. Not only have the prices for compute and storage dropped, they now offer I/O capacities comparable to dedicated, FusionIO-equipped database servers, very cost-efficient long-term raw data storage (Glacier), and a compelling data warehouse/analytics database in the shape of Redshift. The latter is a very interesting addition to Amazon's existing database-as-a-service offerings (SimpleDB, DynamoDB and RDS) and, as far as I've noticed, gives it a capability other cloud infrastructure providers are today unable to match - although Google's BigQuery comes close.

The next piece in the puzzle must be analytical applications delivered as a service. It's clear that the modern analytics pipeline is powered by event data - whether it's web clickstreams (Google Analytics, Omniture, KISSMetrics or otherwise), mobile applications (such as Flurry, MixPanel, Kontagent) or internal business data, it's significantly simpler to produce a stream of user, business and service events from the operational stack than it is to try to retrofit business metrics on top of an operational database. The 90's style OLTP-to-OLAP Extract-Transform-Load approach must die!

However, the services I mentioned above, while excellent in their own niches, cannot produce a 360-degree view across the entire business. If they deliver dashboards, customer insight is impossible. Even if they're able to report on customers, they don't integrate with support systems. They leave holes in the offering that businesses have to plug with ad-hoc tools. While that's understandable, since they're built on technologies that force nasty compromises, those holes are still unacceptable for a demanding digital business of today. And as the world turns increasingly digital, what's demanding today is going to be run-of-the-mill tomorrow.

Fortunately, the infrastructure is now available. I'm excited to see the solutions that will arrive to make use of the new capabilities.

Wednesday 12 December 2012

A marriage of NoSQL, reporting and analytics

Earlier today, I mentioned querymongo.com in a tweet that fired off a chat covering various database-related topics, each worth a blog post of its own, some of which I've written about here before:

One response in particular stood out as something I want to cover in a bit more detail than will fit in a tweet:

While it's fair to say I don't think MongoDB's query syntax is pretty in the best of circumstances, I do agree that at times, given the right kind of other tools your dev team is used to (say, a JavaScript-heavy HTML5 + Node.js environment) and an application context where objects are only semi-structured, it can be a very good fit as the online database solution. However, as I alluded to in the original tweet and expounded on in its follow-ups, it's an absolute nightmare to try to use MongoDB as the source for any kind of reporting, and most applications need to provide reporting at some point. When you get there, you will have three choices:

  1. Drive yourselves crazy by trying to report from MongoDB, using Mongo's own query tools.
  2. Push off reporting to a 3rd party service (which can be a very, very good idea, but difficult to retrofit to contain all of your original data, too).
  3. Replicate the structured part of your database to another DBMS where you can do SQL or something very SQL-like, including reasonably accessible aggregations and joins.

The third option unfortunately comes with the cost of having to maintain two systems and making sure that all data and changes are replicated. If you do decide to go that route, please do yourself a favor and pick a system designed for reporting, instead of an OLTP system that can be pushed into doing reporting. Yes, that latter category includes both Postgres and MySQL - both quite capable as OLTP systems, but you already decided to do that part with MongoDB, didn't you?
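To make the syntax gap concrete, here's a minimal sketch of the same daily-revenue report written both ways. The collection/table and field names ("orders", "day", "amount") are hypothetical, and the MongoDB side assumes pymongo against a locally running server:

```python
from pymongo import MongoClient

db = MongoClient()["shop"]  # hypothetical database name

# MongoDB aggregation framework: group orders by day and sum the amounts.
pipeline = [
    {"$group": {"_id": "$day",
                "revenue": {"$sum": "$amount"},
                "orders": {"$sum": 1}}},
    {"$sort": {"_id": 1}},
]
for row in db.orders.aggregate(pipeline):
    print(row)

# The same report in SQL, for comparison:
#   SELECT day, SUM(amount) AS revenue, COUNT(*) AS orders
#   FROM orders GROUP BY day ORDER BY day;
```

Add a join - say, segmenting that revenue by a customer attribute held in another collection - and the MongoDB side stops being a query and becomes application code, which is exactly where reporting starts to hurt.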

Most reporting tasks are much better managed using a columnar, analytics-oriented database engine optimized for aggregations. Many have appeared in the last half-decade or so: Vertica, Greenplum, Infobright, ParAccel, and so on. It used to be that choosing one might be either complicated or expensive (though I'm on record saying Infobright's open source version is quite usable), but since last week's Amazon conference and its announcements, there's a new player on the field: Amazon Redshift, apparently built on top of ParAccel and priced at $1000/TB/year. Though I've yet to have a chance to participate in its beta program and put it through its paces, I think it's pretty safe to say it's a tectonic shift in the reporting database market, as big as or bigger than what the original Elastic Compute Cloud was to hosting solutions. Frankly, you'd be crazy not to use it.

Now, reporting is reporting, and many analytical questions businesses need to solve today really can't be expressed with any sort of database query language. My own start-up, Metrify.io, is working on a few of those problems, providing cloud-based predictive tools to decide how to serve customers before there's hard data on what kind of customers they are. We back this with a wide array of in-memory and on-disk tools which I hope to describe in more detail at a later stage. From a practical "what should you do" point of view, though - unless you're also working on an analytics solution, leave those questions to someone who's focused on that, turn to SaaS services, and spend your own time on your business instead.

Wednesday 6 July 2011

Zynga's ARPU doubling? Not quite

Today the pundits and analysts have apparently gotten around to reviewing Zynga's ARPU figures from their S-1 filing (Inside Social Games, Eric von Coelin). Something seemed fishy in these calculations, and since I'm home for a day, I had the opportunity to review the filing figures on a computer, rather than just a tablet. Yep, people, you're comparing apples to oranges. Zynga's monetization rate is improving, but it's nowhere near as dramatic as you're making it look. Did you already forget they defer revenue? You can't compare GAAP deferred revenue to non-deferred DAU/MAU figures! Use the bookings data instead.

This is what the S-1 filing states about the difference:

"Bookings is a non-GAAP financial measure that we define as the total amount of revenue from the sale of virtual goods in our online games and from advertising that would have been recognized in a period if we recognized all revenue immediately at the time of the sale. We record the sale of virtual goods as deferred revenue and then recognize that revenue over the estimated average life of the purchased virtual goods or as the virtual goods are consumed. Advertising revenue consisting of certain branded virtual goods and sponsorships is also deferred and recognized over the estimated average life of the branded virtual good, similar to online game revenue. Bookings is calculated as revenue recognized in a period plus the change in deferred revenue during the period. For additional discussion of the estimated average life of virtual goods, see the section titled “Management’s Discussion and Analysis of Financial Condition and Results of Operations—Revenue Recognition.”

Zynga is of the opinion that bookings more accurately represent their current sales activities, and I fully agree. After all, this is not a subscription business we're talking about! If you're as hard-core a geek about these things as I tend to be, the description of when a booking turns into revenue is discussed on pages 62-63 of the filing:

"Durable virtual goods, such as tractors in FarmVille, represent virtual goods that are accessible to the player over an extended period of time. We recognize revenue from the sale of durable virtual goods ratably over the estimated average playing period of paying players for the applicable game, which represents our best estimate of the average life of our durable virtual goods"

That deferral means that during periods of rapid growth, ARPU appears to decline, while periods of flat or declining traffic would seem to improve ARPU, because earlier-deferred revenue is recognized against the current, not the earlier, user base.

With these covered, what are the actual sales figures? The average daily bookings-to-DAU rate is somewhat higher than the revenue-to-DAU rate, at $0.051 (B) in Q1 of this year vs $0.042 (R). Both seem to have plateaued at that level after growing from $0.030 (B) / $0.017 (R) a year ago. Respectable, but not earth-shattering -- and the growth, while impressive, isn't quite "more than doubled".
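For the arithmetic-minded, here's a minimal sketch of how the two rates relate, following the S-1 definition quoted above. The input figures are placeholders, not Zynga's actual numbers - pull the real ones from the filing:

```python
# Bookings = revenue recognized in the period + change in deferred revenue
# during the period (per the S-1 definition quoted above).

def bookings(revenue_recognized: float, change_in_deferred_revenue: float) -> float:
    return revenue_recognized + change_in_deferred_revenue

def daily_rate_per_dau(quarterly_total: float, avg_dau: float, days: int = 90) -> float:
    """Average daily monetization per daily active user over a quarter."""
    return quarterly_total / days / avg_dau

# Placeholder inputs, purely illustrative:
q_revenue = 100e6          # GAAP revenue recognized in the quarter
q_deferred_change = 30e6   # growth in the deferred revenue balance
avg_dau = 50e6             # average daily active users

q_bookings = bookings(q_revenue, q_deferred_change)
print(daily_rate_per_dau(q_revenue, avg_dau))    # revenue-based rate (GAAP)
print(daily_rate_per_dau(q_bookings, avg_dau))   # bookings-based rate
```

In a growth quarter the deferred balance grows, so the bookings-based rate runs ahead of the GAAP rate - which is the gap between the $0.051 and $0.042 figures above.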


Tuesday 21 June 2011

On software and design, vocabularies and processes

Having recently witnessed the powerful effect that establishing a robust vocabulary has on the process of design, and having seen the announcement of the oft-delayed Nokia N9 finally hit the TechMeme front page today, I again thought about the common misconceptions of creating software products. It's been a while since I posted anything here, and this is as good a time as any for a basics refresher.

A typical axis of argument sets software engineering somewhere between manufacturing and design. I, among many others, have for years argued that the relationship of software to physical manufacturing is almost non-existent. While the development process for a new physical product, like any involving new creation, starts with a design phase, manufacturing only really begins with the creation of a specification (typically hundreds of pages long). The job of the spec is to outline how to make the fully-designed product in volume. In comparison, by the time a software product is fully designed and ready to start volume production, there is no work left - computers can copy the final bits forever without a spec. There's more to that argument, but that's the short version. Creating software is the design part of a product development process.

So, goes the line of thinking, if software is design, then it must be right to always begin a software project from zero. After all, all designs start from a blank sheet of paper, right? At least, all visual designs do... No good comes from drawing on top of something else.

If this truly were the case, what do you think they teach in art schools, architecture departments, and so on? Technique? For sure, but if that were all there was, we'd still be in the artisan phase of creation. History? Yes, but not only that. An important part of the history and theory of design is establishing lineage, schools of thought, and vocabularies which can serve as a reference for things to come. All truly new, truly great things build on prior art, and not just on the surface, but by having been deeply affected by the learning collected while creating all that came before them.

Not having actually studied art, I have only a vague idea of how complex these vocabularies are, and this is an area where a Google search isn't going to be helpful, as it only brings up glossaries of a few dozen to at most a hundred basic terms of any design profession. That is not even the beginning of a real vocabulary, which describes in great detail the relationships of the concepts, ways of using them together, examples of prior use, and so on. However, even from this rather precarious position, I will hazard a statement which might offend some:

Software design, in terms of the vocabulary required for state of the art, is more complex than any other field of design by an order of magnitude or more. The practical implication of this is that no new software of value can be created from a "blank sheet of paper".

This will require some explanation. Let's tackle that magnitude thing first.

Any complete software system, such as that running within the smart phone in your pocket, measures in the tens, if not hundreds of millions of lines of code. LOC is not a great measurement of software complexity, but there you have it. In terms of other, more vocabulary related measurements, the same system will consist of hundreds of thousands of classes, function points, API calls, or other externally-referable items. Their relationships and dependencies to each other typically grow super-linearly, or faster than the total number of items.

By comparison, the most complex designs in any other field are dwarfed. Yes, a modern fighter jet may have design specs of hundreds of thousands of pages, and individual parts whose specs alone are as complex as any you've seen. Yes, a cruise ship, when accounting for all the mechanical, logistic and customer-facing functions together, may be of similar complexity. And yes, a skyscraper's design blueprints are of immense complexity, where no one person can really understand all of it. However, a huge part of these specs, too, is software! Counting software out of those designs, a completely different picture emerges.

None of these designs would be possible without reusing prior work, components, designs, mechanisms and customs created for their predecessors. Such is the case for software, too. The components of software design are the immense collections of libraries and subsystems already tested in the field by other software products. Why, then, do we so often approach software product development as if we could start from scratch?

Why was it that the N9 reminded me of this? Well, if stories and personal experiences are to be trusted, it seems that during the process of creating it, Nokia "started over" at least two or three times. And that just during the creation of one product. As a result, it's completely different, from both a user and a developer standpoint, from the four devices which preceded it in the same product line, and two (three?) years late from its original schedule. Of course, they did not scrap everything every time, otherwise it would never have been finished at all. But this, and Nokia's recent history, should serve as a powerful lesson to us all: ignoring what has already been created and building from a blank sheet instead is a recipe for delay and financial disaster.

Software is design. Design needs robust vocabulary and the processes to use them well, if it is to create something successful.

Sunday 2 May 2010

On rich web technologies

For the past week, the technology world has been unable to discuss anything but Apple's refusal to allow Flash applications on the iPhone and iPad, and Steve Jobs's open letter which paints this as a technology question and Apple's position as one of protecting consumer interests by ensuring quality applications. It would be incredibly naive to take that literally. No, of course it's all about business control.

Charlie Stross has written a great, if speculative piece on the bigger picture. I think Charlie is spot-on - Apple is seeing a chance to disrupt the PC market, and wants to finish at the top, holding all the aces. That might even happen, given how badly other companies are addressing the situation, but if it did, it would be anything but good for the consumer - or for the small developer.

The business interest

Apple today is a $43 billion annual revenue, $240 billion market cap giant, give or take. Out of that value, 40% or so is riding on the iPhone, and Steve is clearly taking the company in a direction where devices running the iPhoneOS will replace the Macs, so that share is only increasing. Right now, they have more resources to do this than anyone else in the world, and the least legacy to worry about, given that despite the rising market share and the title of leading laptop vendor, computers running Mac OS X are still a minority compared to all the Windows-powered devices from a legion of other makers.

The company's DNA, and Steve's personal experience over the past 25 years, have taught them that an integrated, tightly controlled platform is something they are very good at, but that earlier mistakes of not controlling app distribution as well left them weak. They're not going to repeat that mistake. And certainly they'll try to ensure that not only do the iPhone and iPad have the best applications, but that those applications are only available on Apple devices.

Adobe, despite their history of dominating many design and content production software niches and a market cap of $18 billion, is tiny in comparison. Furthermore, the Flash platform is a visible but financially less relevant part of Adobe's product portfolio (though the exact share of Flash is buried inside their Creative Solutions business segment). Even disregarding that Apple can, as the platform owner, dictate whatever rules they want for the iPhoneOS, Adobe simply cannot win a battle of resources against Apple.

But this fight is not about Flash on the iPhone - it's about Apple's control of the platform in general. Whether or not it's true, Apple believes tight control is a matter of survival for them.

The technical argument

Apple wants to make it seem like they're doing this because Flash is bad technology. As I wrote above, and as so many others have described better than I have, that's a red herring. It's always convenient to dress business decisions up as seemingly accurate technical arguments ("Your honor, of course we'd do that, but the tech just doesn't work!"). Anyway, let's look at that technical side a bit.

First, let's get the simple bit out of the way. Flash is today most often used to display video on web sites. However, this is not about video, and video has never been Flash's primary point. It just happened to have a good install base and decent codecs at a time in 2005 when delivering lots of video bits started to make sense and YouTube came along to popularize the genre. In fact, it was completely superior for the job compared to the alternatives at the time, such as Real Player. The real feature, however, was that Flash was programmable, which allowed these sites to create their own embedded video players without having to worry about the video codecs.

By that time, Flash had already gained somewhat of a bad reputation as the tool with which some seriously horrible advertising content had been made, so the typical way to make the web fast was to disable Flash content - rendering most ads invisible. I'm pretty sure that for many, YouTube was the first time there really was an incentive to have Flash in the browser at all. That is, unless you liked to play the casual games that even then were often created with Flash.

But that's all history - what about the future? Adobe certainly needs to take quite a lot of the blame for the accusations leveled against Flash - in particular, the way Flash content slows a computer down even when nothing is visible (as in, the 10 Flash-based adverts running in a browser tab you haven't even looked at in the last half an hour), or that yes, it does crash rather frequently. Quite a few of those problems are being addressed by Flash Player 10.1, currently in beta testing and due for release in the coming months. Too little, too late, says Apple, and many agree.

I would, too, except for the fact that despite the issues, Flash is still the leading and best platform for rich web applications. It took that position from Java because it was (and is) lighter and easier to install, and keeps that position now against the much-talked-about HTML5 because the latter simply isn't ready yet, and once it is, will still take years to be consistently available for applications (that is, until everyone has upgraded their browsers). Furthermore, it's quite a bit easier to create something that works by depending on Flash 10 than to work around all the differences of Internet Explorer, Firefox, Safari, Chrome, Opera and so on.

But that's exactly what Steve is saying, isn't it? That these cross-platform Flash applications simply can't provide the same level of sophistication and grace as a native application on the iPad. Well, maybe that's true today. Maybe it's even true after Adobe finally releases 10.1's mobile editions on the Android. And given the differences in the scale of resources Apple and Adobe can throw at a problem, maybe it's true even with Flash Player 10.2 somewhere down the road.

But that doesn't matter. What matters is what developers do with the tools given to them, because the tools themselves do nothing. There's plenty of horrible crap in the ranks of App Store's 200,000 applications, and there's plenty of brilliant things done with Flash and AIR. Among the best of the best, which platform has the greatest applications? That's a subjective call that I will let someone else try to answer.

I will say this: all technology is fated to be replaced by something better later. At least ActionScript3 and Flash's virtual machine provide a managed language that lets application developers worry about something else than memory allocation. Sure, it wasn't all that hot until version 10, and still loses to Java, but it sure is better than Objective-C. If we're now witnessing the battle for platform dominance for the end of this decade, I sure would like to see something else than late 80s technology at the podium.

The consumer position

Apple wants to provide the consumer a polished, integrated experience where all the pieces fit together, and most of them are made by Apple. The future of that experience includes control of your data as well. Put your picture albums in Apple's photo service, your music library in iTunes, your home video on iMovie Cloud, and access it all with beautiful Apple devices. Oh, you don't want to be all-Apple? Too bad. That's what you get.

Or, you can choose something where you'll have choice. If you believe Steve Jobs, that choice is between dirt, smut and porn, but his interest is to scare you back to Apple, so take that with a grain of salt. Me, I've never liked being dictated to, so I'll be choosing the path where I can pick what I want, when I want it. Sure, it'll mean I'll miss some of the polish (iPhone is by far the nicest smart phone today, and the iPad sure feels sweet), but nevertheless, I respect my freedom to choose more. Today, it means I'll choose Android, and am looking forward to playing Flash games and using AIR applications on tablets powered by it.

Thursday 30 April 2009

The difference between conversion and retention

Picked up a piece of analysis today from my newsfeed regarding Twitter audience. Nielsen has posted information about Twitter's month-to-month retention (40%) and compared that to Facebook's and MySpace's. Pete Cashmore over at Mashable promptly misread the basic information and came to an entirely wrong conclusion about the stats, titling his post about it as "60% quit Twitter in the first month". A simple misunderstanding of basic audience analysis like this is the crucial difference between explosively growing traffic and a failure. That's a fail for you, Pete.

What's wrong? Well, retention is a separate matter from conversion. 40% conversion from a trial registration to being a continuing active user to the second month would not be a bad conversion rate. It's not stratospherically great, I've seen better, but I wouldn't be terribly unhappy about such a figure. However, Nielsen didn't say anything at all about first-to-second month conversion. This is what they DID say: "Twitter’s audience retention rate, or the percentage of a given month’s users who come back the following month, is currently about 40 percent."

That's pretty plain English when you take the time to read it. Month to month, regardless of visitor lifetime, not first to second month. On this metric, 40% retention is not good at all, and will definitely be a limiting factor for Twitter's traffic and audience size over time, just as the Nielsen article points out (and shows the math for). For any given retention rate, there is a certain maximum audience reach beyond which new traffic can't overcome the leaving base, since new traffic is not an inexhaustible supply.
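A back-of-the-envelope sketch of that ceiling; the monthly inflow figure is a placeholder, not a Twitter statistic:

```python
# With monthly retention r, a cohort of N new users shrinks to N*r, N*r^2, ...
# so a steady inflow of N new users per month converges to an active audience
# of N * (1 + r + r^2 + ...) = N / (1 - r): a ceiling set by retention.

def steady_state_audience(new_users_per_month: float, retention: float) -> float:
    return new_users_per_month / (1.0 - retention)

inflow = 1_000_000  # placeholder: new users per month
print(steady_state_audience(inflow, 0.40))  # ~1.67M at the 40% Nielsen cites
print(steady_state_audience(inflow, 0.70))  # ~3.33M if retention were 70%
```

The only ways past that ceiling are more inflow or better retention - and inflow eventually runs out, which is exactly the article's point.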

And since today is a busy day, that concludes the free startup advice. Take the time to understand the difference between these metrics, you'll thank yourself for it later.

Thursday 2 April 2009

Amazon order sizes, ideal behaviour, and proof of market friction

I wrote last fall about the "sweet spot" in pricing and spending patterns for a microtransaction-based service and business model, where I posited that given flexible consumption, revenue could be maximized by ensuring the lowest possible minimum price point; one which is preferably closer to 1 cent than to 1 euro. Depending on the goods sold and the amount of logistics overhead, the minimum profitable price may of course be much higher, and depending on the payment mechanisms available, the price below which the consumer's effort overhead exceeds the value of the good may also be fairly significant. A chocolate bar may be sellable for 40 cents, while few durable goods can achieve a price point lower than a few euros. For virtual goods, the minimum pricing is mostly a question of an efficient mechanism for transferring small amounts of money, because the minimum "size" of the good sold can in theory be reduced ad infinitum, and distribution costs are a non-issue.

Last week there was a Facebook Developer Garage day in San Francisco where a couple of interesting presentations were given. I wasn't there, but browsing through the material I found this slide about the distribution of order sizes among Amazon customers (slide 10 in the deck):

It's interesting to see the similarities between this chart and the behavior of virtual goods. In this data, the observed behavior follows the power-law model in an ideal fashion at price points over $25, but the drop-off below that order size is remarkably fast. This is primarily the result of the kind of goods sold and the logistical overhead implicit in them; it just doesn't make much sense for someone to order $5 worth of goods from Amazon given the shipping costs and delays incurred on top of the purchase.

For virtual goods, the drop-off point can be much lower, but a similar drop-off does still happen - again because below a certain price point and transactional overhead level, neither the consumer of the good nor its producer sees value in the market. At prices above that, the transactional model does exhibit the power-law distribution. Again, by reducing the minimum marketable and profitable price point, there is a big potential customer base to be gained at the bottom end of the pricing scale. Most companies leave an amazing amount of revenue on the table by not addressing this issue.
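A rough way to see why the curve truncates; the friction figure below is illustrative, not Amazon's actual cost structure. A roughly fixed per-order overhead penalizes small orders disproportionately, so below some order size the purchase simply stops making sense:

```python
# Illustrative only: a fixed per-order friction (shipping, waiting, checkout
# effort) expressed as a share of the order value. Below some order size the
# overhead dominates and the order never happens, truncating the power law.
FIXED_FRICTION = 6.0  # assumed dollars of shipping + hassle per order

for order_value in (100, 50, 25, 10, 5):
    share = FIXED_FRICTION / order_value
    print(f"${order_value:>3} order: overhead is {share:.0%} of the value")

# For virtual goods the friction is mostly payment processing, so the cut-off
# can sit far below $25 - which is the opportunity argued for above.
```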

Friday 5 September 2008

The sweet spot in free-to-play, pay-for-stuff market

I've been talking recently about a few particularities in the business models based on end-user micropayments that have created lots of followup discussion and questions. So much, in fact, that I decided it's time to try to explain one crucial and somewhat counter-intuitive detail in writing for later reference.

First, a bit of background: this information is based on my work with Habbo over the last 5 years, and is half learned from experience, half based on theoretical models built from that experience. I'm sharing this with the world because while it's been an interesting ride to build an online social game with an end-user business model, breaking pretty much every conventional rule in the process ("games have to have objectives", "there is no profit in micropayments", and so on), it's still better for our business if people understand why it works. If this allows a competitor to fix a problem in their product and get off the ground, so be it - there's plenty of growth to go around here, and failures don't help anyone. As a disclaimer, the numbers I'm discussing here have no relation to Habbo, though the basic observations certainly apply.

Let's start with an obvious statement and follow it up with something less obvious: Everyone wants to maximize revenue per player. However, in a free-to-play environment, where the majority of players do not contribute direct revenue, the right tool for the job is not to try to extract the maximum amount of money from those who do pay - rather, to increase the number of players buying anything at all - even if it's just $1 over their entire lifetime. In other words, it's good to have a lot of very low individual value players.

To explain it in detail, let's look at two assumptions behind a flexible pricing business model: first, that the number of customers grows as the cost of goods drops, and second, that the maximum consumption is unrelated to the minimum. There is no average customer who spends more than one half of the customers and less than the other half. If there were, the picture of that customer base would look something like the image here, and it's pretty strange looking, wouldn't you say? You've probably seen pictures resembling this one where they don't start from the dominating $0 value point - that's the normal distribution.

The first assumption really is very simple: more people are willing to buy a product at a lower price. This is true for most goods, with some notable exceptions in the luxury goods market, where the perception and desirability of a product goes up with its price. However, it is difficult to create a mass-market luxury item, and the ones that exist do tend to be cheap (and small).

The second is perhaps slightly more involved, especially if one is used to thinking in terms of fixed-price models such as the one-time purchase of a boxed product or monthly subscriptions, both of which are difficult to scale up on a revenue-per-customer basis, so scaling them down is highly undesirable as well. However, it becomes clearer, if not obvious, by looking at other consumer goods - whether tangible, such as drinks and foodstuffs, or intangible, like movies, music and other entertainment. Buying these once certainly does not exclude further sales of the same product to the same customer - rather, it's a strong indicator of sales potential!

The free-to-play, pay-for-stuff model follows both of these assumptions. A cheap purchase price attracts more customers out of the existing free users, and transactional item-based sales allow repeat purchases of a theoretically unlimited amount. Those who are willing to buy more will do so, up to some practical maximum of consumable goods and discretionary spending.

In this environment, focusing on higher-paying customers makes sense only if the number of customers drops by less than half when the revenue per customer doubles. Again, with the exception of some luxury goods segments, this rarely happens. Think about it: how many chocolate bars of standard quality would you expect to sell for $1? How about for $2? More or less than half? How about for $10 for the exact same package? I'd wager chocolate bars sell at least 10x better at the price of $1 than at the price of $10 each, and the increase of customer base more than covers the lower per-unit revenue.

This is a simple exhibit of power-law market dynamics, and is most easily observed when looked at through a logarithmic chart. Readers of books like The Long Tail or Critical Mass should not be surprised. There's a twist, though - because this starts from zero gains (at the free players), the exponential behaviour follows a different path in the beginning. This model also turns Pareto's Law on its head - due to the (in my experience) relatively high exponent, the highest total value is at the lowest end of the spending.
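A toy model of that dynamic, with a made-up constant and exponent purely for illustration: if the number of buyers at price p falls off as a power law N(p) = c * p^(-k) with k > 1, then revenue p * N(p) = c * p^(1-k) actually grows as the price point drops.

```python
# Toy power-law demand model; the constant and exponent are illustrative only.
def buyers_at_price(price: float, c: float = 1000.0, k: float = 1.5) -> float:
    """Buyers willing to purchase at a given price point: ~ c * price^-k."""
    return c * price ** -k

def revenue_at_price(price: float) -> float:
    return price * buyers_at_price(price)

for price in (10.0, 2.0, 1.0, 0.10):
    print(f"${price:>5.2f}: {buyers_at_price(price):10.0f} buyers, "
          f"revenue {revenue_at_price(price):10.0f}")

# With k > 1 the cheap end of the scale dominates total revenue, which is why
# lowering the minimum viable price point matters more than squeezing the top.
```

The chocolate-bar comparison above is the same arithmetic: a 10x price cut more than pays for itself whenever the buyer count grows faster than the price drops.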

Now, of course, there is a minimum profitable price for a bar of chocolate that does not approach $0 even at very high volumes, unlike purely digital products, so increasing chocolate-sales revenue by dropping prices does not necessarily increase profits, and I'm completely ignoring the effects of packaging and marketing on the perceived value of items. For digital sales, where packaging is more flexible and material costs are effectively non-existent, we still have to consider not-insubstantial fixed development costs, a certain amount of server and bandwidth cost, some transaction-related pricing friction, and so forth, but certainly the minimum value (and price) of one unit of digital sales can be driven much lower than that of a bar of chocolate.

Thursday 3 April 2008

Nokia loses share among global youth, music on mobiles doubles popularity

We just released the results of our second global Habbo youth survey. For this 2007 edition, over 58,000 people contacted via the Habbo sites in 31 countries answered a survey, the results of which have been painstakingly analyzed for weeks, nay, months, by our in-house analytics team.

Among the findings is that the majority of teens are now using their phones as mp3 players, the practice being almost twice as popular as before, at 71 percent of those surveyed. A sure sign of convergence there. Sony Ericsson has enjoyed a rise in popularity thanks to this trend, and while Nokia is still the global leader in the teen segment as well as overall, it has lost some ground among teens.

This and a lot more, 250 pages' worth of brilliant insight, can be found in the book we released at the Virtual Worlds Conference today. We're selling it for €475 apiece, a real bargain for the content, because we figured it'll be more useful widely distributed than sold at typical market research prices. Check out the links above for more info.