
Tag - business


Wednesday 28 September 2011

MVP is about proof of potential

Among the people I interact with, and in the places I frequent, a question that comes up a lot is "what exactly is a minimum viable product?". Perhaps that tells you something about me, but let me tell you something more, and offer one answer to the question.

Back at the end of the 90s, between preparing a great bash for the Millennium and various other activities of the sort, I was also learning how to apply the software skills I had acquired to some purpose that would pay my bills. Largely by accident, I ended up spending about four years of the rise and height of the dot-com boom at what I to this day consider the best possible school for creating great Internet apps for end users: the Helsinki office of Razorfish, the then-legendary marketing, technology and management consulting agency, now simply a legend.

In those days, we would at times come across a situation in which a pitch for a project, a client relationship, or a business idea consisted mostly of what we called the "Photoshop application" - a web site consisting of screenshots of something that had not been built. From a developer's point of view, and being rather proud of the skills we had collected, as any 20-something developer is, we saw these as something to laugh at. It's just a bunch of screens, it doesn't do anything! Anyone could come up with that!

Now, in some cases that's probably true: anyone could have come up with that. They weren't all great. Some were, though, and I would grow to respect the skill and effort that people who took design seriously put into creating both great interaction and beautiful looks for software. These things are not to be underestimated, because impressions matter a great deal, and nothing kills a budding consumer relationship faster than a dead-end transaction flow. There's more to designing applications at the screen level than that, though, and listening to a recording of Bill Gross (of Idealab) talking at Stanford reminded me of one part of it.

At the end of that talk, he recalls the story of CarsDirect: giving the founder a small budget and 90 days to prove there was a business - in other words, to find out whether anyone would buy a car from this site, without talking to a dealer. It turns out that once they got the web site up, four people did just that on the very first day - bought a car. The founder then had to go buy the cars from local dealers himself and deliver them to those first four customers. What was NOT important to proving the business opportunity was whether they would be able to form dealer relationships with auto makers, figure out the logistics of car delivery up front, and so forth. For the first four customers, it was enough for the founder to drive the cars from the dealer's lot to their driveways, one at a time!

This is the MVP - the minimum viable proof of business. The front end of a business is where value is delivered. Sometimes you can prove that just by showing Photoshop images of the service to prospective customers. Sometimes you need a prototype site up that looks and feels like a real business. What you do not need is to figure out the supporting processes, back-end business logic, and a whole partner value network to prove business potential. Sure, those are things you will need to figure out to turn a profit - but without the customer-facing front end acquiring sales, there's no revenue, there's no business, and there's no chance of profit, no matter how wonderful your back end would be.

Looking back, I'm not sure I knew this back in the day. I was lucky to have people around me who did. Today, I still see a lot of people thinking about future businesses worry about the back-end processes before they've figured out whether there is a front-end business. Tackle the front end first. Sometimes Eric Ries's "spend $100 on Google AdWords and see if you get any clicks" is enough to do that; sometimes you do need a web site resembling a real service. Do not waste your precious runway building out something to support even the first 10 customers through the entire product delivery before you have one customer, though! If you get even one customer, you'll learn a ton about how your next product version is not what you thought it would be.

Wednesday 6 July 2011

Zynga's ARPU doubling? Not quite

Apparently the pundits and analysts have today gotten around to reviewing Zynga's ARPU figures from their S-1 filing (Inside Social Games, Eric von Coelin). Something seemed fishy in these calculations, and since I'm home for a day, I had the opportunity to review the filing figures on a computer rather than just a tablet. Yep, people, you're comparing apples to oranges. Zynga's monetization rate is improving, but it's nowhere near as dramatic as you're making it look. Did you already forget they defer revenue? You can't compare GAAP deferred revenue to non-deferred DAU/MAU figures! Use the bookings data instead.

This is what the S-1 filing states about the difference:

"Bookings is a non-GAAP financial measure that we define as the total amount of revenue from the sale of virtual goods in our online games and from advertising that would have been recognized in a period if we recognized all revenue immediately at the time of the sale. We record the sale of virtual goods as deferred revenue and then recognize that revenue over the estimated average life of the purchased virtual goods or as the virtual goods are consumed. Advertising revenue consisting of certain branded virtual goods and sponsorships is also deferred and recognized over the estimated average life of the branded virtual good, similar to online game revenue. Bookings is calculated as revenue recognized in a period plus the change in deferred revenue during the period. For additional discussion of the estimated average life of virtual goods, see the section titled “Management’s Discussion and Analysis of Financial Condition and Results of Operations—Revenue Recognition.”

Zynga is of the opinion that bookings more accurately represent their current sales activities, and I fully agree. After all, this is not a subscription business we're talking about! If you're as hard-core a geek about these things as I tend to be, the description of when a booking turns into revenue is discussed on pages 62-63 of the filing:

"Durable virtual goods, such as tractors in FarmVille, represent virtual goods that are accessible to the player over an extended period of time. We recognize revenue from the sale of durable virtual goods ratably over the estimated average playing period of paying players for the applicable game, which represents our best estimate of the average life of our durable virtual goods"

This deferral means that during periods of rapid growth, ARPU appears to decline, while periods of flat or declining traffic would seem to improve it, because earlier deferred revenue is recognized against the current, not the earlier, user base.
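To make the arithmetic concrete, here's a small sketch using the S-1's definition (bookings equals revenue recognized in the period plus the change in deferred revenue during the period). The figures are made-up round numbers for illustration, not Zynga's actuals:

```python
# Hypothetical round numbers (not Zynga's actual figures), illustrating the
# S-1's definition of bookings.

def bookings(revenue_recognized, deferred_start, deferred_end):
    """Total sales activity in the period, before deferral."""
    return revenue_recognized + (deferred_end - deferred_start)

# A fast-growing quarter: much of the quarter's sales land in deferred revenue.
revenue = 100.0          # GAAP revenue recognized this quarter ($M)
deferred_start = 150.0   # deferred revenue balance at start of quarter ($M)
deferred_end = 210.0     # deferred revenue balance at end of quarter ($M)

b = bookings(revenue, deferred_start, deferred_end)
print(b)  # 160.0 -- sales activity is 60% above recognized revenue

# Dividing both by the same DAU base shows why revenue-based ARPU
# understates monetization while the user base is growing.
dau = 60.0  # average daily active users, in millions
print(round(revenue / dau, 3))  # 1.667 ($ per DAU per quarter, GAAP revenue)
print(round(b / dau, 3))        # 2.667 ($ per DAU per quarter, bookings)
```

The same deferral works in reverse once growth flattens: old deferred revenue keeps being recognized against a stable user base, flattering revenue-based ARPU.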

With those covered, what are the actual sales figures? The average daily bookings-to-DAU rate is somewhat higher than the revenue-to-DAU rate, at $0.051 (B) in Q1 of this year vs. $0.042 (R). Both seem to have plateaued at that level since growing from $0.030 (B) / $0.017 (R) a year ago. Respectable, but not earth-shattering - and the growth, while impressive, isn't quite "more than doubled".

Tuesday 21 June 2011

On software and design, vocabularies and processes

Having recently witnessed the powerful effect that establishing a robust vocabulary has on the process of design, and seeing today the announcement of the oft-delayed Nokia N9 finally hit the TechMeme front page, I again thought about the common misconceptions of creating software products. It's been a while since I posted anything here, and this is as good a time as any for a basics refresher.

A typical axis of argument places software engineering somewhere between manufacturing and design. I, among many others, have argued for years that the relationship of software to physical manufacturing is almost non-existent. While the development process for a new physical product, like any process involving new creation, starts with a design phase, the manufacturing only really begins with the creation of a specification (typically hundreds of pages) whose job is to outline how to make the fully designed product in volume. In comparison, by the time a software product is fully designed and ready to start volume production, there is no work left - computers can copy the final bits forever without a spec. There's more to that argument, but that's the short version. Creating software is the design part of a product development process.

So, goes the line of thinking, if software is design, then it must be right to always begin a software project from zero. After all, all designs start from a blank sheet of paper, right? At least, all visual designs do... No good comes from drawing on top of something else.

If this truly were the case, what do you think they teach in art schools, architecture departments, and so on? Technique? For sure, but if that were all there was, we'd still be in the artisan phase of creation. History? Yes, but not only that. An important part of the history and theory of design is establishing lineage, schools of thought, and vocabularies which can serve as a reference for things to come. All truly new, truly great things build on prior art, and not just on the surface, but by having been deeply affected by the learning collected while creating all that came before them.

Not having actually studied art, I have only a vague idea of how complex these vocabularies are, and this is an area where a Google search isn't going to help, as it only brings up glossaries of a few dozen to at most a hundred basic terms for any design profession. That is not even the beginning of a real vocabulary, which describes in great detail the relationships between the concepts, ways of using them together, examples of prior use, and so on. However, even from this rather precarious position, I will hazard a statement which might offend some:

Software design, in terms of the vocabulary required for state of the art, is more complex than any other field of design by an order of magnitude or more. The practical implication of this is that no new software of value can be created from a "blank sheet of paper".

This will require some explanation. Let's tackle that magnitude thing first.

Any complete software system, such as the one running in the smart phone in your pocket, measures in the tens, if not hundreds, of millions of lines of code. LOC is not a great measure of software complexity, but there you have it. In terms of other, more vocabulary-related measurements, the same system will consist of hundreds of thousands of classes, function points, API calls, or other externally referable items. Their relationships and dependencies typically grow super-linearly - faster than the total number of items.
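As a toy illustration of that super-linear growth (my own arithmetic, not a measurement of any real system): the number of possible pairwise relationships among n items grows quadratically, so even if only a small fraction of them are ever realized, relationships outrun items quickly.

```python
# Upper bound on pairwise relationships among n externally referable items:
# n choose 2, which grows quadratically in n.
def max_pairwise_relationships(n):
    return n * (n - 1) // 2

for n in (100, 10_000, 1_000_000):
    print(n, max_pairwise_relationships(n))
# 100       -> 4,950
# 10,000    -> 49,995,000
# 1,000,000 -> 499,999,500,000
```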

By comparison, the most complex designs in any other field are dwarfed. Yes, a modern fighter jet may have design specs of hundreds of thousands of pages, with individual parts whose specs alone are as complex as any you've seen. Yes, a cruise ship, when accounting for all the mechanical, logistic and customer-facing functions together, may be of similar complexity. And yes, a skyscraper's design blueprints are of immense complexity, such that no one person can really understand all of them. However, a huge part of these specs, too, is software! Counting software out of those designs, a completely different picture emerges.

None of these designs would be possible without reusing prior work, components, designs, mechanisms and customs created for their predecessors. Such is the case for software, too. The components of software design are the immense collections of libraries and subsystems already tested in the field by other software products. Why, then, do we so often approach software product development as if we could start from scratch?

Why was it that the N9 reminded me of this? Well, if stories and personal experiences are to be trusted, Nokia appears to have "started over" at least two or three times during the process of creating it. And that's just during the creation of one product. As a result, it's completely different, from both a user and a developer standpoint, from the four devices which preceded it in the same product line, and two (three?) years late from its original schedule. Of course, they did not scrap everything every time, otherwise it would never have been finished at all. But this, and Nokia's recent history, should serve as a powerful lesson to us all: ignoring what has already been created and building from a blank sheet instead is a recipe for delay and financial disaster.

Software is design. Design needs a robust vocabulary, and the processes to use it well, if it is to create something successful.

Monday 31 January 2011

Did common identities die with OpenID? No

About a year ago I posted here a summary of trends I expected to be relevant to our product development over 2010, and looking back at it, perhaps I should have put tablet computing on that list. However, what prompted me to go back and look at it today was picking up on the news that 37signals has declared OpenID a failed experiment, and the related Quora thread I found. Wow, the top-voted answer there is one-sided. Here's what I think about it, as an update to my statement from a year ago. Comments are welcome!

Facebook has established itself as a de facto source of identity and social graph data for all but a few professional/enterprise-targeted Internet services. Over the medium to long term, it is still possible that another service, or a federation of multiple services using standard APIs, will displace Facebook as the central source. Either way, a networked, "external" social graph is a given. The majority of users still behave as if stand-alone services with individual logins and user-to-user relationships were preferred, but that's a matter of behavioral momentum.

This has not removed identity-related security issues, like identity theft. The nature of the problem will shift over time from account theft to impersonation and large-scale and/or targeted information theft. Consumers remain uninterested in, and even hostile to, improving security (at the cost of sometimes reduced convenience). Visible and widespread security scares are beginning to change that mindset, though, and it's possible that even by the end of the year, at least one of the big players will introduce a "secure id" solution for voluntary use as a further argument for their services.

The spread of the social graph will have more impact on the scope of Internet services, however. Application development today should take it for granted that information about users' preferences, friends, brand connections and activity history will be available, and should utilize it (wisely) to improve the service experience. The key to viral/social distribution is not whether applications can reach their users' networks (which is a given), but rather what would motivate the user to spread the message.

Thursday 13 January 2011

A last look at 2010... and what's in sight?

For a few years, I've tried to recap here some events I've found notable over the past year and to offer some guesses on what might be ahead of us. I'm somewhat late this year, due to being busy with other stuff, but I didn't want to break the tradition, no matter how silly my wrong guesses might seem later. Again, others have covered the generalities, so I'll focus on specifics, in particular as they relate to what I do. For a look at what we achieved with Habbo, see my recap post on the Sulake blog.

This time last year, Oracle still had not completed the Sun acquisition due to some EC silliness, but that finally happened over the course of 2010. It seems to be playing out about how I expected: MySQL releases have started to appear (instead of just being announced, which was mostly what MySQL AB and Sun were doing), and they actually are improvements. Most things are good on that front. On the other hand, Oracle is exerting license force on the Java front, hurting Java's long-term prospects in the process, just at a time when things like Ruby and Node.js should be putting the Java community on the move to improve the platform. Instead, it looks like people are beginning to jump ship, and I can't blame them.

A couple of things surprised me in 2010. Nokia finally hired a non-Finn as a CEO, and Microsoft's Kinect actually works. I did mention camera-based gesture UIs in my big predictions post, but frankly I wasn't expecting it to actually happen during 2010. Okay, despite the 8 million units, computer vision UIs aren't a general-purpose mass market thing yet, but the real kicker here is how easy Kinect is to use for homebrew software. We're going to see some amazing prototypes and one or two actual products this year, I'm sure.

In terms of other software platform stuff, much hot air has been moved around iOS, Android, JavaScript and Flash. I haven't seen much that would make me think it's time to reposition yet. Native applications are on their way out (never mind the Mac App Store; it's a last-hurrah thing for apps which don't have an Internet service behind them), and browser-based stuff is on its way in. Flash is still the best browser-side application platform for really rich stuff, and while JavaScript/HTML5/Canvas is coming, it's not here yet. For more, see this thread on Quora where I commented on the same. Much of the world seems to think that the HTML5 video tag, H.264 and VP8 equate to the capabilities of Flash; that's quite off-base.

On the other hand, tablets are very much the thing. I fully expect my Galaxy Tab to be outdated by next month, and am looking forward to the dual-core versions, which will probably be good for much, much more than email, calendar, web and the occasional game. Not that I'm not already happy about what's possible on current tablets -- I already carry a laptop around much less. And in terms of what it means for software -- UIs are ripe for a radical evolution.

The combination of direct touch on handheld devices and camera-read gestures on living-room devices is already here, and I expect both to move onto the desktop as well. Not by replacing keyboards, nor necessarily mice; rather, I'm looking forward to soon having a desktop made of a large, near-horizontal touchscreen for arranging stuff replacing the desk itself, a couple of large vertical displays for presenting information, camera vision to help the computer read my intentions and focus, and a keyboard kept around for rapid data entry. One has to remember that things for which fingers are enough are much more efficiently done with fingers than by waving the entire hand around.

Will I have such a desk this year? Probably not. At the workplace, I move around so much that a tablet is more useful, and at home, time in front of a desktop computer has grown rather infrequent with the arrival of our little baby girl a few weeks ago. But those are what I want "a computer" to mean to her, not these clunky, limited things my generation is used to.

Tuesday 11 May 2010

LOGIN presentation on Habbo's Flash transition and player-to-player market

I gave my presentation as one of the first sessions of this year's LOGIN conference. Darius Kazemi liveblogged the speech on his blog, and the slides are here. Best viewed together.

Sunday 2 May 2010

On rich web technologies

For the past week, the technology world has been unable to discuss anything but Apple's refusal to allow Flash applications on the iPhone and iPad, and Steve Jobs's open letter which paints this as a technology question and Apple's position as one of protecting consumer interests by ensuring quality applications. It would be incredibly naive to take that literally. No, of course it's all about business control.

Charlie Stross has written a great, if speculative piece on the bigger picture. I think Charlie is spot-on - Apple is seeing a chance to disrupt the PC market, and wants to finish at the top, holding all the aces. That might even happen, given how badly other companies are addressing the situation, but if it did, it would be anything but good for the consumer - or for the small developer.

The business interest

Apple today is a $43 billion annual revenue, $240 billion market cap giant, give or take. Of that value, 40% or so rides on the iPhone, and Steve is clearly taking the company in a direction where devices running iPhoneOS will replace the Macs, so that share is only increasing. Right now, they have more resources to do this than anyone else in the world, and the least legacy to worry about, given that despite the rising market share and the title of leading laptop vendor, computers running Mac OS X are still a minority compared to all the Windows-powered devices from a legion of other makers.

The company's DNA, and Steve's personal experience over the past 25 years, have taught them that an integrated, tightly controlled platform is something they are very good at, but that the earlier mistake of not also controlling app distribution left them weak. They're not going to repeat that mistake. And certainly they'll try to ensure that not only do the iPhone and iPad have the best applications, but that those applications are available only on Apple devices.

Adobe, despite their history of dominating many design and content production software niches and a market cap of $18 billion, is tiny in comparison. Furthermore, the Flash platform is a visible but financially less relevant part of Adobe's product portfolio (though the exact share of Flash is buried inside their Creative Solutions business segment). Even disregarding the fact that Apple can, as the platform owner, dictate whatever rules they want for iPhoneOS, Adobe simply cannot win a battle of resources against Apple.

But this fight is not about Flash on the iPhone - it's about Apple's control of the platform in general. Whether or not it's true, Apple believes tight control is a matter of survival for them.

The technical argument

Apple wants to make it seem like they're doing this because Flash is bad technology. As I wrote above, and as so many others have described better than I have, that's a red herring. It's always convenient to dress business decisions up in seemingly sound technical arguments ("Your honor, of course we'd do that, but the tech just doesn't work!"). Anyway, let's look at that technical side a bit.

First, let's get the simple bit out of the way. Flash is today most often used to display video on web sites. However, this is not about video, and video has never been Flash's primary point. Flash just happened to have a good install base and decent codecs in 2005, at a time when delivering lots of video bits started to make sense and YouTube came along to popularize the genre. In fact, it was completely superior for the job compared to the alternatives of the time, such as RealPlayer. The real feature, however, was that Flash was programmable, which allowed these sites to create their own embedded video players without having to worry about the video codecs.

By that time, Flash had already gained something of a bad reputation as the tool with which some seriously horrible advertising content had been made, so the typical way to make the web fast was to disable Flash content - rendering most ads invisible. I'm pretty sure that for many, YouTube was the first real incentive to have Flash in their browsers at all. That is, unless you liked to play the casual games that even then were often created with Flash.

But that's all history; what about the future? Adobe certainly needs to take quite a lot of the blame for the accusations leveled against Flash - in particular, the way Flash content slows a computer down even when nothing is visible (as in, the 10 Flash-based adverts running in a browser tab you haven't even looked at in the last half an hour), or the fact that yes, it does crash rather frequently. Quite a few of those problems are being addressed by Flash Player 10.1, currently in beta testing and to be released some time in the coming months. Too little, too late, says Apple, and many agree.

I would, too, except for the fact that despite the issues, Flash is still the leading and best platform for rich web applications. It took that position from Java because it was (and is) lighter and easier to install, and it keeps that position now against the much-talked-about HTML5 because the latter simply isn't ready yet - and once it is, it will still take years to become consistently available for applications (that is, until everyone has upgraded their browsers). Furthermore, it's quite a bit easier to create something that works by depending on Flash 10 than to work around all the differences between Internet Explorer, Firefox, Safari, Chrome, Opera and so on.

But that's exactly what Steve is saying, isn't it? That these cross-platform Flash applications simply can't provide the same level of sophistication and grace as a native application on the iPad. Well, maybe that's true today. Maybe it's even true after Adobe finally releases 10.1's mobile editions on Android. And given the difference in the scale of resources Apple and Adobe can throw at a problem, maybe it's true even with Flash Player 10.2 somewhere down the road.

But that doesn't matter. What matters is what developers do with the tools given to them, because the tools themselves do nothing. There's plenty of horrible crap in the ranks of App Store's 200,000 applications, and there's plenty of brilliant things done with Flash and AIR. Among the best of the best, which platform has the greatest applications? That's a subjective call that I will let someone else try to answer.

I will say this: all technology is fated to be replaced by something better later. At least ActionScript 3 and Flash's virtual machine provide a managed language that lets application developers worry about something other than memory allocation. Sure, it wasn't all that hot until version 10, and it still loses to Java, but it sure is better than Objective-C. If we're now witnessing the battle for platform dominance for the end of this decade, I would sure like to see something other than late-80s technology on the podium.

The consumer position

Apple wants to provide the consumer with a polished, integrated experience where all the pieces fit together, and most of them are made by Apple. The future of that experience includes control of your data as well. Put your photo albums in Apple's photo service, your music library in iTunes, your home videos on iMovie Cloud, and access it all with beautiful Apple devices. Oh, you don't want to be all-Apple? Too bad. That's what you get.

Or, you can choose something where you'll have choice. If you believe Steve Jobs, that choice is between dirt, smut and porn, but his interest is to scare you back to Apple, so take that with a grain of salt. Me, I've never liked being dictated to, so I'll be choosing the path where I can pick what I want, when I want it. Sure, it'll mean I'll miss some of the polish (iPhone is by far the nicest smart phone today, and the iPad sure feels sweet), but nevertheless, I respect my freedom to choose more. Today, it means I'll choose Android, and am looking forward to playing Flash games and using AIR applications on tablets powered by it.

Monday 26 April 2010

A new lean software manifesto

This weekend saw Eric Ries's Lean Startup movement produce a conference on the approach. People who were there have already summarized and documented the proceedings in quite some detail. One of the interesting takeaways seems to have been Kent Beck's proposal for evolving the Agile Manifesto into something more applicable to the startup context of continuous learning and adaptation. Apparently it has created quite a bit of discussion, but apart from the video recording, I haven't seen it stated completely anywhere. So, it goes something like this (original waterfall comparison parenthesized):

As practitioners of software development to support lean business, we have come to realize that the unknowns of the business context are more critical to the success of the enterprise than the attributes of the software we create. As we learn this, we have come to value:

Team vision and discipline over individuals and interactions (or processes and tools)
Validated learning over working software (or comprehensive documentation)
Customer discovery over customer collaboration (or contract negotiation)
Initiating change over responding to change (or following a plan)

That is, while there is value in the items on the right, we value the items on the left more.

I hope I did not butcher some subtlety in extracting those words from the keynote speech. Now, for my own view: there's plenty in the above statements that resonates with me, but some bits I find myself somewhat uneasy about. And no, it's not the second point, which apparently has ruffled the feathers of quite a few software engineers (I'll let Steve Freeman explain that one).

The biggest issue I have is with the third statement, preferring customer discovery to customer collaboration. Not because that's not a great thing in some situations, but because it limits the applicability of this model to a tiny cross-section of where the lean principles truly apply. Namely, it works great for a garage startup that doesn't yet know what its market really is. It doesn't work so well for a business which already has customers, revenue, and even profit - yet such a business is still well served by maintaining a lean approach. Now, one may argue that a growth business will always need to keep discovering new customers, either similar to those it already has or in entirely new segments, and I will not disagree. Still, there comes a point where greater success comes from collaborating with the customers you have than from looking for new ones.

The second issue I have is with the first statement, preferring team vision and discipline over individuals and interactions. Again, not because I disagree, but because I know there are many people who will interpret the word "discipline" as "let's set up processes, plans and approval mechanisms", and turn the whole thing back into waterfall. Successful application of the agile principles has never been as easy as the books and educators make it sound, and the subtlety of the differences between the values in the first statement is, I think, the primary reason why.

Sunday 17 January 2010

First thoughts about Balancion

I got an invite to the Balancion personal finance application beta a week ago, and have played with it a fair bit since. I've tried a few similar tools before, ranging from the finance packages of the banks I've been a customer of to a few desktop applications. Until now, none has impressed me enough to keep using it for any significant period, but I think Balancion might be one to stick around for a while.

Balancion solves the two issues my previous experiments have failed at: first, it covers the entirety of my personal accounts (or very close to it), because it isn't limited to the services offered by one bank (the failing of Nordea's, Sampo's and OP's packages, at least the last time I tried them), and second, it doesn't force me to spend my evenings manually typing in boring details, thanks to its tools for downloading the data from the banks and other institutions. Of course, that's really just table stakes for the game, but my previous experiences have shown even that much is not a given in a market the size of Finland. I would imagine larger market areas have seen more focus on this type of thing - American banks seem to advertise compatibility with Quicken or MS Money, or now with Mint, the hottest entry in the area, and German banks apparently have a standard for transaction data exchange. None of that has been available to individuals in Finland.

What currently lifts Balancion above the table-stakes minimum is how it deals with "uncategorized" expenses. Other tools allow searching for similar historic transactions and categorizing all of them at once. Balancion applies that to the future as well, and learns to recognise more and more stuff as you go. Setting the books up for the first time does require a few hours of clicking around, but it gets less and less manual as time goes on. That's what makes it a joy to use (as much as any financial application can be a joy, that is!).
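The learn-as-you-go behaviour could be sketched along these lines. This is purely my illustration, not Balancion's actual implementation; the class and method names are made up, and a real service would match on fuzzier signals than an exact payee string.

```python
# Illustrative sketch (not Balancion's implementation): a categorizer that
# remembers each manual categorization and applies it automatically to
# future transactions with the same payee.

class LearningCategorizer:
    def __init__(self):
        self.rules = {}  # payee (lowercased) -> category

    def categorize(self, payee):
        """Return a learned category for this payee, or None if still unknown."""
        return self.rules.get(payee.lower())

    def learn(self, payee, category):
        """Record a manual categorization; future matches become automatic."""
        self.rules[payee.lower()] = category

c = LearningCategorizer()
assert c.categorize("K-Market Kamppi") is None          # first sighting: manual work
c.learn("K-Market Kamppi", "Groceries")
assert c.categorize("K-MARKET KAMPPI") == "Groceries"   # learned for the future
```

The point of the sketch is the shrinking manual effort: every categorization you do once is one you never do again.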

At this point in the beta, it's a bit limited: just tracking income and expenses, plus a few (quite useful and informative) visualizations of the same, which can already be helpful in recognizing big expense areas and saving money. However, I'm looking forward to seeing more of the budgeting, expense management and investing tools in the service. It's pretty clear how this can develop and where the opportunities for the business lie. The crucial question is: how can Balancion add partnerships and cross-sell features while retaining the trust of its users? Thus far, their communication indicates they understand how important this will be to their success.

I'm not terribly happy about the way Balancion authenticates me, though. The email/password login is standard, though I'd prefer to use OpenID to avoid managing one more password. What really bugs me are the mandatory "security questions", which they require in order to change the password. Such questions, especially since they were limited to two out of half a dozen pre-selected options, only reduce security (seriously, it does not take much investigating to figure out my mother's maiden name). If this is truly what their security advisor Nixu has recommended to the team, I'm disappointed in Nixu as well. Anyway, I answered the questions with something random - so now I can't change my password at all. This probably was not what they intended.

For anyone interested in this category of services, I would recommend checking out the venture capital pitch presentation of Mint, the US equivalent of Balancion. If you want to try out Balancion yourself, ask me for an invite here in the blog comments or by tweeting @osma.

Thursday 14 January 2010

Technology factors to watch during 2010

Last week I posted a brief review of 2009 here, but didn't go much into predictions for 2010. I won't try to predict anything detailed now either, but here are a few things I think will be interesting to monitor over the year. And no, tablet computing isn't on the list. For fairly obvious reasons, this is focused on areas impacting social games. As a further assist, I've underlined the parts most resembling conclusions or predictions.


Social networks and virtual worlds interoperability

As more and more business transforms to use the Internet as a core function, the customers of these businesses are faced with a proliferation of proprietary identification mechanisms that has already gotten out of hand. It is not uncommon today to have to manage 20-30 different userid/password pairs in regular use, from banks to e-commerce to social networks. At the same time, identity theft is a growing problem, no doubt in large part because of these minimum-security methods of identification.

Social networks today are a significant contributor to this problem. Each collects and presents information about its users that contributes to the rise of identity theft, while running its own authorization mechanism in a silo of low-trust identification methods. The users, on the other hand, perceive little incentive to manage their passwords in a secure fashion. Account hijacking and impersonation are a major problem area for each vendor. The low trust level of individual account data also leads to a low relative value of owning a large user database.

A technology solution, OpenID, is emerging and taking hold in the form of an industry-accepted standard for exchanging identity data between an ID provider and a vendor in need of a verified id for their customer. A few of the current backers of the standard are shown in the picture on the right. However, changing the practices of the largest businesses has barely begun, and no consumer shift can yet be seen – as is typical for such “undercurrent” trends.

OpenID will allow consumers to use fewer, higher-security ids across the universe of their preferred services. That, in turn, will allow these services a new level of transparent interoperability: combining data from each other in near-automatic, personalized mash-ups via the APIs each vendor can expose to trusted users, with less fear of opening holes for account hijacking.
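To make the trust mechanism concrete, here is a much-simplified sketch of the kind of check an OpenID relying party performs: the identity provider signs the assertion fields with a secret shared during association, and the relying party recomputes the HMAC to confirm the assertion wasn't forged in transit. The field names and message format below are illustrative, not the exact OpenID 2.0 wire format.

```python
# Simplified sketch of OpenID-style assertion verification, assuming a
# shared "association" secret already exists between provider and vendor.
# Field names are illustrative, not the real OpenID 2.0 key/value format.

import hmac
import hashlib

def sign_assertion(fields, secret):
    """Provider side: sign the sorted assertion fields with the shared secret."""
    msg = "&".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return hmac.new(secret, msg.encode(), hashlib.sha256).hexdigest()

def verify_assertion(fields, signature, secret):
    """Relying-party side: recompute the HMAC and compare in constant time."""
    expected = sign_assertion(fields, secret)
    return hmac.compare_digest(expected, signature)

secret = b"association-secret-from-provider"
fields = {"claimed_id": "https://example.com/user/alice",
          "nonce": "2010-01-17T12:00:00Zabc"}
sig = sign_assertion(fields, secret)
assert verify_assertion(fields, sig, secret)                 # genuine assertion
tampered = dict(fields, claimed_id="https://example.com/user/mallory")
assert not verify_assertion(tampered, sig, secret)           # forgery rejected
```

This is what lets a vendor accept "this really is alice" from a provider it has associated with, without ever seeing alice's password itself.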


Browsers vs desktops: what's the target for entertainment software?

Here's a rough sketch of competing technology streams in terms of two primary factors – ease of access versus the rich experience of high-performance software. “Browser wars” are starting again, and with the improved engines behind Safari 4, Firefox 4, IE 8 and Google Chrome, a lot of the functionality we're used to thinking belongs to native software, or at best to browser plugins like Flash, Java or Silverlight, will be available straight in the browser. This certainly includes high-performance application code, rich 2D vector and pixel graphics, video streams and access to new information such as location sensing. The plugins will most likely remain stronger at 3D graphics, synchronized audio and advanced input mechanisms like using webcams for gesture-based control. Invariably, the new input capabilities in particular will also bring with them new security and privacy concerns which will not be fully resolved within the next 2-3 years.

While 3D as a technology will be available to browser-based applications, this doesn't mean the web will turn to representing everything as a virtual copy of the physical world. Instead, its best use will be as a tool for accelerating and enhancing other UI and presentation concepts – think iTunes CoverFlow. For social interaction experiences, a three-degrees-of-freedom pure 3D representation will remain a confusing solution, and other presentations such as axonometric “camera in the corner” concepts will remain more accessible. Naturally, they can (but don't necessarily need to) be rendered using 3D tech.


Increased computing capabilities will change economies of scale

The history of the “computer revolution” has been about automation changing economies of scale to enable entirely new types of business. Lately we've seen this, for example, in Google AdWords enabling small businesses to advertise and/or publish ads without marketing departments or the involvement of agencies.

The same trend is continuing: computing capacity is becoming a utility through cloud computing; storage is available at costs which put terabytes within reach of organizations of almost any size and budget; and, most importantly, developing data mining, search and discovery algorithms enable organizations to turn data which used to be impossible to analyze into automated business practices. Unfortunately, the same capabilities are available to criminals as well.

Areas in which this is happening as we speak:

  • further types and spread of self-service advertising, better targeting, availability of media
  • automated heuristics-based detection of risky customers, automated moderation
  • computer-vision based user interfaces which require nothing more than a webcam
  • ever increasing size of botnets, and the use of them for game exploits, money laundering, identity theft and surveillance
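The second bullet, heuristics-based detection of risky customers, could be sketched as a set of weighted rules with a review threshold. The rules, weights and field names below are entirely made up for illustration; a real system would be tuned against actual fraud data.

```python
# Toy sketch of heuristics-based risk scoring: each rule is a predicate
# plus a weight, and customers whose total score crosses a threshold get
# flagged for manual review. Rules and weights here are invented examples.

RULES = [
    (lambda c: c["account_age_days"] < 2, 3),    # brand-new account
    (lambda c: c["chargebacks"] > 0, 4),         # history of chargebacks
    (lambda c: c["logins_per_hour"] > 50, 2),    # bot-like activity level
]

def risk_score(customer):
    return sum(weight for rule, weight in RULES if rule(customer))

def is_risky(customer, threshold=4):
    return risk_score(customer) >= threshold

fresh_bot = {"account_age_days": 0, "chargebacks": 0, "logins_per_hour": 80}
regular = {"account_age_days": 400, "chargebacks": 0, "logins_per_hour": 1}
assert is_risky(fresh_bot)       # 3 + 2 = 5, over the threshold
assert not is_risky(regular)     # 0, passes without review
```

The economies-of-scale point is exactly that such rules run over every customer automatically, where a decade ago the same screening would have required a fraud department.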

The escalation of large-scale threats has raised the need for industry-wide groups to exchange information and best practices between organizations regarding security-relevant information such as new threats, customer risk ratings, and the identification of targeted and organized crime.


Software development, efficiencies, bottlenecks, resources

Commercial software development tools and methods experience a significant shift roughly once every decade. The last such shift was the mainstreaming of RAD/IDE-based, virtual-machine-oriented tools and the rise of the Web and open source in the 90s; now those two rising themes are increasingly mainstream, while “convergent”, cross-platform applications which depend on the availability of always-on Internet are emerging. As before, the shift is driven not by technological possibility, but by the richness and availability of high-quality development tools with which more than just the “rocket-scientist” superstars can create new applications.

The skills which are going to be in short supply are those for designing applications which can smoothly interface with the rest of the cloud of applications in this emerging category. Web-accessible APIs, the security design of those APIs, efficient utilization of services from non-associated, even competing companies, and friction-free interfaces for the end users of these web-native applications are the challenge.

In this world, the traditional IT outsourcing houses won't be able to serve as a safety valve for resources, as they're necessarily still focused on serving the last and current mainstream. In their place, we must consider the availability of open source solutions not just as a method for reducing licensing costs, but as the “extra developer” used to reduce time-to-market. And as with any such relationship, it must be nurtured. In the case of open source, that requires participation and contribution back to the further development of that enabling infrastructure as the cost of outsourcing the majority of the work to the community.

Mobile internet

With the launch of the iPhone, the use of Web content and 3rd-party applications on mobile devices has multiplied compared to previous smartphone generations. This is due to two factors: the familiarity and productivity of Apple's developer tools for the iPhone, and the straightforward App Store for end users. Moreover, the breadth of the application base is primarily due to the former, as proven by the wide availability of unauthorized applications even before the launch of iPhone OS 2.0 and the App Store. Nokia's failure to create such an applications market, despite the functionality available on S60 phones for years before the iPhone launch, proves this – it was not the features of the device, but the development tools and application distribution platform that were the primary factor.

The launch of Google's Android will further accelerate this development. Current Android-based devices lack the polish of the iPhone, and the stability Nokia's devices have gained from years of experience, yet the availability of development tools will supercharge this market, and the next couple of years will see an accelerated development-and-polish cycle from all parties. At the moment, though, it's impossible to call the winner in this race.
