Fishpool


Tuesday 5 February 2013

Arbitrage as a game mechanic

Reading this rather amazing story about cross-border arbitrage, I could not help but think about how it applies to game design.

Here's how the arbitrage math adds up. The ferry costs approximately $275 round trip, and gas is about $8 a gallon in Sweden, which, assuming our car gets around 30 miles per gallon, brings us to $435 in expenses. Throw in food, lodging, and other miscellaneous costs, and the total should come in around $600 or so. Remember, diapers cost more than twice as much in Lithuania as they do in Norway, so we only need to buy about $600 worth to break even.
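
Just to spell the numbers out, here's the same break-even calculation as a few lines of Python. The figures come from the story; the flat two-to-one resale multiplier is my simplification of "more than twice as much":

    # Back-of-the-envelope break-even for the diaper run.
    ferry = 275.0              # round-trip ferry, USD
    gas = 160.0                # ~$8/gallon at ~30 mpg over the route
    misc = 165.0               # food, lodging, miscellaneous
    expenses = ferry + gas + misc          # ~$600 total

    resale_multiplier = 2.0    # Lithuanian price / Norwegian price (assumed)

    # Profit on `spend` dollars of diapers is spend * (multiplier - 1) - expenses,
    # so break-even comes at expenses / (multiplier - 1).
    break_even_spend = expenses / (resale_multiplier - 1)
    print(f"break even at ~${break_even_spend:.0f} of diapers")   # ~$600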

If in the real world it's possible to entice enough entrepreneurial activity from a neighboring country to make the supermarkets of south Norway run out of diapers, imagine how powerful arbitrage opportunities are for game design. It can do everything:

  • Increase play frequency, as you need to return often to exploit recurring opportunities
  • Drive explorative gameplay, as more and more players search for new kinds of arbitrage
  • Incentivize specialization, because exploiting arbitrage requires focusing on a particular activity
  • Drive expected lifetime up, as leaving the game means leaving value on the table
  • Drive lifetime value up, because in a free-to-play game, longer play time means more opportunities to buy
  • Drive virality up, because players have incentive to find both supply and demand for their particular arbitrage skill

Many of these factors apply even to a single-player game that simulates market activities. Look no further than the classics of market games, David Braben's Elite (1984) (or Star Trader, which preceded it by a cool 10 years). However, the forces really come to the forefront in a social game, where the arbitrages don't even need to be programmed in, as long as the design doesn't eliminate their possibility. Players will probably discover them.

That doesn't mean it's trivial to fully exploit that capability, though. For example, I don't think we ever fully explored the arbitrage mechanics in Habbo Hotel, even though the system is full of player-to-player trading, rare items, well-hidden nooks and crannies, and whatnot. The most important feature missing in Habbo Hotel is rich support for specialization. RPG-style games bring specialization through character classes and skills, resource management games through directing players to invest their earned resources in a particular type of activity, and so forth. The game mechanic should reward specializing by making it possible for a player highly capable in a particular section of the gameplay to trade that capability with others for the skills or resources provided by another type of specialization. Don't reward being a generalist, or allow maximizing all stats.
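
To make that concrete, here is a minimal sketch of the simplest such mechanic: a shared skill-point budget, so that maxing out one specialty necessarily starves the others and leaves the specialist dependent on trade. The skill names and the cap are hypothetical, not anything Habbo shipped:

    SKILLS = ("crafting", "trading", "exploring")
    TOTAL_BUDGET = 100    # one shared cap across all skills

    def allocate(points: dict[str, int]) -> dict[str, int]:
        """Accept a skill allocation only if it fits the shared budget."""
        if any(skill not in SKILLS for skill in points):
            raise ValueError("unknown skill")
        if sum(points.values()) > TOTAL_BUDGET:
            raise ValueError("budget exceeded: no maxing out all stats")
        return points

    # The specialist out-trades any generalist, and must rely on trade
    # with the crafters and explorers for everything else.
    specialist = allocate({"trading": 90, "crafting": 10})
    generalist = allocate({"crafting": 34, "trading": 33, "exploring": 33})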

Sunday 1 January 2012

Dusting off the looking glass

It's that time again... time to make a few statements I can feel ridiculous about later on. I did take an advance position back in October regarding Internet platforms, so there's no need to touch that topic again just yet (especially after the additional HTML5/Flash comments in November). As before, though, let's take a look at what my hit rate has been.

#1 Oracle's jostling on Java patents will hurt Java as a platform: yes, although it's hard to notice, what with all the chatter about cloud platforms instead. Still, you've got to write that cloud-hosted application in some language, and though evidence is sparse, it seems to me that more devs are picking other tools. Somewhat insanely, PHP still ranks well in those selections, which only proves that these things don't follow any observable logic.

#2 Amazing natural motion control applications during 2011: well, not really, yet. Xbox Kinect has supposedly continued to sell well, though Microsoft hasn't given any sales data since last March (when they announced 10 million units sold), and the applications are rather lame. There is some pretty amazing research going on, though, which will ultimately enable computers to truly augment live views of the real world.

#3 Flash and new computing devices: see the other posts, linked above. Progress is steady, but the impact will take several years. As for the long-term view: while my daughter already understands that tablets and phones are for looking at stuff and playing, and keyboards are for banging, I maintain hope that in the next couple of years she will be able to interact with computers by speech as well as gestures. We'll still need to invent the new human-computer interface best practices for that age, though.

Facebook Timeline did finally launch before the end of 2011. What do you think of it? I haven't seen a reason to change my view since October, although the "social reader" apps like the Washington Post's or the Guardian's certainly are annoying. I don't know whether I should expect media companies to learn how to interact with people, though.

Now, the predictions. This one's going to be difficult. Not because the world would be ending this year, but because quite a few macro trends seem to be converging. There's lots to feel optimistic about: locally, the interest in growth entrepreneurship; globally, new forms of peaceful citizen democracy and the ever-continuing development of technology (gene therapy and data-driven, preventative medical treatments are exciting). A few things I hope will turn out well, though it's going to be a bumpy road: the ongoing Arab Spring, the Russian pro-democracy movement, and the Euro crisis, which could still lead to yet another banking collapse. And finally, some political and regulatory changes are quite worrying, even if I've tried to avoid taking a position on politics, especially politics outside the EU. ACTA, SOPA and the NDAA bother me for their privacy and anti-competitive aspects as well as their lack of due process. Still, these are hardly going to bring the Singularity around quite yet, dystopian though they seem.

However, I don't want to pretend I care about or follow politics closely enough to understand why these things always arrive years behind and over-reach, so I'd rather focus on something more tractable. In terms of professional interests, the trend toward hosted, multiplayer gaming is by now quite unstoppable. We're moving on from the Social Games 1.0 of the Facebook Canvas, though, and the future belongs to games where the players' actions impact each other. The challenge is that we need to learn to design these games so that while they truly have group interaction at their core, they still remain games; that is, masterable, repeatable and somewhat predictable experiences people can continue to enjoy, and a source of richness their lives might otherwise lack.

As always, comments welcome. This year it was quite hard to focus this post on anything in particular, and maybe you have better insight. Let me know. In any case, Happy New Year! Whatever you do, make 2012 matter.

Thursday 13 October 2011

Where the chips fall - platform dominator for 2012

It's been about a year since I put my prognosis skills on the line and tried to predict where technology and consumer products are heading. Since today is National Fail Day in Finland, perhaps it's time to try again. Let's see how right or wrong I end up being.

Last year I noted a couple of things about mobile platforms and the software environments best suited for creating apps on them. While this year has seen a lot of development on those fronts, little of it has been in surprising directions. HTML5 is coming, but it's not here yet. If WebGL and Intel's River Trail project were supported by the Big Three (IE, Firefox and WebKit, i.e. Safari/Chrome), that would make an amazing game platform - but at least the latter is research-only at this point, and IE9 isn't going to support either. In the meantime, Adobe finished Flash 11, which now has hardware-accelerated 3D in addition to a pretty good software runtime, and, after only 10 days out, already has 42% reach among consumer browsers (at least judging by stats on habbo.com). As I've said for a long time, Flash gets a lot of undeserved crap due to the adware content created on it. We won't get rid of that by changing tech, and platforms should be judged by their capabilities in the hands of good developers, not by mediocrity. And as far as mobile goes, the trend continues -- iPhone and Android battle it out, now in the courts as well as in consumer markets, while everything else falls by the wayside. If you're creating an app, do it either with a cross-platform native toolchain or with HTML5. If you're doing a game, do it with Unity or Flash, and build a native app out of it for mobile.

The interesting thing, to me, is playing out on the Internet. Google+ came out as a very nice product with a well-balanced feature set, but (fairly predictably, though I was rooting for it) failed to catch the early-adopter fancy for long enough to displace Facebook in any niche. Facebook, on the other hand, scared (or is going to scare) 40% of their audience by announcing Timeline (eek, privacy invasion!). Brilliant move -- you can't succeed today without taking leaps so big that nearly half of your audience will be opposed to them, at least initially. Smaller changes simply aren't meaningful enough.

So, I'm betting on Facebook. I'd also guess that once they get Facebook Credits working outside of the Canvas, they're going to demand that any app using Facebook Connect log-ins accept Credits for payment. I'd hazard a guess they're even going to demand FB Credits exclusivity. They'll fail at the latter demand, but that won't stop them from trying. Having your app's or game's social publishing done automatically by Facebook, simply by feeding them events and not having to think about which ones are useful to publish, is such a big time saver for a developer that no one will want to miss out on it.

Not even Zynga. They're pursuing this destination-site, we're-not-gonna-play-inside-Facebook-anymore strategy, but continue to use Facebook Connect for log-ins. That's not because FB Connect is so much more convenient than your own username and password (though it is), but because even they can't afford to let go of the "free" access to people's social networks. That's the power of Timeline and the new, extended Graph API.

The chips are still in the air. When they fall, I think Facebook will be stronger than ever, but strong enough to displace the "rest of the Internet"? No. As a developer, I want to push Facebook the data for in-game activities, because that saves me from doing the same thing myself. As a publisher, I'm unsure I want Facebook to have all that info, exploiting it for their purposes and risking my own ability to run a business. As a consumer, it makes me uneasy that they have all that info about me, and while I can access and control quite a lot of it, I can't know what they're using it for. I don't think that unease will be enough to stop me or most other consumers from feeding them even more data about our lives, likes and activities. Still, they'll only keep succeeding at this as long as they don't try to become a gatekeeper to the net - nor do they need to do that, since they get the data they want without exerting control over my behavior. Trying to fight that trend is going to be a losing strategy for most of us - possibly even for Google. Apple and Microsoft won't need to fight it, because they're happy enough, for now at least, to simply work with Facebook.

Sunday 2 May 2010

On rich web technologies

For the past week, the technology world has been unable to discuss anything but Apple's refusal to allow Flash applications on the iPhone and iPad, and Steve Jobs's open letter, which paints this as a technology question and Apple's position as one of protecting consumer interests by ensuring quality applications. It would be incredibly naive to take that literally. No, of course it's all about business control.

Charlie Stross has written a great, if speculative, piece on the bigger picture. I think Charlie is spot-on - Apple is seeing a chance to disrupt the PC market, and wants to finish at the top, holding all the aces. That might even happen, given how badly other companies are addressing the situation, but if it did, it would be anything but good for the consumer - or for the small developer.

The business interest

Apple today is a $43 billion annual revenue, $240 billion market cap giant, give or take. Of that value, 40% or so is riding on the iPhone, and Steve is clearly taking the company in a direction where devices running iPhoneOS will replace the Macs, so that share is only increasing. Right now, they have more resources to do this than anyone else in the world, and the least legacy to worry about, given that despite the rising market share and the title of leading laptop vendor, computers running Mac OS X are still a minority compared to all the Windows-powered devices from a legion of other makers.

The company's DNA, and Steve's personal experience over the past 25 years, has taught them that an integrated, tightly controlled platform is something they are very good at, but that the earlier mistake of not also controlling app distribution left them weak. They're not going to repeat that mistake. And certainly they'll try to ensure that not only do the iPhone and iPad have the best applications, but that those applications are available only on Apple devices.

Adobe, despite their history of dominating many design and content production software niches and a market cap of $18 billion, is tiny in comparison. Furthermore, the Flash platform is a visible but financially less relevant part of Adobe's product portfolio (the exact share of Flash is buried inside their Creative Solutions business segment). Even disregarding that Apple can, as the platform owner, dictate whatever rules they want for iPhoneOS, Adobe simply cannot win a battle of resources against Apple.

But this fight is not about Flash on the iPhone - it's about Apple's control of the platform in general. Whether or not it's true, Apple believes tight control is a matter of survival for them.

The technical argument

Apple wants to make it seem like they're doing this because Flash is bad technology. As I wrote above, and as so many others have described better than I have, that's a red herring. It's always convenient to dress business decisions up in seemingly accurate technical arguments ("Your honor, of course we'd do that, but the tech just doesn't work!"). Anyway, let's look at that technical side a bit.

First, let's get the simple bit out of the way. Flash is today most often used to display video on web sites. However, this is not about video, and video has never been Flash's primary point. It just happened to have a good install base and decent codecs in 2005, when delivering lots of video bits started to make sense and YouTube came along to popularize the genre. In fact, it was completely superior for the job compared to the alternatives of the time, such as Real Player. The real feature, however, was that Flash was programmable, which allowed these sites to create their own embedded video players without having to worry about the video codecs.

By that time, Flash had already gained something of a bad reputation as the tool with which some seriously horrible advertising content had been made, so the typical way to make the web fast was to disable Flash content - rendering most ads invisible. I'm pretty sure that for many people, YouTube was the first real incentive to have Flash in their browser at all. That is, unless you liked to play the casual games that even then were often created with Flash.

But that's all history; what about the future? Adobe certainly needs to take quite a lot of the blame for the accusations leveled against Flash - in particular, the way Flash content slows a computer down even when nothing is visible (as in, the 10 Flash-based adverts running in a browser tab you haven't even looked at in the last half hour), or the fact that yes, it does crash rather frequently. Quite a few of those problems are being addressed by Flash Player 10.1, currently in beta testing and to be released some time in the coming months. Too little, too late, says Apple, and many agree.

I would, too, except for the fact that despite the issues, Flash is still the leading and best platform for rich web applications. It took that position from Java because it was (and is) lighter and easier to install, and it keeps that position now against the much-talked-about HTML5 because the latter simply isn't ready yet, and once it is, it will still take years to be consistently available for applications (that is, until everyone has upgraded their browsers). Furthermore, it's quite a bit easier to create something that works by depending on Flash 10 than to work around all the differences between Internet Explorer, Firefox, Safari, Chrome, Opera and so on.

But that's exactly what Steve is saying, isn't it? That these cross-platform Flash applications simply can't provide the same level of sophistication and grace as a native application on the iPad. Well, maybe that's true today. Maybe it's even true after Adobe finally releases 10.1's mobile editions on Android. And given the difference in the scale of resources Apple and Adobe can throw at a problem, maybe it's still true with Flash Player 10.2 somewhere down the road.

But that doesn't matter. What matters is what developers do with the tools given to them, because the tools themselves do nothing. There's plenty of horrible crap in the ranks of the App Store's 200,000 applications, and there are plenty of brilliant things done with Flash and AIR. Among the best of the best, which platform has the greatest applications? That's a subjective call that I will let someone else try to answer.

I will say this: all technology is fated to be replaced by something better later. At least ActionScript 3 and Flash's virtual machine provide a managed language that lets application developers worry about something other than memory allocation. Sure, it wasn't all that hot until version 10, and it still loses to Java, but it sure is better than Objective-C. If we're now witnessing the battle for platform dominance for the end of this decade, I sure would like to see something other than late-80s technology on the podium.

The consumer position

Apple wants to provide the consumer with a polished, integrated experience where all the pieces fit together, and most of them are made by Apple. The future of that experience includes control of your data as well. Put your photo albums in Apple's photo service, your music library in iTunes, your home videos on iMovie Cloud, and access it all with beautiful Apple devices. Oh, you don't want to be all-Apple? Too bad. That's what you get.

Or, you can choose something where you'll have choice. If you believe Steve Jobs, that choice is between dirt, smut and porn, but his interest is in scaring you back to Apple, so take that with a grain of salt. Me, I've never liked being dictated to, so I'll be choosing the path where I can pick what I want, when I want it. Sure, it'll mean missing some of the polish (the iPhone is by far the nicest smartphone today, and the iPad sure feels sweet), but I respect my freedom to choose more. Today, that means I'll choose Android, and I'm looking forward to playing Flash games and using AIR applications on tablets powered by it.

Thursday 14 January 2010

Technology factors to watch during 2010

Last week I posted a brief review of 2009 here, but didn't go much into predictions for 2010. I won't try to predict anything detailed now either, but here are a few things I think will be interesting to monitor over the year. And no, tablet computing isn't on the list. For fairly obvious reasons, this is focused on areas impacting social games. As a further assist, I've underlined the parts most resembling conclusions or predictions.


Social networks and virtual worlds interoperability

As more and more business moves to use the Internet as a core function, the customers of these businesses are faced with a proliferation of proprietary identification mechanisms that has already gotten out of hand. It is not uncommon today to have to manage 20-30 different userid/password pairs in regular use, from banks to e-commerce to social networks. At the same time, identity theft is a growing problem, no doubt in large part because of these minimum-security methods of identification.

Social networks today are a significant contributor to this problem. Each collects and presents information about its users that contributes to the rise of identity theft, while maintaining its own authorization mechanisms in a silo of low-trust identification methods. The users, on the other hand, perceive little incentive to manage their passwords in a secure fashion. Account hijacking and impersonation are a major problem area for each vendor. The low trust level of individual account data also leads to a low relative value for owning a large user database.

A technology solution, OpenID, is emerging and taking hold in the form of an industry-accepted standard for exchanging identity data between an ID provider and a vendor in need of a verified id for their customer. However, changing the practices of the largest businesses has barely begun, and no consumer shift can yet be seen – as is typical for such “undercurrent” trends.

OpenID will allow consumers to use fewer, higher-security ids across the universe of their preferred services, which in turn will allow these services a new level of transparent interoperability: combining data from each other in near-automatic, personalized mash-ups, via APIs each vendor can expose to trusted users with less fear of opening holes for account hijacking.
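
As a toy illustration of the core idea - the ID provider vouches for the user with a signed assertion, and the service verifies the signature instead of ever handling a password - consider this self-contained Python sketch. Real OpenID adds discovery, associations and nonces on top; this is only the gist:

    import hmac, hashlib

    # Key shared once between the ID provider and the relying site
    # (OpenID calls this step "association").
    association_key = b"established-once-between-provider-and-site"

    def provider_assert(user_id: str) -> tuple[str, str]:
        """The ID provider signs an assertion that this user logged in."""
        sig = hmac.new(association_key, user_id.encode(), hashlib.sha256).hexdigest()
        return user_id, sig

    def site_verify(user_id: str, sig: str) -> bool:
        """The relying site checks the signature, never sees a password."""
        expected = hmac.new(association_key, user_id.encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, sig)

    uid, sig = provider_assert("https://alice.example/openid")
    assert site_verify(uid, sig)   # logged in; no password crossed the wire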


Browsers vs desktops: what's the target for entertainment software?

Here's a rough sketch of the competing technology streams in terms of two primary factors: ease of access versus the rich experience of high-performance software. The “browser wars” are starting again, and with the improved engines behind Safari 4, Firefox 4, IE 8 and Google Chrome, a lot of the kind of functionality we're used to thinking belongs to native software, or at best to browser plugins like Flash, Java or Silverlight, will be available straight in the browser. This certainly includes high-performance application code, rich 2D vector and pixel graphics, video streams and access to new information like location sensing. The plugins will most likely remain stronger at 3D graphics and synchronized audio, and at advanced input mechanisms like using webcams for gesture-based control. Invariably, the new input capabilities in particular will bring with them new security and privacy concerns, which will not be fully resolved within the next 2-3 years.

While 3D as a technology will be available to browser-based applications, this doesn't mean the web will turn to representing everything as a virtual copy of the physical world. Instead, its best use will be as a tool for accelerating and enhancing other UI and presentation concepts – think iTunes CoverFlow. For social interaction experiences, a pure 3D presentation with three degrees of freedom will remain a confusing solution, and other presentations, such as the axonometric “camera in the corner” concept, will remain more accessible. Naturally, they can (but don't necessarily need to) be rendered using 3D tech.
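
For what it's worth, the "camera in the corner" presentation is also computationally trivial compared to a free camera. A classic 2:1 axonometric projection is just a couple of additions and multiplications per point; the tile dimensions below are illustrative, not from any particular engine:

    def iso_project(x: float, y: float, z: float,
                    tile_w: int = 64, tile_h: int = 32) -> tuple[float, float]:
        """Map world (x, y, z) to 2:1 isometric screen coordinates."""
        screen_x = (x - y) * (tile_w / 2)
        screen_y = (x + y) * (tile_h / 2) - z * tile_h
        return screen_x, screen_y

    print(iso_project(1, 0, 0))   # (32.0, 16.0): one step "east"
    print(iso_project(0, 1, 0))   # (-32.0, 16.0): one step "south"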


Increased computing capabilities will change economies of scale

The history of the “computer revolution” has been about automation changing economies of scale to enable entirely new types of business. Lately we've seen this e.g. in Google AdWords enabling small businesses to advertise and/or publish ads without marketing departments or the involvement of agencies.

The same trend is continuing in the form of computing capacity becoming a utility through cloud computing; extreme amounts of storage becoming available at costs which put terabytes within reach of organizations of almost any size and budget; and most importantly, developing data mining, search and discovery algorithms that let organizations turn data which used to be impossible to analyze into automated business practices. Unfortunately, the same capabilities are available to criminals as well.

Areas in which this is happening as we speak:

  • further types and spread of self-service advertising, better targeting, availability of media
  • automated heuristics-based detection of risky customers, automated moderation
  • computer-vision based user interfaces which require nothing more than a webcam
  • ever-increasing size of botnets, and their use for game exploits, money laundering, identity theft and surveillance

The escalation of large-scale threats has raised the need for industry-wide groups that exchange information and best practices between organizations on security-relevant matters such as new threats, customer risk rating, and the identification of targeted and organized crime.


Software development, efficiencies, bottlenecks, resources

Commercial software development tools and methods experience a significant shift roughly once every decade. The last such shift was the mainstreaming of RAD/IDE-based, virtual-machine-oriented tools and the rise of the Web and open source in the 90s; now those two themes are fully mainstream, while “convergent”, cross-platform applications which depend on the availability of always-on Internet are emerging. As before, the shift is driven not by technological possibility, but by the richness and availability of high-quality development tools with which more than just the “rocket-scientist” superstars can create new applications.

The skills which are going to be in short supply are those for designing applications which can smoothly interface with the rest of the cloud of applications in this emerging category. Web-accessible APIs, the security design of those APIs, efficient utilization of services from non-associated, even competing companies, and friction-free interfaces for the end users of these web-native applications: that is the challenge.

In this world, the traditional IT outsourcing houses won't be able to serve as a safety valve for resources, as they're necessarily still focused on serving the last and current mainstream. In their place, we must consider the availability of open source solutions not just as a method for reducing licensing costs, but as the “extra developer” used to reduce time-to-market. And as with any such relationship, it must be nurtured. In the case of open source, that means participation and contribution back to the further development of that enabling infrastructure as the cost of outsourcing the majority of the work to the community.


Mobile internet

With the launch of the iPhone, the use of Web content and 3rd-party applications on mobile devices has multiplied compared to previous smartphone generations. This is due to two factors: the familiarity and productivity of Apple's developer tools for the iPhone, and the straightforward App Store for end users. Moreover, the breadth of the application base is primarily due to the former, as proven by the wide availability of unauthorized applications even before the launch of iPhone 2.0 and the App Store. Nokia's failure to create such an applications market, despite the functionality available on S60 phones for years before the iPhone launch, proves this: it was not the features of the device, but the development tools and the application distribution platform that were the primary factors.

The launch of Google's Android will further accelerate this development. Current Android-based devices lack the polish of the iPhone, and the stability gained from Nokia's years of device experience, yet the availability of development tools will supercharge this market, and the next couple of years will see an accelerated development-and-polish cycle from all parties. At the moment, though, it's impossible to call the winner of this race.

Thursday 2 April 2009

I'm still thinking of OnLive. Why is that?

I pretty much blasted OnLive the other day as something that doesn't hold a candle to the distribution power that is the web. Still, I keep wondering what the draw of it is. Positioned against the console business, it does have clear benefits - clear enough that Nintendo's Reggie Fils-Aime felt it necessary to try to dismiss it. That is, with the exception that it doesn't run console games, only PC games. Today, though, I read this post from Keith Boesky (RT @jussil), and sure, looking at it from the perspective of building it for acquisition, yeah, it makes perfect sense. I guess my weakness is that I always try to understand things like this as standalone businesses, when they're probably not meant to be that. My bad.

Friday 27 March 2009

Why OnLive will not be the massive tectonic shift so many are currently predicting

Among the things announced this week at GDC were two developments in entirely different directions on a particular axis of games technology: first, the OnLive network of thin clients showing network-streamed video games rendered on a server cluster somewhere, and second, the Mozilla/Khronos Group initiative to develop an OpenGL-accelerated, JavaScript-programmed 3D canvas in a web browser. Both have one thing in common: they make it possible to run 3D apps (games) on standard devices without prior installations. How they go about that goal is radically different. One of them will fail.

OnLive is not the first company to attempt this idea. It's a basic extension of a theme that has been around since at least the inception of the X Window System and Sun NeWS in the early 80s - graphical thin clients showing applications running somewhere in the network. Further, the idea was explored for 3D games in the late 90s by G-Cluster, which apparently is still around in Japan in some form or another. In my opinion, it's a misguided approach. Certainly there's value to server-side processing, even of graphics, but the final rendering makes so much more sense done on the client, even when all of the application logic is remote.

What kind of client? Well, anything that can run a high-performance VM for Java or JavaScript (i.e., a modern browser) and has 3D acceleration built into the graphics pipeline. This includes basically every network-connected device from $200 upwards: all smartphones, all netbooks, all laptops, all games consoles, and so on. Some of those devices are still intentionally crippled by their manufacturers in terms of operating system support for the required features, and clearly the 3D canvas development hasn't been finished yet. The hardware capabilities, however, are already deployed to hundreds of millions of consumers.

Ignoring that deployed base and trying to scale a server-side rendering solution to the same figures is just mad. And that's not even considering the framerate and responsiveness constraints that are inescapable simply because of the round-trip network latency of such a system: on a high-bandwidth wired network, tens of milliseconds (not everyone can be situated within a few kilometers of the server cluster), and on radio networks, hundreds. Developing high-framerate games under those circumstances is hard enough when you only need to deal with transmitting positional data and adjusting for lag and jitter at both ends - making the games playable when every action made by the player needs to go to the server and back before it shows up is, practically speaking, impossible.
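
A quick frame-budget calculation shows why. The RTT figures are rough assumptions in line with the ranges above:

    # At 60 fps the frame budget is ~16.7 ms; with remote rendering, every
    # player action must wait out a full round trip before its consequence
    # even starts its journey back to the screen.
    FRAME_MS = 1000 / 60

    for label, rtt_ms in [("nearby wired", 20),
                          ("cross-country wired", 60),
                          ("radio network", 150)]:
        frames_late = rtt_ms / FRAME_MS
        print(f"{label}: {rtt_ms} ms RTT = {frames_late:.1f} frames of input lag")
    # Client-side rendering pays this cost only for shared state updates,
    # not for every frame the player sees.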

(Update an hour later) I suppose I should acknowledge that the OnLive approach clearly does have certain benefits: no piracy, little hacking of the typical kinds, little opportunity to cheat, and no need to invest in PunkBuster-type technology in the game clients, since none of that is running locally. However, all that simply will not matter when weighed against the enormous burden of having to run all that rendering at the wrong end of the MMO network, ignoring the opportunity to disperse so much of the investment and energy requirements to the gamers.

Sunday 16 November 2008

Chris Anderson on freemium conversion

Chris Anderson, author of The Long Tail, uses free-to-play web games as a case study on conversion rates for freemium products. I wrote about the conversion and monetization rates in this world two months ago as a follow-up to my GCDC presentation from last summer. I can't really think of a better example of the freemium model than Habbo - a freely accessible service with high engagement and a large audience really gets to utilize and showcase the model at its very peak. The only thing missing is even easier micropayment models. We'd love to use the iTunes store for selling Habbo items, for example.

Thursday 16 October 2008

Splitting the virtual worlds market to segments

IMVU founder Eric Ries commented on the Virtual Goods Summit and suggested that virtual worlds can be divvied up along three axes: UGC vs first-party content, subscription vs pay-for-stuff, and economy vs gameplay focus. This is certainly one good way of thinking about the focus decisions needed when designing and developing a product in this market, but personally, I think this model, along with others I've seen and played with myself, suffers from a few key weaknesses that arise from the need to simplify things. I'm not saying the model can't help put things in order, just that there's more to finding the right solutions than this. Let's go with the great blogger tradition of a point-for-point response.

UGC vs first-party

It's amazing Sulka didn't comment on this: UGC is not just about letting users upload pictures or items to a world. More to the point, Habbo certainly is not first-party content focused. Yes, all our furni is designed and developed by our own teams, and we don't enable user uploads. But at the same time, over 90% of all the activity in Habbo emerges from the community - users take what we've made and do their own things with it. Most of what's going on, we had no idea would happen.

Eric says IMVU's efforts to enable UGC dwarf those spent creating their own first-party catalog. Well, so do ours, despite his classification of Habbo as first-party content focused. Every feature, every furni, every activity, every news item receives more thought on "how do we support users in going their own directions here?" than "what do we want this to be about?". Add to that the significant fraction of our work that has no content production attached to it at all, and is fully focused on player activities.

Let's just use the old, tired LEGO analogy here. How much of LEGO is first-party content? Just enough to get the imagination of the players going, so they can create something of their own. Anything more would be too much, and this applies to any VW that can call itself "social" - and no VW that isn't social is going to be interesting. Trying to make a useful UGC split for any purpose other than copyright infringement monitoring is a red herring, and even for that one purpose it's not very likely to be useful, due to other moderation requirements.

Subscription or pay-for-stuff

This is one of the stronger arguments, if only because those are the business models the industry has latched on to. They're certainly not the only possibilities, though, nor are they mutually exclusive. Eric's points about the strengths and weaknesses are good - but you can benefit from both at the same time, and cover the weaknesses of one model with the strengths of the other. This is certainly an area where we have a lot of experience, over 8 years of it, and I don't think we've gotten very far yet.

Economy or gameplay

Eric used the word "merchandising" instead of economy, and I think that's the crucial over-simplification that leads to thinking that pay-for-stuff games and worlds are just about cross-selling opportunities best left to a competent marketing department. I wonder whether he's simplifying the choice to make it easier to explain, or purposefully misleading someone about what's crucial to think about, or whether our friends at IMVU simply haven't realized this yet: first-hand sales are a small fraction of the total trade in an item-based game, and economic balance is just as critical here as gameplay balance is in a game built out of designer-created quests and mechanics. What's more, because it's emergent behaviour, it's nearly impossible to predict, and very difficult to measure, model and understand. Yet that's exactly what's required in order to succeed.
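
A toy simulation makes the first point visible: every item is sold first-hand exactly once, but can change hands many times after that. The resale distribution here is invented purely for illustration:

    import random
    random.seed(1)

    items_sold_first_hand = 1000
    # Assumed resale counts per item, 0 to 10 trades over its lifetime.
    resales_per_item = [random.randint(0, 10) for _ in range(items_sold_first_hand)]

    primary = items_sold_first_hand
    secondary = sum(resales_per_item)
    print(f"first-hand sales: {primary}, player-to-player trades: {secondary}")
    # Even a modest average resale rate makes secondary volume dominate --
    # and those prices are set by the players, not by the designer.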

I hope that explains why I choose to call it economy-driven rather than merchandising.

PS. Browsing around Eric's blog a bit further, this article is a gem.

Tuesday 26 February 2008

A look back at GDC, and forward to ION

Much to my regret, I had to miss GDC San Francisco this year, but I've been following some of the session transcripts with interest and got feedback from my colleagues who just returned. One thing I noted in particular was Raph Koster's comments on the iteration speed of web developers (measured typically in days or weeks) vs that of game studios (where the cycle might be six months for a casual Wii title and four years for a triple-A PS3 title -- of course iterating inside the development team, but with little consumer feedback). It seems Raph has taken a lot of this onboard in the development of Metaplace, as seen in their pre-release "postmortem" session.

Of course, I noted this because it pretty much reflects how we (Sulake) have been modeling not just our development process but also the business management methods driving that development. That is: make the iteration cycle ever faster, learn to do big changes in very, very small chunks, and incorporate metrics- and testing-based learning all through the cycle.

I'm also going to be talking about this very topic this May at ION 08 in Seattle. It looks like I missed a lot of interesting discussion relevant to that session, so I hope I won't be repeating too much of what was already covered at GDC.