Fishpool


Monday 14 November 2011

Flash is dead? What changed?

So, Adobe finally did the inevitable and announced that they've given up trying to make Flash relevant on mobile devices. Plenty has been written already about what led to this situation, and the "tech" blogosphere has certainly proved its lack of insight in matters of development once again, so I won't go there. The Flash plugin has a bad rap, and HTML5 will share that status as soon as adware crap starts to be made with it. It's not the tech, but its application.

So, let's focus on the developer angle. Richard Davey of Aardman and PhotonStorm offers a developer-focused review of the alternatives. TL;DR: Flash is what's there now, but learn HTML5 too. Yeah, for the web, I would agree.

However, that misses the big picture as well. Choosing any tech today for the purpose of building games for the Web is deciding a future course by the rear-view mirror. The Web, as it is today, is a market of about 500M connected, actively used devices. Sure, more PCs than that have been sold, and about as many will be sold both this year and next, but the total number of devices sold doesn't matter - the number of people using them for anything resembling your product (here, games) does. So, I'll put a stake in the ground at 500M.

In comparison, the iPad and other tablets will reach about 100M devices this year, and projections suggest roughly as many more next year. I would argue that most of them will be used for casual entertainment at least some of their active time. That makes tablet-class devices (large touchscreen, no keyboard, used on a couch or in other gaming-friendly situations) a significant fraction of the Web market already, and that share will only grow going forward.

Mobiles are a class of their own. Several billion devices already, maybe a billion of them smartphones, and some projections claim another billion smartphone-class devices will be sold next year. Even limiting the market to devices which sport installable apps, touch screens and significant processing power (think iPhone and Android devices, possibly excluding the lowest-end Androids and the original iPhone and the 3G), you're still looking at a potential market of a billion devices or so. Now, phones are not, in my book, very gaming-friendly: the screen is small, touch controls obscure parts of it, play sessions are very short, the device spends most of its time in a pocket and rarely gets focused attention, and play can be interrupted by many, many things. Still, as we've seen, great games and great commercial success can be created on the platform.

However, let's not pretend that a Web game could ever have worked on either a tablet or a phone without significant effort, both technical and conceptual. The platforms' underlying assumptions are simply too different.

So, how would you go about choosing a technology for creating a game for the future, instead of the past?

The choices are:

  • Native, writing for iOS only. Decent tools, except when they don't work, one platform, though a relatively large one with customer base proven to be happy to spend on apps.
  • Native, writing for iOS and Android. Perhaps for Windows Phone too, if that takes off. Welcome to niche markets or fragmentation hell.
  • Native, but with a cross-platform middleware that makes porting easier. Still, you're probably dealing with low-level crap on a daily basis.
  • HTML5, if you're willing to endure an unstable, changing platform, more fragmentation, dubious performance, and, frankly, bad tools. Things will be different in a couple of years' time, I'm sure, but today, that's what it's really like. I would do HTML5 for apps, but not for games, because that way you get to leverage the best parts of the web and skip the hairiest client-side issues. In theory you'll also get the Web covered, but in practice, making anything "advanced" work on even one platform is hard work.
  • AIR, if you continue to have faith that Adobe will deliver. In theory, this is great: a very cross-platform tech, you can apply some of the same stuff on the Web too, you get access to most features on most platforms at an almost-native level, performance is not bad at all, and so on. Except that in practice HW-accelerated 3D isn't actually available on mobile platforms, AIR's cousin Flash was managed to oblivion, and perhaps most crucially, Adobe's business is serving ad/marketing/content customers, not developers. I keep hoping, but the facts aren't encouraging. For now, though, you'd be basing your tech on a great Web platform with a reasonable conversion path to a mobile application, caveats in mind.
  • Unity, if you're happy with the 3D object-oriented platform and tools. You'll get to create installable games on all platforms, but let's face it: you will give up the Web, because Unity's plugin doesn't have a useful reach. Here, the success case makes you almost entirely tablet/mobile, with PC distribution (in the form of an installable app, not a Web game) less than a rounding error. This is probably what you'd be looking for in just a few years' time anyway, even if today it looks like a painful drawback.
Conclusion: Working on tools? HTML5. Web game for the next 2 years? Flash 11. Mobile game? Unity, if its 3D model fits your concept. AIR if not, though you'll take a risk that Adobe further fumbles with the platform and never gets AIR 3 with Stage3D enabled on mobile devices out the door. Going native is a choice, of course, but one that exceeds my personal taste for masochism.

On the upside, Unity is actively doing something to expand their market, including trying to make Unity games run on top of Flash 11 on PC/Mac, so in theory you might be getting the Web distribution as a bonus. Making code written for Mono (.NET/C#/whatever you want to call it) run on the AS3/AVM Flash runtime is not an easy task though, so consider it a bonus, not a given.

Thursday 14 January 2010

Technology factors to watch during 2010

Last week I posted a brief review of 2009 here, but didn't go much into predictions for 2010. I won't try to predict anything detailed now either, but here's a few things I think will be interesting to monitor over the year. And no, tablet computing isn't on the list. For fairly obvious reasons, this is focused on areas impacting social games. As a further assist, I've underlined the parts most resembling conclusions or predictions.


Social networks and virtual worlds interoperability

As more and more business moves to use the Internet as a core function, the customers of these businesses are faced with a proliferation of proprietary identification mechanisms that has already gotten out of hand. It is not uncommon today to have to manage 20-30 different userid/password pairs in regular use, from banks to e-commerce to social networks. At the same time, identity theft is a growing problem, no doubt in large part because of these minimum-security methods of identification.

Social networks today are a significant contributor to this problem. Each collects and presents information about its users that contributes to the rise of identity theft, while running its own authorization mechanism in a silo of low-trust identification methods. The users, on the other hand, perceive little incentive to manage their passwords in a secure fashion. Account hijacking and impersonation are a major problem for each vendor. The low trust level of individual account data also leads to a low relative value of owning a large user database.

A technology solution, OpenID, is emerging and taking hold in the form of an industry-accepted standard for exchanging identity data between an ID provider and a vendor in need of a verified ID for their customer. A few of the standard's current backers are shown in the picture on the right. However, changing the practices of the largest businesses has barely begun, and no consumer shift can yet be seen – as is typical for such "undercurrent" trends.

OpenID will allow consumers to use fewer, higher-security IDs across the universe of their preferred services. That, in turn, will allow these services a new level of transparent interoperability: combining data from each other in near-automatic, personalized mash-ups via the APIs each vendor can expose to trusted users, with less fear of opening holes for account hijacking.
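To make the mechanism concrete, here's a minimal sketch of how a relying party kicks off an OpenID 2.0 login: it redirects the user's browser to the ID provider with a handful of standard parameters, and the provider later redirects back with a signed assertion. The parameter names come from the OpenID 2.0 specification; the endpoint URLs are hypothetical examples, and a real implementation would also perform discovery and verify the returned assertion.

```python
from urllib.parse import urlencode

def openid_auth_url(op_endpoint, return_to, realm):
    """Build an OpenID 2.0 authentication request URL (checkid_setup mode).

    The relying party redirects the user's browser to this URL; the ID
    provider authenticates the user and redirects back to return_to with
    a signed assertion, which the relying party must then verify.
    """
    params = {
        "openid.ns": "http://specs.openid.net/auth/2.0",
        "openid.mode": "checkid_setup",
        # identifier_select asks the provider to pick the user's identity
        "openid.claimed_id": "http://specs.openid.net/auth/2.0/identifier_select",
        "openid.identity": "http://specs.openid.net/auth/2.0/identifier_select",
        "openid.return_to": return_to,  # where the provider sends the user back
        "openid.realm": realm,          # the site the user is asked to trust
    }
    return op_endpoint + "?" + urlencode(params)

# Hypothetical endpoints, for illustration only:
url = openid_auth_url("https://id.example.com/op",
                      "https://game.example.com/openid/return",
                      "https://game.example.com/")
```

The point of the flow is that the vendor never sees the user's password at all – only the provider's signed statement that this user is who they claim to be.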


Browsers vs desktops: what's the target for entertainment software?

Here's a rough sketch of competing technology streams in terms of two primary factors – ease of access versus the rich experience of high-performance software. The "browser wars" are starting again, and with the improved engines behind Safari 4, Firefox 4, IE 8 and Google Chrome, a lot of the kind of functionality we're used to thinking belongs to native software, or at best to browser plugins like Flash, Java or Silverlight, will be available straight in the browser. This certainly includes high-performance application code, rich 2D vector and pixel graphics, video streams and access to new information like location sensing. The plugins will most likely remain stronger at 3D graphics, synchronized audio, and advanced input mechanisms like using webcams for gesture-based control. Invariably, the new input capabilities in particular will bring with them new security and privacy concerns which will not be fully resolved within the next 2-3 years.

While 3D as a technology will be available to browser-based applications, this doesn't mean the web will turn to representing everything as a virtual copy of the physical world. Instead, its best use will be as a tool for accelerating and enhancing other UI and presentation concepts – think iTunes CoverFlow. For social interaction experiences, a 3-degrees-of-freedom pure 3D representation will remain a confusing solution, and other presentations, such as axonometric "camera in the corner" concepts, will remain more accessible. Naturally, they can (but don't necessarily need to) be rendered using 3D tech.


Increased computing capabilities will change economies of scale

The history of the "computer revolution" has been about automation changing economies of scale to enable entirely new types of business. Lately we've seen this, for example, in Google AdWords enabling small businesses to advertise and/or publish ads without marketing departments or the involvement of agencies.

The same trend is continuing in the form of computing capacity becoming a utility through cloud computing; extreme amounts of storage becoming available at costs which put terabytes within reach of organizations of almost any size and budget; and, most importantly, the development of data mining, search and discovery algorithms that let organizations turn data which used to be impossible to analyze into automated business practices. Unfortunately, the same capabilities are available to criminals as well.

Areas in which this is happening as we speak:

  • further types and spread of self-service advertising, better targeting, availability of media
  • automated heuristics-based detection of risky customers, automated moderation
  • computer-vision based user interfaces which require nothing more than a webcam
  • ever increasing size of botnets, and the use of them for game exploits, money laundering, identity theft and surveillance

The escalation of large-scale threats has raised the need for industry-wide groups to exchange information and best practices between organizations on security-relevant matters such as new threats, customer risk ratings, and the identification of targeted and organized crime.


Software development, efficiencies, bottlenecks, resources

Commercial software development tools and methods experience a significant shift roughly once every decade. The last such shift was the mainstreaming of RAD/IDE-based, virtual-machine oriented tools and the rise of Web and open source in the 90s, and now those two rising themes are increasingly mainstream while “convergent”, cross-platform applications which depend on the availability of always-on Internet are emerging. As before, it's not driven by technological possibility, but by the richness and availability of high-quality development tools with which more than just the “rocket-scientist” superstars can create new applications.

The skills which are going to be in short supply are those for designing applications which can smoothly interface with the rest of the cloud of applications in this emerging category. Web-accessible APIs, the security design of those APIs, efficient utilization of services from non-associated, even competing companies, and friction-free interfaces for the end users of these web-native applications – these are the challenge.

In this world, the traditional IT outsourcing houses won't be able to serve as a safety valve for resources as they're necessarily still focused on serving the last and current mainstream. In their place, we must consider the availability of open source solutions not just as a method for reducing licensing cost, but as the “extra developer” used to reduce time-to-market. And as with any such relationship, it must be nurtured. In the case of open source, that requires participation and contribution back to the further development of that enabling infrastructure as the cost of outsourcing the majority of the work to the community.


Mobile internet

With the launch of the iPhone, the use of Web content and 3rd-party applications on mobile devices has multiplied compared to previous smart phone generations. This is due to two factors: the familiarity and productivity of Apple's developer tools for the iPhone, and the straightforward App Store for end-users. Moreover, the breadth of the application base is primarily due to the former, as proven by the wide availability of unauthorized applications even before the launch of iPhone 2.0 and the App Store. Nokia's failure to create such an application market, despite the functionality available on S60 phones for years before the iPhone launch, proves this – it was not the features of the device, but the development tools and the application distribution platform, that were the primary factor.

The launch of Google's Android will further accelerate this development. Current Android-based devices lack the polish of the iPhone, and the stability gained from years of experience of Nokia devices, yet the availability of development tools will supercharge this market, and the next couple of years will see an accelerated development-and-polish cycle from all parties. At the moment, though, it's impossible to call the winner of this race.

Wednesday 29 August 2007

Working 3D on the 965GM

I took a second (third, whatever) look at how to get 3D acceleration enabled with the TravelMate, and finally found the clue to avoiding a display lockup the moment an OpenGL application was started.

Fedora 7 will not support it as-is. You'll need at least kernel 2.6.22.1 (2.6.22.4 is now in updates) and Mesa 6.5.3. I found it easiest to install Richard Hughes' "Utopia" builds of mesa-libGL and libdrm, plus a rebuilt fc8 xorg-x11-drv-i810. With these three packages, DRI can now be enabled and the machine is stable. Performance isn't stellar, but it's plenty enough to enjoy compiz and a slightly blinged-up desktop, which is essentially what I was looking for anyway. Ready-made binary attached. Remember, you need to update the kernel and drm bits too with the linked stuff.
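Since the lockup only goes away once both minimums are met, here's a small sketch (in Python, with example version strings rather than live `uname -r`/`glxinfo` output) of gating DRI on the kernel and Mesa versions named above. It's just the version comparison; wiring it to real system queries is left out.

```python
def parse_version(v):
    """Turn a version string like '2.6.22.4' into a comparable tuple,
    ignoring any distro suffix after a dash (e.g. '2.6.21-1.fc7')."""
    return tuple(int(p) for p in v.split("-")[0].split("."))

def dri_safe(kernel, mesa):
    """True only if kernel >= 2.6.22.1 and Mesa >= 6.5.3,
    the minimums that avoid the OpenGL display lockup."""
    return (parse_version(kernel) >= (2, 6, 22, 1)
            and parse_version(mesa) >= (6, 5, 3))
```

For example, `dri_safe("2.6.22.4", "6.5.3")` passes, while the stock Fedora 7 combination `dri_safe("2.6.21-1.fc7", "6.5.2")` does not.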