Fishpool


Tag - development


Thursday 18 July 2013

The difference between being demanding and acting like a jerk

Sarah Sharp is a member of a very rare group. She's a Linux kernel hacker. Even among that group, she's unique - not because of her gender (though that probably is distinctive in many of the group's social gatherings), but because she's brave enough to demand civil behavior from the leaders of the community. I applaud her for that.

Now, I have immense respect for Linus Torvalds and his crew. I've been a direct beneficiary of Linux in both professional and personal contexts for nearly two decades. The skills this group demonstrates are possibly matched only by the level of quality they demand from each other. Unfortunately, that bar is often upheld only at the technical level, while the tone of discussion, both in person and on the mailing lists, can turn quite hostile at times. That has been documented many times over, and I can add no value by rehashing it.

However, I wanted to share some experience from my own career as a developer, as a manager of developers, and as someone who has both been described as demanding and had to report to others under very demanding circumstances. I've made some of these mistakes myself, and I hope to have learned from them.

Perhaps not so surprisingly, the same people in the community who defend hostile behavior also, almost as a rule, misunderstand what it means to behave professionally. There's a huge difference between behaving as people do in a workplace where they get paid, and behaving professionally. The latter is about promoting behaviors which lead to results. If being an asshole were effective, I'd have no problem with it. But it's not.

To consistently deliver results, we need to be very demanding of ourselves and of others. Being an uncompromising bastard about the results will always beat accepting inferior results, when we measure technical progress over the long run -- though sometimes experience tells us a compromise truly is "good enough".

However, that should never be confused with being a bastard in general. Much can (and should) be said about how being nice to others makes us all that much happier, but I have something else to offer. Quite simply: people don't like being called idiots or having personal insults hurled at them. They don't respond well to those circumstances. Would you like it yourself? No? Don't expect anyone else to, either. It's not productive. It will not lead to better results in the future.

Timely, frequent and demanding feedback is extremely valuable to results, to the development of an organization, and to the personal development of the individuals in it. But there are different types of communication, and not all of it is feedback. Demanding better results isn't feedback; it's setting and communicating objectives. Commenting on people's personalities, appearance, or anything else about who they are, let alone demanding that they change themselves as a person, is neither feedback nor reasonable. Feedback is about observing behavior and demanding changes in the behavior, because behavior leads to results. Every manager has to do it, never mind whether they're managing a salaried or a voluntary team.

However, calling people names is bullying. Under all circumstances. While it can appear to produce results (such as, making someone withdraw from an interaction, thus "no longer exhibiting an undesirable behavior"), those results are temporary and come with costs that far outweigh the benefits. It drives away people who could have been valuable contributors. What's not productive isn't professional. Again - I'm not discussing here how to be a nicer person, but how to improve results. If hostility helped, I'd advocate for it, despite it not being nice.

The same argument applies to using hostile language even when it's directed not at people but at results. Some people are more sensitive than others, and if by not offending someone's sensibilities you get better overall results, it's worth changing that behavior. However, unlike "do not insult people", swearing at something other than people is a cultural issue. Some groups are fine with it, or indeed enjoy an occasional chance to hurl insults at inanimate objects or pieces of code. I'm fine with that. But nobody likes to be called ugly or stupid.

In the context of the Linux kernel, does this matter? After all, it seems to have worked fine for 20 years, and has produced something the world relies on. Well, I ask you this: is it better to have people selected (or have them select themselves) for Linux development by their technical skills and their capability to work in an organized fashion, or by those things PLUS an incredibly thick skin and the capacity to take insults hurled at them without being intimidated? The team currently developing the system is of the latter kind. Would they be able to produce something even better if that last requirement weren't there? Does the hostility help the group become better? I would say it IS hurting the group.

Plus, it sets a very visible, very bad precedent for all other open source teams, too. I've seen other projects wither and die because they copied the "hostility is ok, it works for Linux" mentality while losing out on the skills part of the equation. They didn't have to end that way, and it's a loss.

Thursday 28 February 2013

Are you prepared to deal with a negative test result?

Someone recently asked me whether he should do a limited-market test launch for a product he knows isn't finished yet, in order to learn more from actual users. A worthy goal, of course. Perhaps you're considering the same thing. I have, many times. Before you decide either way, consider the following:

What do you expect to learn? If you need to see people using your product, throwing it on the App Store won't help you achieve that objective. Better to go to a nearby meetup of people you'd like to see using your product, introduce yourself and ask them to test it while you watch. If you can't take the product with you, or need an entire team to experience the end-user feedback first hand, invite 10 people over for some pizza (either to your office or somewhere more cozy) and record the event on video. I promise you, it'll be an eye-opening experience.

Do you have a hypothesis you're trying to verify? Can you state how you're verifying it? Can you state a test which would prove your hypothesis is false? Are those tests something that you can implement and measure over a launch? Awesome! If not, then you need to think harder and probably identify some other way of gaining the insight you're looking for.
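As an illustration only -- the metric, numbers and pass/fail threshold below are hypothetical, not from this post -- here's a minimal sketch in Python of what an implementable, falsifiable launch test can look like:

    # A minimal sketch of a pre-committed, falsifiable launch test.
    # Hypothesis: "at least 2% of installs convert to paying users."
    # The numbers and the threshold are hypothetical, for illustration only.
    import math

    def hypothesis_survives(installs: int, payers: int,
                            required_rate: float = 0.02,
                            z: float = 1.645) -> bool:
        """Return True if the data is still consistent with the required
        conversion rate; False means the hypothesis is rejected."""
        observed = payers / installs
        # Standard error of the observed proportion (normal approximation).
        stderr = math.sqrt(observed * (1 - observed) / installs)
        # One-sided check: is the upper confidence bound still at or above
        # the target rate we committed to before launching?
        return observed + z * stderr >= required_rate

    # Decide *before* the test launch what you will do with each outcome.
    if not hypothesis_survives(installs=5000, payers=60):
        print("Hypothesis rejected: rethink or kill, don't 'go ahead anyway'.")
    else:
        print("Hypothesis not rejected: the data supports proceeding.")

The important part is that the decision rule is written down before the launch, so a negative result can't later be waved away as "the test failed".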

Are you just trying to gather experience with something not directly related to the product itself? Say you don't yet know first hand how to manage an App Store launch. Well, you could launch your baby -- or you could quickly create another product with which to learn what you needed to learn. This tactic has the added benefit that such side-products are typically simpler, so they're easier to analyze for the understanding you're after.

But most importantly, what will you do if your test comes back with a negative result? Far too often, this hasn't been given any consideration, and when it does happen (as it typically does, if you haven't thought through the process), the response is "oh, the test failed, never mind, we'll just go ahead anyway". Unfortunately, in most such situations, it was not the test which failed. Rather, it successfully proved that the hypothesis being tested was incorrect. This is a completely different thing, and going ahead without changes would be a mistake after such a result. You have to be prepared to make the hard decisions. For example, Supercell killed Battle Buddies after their test launch showed it would not convert enough people.

You should test often and early. You should gather market data to support significant further investments of time or resources to any development you're undertaking. But you should also be prepared to take any necessary actions if the tests you're running show that your assumptions were incorrect, and the product doesn't work the way you intended. Those are not easy decisions to take, if you're invested into the product, as most creators would be. Think it through. A launch is not a test.

Wednesday 12 December 2012

A marriage of NoSQL, reporting and analytics

Earlier today, I mentioned querymongo.com in a tweet that fired off a chat covering various database-related topics, each worth a blog post of its own, some of which I've written about here before:

One response in particular stood out as something I want to cover in a bit more detail than will fit in a tweet:

While it's fair to say I don't think MongoDB's query syntax is pretty in the best of circumstances, I do agree that at times, given the right kind of other tools your dev team is used to (say, when you're developing in a JavaScript-heavy HTML5 + Node.js environment) and an application context where objects are only semi-structured, it can be a very good fit as the online database solution. However, as I was alluding to in the original tweet and expounded on in its follow-ups, it's an absolute nightmare to try to use MongoDB as the source for any kind of reporting, and most applications need to provide reporting at some point. When you get there, you will have three choices:

  1. Drive yourselves crazy by trying to report from MongoDB, using Mongo's own query tools.
  2. Push off reporting to a 3rd party service (which can be a very, very good idea, but difficult to retrofit to contain all of your original data, too).
  3. Replicate the structured part of your database to another DBMS where you can do SQL or something very SQL-like, including reasonably accessible aggregations and joins.

The third option unfortunately comes with the cost of having to maintain two systems and of making sure all data and changes are replicated. If you do decide to go that route, please do yourself a favor and pick a system designed for reporting, instead of an OLTP system that can merely do reporting when pushed to. Yes, that latter category includes both Postgres and MySQL - both quite capable as OLTP systems, but you already decided to handle OLTP with MongoDB, didn't you?
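To make the contrast concrete, here's a hypothetical sketch (the collection, table and field names are invented for illustration, and it assumes a local MongoDB instance plus the pymongo driver) of the same simple revenue report written once against MongoDB's aggregation pipeline and once as the SQL you'd run on a replicated reporting database:

    # A hypothetical comparison: the same revenue-by-country report
    # against MongoDB and against a SQL reporting database.
    # Names are invented for illustration; assumes a local mongod and pymongo.
    from datetime import datetime
    from pymongo import MongoClient

    db = MongoClient()["shop"]

    # Choice 1: report straight out of MongoDB with the aggregation pipeline.
    pipeline = [
        {"$match": {"created_at": {"$gte": datetime(2012, 12, 1)}}},
        {"$group": {"_id": "$country", "revenue": {"$sum": "$amount"}}},
        {"$sort": {"revenue": -1}},
    ]
    for row in db.purchases.aggregate(pipeline):
        print(row["_id"], row["revenue"])

    # Choice 3: the same report against a replicated SQL reporting database.
    SQL_REPORT = """
        SELECT country, SUM(amount) AS revenue
        FROM purchases
        WHERE created_at >= DATE '2012-12-01'
        GROUP BY country
        ORDER BY revenue DESC;
    """

Even in this trivial case the SQL reads more naturally, and the gap only widens once reports need joins across collections, which the aggregation pipeline doesn't offer.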

Most reporting tasks are much better handled by a columnar, analytics-oriented database engine optimized for aggregations. Many have appeared in the last half-decade or so: Vertica, Greenplum, Infobright, ParAccel, and so on. It used to be that choosing one was either complicated or expensive (though I'm on record saying Infobright's open source version is quite usable), but since last week's Amazon conference and its announcements, there's a new player on the field: Amazon Redshift, apparently built on top of ParAccel and priced at $1000/TB/year. Though I've yet to have a chance to participate in its beta program and put it through its paces, I think it's pretty safe to say it's a tectonic shift in the reporting database market, as big as or bigger than the one the original Elastic Compute Cloud brought to hosting solutions. Frankly, you'd be crazy not to use it.

Now, reporting is reporting, and many analytical questions businesses need to solve today really can't be expressed in any sort of database query language. My own start-up, Metrify.io, is working on a few of those problems, providing cloud-based predictive tools that decide how to serve customers before there's hard data on what kind of customers they are. We back this with a wide array of in-memory and on-disk tools which I hope to describe in more detail at a later stage. From a practical "what should you do" point of view, though -- unless you're also working on an analytics solution, leave those questions to someone who's focused on them, turn to SaaS services, and spend your own time on your business instead.

Monday 14 November 2011

Flash is dead? What changed?

So, Adobe finally did the inevitable and announced that they've given up trying to make Flash relevant on mobile devices. Plenty has been written already about what led to this situation, and the "tech" blogosphere has certainly proved its lack of insight in matters of development again, so maybe I won't go there. The Flash plugin has a bad rap, and HTML5 will share that status as soon as adware crap starts to be made with it. It's not the tech, but its application.

So, let's focus on the developer angle. Richard Davey of Aardman and PhotonStorm offers a developer-focused review of the alternatives. TL;DR: Flash is what's there now, but learn HTML5 too. Yeah, for web, I would agree.

However, that misses the big picture as well. Choosing any tech today for the purpose of building games for the Web is setting a future course by the rear-view mirror. The Web, as it is today, is a market of roughly 500M connected, actively used devices. Sure, more PCs than that have been sold, and about that many are sold both this year and next, but the total number of devices sold doesn't matter - the number of people using them for anything resembling your product (here, games) does. So, I'll put a stake in the ground at 500M.

In comparison, the iPad and other tablets will reach about 100M devices this year, and projections suggest about as many more next year. I would argue that most of them will be used for casual entertainment at least some of their active time. That makes tablet-class devices (large touchscreen, no keyboard, used on a couch or in other gaming-friendly situations) a significant fraction of the Web market already, and that share will only grow going forward.

Mobiles are a class of their own. Several billion devices already, maybe about a billion of them smart phones, some projections claim another billion smart phone-class devices to be sold next year. Just by limiting the market to only those devices which sport installable apps, touch screens, significant processing power (think iPhone and Android devices, possibly excluding lowest-end Android and the iPhone 1.0 and 3G), you're still looking at a potential market of 1 billion devices or so. Now, phones are not in my book very gaming-friendly - the screen is small, touch controls obscure parts of it, play sessions are very short, the device spends most time in a pocket and rarely gets focused attention, and play can be interrupted by many, many things. Still, as we've seen, great games and great commercial success can be created on the platform.

However, let's not pretend that a Web game could ever have worked on either a tablet or a phone without significant effort, both technical and conceptual. The platforms' underlying assumptions are simply too different.

So, how would you go about choosing a technology for creating a game for the future, instead of the past?

The choices are:

  • Native, writing for iOS only. Decent tools (except when they don't work), one platform, though a relatively large one with a customer base proven to be happy to spend on apps.
  • Native, writing for iOS and Android. Perhaps for Windows Phone too, if that takes off. Welcome to niche markets or fragmentation hell.
  • Native, but with a cross-platform middleware that makes porting easier. Still, you're probably dealing with low-level crap on a daily basis.
  • HTML5, if you're willing to endure an unstable, changing platform, more fragmentation, dubious performance, and frankly, bad tools. Things will be different in a couple of years' time, I'm sure, but today, that's what it's really like. I would do HTML5 for apps, but not for games, because that way you get to leverage the best parts of the web and skip the hairiest client-side issues. In theory you'll also get the Web covered, but in practice, making anything "advanced" work on even one platform is hard work.
  • AIR, if you continue to have faith that Adobe will deliver. In theory, this is great: a very cross-platform tech, you can apply some of the same stuff on Web too, get access to most features on most platforms on almost-native level, performance is not bad at all, and so on. Except in practice HW-accelerated 3D actually isn't available on mobile platforms, its cousin Flash was managed to oblivion, and perhaps most crucially, Adobe's business is serving ad/marketing/content customers, not developers. I keep hoping, but the facts aren't encouraging. For now though, you'd base your tech on a great Web platform with a reasonable conversion path to a mobile application, caveats in mind.
  • Unity, if you're happy with the 3D object-oriented platform and tools. You'll get to create installable games on all platforms, but let's face it: you will give up the Web, because Unity's plugin doesn't have a useful reach. Here, the success case makes you almost entirely tablet/mobile, with PC distribution (in the form of an installable app, not a Web game) less than a rounding error. This is probably what you'd be looking for in just a few years' time anyway, even if today it looks like a painful drawback.

Conclusion: Working on tools? HTML5. Web game for the next 2 years? Flash 11. Mobile game? Unity, if its 3D model fits your concept. AIR if not, though you'll take a risk that Adobe further fumbles with the platform and never gets AIR 3 with Stage3D enabled on mobile devices out the door. Going native is a choice, of course, but one that exceeds my personal taste for masochism.

On the upside, Unity is actively doing something to expand their market, including trying to make Unity games run on top of Flash 11 on PC/Mac, so in theory you might be getting the Web distribution as a bonus. Making code written for Mono (.NET/C#/whatever you want to call it) run on the AS3/AVM Flash runtime is not an easy task though, so consider it a bonus, not a given.

Tuesday 21 June 2011

On software and design, vocabularies and processes

Having recently witnessed the powerful effect establishing a robust vocabulary has on the process of design, and seeing today the announcement of the oft-delayed Nokia N9 finally hit the TechMeme front page, I again thought about the common misconceptions of creating software products. It's been a while since I posted anything here, and this is as good a time as any for a basics refresher.

A typical axis of argument places software engineering somewhere between manufacturing and design. I, among many others, have for years argued that the relationship of software to physical manufacturing is almost non-existent. The development process for a new physical product, like any process involving new creation, starts with a design phase, but the creation of a specification (typically in the hundreds of pages) is where the manufacturing really only begins: the job of the spec is to outline how to make the fully-designed product in volume. In comparison, by the time a software product is fully designed and ready to start volume production, there is no work left - computers can copy the final bits forever without a spec. There's more to that argument, but that's the short version. Creating software is the design part of a product development process.

So, goes the line of thinking, if software is design, then it must be right to always begin a software project from zero. After all, all designs start from a blank sheet of paper, right? At least, all visual designs do... No good comes from drawing on top of something else.

If this truly were the case, what do you think they teach in art schools, architecture departments, and so on? Technique? For sure, but if that was all there was, we'd still be in the artisan phase of creation. History? Yes, but not only that. An important part of the history and theory of design is establishing lineage, schools of thought, and vocabularies which can serve as a reference for things to come. All truly new, truly great things build on prior art, and not just on the surface, but by having been deeply affected by the learning collected while creating everything which came before them.

Not having actually studied art, I have only a vague idea of how complex these vocabularies are, and this is an area where a Google search isn't going to help, as it only brings up glossaries of a few dozen to at most a hundred basic terms of any design profession. That is not even the beginning of a real vocabulary, since a real vocabulary describes in great detail the relationships between the concepts, ways of using them together, examples of prior use, and so on. However, even from this rather precarious position, I will hazard a statement which might offend some:

Software design, in terms of the vocabulary required for state of the art, is more complex than any other field of design by an order of magnitude or more. The practical implication of this is that no new software of value can be created from a "blank sheet of paper".

This will require some explanation. Let's tackle that magnitude thing first.

Any complete software system, such as the one running within the smart phone in your pocket, measures in the tens, if not hundreds, of millions of lines of code. LOC is not a great measurement of software complexity, but there you have it. In terms of other, more vocabulary-related measurements, the same system will consist of hundreds of thousands of classes, function points, API calls, or other externally-referable items. Their relationships and dependencies on each other typically grow super-linearly - faster than the total number of items.

By comparison, the most complex designs in any other field are dwarfed. Yes, a modern fighter jet may have design specs of hundreds of thousands of pages, with individual parts whose specs alone are as complex as any you've seen. Yes, a cruise ship, when accounting for all the mechanical, logistic and customer-facing functions together, may be of similar complexity. And yes, a skyscraper's design blueprints are of immense complexity, such that no one person can really understand all of them. However, a huge part of those specs, too, is software! Counting software out of those designs, a completely different picture emerges.

None of these designs would be possible without reusing prior work, components, designs, mechanisms and customs created for their predecessors. Such is the case for software, too. The components of software design are the immense collections of libraries and subsystems already tested in the field by other software products. Why, then, do we so often approach software product development as if we could start from scratch?

Why was it that the N9 reminded me of this? Well, if stories and personal experiences are to be trusted, Nokia appears to have "started over" at least two or three times during the process of creating it. And that's just during the creation of one product. As a result, it's completely different, from both a user and a developer standpoint, from the four devices which preceded it in the same product line, and two (three?) years late from its original schedule. Of course, they did not scrap everything every time, otherwise it would never have been finished at all. But this, and Nokia's recent history, should serve as a powerful lesson to us all: ignoring what has already been created and building from a blank sheet instead is a recipe for delay and financial disaster.

Software is design. Design needs robust vocabulary and the processes to use them well, if it is to create something successful.

Monday 23 May 2011

Nordic Game followup

A week ago Thursday, I gave a presentation on the second day of the Nordic Game Conference in Malmö on a couple of related topics, slides below. I spoke about the lack of truly social interaction in this generation's "social games", and reflected on what a social game where players actually play together looks like. As you might guess, Habbo has been a social playground for a long time... 11 years, in fact. The slides themselves are, typically for me, a bit difficult to understand since they're mostly just pictures. You should've been there :)

True Social Games - NG11 - Slides

Monday 31 January 2011

Did common identities die with OpenID? No

About a year ago I posted here a summary of trends I expected would be relevant to our product development over 2010, and looking back at it, perhaps I should have put tablet computing on that list. However, what prompted me to go back and look at it today was picking up on the news that 37signals has declared OpenID a failed experiment, and the related Quora thread I found. Wow, the top-voted answer there is one-sided. Here's what I think about it, as an update to my statement from a year ago. Comments would be welcome!

Facebook has established itself as the de-facto source of identity and social graph data for all but a few professional/enterprise-targeted Internet services. Over the medium to long term, it is still possible that another service, or a federation of multiple services using standard APIs, will displace Facebook as the central source. Either way, a networked, "external" social graph is a given. The majority of users still behave as if stand-alone services with individual logins and user-to-user relationships were preferred, but that's a matter of behavioral momentum.

This has not removed identity-related security issues, like identity theft. The nature of the problem will shift over time from account theft to impersonation and large-scale and/or targeted information theft. Consumers remain uninterested in, and even hostile to, improving security (at the cost of sometimes reduced convenience). Visible and widespread security scares are beginning to change that mindset though, and it's possible that even by the end of the year, at least one of the big players will introduce a voluntary "secure id" solution as a further argument for their services.

The spread of the social graph will have more impact on the scope of Internet services, however. Application development today should take it for granted that information about users' preferences, friends, brand connections and activity history will be available, and should utilize it (wisely) to improve the service experience. The key to viral/social distribution is not whether applications can reach their users' network (that is a given), but what would motivate the user to spread the message.

Thursday 13 January 2011

A last look at 2010... and what's in sight?

For a few years, I've tried to recap here some events I've found notable over the past year and to offer some guesses on what might be ahead of us. I'm somewhat late on these things this year, due to being busy with other stuff, but I didn't want to break the tradition, no matter how silly my wrong guesses might seem later. And again, others have covered the general trends, so I'll try to focus on specifics, in particular as they relate to what I do. For a look at what we achieved for Habbo, see my recap post on the Sulake blog.

This time last year Oracle still had not completed the Sun acquisition due to some EC silliness, but that finally happened over 2010. It seems to be playing out about how I expected - MySQL releases have started to appear (instead of just being announced, which was mostly what MySQL AB and Sun were doing), and they actually are improvements. Most things are good on that front. On the other hand, Oracle is exerting license force on the Java front, and hurting Java's long-term prospects in the process, just at a time when things like Ruby and Node.js should be putting the Java community on the move to improve the platform. Instead, it looks like people are beginning to jump ship, and I can't blame them.

A couple of things surprised me in 2010. Nokia finally hired a non-Finn as a CEO, and Microsoft's Kinect actually works. I did mention camera-based gesture UIs in my big predictions post, but frankly I wasn't expecting it to actually happen during 2010. Okay, despite the 8 million units, computer vision UIs aren't a general-purpose mass market thing yet, but the real kicker here is how easy Kinect is to use for homebrew software. We're going to see some amazing prototypes and one or two actual products this year, I'm sure.

In terms of other software platform stuff, much hot air has been moved around iOS, Android, JavaScript and Flash. I haven't seen much that would make me think it's time to reposition yet. Native applications are on their way out (never mind the Mac App Store, it's a last-hurrah thing for apps which don't have an Internet service behind them), and browser-based stuff is on its way in. Flash is still the best browser-side applications platform for really rich stuff, and while JavaScript/HTML5/Canvas is coming, it's not here yet. For more, see this thread on Quora where I commented on the same. Much of the world seems to think that the HTML5 video tag, h.264 and VP8 equate to the capabilities of Flash; that's quite off-base.

On the other hand, tablets are very much the thing. I fully expect my Galaxy Tab to be outdated by next month, and am looking forward to the dual-core versions, which will probably be good for much, much more than email, calendar, web and the occasional game. Not that I'm not already happy about what's possible on the current tablets -- I already carry a laptop around much less. And in terms of what it means for software -- UIs are ripe for a radical evolution.

The combination of direct touch on handheld devices and camera-read gestures on living-room devices is already here, and I expect both to shift onto the desktop as well. Not by replacing keyboards, nor necessarily mice, but I'm looking forward to soon having a desktop made of a large, near-horizontal touchscreen for arranging stuff (replacing the desk itself), a couple of large vertical displays for presenting information, camera vision to help the computer read my intentions and focus, and a keyboard kept around for rapid data entry. One has to remember that things for which fingers are enough are much more efficiently done with fingers than by waving the entire hand around.

Will I have such a desk this year? Probably not. At the workplace, I move around so much that a tablet is more useful, and at home, time in front of a desktop computer grew rather more infrequent with the arrival of our little baby girl a few weeks ago. But those are what I want "a computer" to mean to her, not these clunky limited things my generation is used to.

Tuesday 9 March 2010

Smartphone platforms comparison - a developer perspective

Having used Nokia phones for years, lately S60 phones of various generations, with a fair amount of experience of the iPhone/iPod Touch OS, and having recently used both the Nokia N900 (Maemo OS) and the Google Nexus One (Android) as well, I can't avoid comparing these platforms. As a developer, I'm not really that interested in what they look like today, because today's devices are not what a developer needs to target for applications - rather, what can one determine of the platforms' future from looking at their past?

I can't make any comparisons to the Palm Pre (WebOS), Windows Mobile or Blackberry devices, since I have no first-hand experience of any of them. However, of the four platforms I know to some degree, not only is iPhone still clearly in the lead, but it looks to have the most predictable future as well. iPhone OS 3.1 no longer misses any significant functionality, having gained all the important bits without giving up platform polish, and the application market is humongous. Its only real weakness is the draconian control Apple enforces, and the crazy restrictions that result from it. Those issues are well documented by a recent EFF post outlining the contents of the iPhone developer program contract.

The imminent launch of the iPad is the first time the platform experiences any kind of real fragmentation as an application development target. At this point that fragmentation looks to be minimal - with the iPhone, iPod Touch and iPad all sharing the same OS, same UI, and practically the same inputs and outputs, differing only in what networks are available for communication and what size the screen is, developers are not going to have a hard time at all developing for all three devices.

That is quite unlike the situation on the other three platforms (Symbian, Android, Maemo). The fragmentation of the Symbian market is a matter of some notoriety. Basically, the same app will not work on phone models launched 9 months apart, or sometimes even on simultaneously launched devices, due to differences in the OS, let alone differences in the form factor, screen size, input mechanisms, and so on. With already two major revisions announced, this trend is only going to continue, and the base OS is already nearing 15 years old, if traced back to the first 32 bit EPOC it evolved from, though I believe the first S60 UI version came out in 2002.

Android is beginning to suffer from the same disease. Not only is each device on the market running a different base OS version with different features available to applications, but nearly all of them are also customized by their manufacturers or network carriers with little regard to compatibility (nor in fact could they have any regard for it, since none of them have any previous experience maintaining a platform). And of course each one has a different form factor. However, the most surprising feature of the platform (speaking as a recent Nexus One user) is that even though Android is barely two years old, it already carries a legacy of inconsistent UI controls. What exactly does one do with an indirect-control pointing device (a trackball) on a device capable of direct control via both a touchscreen and motion sensors? Why are the built-in applications (never mind those available on the Android Market) full of menus, "select an object and execute a function on it via a separate control" UIs clearly inheriting baggage from the decade before touch screens, and other clunky hacks, when there's a rich base to copy from in the iPhone's UI design library of 150,000 applications?

So, what about Maemo? A few years ago I bought the very first Maemo device, the N770 Internet Tablet, and I've seen and played with every device since. All of them up to the N900 carried the same "windows and menus" baggage Android is suffering from, but the refreshed UI in the N900 got rid of most of that. Not entirely, but enough that I can state with confidence that the N900 UI is more modern, more designed for the touch screen, than Android's is. However, Maemo's weakness is that as a platform, there is none. Every version of the OS thus far (five iterations on the market) has broken compatibility with the previous one. Now, that's to be expected and somewhat forgivable as long as it's in developers-only mode, essentially being beta tested. It's hard to call the N900 a beta test any longer. What's worse is that Nokia has publicly stated that the next device, whatever its name, and regardless of whether its OS is called Maemo 6 or MeeGo whatever, is also going to be incompatible with the current one, and applications will require a rewrite. This is no way to build a developer base.

So, what do we have to look forward to as application developers, trying to figure out which platform to target when working on our next mobile applications?

iPhone, a consistent, easy to use platform with a stable technical roadmap and little legacy baggage, but saddled with an unpredictable owner who's just as likely to deny you the ability to do business at all as to support you in it?

Symbian, full of legacy, and with a refreshed, incompatible platform to launch maybe next year?

Android, fast-growing, but already full of clunky hacks, and fragmenting faster than anyone's seen before?

Or Maemo, approaching a state of polish but unable to maintain direction for the length of one device cycle?

I think we're all going to miss the days of Java mobile games development before this is over.

Thursday 14 January 2010

Technology factors to watch during 2010

Last week I posted a brief review of 2009 here, but didn't go much into predictions for 2010. I won't try to predict anything detailed now either, but here are a few things I think will be interesting to monitor over the year. And no, tablet computing isn't on the list. For fairly obvious reasons, this is focused on areas impacting social games. As a further assist, I've underlined the parts most resembling conclusions or predictions.

 

Social networks and virtual worlds interoperability

As more and more business transforms to use the Internet as a core function, the customers of these businesses are faced with a proliferation of proprietary identification mechanisms that has already gotten out of hand. It is not uncommon today to have to manage 20-30 different userid/password pairs in regular use, from banks to e-commerce to social networks. At the same time, identity theft is a growing problem, no doubt in large part because of the minimum-security methods of identification.

Social networks today are a significant contributor to this problem. Each collects and presents information about its users that contributes to the rise of identity theft, while keeping its own authorization mechanisms in a silo of low-trustworthiness identification methods. The users, on the other hand, perceive little incentive to manage their passwords in a secure fashion. Account hijacking and impersonation is a major problem area for every vendor. The low trust level of individual account data also leads to a low relative value of owning a large user database.

A technology solution, OpenID, is emerging and taking hold in the form of an industry-accepted standard for exchanging identity data between an ID provider and a vendor in need of a verified id for their customer. However, changing the practices of the largest businesses has barely begun and no consumer shift can yet be seen – as is typical for such “undercurrent” trends.

OpenID will allow consumers to use fewer, higher-security ids across the universe of their preferred services. That, in turn, will allow these services a new level of transparent interoperability: combining data from each other in near-automatic, personalized mash-ups via the APIs each vendor can expose to trusted users, with less fear of opening holes for account hijacking.

 

Browsers vs desktops: what's the target for entertainment software?

Here's a rough sketch of the competing technology streams in terms of two primary factors – ease of access versus the rich experience of high-performance software. “Browser wars” are starting again, and with the improved engines behind Safari 4, Firefox 4, IE 8 and Google Chrome, a lot of the functionality we're used to thinking belongs to native software, or at best to browser plugins like Flash, Java or Silverlight, will be available straight in the browser. This certainly includes high-performance application code, rich 2D vector and pixel graphics, video streams and access to new information like location sensing. The plugins will most likely remain stronger at 3D graphics, synchronized audio and advanced input mechanisms like using webcams for gesture-based control. Invariably, the new input capabilities in particular will also bring with them new security and privacy concerns which will not be fully resolved within the next 2-3 years.

While 3D as a technology will be available to browser-based applications, this doesn't mean the web will turn to representing everything as a virtual copy of the physical world. Instead, its best use will be as a tool for accelerating and enhancing other UI and presentation concepts – think iTunes CoverFlow. For social interaction experiences, a 3-degrees-of-freedom pure 3D representation will remain a confusing solution, and other presentations such as axonometric “camera in the corner” concepts will remain more accessible. Naturally, they can (but don't necessarily need to) be rendered using 3D tech.

 

Increased computing capabilities will change economies of scale

The history of the “computer revolution” has been about automation changing economies of scale to enable entirely new types of business. Lately we've seen this, for example, with Google AdWords enabling small businesses to advertise and/or publish ads without marketing departments or the involvement of agencies.

The same trend continues in the form of computing capacity becoming a utility through cloud computing, extreme amounts of storage becoming available at costs which put terabytes within reach of organizations of almost any size and budget, and most importantly, data mining, search and discovery algorithms developing to the point where organizations can turn data which used to be impossible to analyze into automated business practices. Unfortunately, the same capabilities are available to criminals as well.

Areas in which this is happening as we speak:

  • further types and spread of self-service advertising, better targeting, availability of media
  • automated heuristics-based detection of risky customers, automated moderation
  • computer-vision based user interfaces which require nothing more than a webcam
  • ever increasing size of botnets, and the use of them for game exploits, money laundering, identity theft and surveillance

The escalation of large-scale threats has raised the need for industry-wide groups for exchanging information and best practices between organizations on security-relevant topics such as new threats, customer risk ratings, and the identification of targeted and organized crime.

 

Software development, efficiencies, bottlenecks, resources

Commercial software development tools and methods experience a significant shift roughly once every decade. The last such shift was the mainstreaming of RAD/IDE-based, virtual-machine oriented tools and the rise of Web and open source in the 90s, and now those two rising themes are increasingly mainstream while “convergent”, cross-platform applications which depend on the availability of always-on Internet are emerging. As before, it's not driven by technological possibility, but by the richness and availability of high-quality development tools with which more than just the “rocket-scientist” superstars can create new applications.

The skills which are going to be in short supply are those for designing applications which can smoothly interface with the rest of the cloud of applications in this emerging category. Web-accessible APIs, the security design of those APIs, efficient utilization of services from non-associated, even competing companies, and friction-free interfaces for the end users of these web-native applications: that is the challenge.

In this world, the traditional IT outsourcing houses won't be able to serve as a safety valve for resources as they're necessarily still focused on serving the last and current mainstream. In their place, we must consider the availability of open source solutions not just as a method for reducing licensing cost, but as the “extra developer” used to reduce time-to-market. And as with any such relationship, it must be nurtured. In the case of open source, that requires participation and contribution back to the further development of that enabling infrastructure as the cost of outsourcing the majority of the work to the community.


Mobile internet

With the launch of the iPhone, the use of Web content and 3rd party applications on mobile devices has multiplied compared to previous smart phone generations. This is due to two factors: the familiarity and productivity of Apple's developer tools for the iPhone, and the straightforward App Store for end-users. Moreover, the wide base of applications is primarily due to the former, as proven by the wide availability of unauthorized applications even before the launch of iPhone 2.0 and the App Store. Nokia's failure to create such an applications market, despite the functionality available on S60 phones for years before the iPhone launch, proves this – it was not the features of the device, but the development tools and the application distribution platform that were the primary factors.

The launch of Google's Android will further accelerate this development. Current Android-based devices lack the polish of the iPhone, and the stability gained from years of experience of Nokia devices, yet the availability of development tools will supercharge this market, and the next couple of years will see an accelerated development and polish cycle from all parties. At the moment, though, it's impossible to call the winner of this race.
