
Friday 2 January 2015

A look forward to 2015

As has been my way, here’s the one blog post for the year past and the year forward. Unlike before, this is going to be posted both on Fishpool and on Metrify blogs for reader convenience. Also my own, because it seems Fishpool has not been all too stable lately, and I can’t be bothered to transfer it to a better platform just now.

I didn’t make many projections a year ago, just noting that a few trends were worth keeping an eye on. Security and privacy were in the limelight, and they certainly continued to be so over 2014, with several major retail and cloud hacks and encryption failures, ending with a bang as hackers broke into Sony Pictures’ badly secured systems and released a giant dump of private information, both corporate and personal.

I continue to be extremely skeptical of any real North Korean involvement in that incident. Basically, it smells like a corporate version of the Snowden files without any of the ethical motives. That is, a disgruntled insider with more access than anyone had any reason to have, grabbing a copy of everything. Unlike Snowden, this guy apparently felt that an unedited dump of people’s identities and lives onto the Internet, to be ogled by anybody, was a great idea. The North Korea/The Interview link is just a for-the-lulz misdirection; the only real bafflement is how the US government can be so clueless as to attribute the deed to the DPRK on such weak evidence (even counting their undisclosed findings).

What it highlights is something the security industry has understood for a while, but which by now should be obvious to anyone who bothers to study the events for even an afternoon: a semi-organized hacker group can now basically destroy a major corporation by penetrating all of its information systems. Doing the same to a government may still be a bit beyond reach, but the trend is clear - private groups will master cyber-warfare attacks at the same speed as governments, or faster. Ukrainian insurgents may have needed Russian support to create chaos even against a weak army like Ukraine’s, but on the Internet, not only is it hard to distinguish between a private and a governmental adversary, the worse threats may in fact come from private groups. Certainly they will be more numerous. Ultimately, that thought is so scary it may be enough to explain the FBI’s eagerness to find any other explanation for the Sony incident.

So, I think we’ll see a lot of people try to come up with a compelling solution to security, or at least a compelling argument for why their solution matters. The latter will take the form of Be Afraid, in the long-standing tradition of security-services vendors of all time.

My other major note last year was that 2014 might become the Year of the Sensor. In a way that was true, and yet it wasn’t: over the year, we saw too many wearable device announcements to count (including one from Apple, which I’m not particularly excited about, but so be it). We also saw a bunch of early services enabled by those devices, mostly in the health-and-fitness space. What we didn’t yet see much of was environmental sensing, but buildings and environments change more slowly than the latest personal devices, nor are most people yet “wearing” anything but a smartphone. That will change, though.

That relates to my own moves too - as I’ve shared earlier, I put my general data science advisory business on the shelf and jumped in to build what is basically a sensor-enabled business. IndoorAtlas, where I now lead the engineering team, produces accurate indoor positioning of devices thanks to the sensors in them - in our case, primarily the compass chip embedded in smartphones. Who knew that a digital compass, of all things, could unlock so much opportunity? I certainly was dubious the first time I heard of the concept.
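IndoorAtlas’s actual algorithms are of course proprietary, but the core idea of magnetic-field fingerprinting is easy to illustrate. Here’s a toy Python sketch using entirely invented data: a map of previously surveyed indoor positions with the magnetic field vector measured at each, and a lookup that matches a live magnetometer reading to the nearest fingerprint.

```python
import math

# A toy "fingerprint map": known indoor positions (x, y in meters) paired
# with the magnetic field vector (microtesla) previously surveyed there.
# All values are invented for illustration.
fingerprint_map = [
    ((0.0, 0.0), (21.5, -3.2, 44.1)),
    ((5.0, 0.0), (18.9,  1.4, 46.7)),
    ((0.0, 5.0), (24.2, -7.8, 41.0)),
    ((5.0, 5.0), (20.1,  0.3, 43.5)),
]

def locate(reading):
    """Return the surveyed position whose stored field vector is closest
    (in Euclidean distance) to the live magnetometer reading."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(fingerprint_map, key=lambda entry: dist(entry[1], reading))[0]

print(locate((19.0, 1.0, 46.5)))  # closest to the fingerprint at (5.0, 0.0)
```

A real system interpolates over a dense survey and fuses the result with inertial sensors, but the distortion of the Earth’s field by steel building frames is what makes the fingerprints distinctive in the first place.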

That’s a great example of how surprising these disruptive changes can be. The last time I had the fortune to be involved in something this big, this early, was 12 years ago, when Sulake and Habbo Hotel were starting to gain speed. What was then called “micropayments” was, to most who even knew about it, firmly in crazy-land. Why would anyone buy imaginary stuff in a freely provided online service, and surely there would be no way to make a real business out of that! Well, Habbo didn’t turn out to be the billion-dollar business we hoped and worked for, but mobile games now drive tens of billions in revenue on precisely the same model. It’s just called In-App Purchase now, and the payment method isn’t text messages but tapping a green button on a smartphone. Today it seems that providing services for free might be not just a good way of creating a big business, but the only way to do so!

I believe that 10 years from now, we’ll look at solutions enabled by wearable, always-sensing, always-analyzing technology the same way. What other way could there possibly be to produce useful stuff? Devices that don’t know exactly where they are and what is around them? Services that can’t deliver what you need, when you need it, where you are? Clothes that don’t respond to their wearer’s condition? You must be joking!

I’m very much looking forward to 2015. It’s hard to even put in words how exciting it is to be working towards something of this potential. Being involved from a developer platform angle is also very cool, because this gives me an opportunity to work with more devs, more applications, and more services than I could otherwise. Hope to speak with you about it soon!

Saturday 28 December 2013

End of year review, 2013

Another year past! Quite a year it was. I’ve been sadly absent from this blog since the July posting on the copyright reform initiative in Finland, which is yet to really play out, though the signs are that it’ll continue to get the silent treatment. On the business side, Metrify went through what most start-ups go through - a change of direction and a team split-up, which was good for everybody concerned. Today, we’re helping our customers scale data-driven operations and are involved in a handful of ambitious projects.

I’ve continued to watch in amazement what’s happening in manufacturing. A year ago, I mentioned how 3D printing enables custom, one-off design manufacturing, but I failed to note (or notice myself) that something else had also been done already - printed food. Not just printing chocolate pieces, which, after all, is just an extension of how that basically liquid stuff has always been made anyway, but printing beefsteak. Seriously, I’m waiting for the day when I can eat a tuna steak with a good conscience, knowing it grew in a vat instead of coming from an overfished species.

The true surprise of 2013 was the information disclosed by Edward Snowden on the breadth of the surveillance operations of the NSA and its international partners. It’s not that much of it wasn’t suspected, at least by the more suspicious among us, but basically the worst fears turned out to be true, and we have not even learned all of it yet. Despite having taken enough interest in secure communications to have written one of the first PGP/MIME email apps in the mid-90s, I had not given these matters a lot of thought for years. So, all of this was somewhat of a wake-up call for me, and I’ve found myself thinking about it quite a lot. I haven’t figured out how to convert that to action, though - communications security is a hard field with few real successes to begin with. Doing it against government-class adversaries is a different task entirely.

As far as gadgets go, the year was one without huge milestones, but not without important advances. Smartphones have matured to a point where their improvements don’t provide a lot of amazement (though even the incremental improvements can be nice), and wearable devices are more a field of geek-out than of true usefulness. One exciting piece of news, though, ties in to the earlier part about 3D printing: prosthetic limbs designed and manufactured at home. That aside, the big gadget story is sensors, sensors everywhere - from the capabilities in your phone to embedded wireless sensors in everything from lamp posts to potted plants, we’re only at the beginning of the sensor rollout and the onslaught of digitized environment it will provide. Of course, sensors are also the story in the new games consoles, which I should mention here despite lacking any enthusiasm for them whatsoever. The only pieces of them with any interesting potential are Kinect 2 and the PS4 Eye, since they could provide for a truly hands-free living room.

What’s ahead for 2014? Well, several of these things are easy to extrapolate from individually, but the combination is tricky. One would hope for a personal privacy field to counteract the surveillance overreach.

Tuesday 23 July 2013

Why I support the citizens' initiative on copyright law

Usually I write here in English. Today's topic is Finnish legislation, so apologies to readers abroad for the local focus!

Tonight is the deadline for statements of support for the Järkeä Tekijänoikeuslakiin ("Common Sense in Copyright Law") citizens' initiative. The initiative has already gathered the required 50,000 signatures, so it will proceed to parliament for consideration. Over the past six months, many of those actively involved have been anxious about whether that threshold would be reached at all -- the initiative ended up quite far from the wild numbers of the marriage-equality initiative. What makes me more anxious, though, is what happens from here on. From parliament, individual representatives have already offered both encouraging statements and the expected belittlement.

The contrast between the two initiatives mentioned above is quite stark. The marriage-law amendment is legislatively simple, but it stirs emotions and strongly touches many people's moral views one way or the other. In this copyright matter, by contrast, emotions are more distant (except for those who have appropriated copyright as concerning only themselves, about which Otso Kivekäs has already written well), while the matter itself and its details are difficult, and the initiative (which, as a thoroughly prepared proposal, includes draft legal text) is long and heavy reading for many.

Why, then, did I end up supporting the initiative, and why am I writing out my reasons only now, rather than during the initiative's campaign period? There are several reasons.

First and foremost comes the broader significance of citizens' initiatives. Unfortunately, in Finland too, parliament, or at least many of our representatives, seem estranged from citizens' opinions, and I want to do my part to help direct democracy develop here as well. Although I consider the representative model more workable than continuous referendums, it works only if representatives take the trouble to familiarize themselves with the harder questions too, and dare to take a stand on them. Society does not advance by riding hobby-horses or by proposing to ban whatever happens to be topical. The big picture sometimes requires vision. If that vision is missing from parliament, it must understand to listen to vision from outside, and citizens' initiatives are one of those instruments.

Second comes the harmfulness of legislatively protected monopolies. In this case I consider, for example, Teosto to be such a monopoly - although copyright covers many kinds of creative production and thinking, including this article, our current legislation is tuned for the music industry, and some of its lobbying groups have been granted exclusive rights. Exclusive rights that cannot be challenged in the market are never, over the longer run, a good thing for consumers, that is, for the majority of society. Teosto has surely done good things too, and in no sense does it appear to me as the same kind of caricature of lobbying as the also-copyright-related TTVK Ry, but it is still time to at least consider dismantling its exclusive position.

Third, the future. Finland's future success depends on many factors, but two quite essential ones are the high quality of our education system and the opportunities for developing new service concepts. In both, current copyright legislation creates significant practical problems. Using material found on the Internet and freely available for private use in teaching (even when its author would permit it) is difficult or even impossible because of the legislation (indeed, I suspect many teachers commit illegalities in the classroom either unknowingly or even deliberately). Many new services, in turn, never get built, because licensing content for them is too difficult (often due to the limited interest of the aforementioned monopolized rights organizations). Years ago I was myself involved in developing a service in which music for sale (that is, music paying royalties for its use!) would have played a large part, but licensing proved impossible with our resources. Admittedly, our decision was driven more by the situation in markets larger than Finland, which is no more comforting. Later I saw similar services whose makers did not much care about legal rights - and users certainly liked them. We lost, the creators lost, Finland lost, and in the end consumers lost too, since the illegality of those other services eventually finished them off.

Fourth comes the consumer's right to control the content they have bought. Over the years, the music industry in particular has, in a rather perverse way, moved from selling physical goods (vinyl records, later CDs) into a strange world in which the consumer, on buying a product, still does not get the right to use it as they wish. This idea is foreign both to consumers and to almost every other branch of commerce. If I buy a plate from a shop, I may use it as a base for food, paint pictures on it, lend or even rent it to a friend, or, if I so wish, break it. With a CD, I am told, I do not have rights of the same scope. Or apparently I do regarding the plastic-coated aluminum disc, but not the music it contains. Lending its content to a friend is, you see, Wrong. Through services like the iTunes Store and Spotify, the music industry has in recent years finally been forced to move forward, and now we watch what happens in the other branches of the content industry that are stuck in the past. But I do not want a society in which those stuck in the past get to decide what everyone else may do.

Those are the most important reasons. Although the initiative's text is nearly finished legal text, and as such I consider it better than the copyright law currently in force, I do not imagine anyone in parliament would adopt it as-is. Instead, I expect the representatives chosen by the people to respect the people's expression of will and to handle the matter with the attention it deserves. I, and surely many others, will be watching closely what our representatives then say, how they act, and what kind of outcome that handling reaches.

I do not advocate free copying of content. Neither does the initiative. What I advocate is being able, in the future, to buy or rent content when I want or need it, in the way that serves me best. That is why I want our parliament to debate changing our copyright law.

Thursday 18 July 2013

The difference between being demanding and acting like a jerk

Sarah Sharp is a member of a very rare group. She's a Linux kernel hacker. Even among that group, she's unique - not because of her gender (though that probably is distinctive in many of the group's social gatherings), but because she's brave enough to demand civil behavior from the leaders of the community. I applaud her for that.

Now, I have immense respect for Linus Torvalds and his crew. I've been a direct beneficiary of Linux in both professional and personal contexts for soon-to-be two decades. The skills this group demonstrates are possibly matched only by the level of quality they demand from each other. Unfortunately, however, that bar is often applied only on the technical level, while the tone of discussion, both in person and on the mailing lists, can turn quite hostile at times. It's been documented many times, and I can add no value by rehashing it.

However, I wanted to share some experience from my own career as a developer, a manager of developers, and someone who has both been described as demanding and needed to report to others under very demanding circumstances. I've made some of these mistakes myself, and hope to have learned from them.

Perhaps not so surprisingly, the same people in the community who defend hostile behavior also, almost as a rule, misunderstand what it means to behave professionally. There's a huge difference between behaving as in a workplace where people are getting paid, and behaving professionally. The latter is about promoting behaviors that lead to results. If being an asshole were effective, I'd have no problem with it. But it's not.

To consistently deliver results, we need to be very demanding of ourselves and of others. Being an uncompromising bastard with regard to the results will always beat accepting inferior results, when we measure technical progress over the long run -- though sometimes experience tells us a compromise truly is "good enough".

However, that should never be confused with being a bastard in general. Much can (and should) be said about how being nice to others makes us all that much happier, but I have something else to offer. Quite simply: people don't like to be called idiots or to have personal insults hurled at them. They don't respond well to those circumstances. Would you like it yourself? No? Don't expect anyone else to, either. It's not productive. It will not ensure better results in the future.

Timely, frequent and demanding feedback is extremely valuable to results, to the development of an organization, and to the personal development of individuals. But there are different types of communication, and not all of it is feedback. Demanding better results isn't feedback; it's setting and communicating objectives. Commenting on people's personalities, appearance, or the like, let alone demanding that they change who they are as a person, is neither feedback nor reasonable. Feedback is about observing behavior and demanding changes in the behavior, because behavior leads to results. Every manager has to do it, never mind whether they're managing a salaried or a voluntary team.

However, calling people names is bullying. Under all circumstances. While it can appear to produce results (such as, making someone withdraw from an interaction, thus "no longer exhibiting an undesirable behavior"), those results are temporary and come with costs that far outweigh the benefits. It drives away people who could have been valuable contributors. What's not productive isn't professional. Again - I'm not discussing here how to be a nicer person, but how to improve results. If hostility helped, I'd advocate for it, despite it not being nice.

The same argument applies to using hostile language even when it's directed not at people but at results. Some people are more sensitive than others, and if by not offending someone's sensibilities you get better overall results, it's worth changing that behavior. However, unlike "do not insult people", the use of swearwords toward something other than people is a cultural issue. Some groups are fine with it, or indeed enjoy an occasional chance to hurl insults at inanimate objects or pieces of code. I'm fine with that. But nobody likes to be called ugly or stupid.

In the context of the Linux kernel, does this matter? After all, it seems to have worked fine for 20 years, and has produced something the world relies on. Well, I ask you this: is it better to have people selected (or have them select themselves) for Linux development by their technical skills and capability to work in an organized fashion, or by those things PLUS an incredibly thick skin and the capacity to take insults hurled at them without being intimidated? The team currently developing the system is of the latter kind. Would they be able to produce something even better if that last requirement weren't there? Does hostility help the group become better? I would say hostility IS hurting the group.

Plus, it sets a very visible, very bad precedent for all other open source teams, too. I've seen other projects wither and die because they copied the "hostility is ok, it works for Linux" mentality while losing out on the skills part of the equation. They didn't have to end that way, and it's a loss.

Thursday 9 May 2013

HTC One - an unreview, or what could be done better?

Finland is a funny market. As the home of Nokia, it's a place where the most interesting devices take a while to actually become available. Three years ago, I got my Nexus One with help from a colleague based in the UK. It served me well - while it had always been pretty short on memory and for a long time had not been too impressive in terms of speed, it had a nice form factor and, even by today's standards, a fairly good display. However, I had been planning to swap to something more up to date for a while.

The Nexus 4, however, still isn't available here. Sure, at times a local retailer might have a few units at a pretty unattractive price, but the value proposition Google gave for the device is unreachable, since they will not deliver it from Germany, France, or the UK to Finland. So much for the unified trade region of the European Union.

So, when I first heard of the HTC One, I had not yet picked a new device. I had considered a Galaxy S III, but I cannot warm up to the imitation-chrome-rimmed plastic design Samsung is so fond of. In sharp contrast to that massive sales hit of the Android world (behind only the iPhone in sales figures), the HTC One is a gorgeous design item. Enough has been written about its surface features; I see no point adding to that conversation. To my eye, the HTC One wins the physical aesthetics crown among current phones, with the iPhone 5 and Nokia's Lumia 720 coming in behind it. Each represents a very different philosophy and executes the details well. Anyway, I'm more of an Android guy, so even if the One weren't so gorgeous, I would not pick an iPhone or a Lumia for myself.

But Finland isn't among the first markets for HTC either - heck, often it isn't an early market even for Nokia. In addition, the One has suffered several delays, just barely making it to some markets ahead of the Samsung Galaxy S4, which must be its fiercest rival. So, especially since I managed to crack the Nexus One's screen into an unusable state, I had to resort to foreign help again - always a bit of a gamble even with unlocked phones, due to network differences. Since I couldn't locate a device in Germany, it was time to see what the UK could deliver. And deliver it did - through eBay, I received an untouched, still-in-retail-wraps HTC One last Friday.

I have to say it's just as beautiful in real life as it was in pictures. The finish is exquisite, with the aluminum, glass and polycarbonate seamlessly fused together. I would have happily traded 10 grams more mass and a millimeter in thickness for a more powerful battery, which I'm certain is the weakest part of the device, but it's not difficult to come up with a strategy that will take the device through my regular working day. As most reviews have concluded, it's at the top of Android models, if not of all smartphones.

But what of the un-review? Here are the things HTC has failed to do a good job with, all in the software installed on the device, as noted over one week of use. Where I've figured out a workaround, that's noted, too.

  • The Power Saver - yes, it has one built in. However, the way it's implemented (as an always-there checkbox at the top of the Notifications panel), it obstructs Android 4.1 from presenting the expanding notifications (which are present only for the topmost item). Those notifications are very useful. So, long-tap on the power saver option until an App Info pop-up appears. Through that, you can kill the Power Saver to recover the notification menu. For power saving itself, I use Llama profiles and a few events I've come up with over time.
  • The Calendar - several dealbreaker presentation problems, such as no weekday info in the daily view, no event labels in the weekly and monthly views (despite plentiful resolution to display small type on the Full HD 4.7" screen) and terribly confusing multi-calendar display options. I replaced it with Google's own Calendar app, hiding the built-in tool. They'll show the same calendars and this swap in no way prevents the lock screen and BlinkFeed from continuing to show calendar entries.
  • The keyboard's auto-complete and auto-correct is really irritating, including that hitting space will complete words but not insert the space. Replaced with SwiftKey, which is a far more competent solution anyway - but I might not have done it, had the keyboard been just that tiny little bit more finished.
  • The Share menu in HTC's own apps, including the browser. It's limited to showing only four sharing options, among them HTC's own service and the not-so-great Mail app, so all of the tools I use to share content (most notably GMail and Buffer) require extra taps. Chrome does not suffer from the same issue, though, so for browsing this is easily bypassed. Too bad, because the HTC-customized stock browser is otherwise quite competent, slightly faster, and supports Flash for the few situations where that still is valuable.
  • That Mail app. Sure, it will connect to various mail servers including Exchange and private IMAP servers, but it's not nearly as polished as the GMail app, and all my mail accounts are backed by GMail anyway. This would not be a big deal, except for..
  • The lock screen is able to show weather, upcoming calendar entries, incoming SMS messages and the latest mail headlines - except, it will show the latter from the Mail app only, not from GMail. D'oh. Naturally, some might prefer to not show that potentially sensitive data on the lock screen, but I'd prefer the convenience, if it worked.

While overall I still prefer stock Android to these manufacturer customizations, HTC has improved on a couple of points. BlinkFeed is a nice presentation of news, Facebook and Twitter streams without the undesirable duplication of work so common in these aggregation apps; the People browser that replaces both Android's Contacts app and the stock dialer is pretty good, though it takes some getting used to; and the camera application is a good use of the unique capabilities of the device. Of the major flaws, only the Mail/lock screen issue is something I have not found a workaround for, and it's a stretch to call that issue major. Nonetheless, I hope and expect HTC to deliver an update (perhaps along with Android 4.2) that addresses these issues, many of which have already been noted by others, too.

Oh, and the camera? Its 4MP "UltraPixel" direction certainly sets the device apart from the competition. I have not done comprehensive side-by-side tests, but it does have good low-light performance (especially considering that others, including the Lumia 920, use much longer exposure times and thus suffer from more motion blur, even if their image stabilization can eliminate camera shake). Perhaps the color balance could be slightly better. As for the resolution, it's certainly enough for online use, though it won't leave much room for crops. Considering no mobile camera apart from the already-extinct PureView 808 can compete with a zoom-enabled pocket camera, let alone a DSLR, I think the camera performs where most consumers need it to. I hope HTC's gamble will pay off, and that those same consumers won't be misled by the megapixel wars.

The other stand-out feature, the unmatched sound output, certainly lives up to the name. This is the first mobile device I've tried that is capable of putting out a decent audio stream. I doubt music will ever truly sound good at this scale, but at least it's recognizable even from a distance. Most importantly, voice output comes through clear and loud, so this is by far the best speakerphone ever made. If there's anything to fault it with, it's this: even the lowest volume setting is loud enough to carry across a quiet room in a way that might bother other people nearby. I'd like one more setting below it, but headphones solve this with minimal inconvenience.

Monday 29 April 2013

Analytics infrastructure of tomorrow

If you happen to be interested in the technologies that enable advanced business analytics, as I am, the last year has been an interesting one. A lot is happening at all levels of the tech stack, from raw infrastructure to cloud platforms to functional applications.

As Hadoop has really caught on and is now a building block even for conservative corporations, several of its weaknesses are also beginning to be tackled. From my point of view, the most severe has been the terrible processing latency of the batch- and filesystem-oriented MapReduce approach, as opposed to solutions designed around streaming data. That's now being addressed by several projects: Storm provides a framework for dealing with incoming data, Impala makes querying stored data more processing-efficient, and finally, Parquet is coming together to make the storage itself more space- and I/O-efficient. With these in place, Hadoop will move from its original strength in unstructured data processing to a compelling solution for dealing with massive amounts of mostly-structured events.
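To make the batch-versus-streaming contrast concrete, here's a minimal Python sketch (my own illustration, not tied to Storm, Impala or Parquet): a batch job re-scans the full stored event history on every run, while a streaming consumer updates the same aggregate incrementally as each event arrives, so results are available with per-event rather than per-job latency.

```python
from collections import Counter

events = [
    {"user": "a", "action": "click"},
    {"user": "b", "action": "view"},
    {"user": "a", "action": "view"},
]

# Batch style: every run scans the whole stored history from scratch.
def batch_counts(stored_events):
    return Counter(e["action"] for e in stored_events)

# Streaming style: state is updated once per incoming event, no re-scan.
class StreamCounter:
    def __init__(self):
        self.counts = Counter()
    def on_event(self, event):
        self.counts[event["action"]] += 1

stream = StreamCounter()
for e in events:
    stream.on_event(e)

assert stream.counts == batch_counts(events)  # same answer, very different latency
```

The answers agree; the difference is that the batch job's cost and latency grow with the size of history, while the streaming update is constant work per event.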

Those technologies are a bear to integrate and, in their normal mode, require investment in hardware. If you'd prefer to get a more flexible start to building a solution, Amazon Web Services has introduced a lot of interesting stuff, too. Not only have the prices for compute and storage dropped, they now offer I/O capacities comparable to dedicated, FusionIO-equipped database servers, very cost efficient long-term raw data storage (Glacier), and a compelling data warehouse/analytics database in the shape of Redshift. The latter is a very interesting addition to Amazon's already-existing database-as-a-service offerings (SimpleDB, DynamoDB and RDS), and, as far as I've noticed, gives it a unique capability other cloud infrastructure providers are today unable to match - although Google's BigQuery comes close.

The next piece in the puzzle must be analytical applications delivered as a service. It's clear that the modern analytics pipeline is powered by event data - whether it's web clickstreams (Google Analytics, Omniture, KISSmetrics or otherwise), mobile applications (such as Flurry, MixPanel, Kontagent) or internal business data, it's significantly simpler to produce a stream of user, business and service events from the operational stack than it is to try to retrofit business metrics on top of an operational database. The '90s-style OLTP-to-OLAP Extract-Transform-Load approach must die!
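As an illustration of that point (the event schema below is my own invention, not any vendor's format): emitting a self-describing event at the moment something happens is a one-liner in the operational code, whereas reconstructing the same fact later from an OLTP database means reverse-engineering how every table happened to be updated.

```python
import json
import time

def emit_event(stream, event_type, **fields):
    """Append one self-describing analytics event to an event stream.
    Here the 'stream' is just a list; in production it would be a log
    file, a message queue, or a collection endpoint."""
    record = {"type": event_type, "ts": int(time.time()), **fields}
    stream.append(json.dumps(record))

events = []
emit_event(events, "purchase", user_id=42, sku="hat-001", price_cents=299)
emit_event(events, "support_ticket", user_id=42, severity="low")

print(json.loads(events[0])["type"])  # prints "purchase"
```

Because each record carries its own type and fields, downstream consumers - dashboards, customer insight, support tooling - can all read the same stream without an ETL step in between.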

However, the services I mentioned above, while excellent in their own niches, cannot produce a 360-degree view across the entire business. If they deliver dashboards, they lack customer-level insight. Even if they're able to report on customers, they don't integrate with support systems. They leave holes in the offering that businesses have to plug with ad-hoc tools. While that's understandable, as they're built on technologies that force nasty compromises, those holes are still unacceptable for a demanding digital business of today. And as the world turns ever more digital, what's demanding today is going to be run-of-the-mill tomorrow.

Fortunately, the infrastructure is now available. I'm excited to see the solutions that will arrive to make use of the new capabilities.

Thursday 28 February 2013

What if movies were designed for free-to-play?

This tweet from Ben Cousins over at ngmoco (looking forward to The Drowning!) got me thinking:

In a response, I said that's true if you consider the entirety of the movie industry (where some people buy everything they watch, while some pirate it - and yet another group pirates the movies and buys a lot of movie-related merchandise), but that on the level of any one movie, if they're analyzed from a free-to-play angle, they're terrible businesses.

I guess that obligates me to write something about how a free-to-play movie would work. Not being very well versed in the details of how movies get produced today, I'm either way off in the deep end or in an advantageous position to speculate. Take your pick, and shoot me down in the comments. It may well turn out that not every movie can work as a standalone free-to-play product (in which case we're back to something like Netflix as the freemium business model for movies), but since we did figure this out for games, why shouldn't we try to figure it out for movies?

What's a good free-to-play product design like? A quick summary:

  • A basic version of the product should be available for free. If someone's motivated enough, they should be able to enjoy the full experience without opening their wallet, but they'll have to contribute in some other way. Pirated movies don't count; that's not contributing. Ad support is a weak solution - better than nothing, though.
  • "Basic" doesn't mean low quality, because the free product should be as engaging as the premium version. A low-res online clip doesn't cut it; it just drives people back to piracy.
  • The bar to spending money should be really, really low and well incentivized. A $0.99 Amazon rental for 24 hours counts; a $15 iTunes Store download does not. The incentives still need work, though - ease of access, good recommendations and easy streaming to the big screen are a good start.
  • The upper limit to how much one customer can spend on the product should be high enough to be practically unlimited. Spending more should always result in some additional marginal value.
  • High value customers come in two shapes: those who buy something really expensive once (such as a collector's edition, like Ben was linking to), or those who keep spending, again and again.

In some markets and for some movies, the industry does manage to capture the middle. However, these are not optional points for a free-to-play design. You have to consider all of them, or you're turning away customers. A free-to-play design typically expects a few percent of the audience to pay for their experience.
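To make the shape of that economy concrete, here's a minimal sketch of how a tiny paying fraction with no spending ceiling adds up. All segment shares and spend levels below are hypothetical, made up purely for illustration:

```python
# Hypothetical free-to-play revenue sketch: most of the audience pays
# nothing, but there is no ceiling on what an individual fan can spend.
audience = 1_000_000          # free viewers (assumed)

# Assumed payer segments: (share of audience, average spend in $)
segments = [
    (0.97,    0.0),   # enjoy the free edition, contribute virality instead
    (0.025,   2.0),   # low-friction rentals and small purchases
    (0.004,  30.0),   # extras, commentary tracks, merchandise
    (0.001, 500.0),   # collector's editions, exclusive access
]

revenue = sum(share * audience * spend for share, spend in segments)
arpu = revenue / audience     # average revenue per user, paying or not
print(f"revenue ≈ ${revenue:,.0f}, ARPU ≈ ${arpu:.2f}")
```

Note how the top 0.1% of spenders dominates the total even at these made-up numbers - that's the "practically unlimited upper limit" point above doing the work.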

At the level of the basic free edition, the easy suggestion is to have each movie be viewable from its own site in exchange for a Facebook Like or a retweet. That way, free viewers contribute viral visibility to the product. As I mentioned above, this should not be a crappy low-res edition, but a real, enjoyable stream. Done this way, movies would need stable, long-term addresses rather than the marketing campaign sites they have now, but that would be a good thing. Free-to-play is a lot about the long tail, in both volume and time.

That site can sell offline copies on DVD or Blu-ray for someone who (for whatever reason) can't or doesn't want to stream. That may be quaint, but hey, people still buy vinyl, too. It can also rent the movie for streaming to something other than a computer. Clearly, there would need to be several incentives for someone to want to contribute a couple of bucks for the regular edition of the movie, and this is probably the hardest thing to get right. E.g., you could charge for the pause function, but that would be a pretty dick move, likely to drive away people who would otherwise enjoy the experience. Perhaps the free edition should only become available a month after release, with streaming always costing money until then.

Stuff like commentary tracks, making-ofs, etc. can be a paid extra. They're made for true fans, and true fans are by definition willing to pay for the work. Some of that stuff can reasonably be priced much higher than it typically is today.

Selling merchandise and collector's editions is obviously something the site should feature. It should also have exclusive items, such as limited-edition access to the production crew. Just look at any of several successful Kickstarter campaigns to see what a $5,000 edition of a movie might be packaged with. Today's featured documentary on Kickstarter about the Arab Spring has 10 premiere-night tickets next to the crew for that price, and another reward for double that (check it out yourself). The Kickstarter rewards are time-limited, but a free-to-play movie should have similar items available for fans throughout its distribution lifetime. They will need to be refreshed. Free-to-play is a service, not a product.

A re-watch would need some special features of its own, all of which could be paid extras. This would benefit some movies much more than others - and create an incentive for artists to create more movies like that. I wouldn't mind!

Now, I haven't even tried to run any numbers on this thought experiment, and I don't know where to pull the reference data. According to Box Office Mojo, last year's top-grossing movie was The Avengers at $623M US, and at position #100 was The Five-Year Engagement with $29M US box office revenue. The same site estimates US ticket prices today at $8.05, so that would mean about 77 million US viewers for Avengers and 3.6 million for "5 Year". However, those figures probably do not include rentals or online viewing, and almost certainly do not include merchandise, which I would guess is a substantial extra for Avengers (and included in my suggested model above), so basing any comparisons on those data points would be very flawed.
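For what it's worth, those viewer estimates are just box-office gross divided by the average ticket price - a quick sanity check:

```python
# Back-of-the-envelope viewer counts from the Box Office Mojo figures
avg_ticket = 8.05                       # estimated US ticket price, $

def us_viewers(gross_usd):
    """Estimate US theatrical viewers from box-office gross."""
    return gross_usd / avg_ticket

avengers = us_viewers(623_000_000)      # The Avengers, ≈ 77 million
five_year = us_viewers(29_000_000)      # The Five-Year Engagement, ≈ 3.6 million
print(f"{avengers / 1e6:.1f}M vs {five_year / 1e6:.1f}M viewers")
```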

A blockbuster film like Avengers collects most of its revenue very close to the release date, but other movies, like the perennial favorites The Sound of Music, The Wizard of Oz and It's a Wonderful Life, or somewhat more recent examples like Pulp Fiction, Inception or Fight Club, would keep racking up views and revenue for years, even decades. So, would the Avengers ever reach its current revenue as free-to-play? Perhaps not. Would The Five-Year Engagement? I don't see why not. Would Pulp Fiction or Fight Club, neither of which apparently makes it to the all-time top 200 grossing movies on Box Office Mojo, be able to generate a billion dollars off their engaged fan bases over time? Of course they would.

Are you prepared to deal with a negative test result?

Someone recently asked me whether he should do a limited-market test launch for a product he knows isn't finished yet, in order to learn more from actual users. A worthy goal, of course. Perhaps you're considering the same thing. I have, many times. Before you decide either way, consider the following:

What do you expect to learn? If you need to see people using your product, throwing it on the App Store won't help you achieve that objective. Better to go to a nearby meetup of people you'd like to see using your product, introduce yourself and ask them to test it while you watch. If you can't take the product with you, or need the entire team to experience the end-user feedback first hand, invite 10 people over for some pizza (either to your office or some cozier environment) and record the event on video. I promise you, it'll be an eye-opening experience.

Do you have a hypothesis you're trying to verify? Can you state how you're verifying it? Can you state a test which would prove your hypothesis is false? Are those tests something that you can implement and measure over a launch? Awesome! If not, then you need to think harder and probably identify some other way of gaining the insight you're looking for.
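If the hypothesis is, say, "the new onboarding converts better", one concrete way to state a falsifiable, measurable test is a two-proportion z-test on conversion rates from a limited launch. This is just an illustrative sketch - the conversion numbers below are made up:

```python
from math import sqrt, erf

def z_test_conversion(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant B's conversion rate
    significantly different from variant A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)            # pooled conversion rate
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))   # standard error of the difference
    z = (p_b - p_a) / se
    # two-sided p-value from the normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical test-launch numbers: 2.0% vs 2.6% conversion on 10k users each
z, p = z_test_conversion(200, 10_000, 260, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

The crucial discipline is deciding the significance threshold and the "what do we do if it fails" answer before the launch, not after the numbers come in.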

Are you just trying to gather experience of something not directly related to the product itself? For example, you may not yet know first hand how to manage an App Store launch. Well, you could launch your baby -- or you could quickly create another product with which to learn what you needed to learn. This tactic has the added benefit that such side-products are typically simpler, so they're easier to analyze for the understanding you're after.

But most importantly, what will you do if your test comes back with a negative result? Far too often, this hasn't been given any consideration, and when it does happen (as it typically does, if you haven't thought through the process), the response is "oh, the test failed, never mind, we'll just go ahead anyway". Unfortunately, in most such situations, it was not the test which failed. Rather, it successfully proved that the hypothesis being tested was incorrect. This is a completely different thing, and going ahead without changes would be a mistake after such a result. You have to be prepared to make the hard decisions. For example, Supercell killed Battle Buddies after their test launch showed it would not convert enough people.

You should test often and early. You should gather market data to support significant further investments of time or resources to any development you're undertaking. But you should also be prepared to take any necessary actions if the tests you're running show that your assumptions were incorrect, and the product doesn't work the way you intended. Those are not easy decisions to take, if you're invested into the product, as most creators would be. Think it through. A launch is not a test.

Tuesday 5 February 2013

Arbitrage as a game mechanic

Reading this rather amazing story about cross-border arbitrage, I could not help but think about how it applies to game design.

Here's how the arbitrage math adds up. The ferry costs approximately $275 round trip, and gas is about $8 a gallon in Sweden, which, if we assume our car gets around 30 miles per gallon, brings us to $435 in expenses. Throw in food, lodging, and other miscellaneous costs, and the total should come in around $600 or so. Remember, diapers cost more than twice as much in Lithuania as they do in Norway, so we only need to clear that much in margin to break even.
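Sketching that break-even as code, using the trip costs from the quote - note that the per-pack prices here are assumed purely for illustration; the story only says prices roughly double across the border:

```python
# Break-even sketch for the diaper run, using the quoted trip costs.
ferry = 275.0          # round-trip ferry, $
gas = 435.0 - ferry    # the quoted $435 subtotal implies $160 of gas
misc = 600.0 - 435.0   # food, lodging, etc. bring the trip to ~$600
trip_cost = ferry + gas + misc

# Hypothetical per-pack prices; the margin is what matters.
buy_price = 10.0            # assumed purchase price per pack, $
sell_price = 2 * buy_price  # "more than twice as much" across the border
margin = sell_price - buy_price

packs_to_break_even = trip_cost / margin
print(f"~{packs_to_break_even:.0f} packs to cover the ${trip_cost:.0f} trip")
```

Everything above the break-even volume is profit, which is why the trade scales until the supermarkets run dry.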

If, in the real world, it's possible to entice enough entrepreneurial activity from a neighboring country to make the supermarkets of southern Norway run out of diapers, imagine how powerful arbitrage opportunities are for game design. They can do everything:

  • Increase play frequency, as you need to come often to exploit recurring opportunities
  • Drive explorative gameplay, as more and more players search for new kinds of arbitrage
  • Incent specialization, because to exploit arbitrage, you need to focus on a particular activity
  • Drive expected lifetime up, as leaving the game means leaving value on the table
  • Drive lifetime value up, because in a free-to-play game, longer play time means more opportunities to buy
  • Drive virality up, because players have incentive to find both supply and demand for their particular arbitrage skill

Many of these factors apply even to a single-player game that simulates market activities. Look no further than the classics of market games, David Braben's Elite (1984) (or Star Trader, which preceded it by a cool 10 years). However, the forces really come to the forefront when applied to a social game, where the arbitrages don't even need to be programmed in, as long as the design doesn't eliminate their possibility. Players will probably discover them.

That doesn't mean it's trivial to fully exploit that capability, though. For example, I don't think we ever really explored the arbitrage mechanics fully in Habbo Hotel, even though the system is full of player-to-player trading, rare items, well-hidden nooks and crannies, and whatnot. The most important feature missing in Habbo Hotel is rich support for specialization. RPG-style games bring specialization through character classes and skills, resource management games through directing players to invest their earned resources in a particular type of activity, and so forth. The game mechanic should reward specializing by making it possible for a player highly capable in a particular section of the gameplay to trade that capability with others for the skills or resources provided by another type of specialization. Don't reward being a generalist, or allow maximizing all stats.

Tuesday 1 January 2013

A review of 2012 and a look into the future

Happy New Year! I've done the traditional review and predictions thing here for the past few years, and it's that time again. This time around, it's really difficult for me to see the big trends, having been heads-down in start-up building for most of the year. On one hand, that's of course a problem; if I can't describe what's going on around us, how can I know where to head? Yet, most startup decisions really have very little to do with this level of thinking -- once a rough vision is in place, it's more about finding the right people to execute that vision with and not a lot about what other people are doing. So, I haven't really spent enough time putting these thoughts in order, but it'd be a shame to skip this chance.

On the recap side: I predicted that the Euro crisis would continue to play out, that governments would try to regulate the Internet, that Facebook would continue to dominate but the "gatekeep net content" stuff would fail, that we'd see a big rise in entrepreneurship (would it be safe to call 2012 the year of Kickstarter?) and that we'd see a completely new class of social games, and I'm very happy to see good friends at Supercell emerge as the early leader there. Well, I was pretty vague a year ago, so it's easy to claim I'm a good prognosticator :). I can't make a call on the data-driven medical stuff, not having really followed developments there, though I suppose at least 23andMe's massive new funding counts. From earlier predictions, motion controls are now on high-end televisions, though the applications are still pretty raw.

Then there's the personal recap, which is far too close for me to summarize well. 2012 has been a year of change, learning and growth. The chronological highlights: ending a good period at Sulake, using all that I learned there to help several very cool startup teams accelerate on their path to success, helping my spouse get her own startup moving, founding another startup with a great team and, most importantly, witnessing and guiding our daughter learning new stuff every day over her second year.

What's in the future? I remain especially bullish on two very large, very disruptive trends - the custom healthcare I already wrote about earlier, as well as custom manufacturing (whether 3D-printed or otherwise). For sure, 3D printing is advancing really fast right now, and it's reasonable to expect some devices to move out of hobbyist-tinkerer labs and prototype studios into regular homes and offices. However, it's not just 3D printing but all kinds of digitally driven manufacturing, from custom shoes and jeans to customer-specified patterns or designs on everything. With laptop vinyl skins, tableware and lampshades done, what's next?

While these deliver value in different ways, they're driven by the same trends powered by digital technology and data. Computing is no longer just computing. Ultimately, we're only a few short years away from Neal Stephenson's Diamond Age. Okay, perhaps not the nanotech, but most other stuff for sure.

Looking at my past predictions, I've been far more focused on the pure computing stuff before. On that note, we're still in the middle of the platform disruption. Though touch computing has clearly taken a leading position in application development, we're still missing a capable standard platform. iOS is capable but proprietary, HTML5 is still not fully here, Android is grabbing market share but at a massive fragmentation cost, and so on. I haven't seen this many new languages and frameworks pop up all over the place since the early 90's. What's going to be the Windows 95 and Visual Basic of this era?
