Fishpool

Monday 29 April 2013

Analytics infrastructure of tomorrow

If you happen to be interested in the technologies that enable advanced business analytics, as I am, the last year has been an interesting one. A lot is happening at every level of the tech stack, from raw infrastructure to cloud platforms to functional applications.

As Hadoop has really caught on and is now a building block for even conservative corporations, several of its weaknesses are also beginning to be tackled. From my point of view, the most severe has been the terrible processing latency of the batch- and filesystem-oriented MapReduce approach, as opposed to solutions designed around streaming data. That's now being addressed by several projects. Storm provides a framework for processing incoming data streams, Impala makes querying stored data more processing-efficient, and finally, Parquet is coming together to make the storage itself more space- and I/O-efficient. With these in place, Hadoop will move from its original strength in unstructured data processing to a compelling solution for dealing with massive amounts of mostly-structured events.
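
To make the storage point concrete, here is a minimal sketch of writing a batch of structured events into a columnar Parquet file. It uses the pyarrow library, which postdates this post and isn't mentioned in it; the schema, values and file names are purely illustrative.

```python
import pyarrow as pa
import pyarrow.parquet as pq

# A small batch of structured events, laid out column by column.
events = pa.Table.from_pydict({
    "ts":      [1367193600, 1367193605, 1367193611],   # epoch seconds
    "user_id": ["u-1001", "u-1002", "u-1001"],
    "event":   ["login", "purchase", "logout"],
    "value":   [0.0, 4.99, 0.0],
})

# Columnar, compressed storage is what makes large scans space- and I/O-efficient.
pq.write_table(events, "events.parquet", compression="snappy")

# A query engine (or this reader) can then touch only the columns it needs.
subset = pq.read_table("events.parquet", columns=["user_id", "value"])
print(subset.to_pydict())
```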

Those technologies are a bear to integrate and, in their normal mode, require investment in hardware. If you'd prefer a more flexible start to building a solution, Amazon Web Services has introduced a lot of interesting stuff, too. Not only have the prices for compute and storage dropped, but they now offer I/O capacity comparable to dedicated, FusionIO-equipped database servers, very cost-efficient long-term raw data storage (Glacier), and a compelling data warehouse/analytics database in the shape of Redshift. The latter is a very interesting addition to Amazon's existing database-as-a-service offerings (SimpleDB, DynamoDB and RDS) and, as far as I've noticed, gives it a unique capability other cloud infrastructure providers are today unable to match, although Google's BigQuery comes close.
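
As a rough illustration of the Redshift workflow, the sketch below bulk-loads event data from S3 into a Redshift table over its PostgreSQL-compatible interface and runs a simple aggregate. The cluster endpoint, table, bucket and credentials are placeholders, and psycopg2 is just one possible client; none of this comes from the post itself.

```python
import psycopg2

# Redshift speaks the PostgreSQL wire protocol, so a standard Postgres driver works.
conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",  # placeholder endpoint
    port=5439, dbname="analytics", user="admin", password="<PASSWORD>",
)

copy_sql = """
    COPY events (ts, user_id, event, value)
    FROM 's3://example-bucket/events/2013/04/29/'
    CREDENTIALS 'aws_access_key_id=<KEY>;aws_secret_access_key=<SECRET>'
    GZIP DELIMITER '\\t';
"""

with conn, conn.cursor() as cur:
    cur.execute(copy_sql)  # load straight from S3, in parallel across the cluster
    cur.execute("SELECT event, COUNT(*) FROM events GROUP BY event;")
    print(cur.fetchall())
```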

The next piece in the puzzle must be analytical applications delivered as a service. It's clear that the modern analytics pipeline is powered by event data. Whether it's web clickstreams (Google Analytics, Omniture, KISSMetrics or others), mobile applications (such as Flurry, MixPanel or Kontagent) or internal business data, it's significantly simpler to produce a stream of user, business and service events from the operational stack than it is to try to retrofit business metrics on top of an operational database. The '90s-style OLTP-to-OLAP Extract-Transform-Load approach must die!
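
To show what producing such a stream from the operational stack can look like, here is a minimal sketch that emits structured JSON events. In a real system the sink would be a collector endpoint or a message queue rather than a local file, and the event and field names are invented for the example.

```python
import json
import time
import uuid

def emit_event(sink, event_type, user_id, properties):
    """Serialize one business/service event as a JSON line and append it to the sink."""
    event = {
        "event_id": str(uuid.uuid4()),
        "ts": time.time(),
        "type": event_type,
        "user_id": user_id,
        "properties": properties,
    }
    sink.write(json.dumps(event) + "\n")

# Emitted at the point where the operational code already knows what happened,
# instead of reconstructing it later from OLTP tables.
with open("events.log", "a") as sink:
    emit_event(sink, "purchase_completed", "user-123", {"sku": "gold-pack", "amount": 4.99})
    emit_event(sink, "support_ticket_opened", "user-123", {"channel": "email"})
```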

However, the services I mentioned above, while excellent in their own niches, cannot produce a 360-degree view across the entire business. If they deliver dashboards, customer-level insight is impossible. Even if they can report on customers, they don't integrate with support systems. They leave holes in the offering that businesses have to plug with ad-hoc tools. That's understandable, as they're built on technologies that force nasty compromises, but those holes are still unacceptable for a demanding digital business of today. And as the world turns increasingly digital, what's demanding today is going to be run-of-the-mill tomorrow.

Fortunately, the infrastructure is now available. I'm excited to see the solutions that will arrive to make use of the new capabilities.

Monday 31 January 2011

Did common identities die with OpenID? No

About a year ago I posted a summary here of trends I expected would be relevant to our product development over 2010, and looking back at it, perhaps I should have put tablet computing on that list. However, what prompted me to go back and look at it today was picking up on the news that 37signals has declared OpenID a failed experiment, and the related Quora thread I found. Wow, the top-voted answer there is one-sided. Here's what I think about it, as an update to my statement from a year ago. Comments would be welcome!

Facebook has established itself as a de-facto source of identity and social graph data for all but a few professional/enterprise-targeted Internet services. Over the medium to long term, it is still possible that another service, or a federation of multiple services using standard APIs, will displace Facebook as the central source. However, a networked, "external" social graph is a given. The majority of users still behave as if stand-alone services with individual logins and user-to-user relationships were preferred, but that's a matter of behavioral momentum.

This has not removed the problem of identity-related security issues, like identity theft. The nature of the problem will shift over time from account theft to impersonation and to large-scale and/or targeted information theft. Consumers remain uninterested in, and even hostile to, improving security (at the cost of sometimes reduced convenience). Visible and widespread security scares are beginning to change that mindset, though, and it's possible that even by the end of the year at least one of the big players will introduce a "secure id" solution for voluntary use as a further argument for their services.

The spread of the social graph will have a greater impact on the scope of Internet services, however. Application development today should take it for granted that information about the users' preferences, friends, brand connections and activity history will be available, and should utilize it (wisely) to improve the service experience. The key to viral/social distribution is not whether applications can reach their users' networks (which is a given), but what motivates the user to spread the message.
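
As a hedged illustration of building on that external social graph, the sketch below pulls a user's profile and friend list from the Facebook Graph API, the dominant source at the time of writing. The access token, the permissions behind it and all error handling are assumptions made for the example, not something described in the post.

```python
import requests

ACCESS_TOKEN = "<user access token obtained via the OAuth dialog>"  # placeholder

# Basic profile and friend list from the Graph API.
me = requests.get("https://graph.facebook.com/me",
                  params={"access_token": ACCESS_TOKEN}).json()
friends = requests.get("https://graph.facebook.com/me/friends",
                       params={"access_token": ACCESS_TOKEN}).json()

# The application can now personalize its experience around who the user is
# and who they are connected to.
print(me["name"], "has", len(friends["data"]), "friends visible to this app")
```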

Thursday 14 January 2010

Technology factors to watch during 2010

Last week I posted a brief review of 2009 here, but didn't go much into predictions for 2010. I won't try to predict anything detailed now either, but here's a few things I think will be interesting to monitor over the year. And no, tablet computing isn't on the list. For fairly obvious reasons, this is focused on areas impacting social games. As a further assist, I've underlined the parts most resembling conclusions or predictions.

 

Social networks and virtual worlds interoperability

As more and more businesses transform to use the Internet as a core function, their customers are faced with a proliferation of proprietary identification mechanisms that has already gotten out of hand. It is not uncommon today to have to manage 20-30 different userid/password pairs in regular use, from banks to e-commerce to social networks. At the same time, identity theft is a growing problem, no doubt in large part because of these minimum-security methods of identification.

Social networks today are a significant contributor to this problem. Each collects and presents information about its users that contributes to the rise of identity theft, while maintaining its own authorization mechanism in a silo of low-trust identification methods. The users, on the other hand, perceive little incentive to manage their passwords in a secure fashion. Account hijacking and impersonation are a major problem area for every vendor. The low trust level of individual account data also leads to a low relative value of owning a large user database.

A technology solution, OpenID, is emerging and taking hold in the form of an industry-accepted standard for exchanging identity data between an ID provider and a vendor in need of a verified ID for its customer. However, changing the practices of the largest businesses has barely begun and no consumer shift can yet be seen – as is typical for such “undercurrent” trends.

OpenID will allow consumers to use fewer, higher-security IDs across the universe of their preferred services. That in turn will give these services a new level of transparent interoperability: data from each other can be combined into near-automatic, personalized mash-ups via the APIs each vendor can expose to trusted users, with less fear of opening holes for account hijacking.
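
For a sense of what the relying-party side of that exchange looks like, here is a rough sketch using the python-openid library. The URLs are placeholders and the session and store handling are simplified assumptions; a real deployment would wire both steps into its web framework's request cycle.

```python
from openid.consumer import consumer
from openid.store.memstore import MemoryStore

store = MemoryStore()   # production code would use a persistent store
session = {}            # normally the web framework's session object
oc = consumer.Consumer(session, store)

# Step 1: the user submits their OpenID URL; discover the provider and
# build the redirect that sends the user there to authenticate.
auth_request = oc.begin("https://alice.example-provider.com/")
redirect_url = auth_request.redirectURL(
    realm="https://myservice.example/",
    return_to="https://myservice.example/openid/return",
)
# ...redirect the user's browser to redirect_url...

# Step 2: when the provider sends the user back, verify the signed response.
def handle_return(query_args):
    response = oc.complete(query_args, "https://myservice.example/openid/return")
    if response.status == consumer.SUCCESS:
        return response.getDisplayIdentifier()  # the verified identity URL
    return None
```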

 

Browsers vs desktops: what's the target for entertainment software?

Here's a rough sketch of competing technology streams in terms of two primary factors: ease of access versus the rich experience of high-performance software. “Browser wars” are starting again, and with the improved engines behind Safari 4, Firefox 4, IE 8 and Google Chrome, a lot of the functionality we're used to thinking belongs to native software, or at best to browser plugins like Flash, Java or Silverlight, will be available straight in the browser. This certainly includes high-performance application code, rich 2D vector and pixel graphics, video streams and access to new information such as location sensing. The plugins will most likely remain stronger at 3D graphics, synchronized audio and advanced input mechanisms like using webcams for gesture-based control. Invariably, the new input capabilities in particular will bring with them new security and privacy concerns which will not be fully resolved within the next 2-3 years.

While 3D as a technology will be available to browser-based applications, this doesn't mean the web will turn to representing everything as a virtual copy of the physical world. Instead, its best use will be as a tool for accelerating and enhancing other UI and presentation concepts – think iTunes CoverFlow. For social interaction experiences, a 3-degrees-of-freedom pure 3D representation will remain a confusing solution, and other presentations, such as axonometric “camera in the corner” concepts, will remain more accessible. Naturally, they can (but don't necessarily need to) be rendered using 3D tech.

 

Increased computing capabilities will change economies of scale

The history of the “computer revolution” has been about automation changing economies of scale to enable entirely new types of business. Lately we've seen this in, for example, Google AdWords enabling small businesses to advertise and/or publish ads without marketing departments or the involvement of agencies.

The same trend is continuing: computing capacity is becoming a utility through cloud computing, extreme amounts of storage are available at costs which put terabytes within the reach of organizations of almost any size and budget, and, most importantly, data mining, search and discovery algorithms are being developed that let organizations turn data which used to be impossible to analyze into automated business practices. Unfortunately, the same capabilities are available to criminals as well.

Areas in which this is happening as we speak:

  • further types and spread of self-service advertising, better targeting, availability of media
  • automated heuristics-based detection of risky customers, automated moderation (a toy sketch of such a heuristic follows this list)
  • computer-vision based user interfaces which require nothing more than a webcam
  • ever increasing size of botnets, and the use of them for game exploits, money laundering, identity theft and surveillance
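
A toy sketch of the heuristics-based risk detection mentioned in the second bullet above: the signals and thresholds are invented for illustration, and a real system would tune or learn them against actual abuse data.

```python
def risk_score(customer):
    """Sum a few hand-tuned risk signals; every threshold here is illustrative."""
    score = 0
    if customer.get("chargebacks", 0) > 0:
        score += 50   # prior payment disputes
    if customer.get("account_age_days", 0) < 7:
        score += 20   # very new accounts carry more risk
    if customer.get("logins_per_hour", 0) > 30:
        score += 30   # possible bot or shared/compromised account
    return score

customers = [
    {"id": "u-1", "chargebacks": 1, "account_age_days": 3, "logins_per_hour": 2},
    {"id": "u-2", "chargebacks": 0, "account_age_days": 400, "logins_per_hour": 1},
]

# Flag high-scoring customers for manual review or automated moderation.
flagged = [c["id"] for c in customers if risk_score(c) >= 50]
print(flagged)
```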

The escalation of large-scale threats has raised the need for industry-wide groups that exchange information and best practices between organizations on security-relevant topics such as new threats, customer risk ratings, and the identification of targeted and organized crime.

 

Software development, efficiencies, bottlenecks, resources

Commercial software development tools and methods experience a significant shift roughly once every decade. The last such shift was the mainstreaming of RAD/IDE-based, virtual-machine-oriented tools and the rise of the Web and open source in the '90s; now those two themes are increasingly mainstream, while “convergent”, cross-platform applications which depend on the availability of always-on Internet are emerging. As before, the shift is driven not by technological possibility, but by the richness and availability of high-quality development tools with which more than just the “rocket-scientist” superstars can create new applications.

The skills that are going to be in short supply are those for designing applications which can smoothly interface with the rest of the cloud of applications in this emerging category. Web-accessible APIs, the security design of those APIs, efficient utilization of services from non-associated, even competing companies, and friction-free interfaces for the end users of these web-native applications are the challenge.

In this world, the traditional IT outsourcing houses won't be able to serve as a safety valve for resources, as they're necessarily still focused on serving the last and current mainstream. In their place, we must consider the availability of open source solutions not just as a method for reducing licensing costs, but as the “extra developer” used to reduce time-to-market. And as with any such relationship, it must be nurtured. In the case of open source, that requires participation and contribution back to the further development of that enabling infrastructure as the cost of outsourcing the majority of the work to the community.


Mobile internet

With the launch of the iPhone, the use of Web content and third-party applications on mobile devices has multiplied compared to previous smartphone generations. This is due to two factors: the familiarity and productivity of Apple's developer tools for the iPhone, and the straightforward App Store for end users. Moreover, the broad base of applications owes primarily to the former, as proven by the wide availability of unauthorized applications even before the launch of iPhone 2.0 and the App Store. Nokia's failure to create such an applications market, despite the functionality available on S60 phones for years before the iPhone launch, proves this: it was not the features of the device, but the development tools and the application distribution platform that were the primary factors.

The launch of Google's Android will further accelerate this development. Current Android-based devices lack the polish of the iPhone and the stability gained from Nokia's years of device experience, yet the availability of development tools will supercharge this market, and the next couple of years will see an accelerated development and polish cycle from all parties. At the moment, though, it's impossible to call the winner of this race.