Fishpool


Tag - conference


Monday 23 May 2011

Nordic Game followup

A week ago Thursday, I gave a presentation in Malmö on the second day of the Nordic Game Conference, covering a couple of related topics; slides below. I spoke about the lack of truly social interaction in this generation's "social games", and reflected on what a social game where players actually play together looks like. As you might guess, Habbo has been a social playground for a long time: 11 years, in fact. The slides themselves are, typically for me, a bit difficult to follow on their own, since they're mostly just pictures. You should've been there :)

True Social Games - NG11 - Slides

Tuesday 11 May 2010

LOGIN presentation on Habbo's Flash transition and player-to-player market

Had my presentation as one of the first sessions of this year's LOGIN conference. Darius Kazemi liveblogged the speech at his blog, and the slides are here. Best viewed together.

Monday 26 April 2010

A new lean software manifesto

This weekend saw Eric Ries's Lean Startup movement produce a conference on the approach. People who were there have already summarized and documented the proceedings in quite some detail. One of the interesting take-aways seems to have been Kent Beck's proposal for the evolution of the Agile Manifesto into something more applicable to the startup context of continuous learning and adaptation. Apparently, it has created quite a bit of discussion, but apart from the video recording, I haven't seen it stated in full anywhere. So, it goes something like this (original waterfall comparison parenthesized):

As practitioners of software development to support lean business, we have come to realize that the unknowns of the business context are more critical to the success of the enterprise than the attributes of the software we create. As we learn this, we have come to value:

Team vision and discipline over individuals and interactions (or processes and tools)
Validated learning over working software (or comprehensive documentation)
Customer discovery over customer collaboration (or contract negotiation)
Initiating change over responding to change (or following a plan)

That is, while there is value in the items on the right, we value the items on the left more.

I hope I did not butcher some subtlety when extracting those words out of the keynote speech. Now, for my own view: there's plenty in the above statements that resonates with me, but some bits I find myself somewhat uneasy about. And no, it's not over the second point, which apparently has ruffled the feathers of quite a few software engineers (I'll let Steve Freeman explain that one).

The biggest issue I have is with the third statement, preferring customer discovery to customer collaboration. Not because that's not a great thing in some situations, but because it limits the applicability of this model to a tiny cross section of where the lean principles truly apply. Namely, it works great for a garage startup that doesn't yet know what its market really is. It doesn't work all that well for a business which already has customers, revenue, and even profit - yet such a business is still well served by maintaining a lean approach. Now, one may argue that a growth business will always need to continue to discover new customers, either similar to those it already has or in entirely new segments, and I will not disagree. Still, there comes a point where greater success comes from collaborating with your customers than from looking for new ones.

The second issue I have is with the first statement, preferring team vision and discipline over individuals and interactions. Again, not because I disagree, but because I know there are many people who will interpret the word "discipline" as "let's set up processes, plans and approval mechanisms", and turn the whole thing back into waterfall. Successful application of the agile principles has never been as easy as the books and educators make it sound, and the subtlety of the differences between the values of the first statement is, I think, the primary reason why.

Tuesday 28 April 2009

The MySQL community outlook

While I cannot consider myself a member of MySQL's community of developers, I've been watching those developments the same way I follow the development of Linux and many of the Java and Apache projects our own services depend on. It was great to meet many of the core members of the development community and get some insight into their thoughts about the future.

In his Percona Performance Conference keynote on Thursday, Baron Schwartz called for a new, active MySQL community to take the driver's seat in the development of the database, not just through incremental bug fixing and performance improvement, but also by setting a vision for the next-generation MySQL. It's a greatly needed call to action, and an important one despite the active existence of the Drizzle project. This is because while Drizzle already has a vision for the future, it's a radical departure for the MySQL userbase, and one which will not necessarily have a smooth upgrade path. Many of the MySQL users feeling most of the pain of MySQL's current limitations are also those who will not be able to easily upgrade to a radically different architecture, due to the amount of data and dependencies in their existing infrastructure.

It's a gap which needs a careful approach of incremental changes to MySQL's base functionality, helping users bridge over to a new, brighter future. These changes do not need to be slow. Rapid incremental changes are likely to be easier to digest when there is a clear upgrade and downgrade path from iteration to iteration, leaving the organizations with the biggest infrastructures free to set their own pace through the transition rather than being forced to take one huge leap and risk crashing into the concrete wall of unexpected incompatibility.

A few such incremental community improvements I learned a great deal about during the week were the performance and scalability improvements by Google and Percona and their MySQL 5.4 equivalents; the Xtrabackup utility, not only an alternative to but an improvement on the Innobackup tool, which has significant limitations in large-scale deployments; and the Tungsten Replicator, which provides useful cross-database replication and rapid failover features that help with upgrades and transitions to new database installations while minimizing downtime and impact to users. I'm also curious about the storage engine development by Primebase - I don't think there's ultimately a lot of room for multiple transactional storage engines, but as a competitive research topic, it's certainly good to see alternatives to InnoDB.

[Be sure to check out my earlier posts on the conference learnings as well!]

Monday 27 April 2009

Database innovation on MySQL

If MySQL's core server development and release process has been something of a frustration to the userbase over the past few years, clearly another part of the ecosystem has thrived in ways which brought exciting fruit to the Expo part of this year's conference. MySQL has become a hub of innovation in both transactional and analytics databases, in ways which have turned many of my concerns into enthusiasm.

I've already discussed the technologies for data analytics on MySQL, in particular Infobright's storage engine technology. This year I took the opportunity to learn a bit more about their appliance-based competitor Kickfire as well, and it certainly looks like a solid product. I still don't completely understand what the "SQL chip" in their appliance does, but certainly the combination of special-purpose columnar storage, a high-speed memory interface and high-performance indexing should form the basis of a great analytics system. How it compares in practice to Infobright's software-only approach, time will tell. I'd be interested in real-world experiences, so if you have some to share, please get in touch. Finally, I missed the Calpont info myself, but once it is released, I'll try to find the time to try it out.

I'm even more excited about the new solutions on the transactional side of things. I've certainly been among the people frustrated by MySQL/InnoDB's scaling issues on modern hardware, and I'm glad to see that the optimization work done by Google, Innobase and Percona is being accepted into the "mainline" MySQL Enterprise Server. However, what I did not expect to see were the solutions shown by Virident and Schooner for accelerated, Flash-based storage appliances. It's interesting how both of these companies have chosen to apply their platforms to accelerate both InnoDB and Memcached, and I'm looking forward to the chance to spend more time with both solutions. While both are Flash-based approaches, they seem to have made very different architectural choices in the way they expose the memory to the software layer, and I'm curious to see the impact those choices have on both I/O and storage capacity scaling. In any event, these are unique technologies unlike anything I've seen for other platforms at this time. I need to learn how they plan to work with the community and Sun/Oracle in keeping their solutions functionally compatible with the standard MySQL server.

The ecosystem doesn't end at the appliances, though. On the software side of things, I was pleasantly surprised by the state of Primebase's PBXT storage engine as well as Continuent's new Tungsten Replicator. While both are still early in their development path, they seem to hold a lot of promise for improving the performance of MySQL's built-in functionality, in InnoDB as well as in the replication subsystem. Robert Hodges's demo of Tungsten's set-up and management also looked like it will greatly simplify replication administration, which is a big deal for anyone who has to manage 20+ replicated database systems. What's more, if Robert and his team crack the multi-threaded replication problem, a major scalability concern is lifted.

[Be sure to check out my earlier posts on the conference learnings as well!]

Sunday 26 April 2009

MySQL 2009-2010 roadmap

The development model for MySQL Enterprise took a big step forward with the new community process Karen Padir announced in her Tuesday keynote. This is great for both the open source server and enterprise customers, because the closer the tie between the community and the development path, the better the quality and the faster the progress towards new functionality. I'm not entirely sure everyone at Sun yet completely understands why a working community process is a benefit for the enterprise customer base, but I'm happy steps are being made in the right direction, and it seems to me that Karen Padir is going to be a good leader for the product.

A big improvement, for sure, and still there's more to improve here. To borrow the words of Baron Schwartz, MySQL currently "has" a community, while it would really be to everyone's benefit if MySQL would instead "be" a community. I would suggest that the goal should not be monthly "community" releases from Sun, but a completely out-in-the-open development process with the community members in the driving seat regarding patch acceptance, quality management and releases, much like the Fedora process works. Sure, there's a role for corporate sponsorship and project management, but it's a distinct difference of responsibility. The Drizzle project is another good example of how this can work. An important point to realize here is that there is a difference between the community, an active partner in the process of making the software better, and the unpaid userbase. The latter is an acquisition and conversion vehicle for the former, but they're separate entities.

The announcement of the 5.4 server was at once an encouraging and a confusing example of the changes. I would like to be enthusiastic about it, but we've seen MySQL (if not Sun) pre-announce releases that never appeared, and it's a long way to the promised release date. I asked two questions of many, many MySQL staff members during the week: why was 5.4 announced now but slated to go GA only in December, when it clearly demonstrates massive scalability improvements already, and why is the feature list for the final 5.4 release much longer than what's already completed? I did not get a really coherent answer from anyone. The best I could decipher is that there is somewhere a faceless "marketing" which decided that a) there should only be one release announced and b) a 40% demonstrated improvement is not good enough when it's not the only improvement that can be made. I also learned that it's not unlikely that much of the work which has gone into 5.4.0-beta will be backported to the 5.1 branch and released in a 5.1 point release before the actual 5.4 release, because in fact those changes can be considered bugfixes.

I don't consider myself entirely inexperienced in the decision processes of release management, and I know intimately the clarity hindsight brings to well-intentioned choices made with the best available information. I know there are many areas to consider, and every decision made is a compromise. I still can't bring myself to completely understand what exactly led to this particular approach. Let's recap:

  • Improvements already made are announced and made available in beta form, but the beta does not contain everything planned for the release
  • The final release is intentionally delayed by seven months, adding significant project risk, despite there being no previously committed release schedule
  • The former release version is planned to be improved with significant performance-altering changes in a point release in order to offset the delay
  • Such a release adds risk to the maintenance roadmap and steals upgrade motivation away from the upcoming version

How this plan serves either Sun, the community, the free userbase or the enterprise customers is a mystery to me. It would certainly seem far simpler and clearer to take an aggressive quality assurance and release testing position with the intent to push 5.4 out as a rock-solid replacement upgrade to 5.1 as soon as possible, and only then continue with further updates as a 5.5 release. This would definitely be welcomed by everyone but the class of enterprise customers who like to hear about future versions two years in advance - but keep in mind that such conservative enterprises are not MySQL's primary customer base anyway, and if MySQL is to make inroads there, rapidly improving the quality and performance of the product in the meantime would still be a sensible step.

There is the argument that if I want those performance features now, I can use Percona/XtraDB or MySQL 5.1 plus the InnoDB Plugin. While technically that route does work, and it is clearly worth pursuing as a user, it has its drawbacks in requiring multiple sources, and it's hard to see how it supports MySQL/Sun's commercial interests - the latter surely having been a consideration in the 5.4 release plans.

Thus far in the argument I have ignored one new component - Oracle. That's because, to my understanding, the process I've discussed did not take the acquisition into account, as it was unknown to most people before Monday. Clearly this changes a few points. It's not necessarily in Oracle's interests for MySQL to continue making inroads with enterprise customers, though if someone's going to be cannibalizing Oracle's database sales, it might as well be Oracle. The InnoDB Plugin will also soon be a product from the same company as the MySQL Server - in fact, likely before the final GA release of MySQL 5.4. What is the role of a delayed 5.4 release in this equation, then?

Recap of MySQL Conference 2009

This was an interesting week for sure. Of course, we all know it started with a bit of shocking news, but that's not nearly the most interesting thing about the conference. I'm posting a series of cleaned-up notes and opinions about what I saw there as I finish them. I will also try to link to further information where I've seen good notes. Please leave more links in the comments if you have any!

Thursday 23 April 2009

Three domains of data

My MySQL Conference presentation on Tuesday discussed my practical findings on how Infobright's technology works in developing a MySQL-based data warehouse. I also touched on the more high-level question of how to select a technology for different kinds of data-related problem areas, and this article expands on that discussion.

Continue reading...

Wednesday 22 April 2009

Mining for insight - presentation materials

Completed my MySQL Conference presentation 45 minutes ago. Seemed to go over ok, got some followup questions. Trouble is, I got hit by amazing jetlag half an hour before the session, and almost fell asleep myself during the presentation. Fortunately, survived that anyway, and as far as I could see, was the only one having problems staying awake. Below is an embedded version of the slides, which should also appear on the conference proceedings site later. Now for a beer at the expo. Will blog with more description of the stuff later (update: see this follow-up article).

Read this doc on Scribd: Mining for insight

Wednesday 8 April 2009

Using the Infobright Community Edition for event log storage

Apart from the primary "here's how we ended up using Infobright for data warehousing and how that is working out" topic I'm going to discuss in my MySQL Conf presentation, I'll touch on another application: the use of Infobright's open-source Community Edition server for the collection and storage of event logs. This is a system we've implemented over the past couple of months to solve a number of data management issues that were gradually becoming a burden on our infrastructure.

We've traditionally stored structured event log data in databases for ease of management. Since Habbo uses MySQL for almost everything else, putting the log tables in the same databases was pretty natural. However, there are significant problems with this approach:

  • MyISAM tables suffer from concurrency issues and crash-recovery is very slow due to table consistency check
  • InnoDB tables suffer from I/O bottlenecks and crash-recovery is very slow due to rollback segment processing
  • Both scale badly to hundreds of millions of rows (especially if indexing is required), and mixing them is not a recommended practice
  • Storage becomes an issue over time, especially as indexes can easily require many times as much disk as the data, and an event log is going to have a LOT of rows
  • Partitioning has only recently become available, and before that, managing "archive" tables needed manual effort
  • Perhaps worst of all (as it's very hard to measure), if any of this is happening on the primary DB servers, it's competing for buffer pool memory with transactional tables, thus slowing down everything due to cache misses

Over the years, we've tackled these issues in many ways. However, with our initial experience of scaling an Infobright installation for data warehousing needs, a pretty simple solution became apparent, and we rapidly implemented an asynchronous, buffered mechanism to stream data into an ICE database. We're early with this implementation, but it has turned out to be a satisfactory high-performance solution. Even better, it's a very simple thing to implement, even in a clustered service spanning many hosts, as long as the log tables don't need to be guaranteed 100% complete or up-to-date to the last second. Here's a description of the simple solution; extending it to the complex solution providing those guarantees is left as an exercise for the reader.

Rather than running single INSERTs to a log table or writing lines to a text file log, each server buffers a small set of events, e.g. the past second's worth, in a memory buffer. These are then sent over a message bus or a lightweight RPC call to a log server, which writes them to a log file that is closed and switched to a new file after every megarow or every few minutes, whichever comes first. A second process running on the log server wakes up periodically and loads each of these files (minus the last one, which is still being written to) into the database with LOAD DATA INFILE.
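
To make the mechanics a bit more concrete, here is a minimal sketch of the two halves of the log-server side of that pipeline: a writer that appends batched events to rotating spool files, and a loader that periodically pushes every closed file into the database with LOAD DATA INFILE. This is not our production code; the table name (event_log), the spool directory, the credentials and the rotation limits are all illustrative assumptions, and Python merely stands in for whatever the real services are written in.

```python
# Sketch only: spool-file writer and periodic LOAD DATA INFILE loader.
# Table name, paths, credentials and limits are hypothetical.
import csv
import glob
import os
import time

import MySQLdb  # any MySQL client library would do

SPOOL_DIR = "/var/spool/eventlog"   # hypothetical spool directory on the log server
ROTATE_ROWS = 1_000_000             # close the file after "every megarow"...
ROTATE_SECONDS = 300                # ...or after a few minutes, whichever comes first


class SpoolWriter:
    """Receives event batches (e.g. off a message bus) and appends them to a
    tab-separated spool file, rotating to a new file by row count or age."""

    def __init__(self):
        self.fh = None
        self._rotate()

    def _rotate(self):
        if self.fh:
            self.fh.close()
        path = os.path.join(SPOOL_DIR, "events-%d.tsv" % int(time.time() * 1000))
        self.fh = open(path, "w", newline="")
        self.writer = csv.writer(self.fh, delimiter="\t")
        self.rows = 0
        self.opened = time.time()

    def write_batch(self, events):
        # events: an iterable of tuples matching the log table's column order
        for row in events:
            self.writer.writerow(row)
            self.rows += 1
        self.fh.flush()
        if self.rows >= ROTATE_ROWS or time.time() - self.opened >= ROTATE_SECONDS:
            self._rotate()


def load_closed_files():
    """Runs periodically as a separate process: loads every spool file except
    the newest one (still being written to) into the database, then removes it."""
    conn = MySQLdb.connect(host="localhost", user="loader", passwd="secret", db="events")
    cur = conn.cursor()
    files = sorted(glob.glob(os.path.join(SPOOL_DIR, "events-*.tsv")))
    for path in files[:-1]:  # skip the last, still-open file
        cur.execute(
            "LOAD DATA INFILE %s INTO TABLE event_log "
            "FIELDS TERMINATED BY '\\t' LINES TERMINATED BY '\\n'",
            (path,),
        )
        conn.commit()
        os.remove(path)
    conn.close()
```

Since the loader runs on the same host as the database, a plain server-side LOAD DATA INFILE works without any network copy of the spool files.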

This has multiple general benefits:

  • Buffered messaging requires much less time on the "client" servers compared to managing database connections and executing thousands of small transactions
  • The asynchronous processing ensures database contention cannot produce random delays in normal service operation
  • Batch loading of text files is implemented by every DB server, so there's little in this implementation that is proprietary or dependent on any particular DB solution

Using the Infobright ICE as the backend database provides a number of additional specific benefits:

  • Excellent data load performance
  • No index management, yet capability to run queries on the data without first extracting it to another DB
  • No degradation of performance as deployment size grows, as would happen even to a MyISAM table should it have any indexes
  • Compressed storage, so less spinning iron required
  • Columnar datapack organization should not require table partitioning even over long periods

This works very well for structured events. For unstructured data, a different solution is required, which I will discuss at some later date.

Update: Mark Callaghan asked in the comments for some quantified details. We have not spent the time to produce repeatable benchmarks, so all I can offer on that front is anecdotal data - it's very conclusive for us, given it addresses our real concerns, but less so for others. That said, ICE does not support inserts, only batch loads, so the solution had to be engineered to use those, which added some complexity but brought orders of magnitude more performance. A simple benchmark run showed that the end-to-end performance exceeded 100,000 events per second when running all parts of the client-logserver-database chain on a single desktop machine.

Query performance depends on the queries made. Summary data is 2-3 orders of magnitude faster to access, and the bigger the dataset, the bigger the performance benefit - but expecting the same for single-row accesses would lead to bad disappointment. Storage compression varies wildly depending on the data in question -- we've seen up to 15:1 compression on some real-world data sets, but others (such as email addresses stored in a varchar column) actually expand in storage. This is why I think of this as a solution for structured, quantified event logs, not for general unstructured log file storage.
