Wednesday, December 3, 2008

Simple Designs are Hard

Reading Peter's last couple of posts got me thinking about a great TED presentation by former Broadway pianist and NY Times tech columnist David Pogue. A couple of years ago he spoke at TED about the design of technology and the importance of simplicity in design.

A lot of what we do as BI developers is design ways for people to mess around with, and learn from, information. One of the key principles behind doing this well is to ensure simplicity. A lot of what Tufte and other data visualisation experts talk about can be seen to derive from this principle, and as Peter said, despite the experimental and anecdotal evidence to support it, it's something the vendors often don't do well. One of the reasons for this is that it's just plain hard, and probably something that most software engineers are not very good at doing. Simple, elegant and intuitive interfaces for BI apps are not just aesthetically pleasing; they lead to better understanding on the part of decision makers, and to creative uses of the tools that can produce unexpected insights - which sounds awfully like the vendors' own jargon. I wish they'd listen to people like David Pogue a bit more.

An unexpected finding ...

It's a good time of year to be an academic. Nearly all the marking and teaching-related administration is done for the year, and though next year is approaching fast, we do have some time to fully devote our attention to research. We have a lot of projects that are finishing up, which means it's time for us to get out and start collecting data for the next round of case studies and investigations. We are also doing some tidying up of our infrastructure. We have had to move out of a room we had devoted to project-related activities, but that has given us a chance to throw some stuff out and generally get our "house" in order. For example, we have been updating and sorting out our files hosted on various servers. None of that has any direct impact on this blog, except that we have run out of excuses not to extend our blog-related activities a little. Shortly, we'll have a podcast featuring presentations and interviews by and with staff from the Centre. Another thing we will start to do is talk a bit more here on the blog about our published research. So let's start that right now ...

Here is a link to a paper we published earlier this year.

A note on an experimental study of DSS and forecasting exponential growth

(The file is hosted on ScienceDirect and they own the copyright, so sorry if you can't access it. If you are on the Monash network you'll be able to view it, or if you have a Monash Authcate, try using the VPN. If you are at another university you'll probably have a subscription.)

I know that sounds a bit technical, and that it's not of interest if you aren't into forecasting or all that worried about exponential growth, but actually it's interesting beyond those areas. The paper presents an experiment we conducted where we asked subjects to forecast growth in iPod sales - which have been exponential. We conducted a similar study years ago using made-up data, but we thought it would be better to use a real exponential data series, so we re-did the study this time using iPod sales as the series to forecast. Now, it turns out humans are poor at forecasting exponential growth - there is a cognitive bias at work related to the anchoring and adjustment heuristic - which means we just don't pick up on the exponential nature of a data series, forecast growth as a straight line, and as a result underestimate the growth of exponential data.
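
To make that concrete, here is a rough sketch in Python (using a made-up exponential series, not the iPod data from the paper) of what a straight-line forecast does to exponential data - the linear extrapolation falls further and further below the actual values:

import numpy as np

quarters = np.arange(8)                     # eight quarters of history
sales = 100 * 1.5 ** quarters               # hypothetical exponential series

# naive straight-line fit to the history, as an unaided forecaster tends to do
slope, intercept = np.polyfit(quarters, sales, 1)

for q in (8, 9):                            # forecast two quarters ahead
    linear_forecast = slope * q + intercept
    actual = 100 * 1.5 ** q
    print(f"quarter {q}: straight-line forecast {linear_forecast:.0f}, actual {actual:.0f}")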

In our experiment, we gave the subjects some historical quarterly data and asked them to forecast two quarters out (we knew the "actuals" for the periods we were asking them to forecast).

The idea we designed the experiment to test is a simple one. If you take the log of exponential data, you get a straight line. Humans are good at straight-line forecasting, so we reasoned that if you forecast based on the log of the data, you'll get a better forecast than if you just have the data in its 'raw' state. The conversion to log data and back is something a computer system - a DSS - can do nicely, so that's the basic shape of the experiment. All the detail is in the paper - as you'd expect there is a control group using a paper-based version of the data, but we built a nice little tool to perform the forecast. You can click on a chart to make a forecast and it shows you the number, or type in a number and it shows where that number is on the chart. One version of the tool had just the raw data; the other showed both the raw data and the log data.
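
For anyone who wants the log trick spelled out, here is a minimal sketch (again in Python, with a hypothetical series - not what our tool actually did): take logs, fit and extrapolate a straight line, then convert back with exp.

import numpy as np

quarters = np.arange(8)
sales = 100 * 1.5 ** quarters               # hypothetical exponential series

log_sales = np.log(sales)                   # in log space the series is a straight line
slope, intercept = np.polyfit(quarters, log_sales, 1)

for q in (8, 9):                            # forecast two quarters ahead
    forecast = np.exp(slope * q + intercept)    # back-transform the straight-line forecast
    print(f"quarter {q}: log-linear forecast {forecast:.0f}")

With a genuinely exponential series the log-space line is exact, which is why giving the forecaster the log view seemed like it should help.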

So, to the results ... the computer-supported forecasts were better. Phew - often in these types of studies the DSS is of no help; in our case it was. However, the simpler version of the system did better than the version that had the log data - the opposite of what we expected. Our explanation is that the simple version encouraged experimentation, letting the users think a bit more about their forecast - exactly what you want a DSS to do. The more complex system with the log data, rather than helping, intimidated the users and stopped them from experimenting, and as a result they made poorer forecasts.

So forget about forecasting and exponential growth; the main lesson from this study is to keep the interface simple.

POD