
Thursday, March 25, 2010

Herding Web 2.0 Cats: Contributions to the Platform

This is the third of three posts looking at the details of a functional framework for Web 2.0 / social media. The earlier posts are:
  1. An introductory post is here
  2. A look at contributions to the content of social media platforms is here.
  3. A look at contributions to the social networks on the platforms is here.

The final category in our social media framework deals with contributions to the social media platform itself. One of the key ideas in O'Reilly's explanation of Web 2.0 is the adoption of lightweight, flexible development methodologies that rely on the active involvement of the community of users to build the platform. Intuitively this makes sense: the usage of a particular social media platform shifts and changes over time as community members use the technology for a variety of real-world purposes. Users stretch and pull the features of a particular platform. Think of the conventions that have built up on Twitter - the @user reply format, or the integration of Twitpic, TinyURL and other similar tools - none of which featured in the original design spec for the Twitter platform.

Platforms that don't adapt and change go into decline. MySpace, for example, is burning a US$580 million hole in Rupert Murdoch's pocket in part because News Corp didn't understand the need to allow the platform to evolve and change in a free and flexible manner. Decreased usage of that platform can be attributed to an increased emphasis on advertising (leading to a crappy user experience) and a glacial bureaucratic process for the implementation of design changes.

Involving the community in the development of a social media platform is an example of utilising the wisdom of crowds, again a key component of O'Reilly's explanation of Web 2.0. A social media platform is more than just the site itself: it also includes the 'ecosystem' of related applications and support sites that allow the community to use the site for a variety of purposes. This means that there are two different ways in which the community of users of a social media platform can contribute to the platform itself:

  1. Contributing to the platform design
  2. Contributing to the ecosystem of applications around the platform.

Clearly the former is more common than the latter, and takes the form of both explicit and implicit feedback. Often, the developers of a platform will directly engage with the community to find out how they want to use the platform, looking for ideas for new features. Frequently, though, the feedback takes the form of a change of usage pattern. As users try to use the platform in unanticipated ways, developers respond by either making it easier to use existing features (eg. embedding @replies into the structure of the Twitter platform) or introducing brand new features (eg. the ability to geotag photographs on Flickr).

Very occasionally, a user of a platform will develop a 3rd party application to introduce some feature not directly supported by the platform, or to allow a customised means of interacting with the platform. The plethora of Twitter applications for devices like the iPhone is an example of this, as well as the image, video and link applications that extend the functionality of the basic platform: hardly anyone uses the web interface as their primary means of using Twitter. Sometimes the 3rd party applications end up being integrated into the platform itself (see the Twitter search tool which started as an independent web app), but even if that doesn't happen, the platform is extended and supported by this 3rd party ecosystem, leading to a wider range of uses and appealing to a larger group of users. The easier it is to develop a 3rd party application (through technologies like XML, SOAP or even plain old HTML APIs) the richer this ecosystem will be.
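To make the mechanics of that ecosystem concrete, here's a quick sketch of a third-party client consuming a platform API. The endpoint URL and JSON response format are hypothetical, not any real platform's API - in 2010 the payload might equally have been XML - but the pattern, a simple HTTP request returning structured data that any developer can build on, is what makes ecosystems like Twitter's possible.

import json
import urllib.request

def fetch_recent_posts(username):
    """Fetch a user's recent posts from a hypothetical platform API."""
    # Hypothetical REST-style endpoint; real platforms document their own.
    url = "https://api.example-platform.com/v1/users/%s/posts" % username
    with urllib.request.urlopen(url) as response:
        # Assume the platform returns JSON, e.g. [{"id": 1, "text": "..."}]
        return json.load(response)

# Any third-party client - desktop, mobile or mashup - can build on a call
# like this without the platform owner's involvement.
posts = fetch_recent_posts("some_user")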

Friday, February 12, 2010

Herding Web 2.0 Cats: Contributing to the Social Network

This is the second of three posts looking at the details of a functional framework for Web 2.0 / social media. An introductory post is here, and the first substantive post on the framework can be found here.

Geez, you sure can tell when semester hits - blog posts here come to a grinding halt! What was supposed to be a short interlude of a couple of days has turned into a couple of weeks.

In the first post on the framework, we looked at three kinds of content contributions members of a social media community can make to a social media platform. This post addresses the second category of contributions: those to the social network itself.

The social network is a critical part of what makes Web 2.0 different - in fact, I think that the social nature of Web 2.0 is the thing that makes Web 2.0 fundamentally different to what came before. Social phenomena are notoriously hard to understand. Just look at the competing paradigms in sociology and social science research, all wrestling with the complexity of explaining non-rational human behaviour.

But for our purposes I think we can conceive of three different kinds of contributions to a social network supported by a social media platform:

  1. Creating new social networks or groupings
  2. Administering these networks
  3. General participation in the social network

Each of these three kinds of activities can be formalised to a greater or lesser extent and either explicitly built into the social media platform, or occur in a much more organic way. Indeed, even when formal mechanisms are in place to establish groups or networks, informal groups also tend to form. For example, Flickr.com's groups or Reddit's sub-reddits provide explicit, formal groups to which community members may belong. But the use of contact or friends lists, or simply engaging with other users on the site, can lead to informal groupings whose members socialise and collaborate. Networks or groups can be short-lived, forming around a specific event, or more permanent.

Regardless of the purpose of the grouping, at some stage the network has to be initiated, and this can happen in several ways. Many sites support mechanisms for reflecting real-world social networks such as families and friends through the use of contact lists and groupings; these same mechanisms support intra-platform groupings as they form. A group might form on the initiative of one member or a small group of members, or the platform itself may encourage a grouping, such as Facebook's country, city or school-based networks. In the former case, the initiators of a social network may need a comparatively high profile to encourage other users to connect with the group. Whatever their origin, social networks will typically have various formal and informal norms and rules (with occasional discrepancies between the two).

Social networks also rely on governance of the group to enforce the rules and norms. In some cases, these tasks fall to the group as a whole; in others, one or more members are designated as 'moderators' or something similar. At first, it may be that the members who started the group perform the role of group moderator, but over time, other members and the group as a whole can take on the tasks. A healthy social network will, to a certain extent, be self-governing, but from time to time issues arise: if there is disagreement on how the rules should be applied, or a discrepancy between what the group as a whole expects and what the group moderators actually do, the group may devolve into factions or simply go out of existence. Group governance, therefore, is an important part of ensuring that a social network remains healthy and functional. The health and functionality of a social media platform itself is a function of the health and functionality of the various networks that it supports.

Finally, there are the acts of socialisation of the community members themselves. Typically these socialisation acts will be in the form of the primary and secondary contributions to content I wrote about in the last post. But they also include private messages (like Twitter's direct message feature, or Flickr's mail system). These activities collectively make up the social fabric of the individual social networks as well as the broader community on a social media platform, all of which are further shaped by the technical design of the platform itself.

Interestingly, while the nature of the technology (ie. its design and features) shapes the nature of the social networks on the platform, the community also helps to shape the technology (for the academically minded, this is an example of Giddens' duality of structure and Orlikowski's adaptation of this idea to technology). This idea, of the platform itself being shaped by the community, will be the topic of my next post, which will hopefully not be as long in coming as this post was!

Monday, February 8, 2010

Herding Web 2.0 Cats: Contributions to Content

This is the first of three posts looking at the details of a functional framework for Web 2.0 / social media introduced in the previous post.

Tim O'Reilly's coining of the term 'Web 2.0' was based on an observation of the rise of a new breed of web company, based on a model of collaboration and socialisation rather than the traditional publisher/consumer model. As I mentioned in the last post, O'Reilly talks about two different kinds of contributions that users of social media make to a web site that they didn't make under the old model: contributions to the data set and contributions to the platform (he didn't use those exact words, though). We also added a third to that list: contribution to the social network. This post breaks down the first of these types of contributions to start filling out the details of our social media framework.

Contributions by users to the content of a site are perhaps the most obvious of the three different contributions, and we think that there are three different types (how's that for symmetry, eh?): primary contributions, secondary contributions and passive contributions.

A primary contribution is typically the main purpose for which a social media platform is developed. For example, a photograph uploaded to Flickr, a 'tweet' on Twitter, or a video uploaded to YouTube would be considered primary contributions. A primary contribution tends to stand alone - that is, the contribution is worth the community's attention in and of itself. The primary contribution is often a 'package' of content. So, for example, a user doesn't simply upload a photograph to Flickr, they also attach metadata such as a title, tags, a description and so on.

A primary contribution - Flickr photo page, consisting of photo and metadata.
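As a rough illustration of the 'package' idea, here's a sketch that models a primary contribution as a content item bundled with its metadata. The field names are illustrative only, not Flickr's actual schema.

from dataclasses import dataclass, field

@dataclass
class PhotoUpload:
    """A primary contribution: the content item plus its metadata."""
    image_file: str          # the core content item
    title: str = ""          # metadata attached by the uploader
    description: str = ""
    tags: list = field(default_factory=list)

upload = PhotoUpload("sunset.jpg", title="Sunset over the bay",
                     tags=["sunset", "melbourne", "beach"])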

A secondary contribution, on the other hand, is not a standalone contribution, but is submitted as a response to a primary or other secondary contribution. The classic case is a text-based comment, but on Flickr it would also include actions like adding to the photographer's list of tags on a photo page or adding the photo to a list of favourites. On a site like Reddit.com, a secondary contribution might be an upvote or downvote for a comment or link, while on YouTube it might be a rating of a video, or even a video-in-reply. Secondary contributions represent, on an individual level, a reaction to something on the site, and in an aggregate sense indicate the community's perception of that thing. Secondary contributions allow for dialogue between users of a platform, thereby allowing communities and social networks to form.

Secondary contributions in the context of the primary contribution above.
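A minimal sketch of that aggregation idea: each vote is an individual reaction, but summed together the votes signal the community's perception of an item. The net-score rule here is deliberately simple, and real platforms use more elaborate ranking formulas.

def community_score(upvotes, downvotes):
    """Aggregate individual secondary contributions into one signal."""
    return upvotes - downvotes

# Hypothetical vote tallies for two submitted links.
votes = {"link_42": (120, 15), "link_43": (8, 30)}
for item, (up, down) in votes.items():
    print(item, community_score(up, down))  # link_42: 105, link_43: -22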

Passive contributions are examples of O'Reilly's idea of "harnessing collective intelligence". They come about not as an explicit contribution on the part of a user, but rather as a consequence of their use of the platform, whether that be a primary or secondary contribution or simply clicking around the site. Passive contributions tend to be contributions to aggregate data derived through some algorithm implemented on the platform: it may be as simple as a view count of a photograph or video, or as advanced as Google's PageRank algorithm based on web authors' linking behaviour or Amazon's recommendation system based on purchasing patterns. Through the tracking of user actions and the application of algorithms, the platform can add value to the user experience, adding to the appeal of the platform. Think of the predictive search in Google, or the use of tag clouds to handle the problem of building taxonomies.
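To show how passive contributions become aggregate value, here's a sketch of that last example: turning raw tag usage counts (a by-product of users simply tagging content) into tag cloud sizes. The log scaling is a common choice because tag frequencies are heavily skewed, but the details are illustrative, not any particular site's algorithm.

import math

def tag_cloud_sizes(tag_counts, min_px=10, max_px=32):
    """Map raw tag counts to font sizes on a log scale."""
    logs = {t: math.log(c) for t, c in tag_counts.items()}
    lo, hi = min(logs.values()), max(logs.values())
    span = (hi - lo) or 1.0  # avoid division by zero if all counts are equal
    return {t: round(min_px + (v - lo) / span * (max_px - min_px))
            for t, v in logs.items()}

# Made-up counts: the heavily used tag dominates, but the rare one stays legible.
print(tag_cloud_sizes({"sunset": 950, "beach": 120, "macro": 8}))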

Contributing content to a social media platform is one of the key ways in which users establish their position within an online community and add value to the platform itself, and is the most obvious difference in the way Web 2.0 works compared to the old publisher/consumer model of Web 1.0. Instead of relying on a large database of proprietary information (the Web 1.0 mantra of "content is king!"), Web 2.0 harnesses the concept of "human computing" to build content that in many cases is more compelling than what you find with Web 1.0. With the incorporation of social elements to the online experience, the next most obvious difference is the formation of social networks supported or enhanced by social media. Contributions to social networks will be the topic of my next post.

Wednesday, February 3, 2010

Herding Web 2.0 Cats

The BI vendor marketing departments are scrambling to get on the BI 2.0 bandwagon, partly as a means of doing something different with their products and partly because they've seen the success of the Web 2.0 juggernaut and want some of that action. The efforts to date have ranged from the fairly ordinary (let's bolt on a comments feature to our reporting tool, but bury it on a separate screen!) to the potentially good (I'm watching the 12Sprints stuff with interest).


The problem with BI 2.0 is kind of the same problem that Web 2.0 has, though: it's easily dismissed as nothing more than a marketing term, and each commentator has their own take on what it means. At least for Web 2.0 there's Tim O'Reilly's fairly reasonable outline of what he meant when he coined the phrase. For BI 2.0, though, the term means pretty much whatever the marketing department wants it to mean: your tool supports decision automation? BI 2.0 baby! Got a poorly thought out commenting feature? You bet that's 2.0!


POD and I have been watching this go on for a while now with interest and occasional amusement. Clearly the BI 2.0 term is inspired by the Web 2.0 equivalent, but we're not sure that everyone who lays claim to the term actually gets it. We reckon that the idea of BI influenced by the kinds of things that have been happening on the web can be a good thing, but to show that, we need to do some research on it. Before that can begin, though, we need to be clear in our own minds about what Web 2.0 is, and how those Web 2.0 features might apply in a BI setting. Our problem is that no-one, apart from O'Reilly, has really done a good job of saying exactly what Web 2.0 is, so we chose that as our starting point - this post (and the next few) is a way of testing the water to see if what we've come up with in that regard is reasonable.


To break this all into consumable chunks, I'll set this out over several posts, but start with a bit of an overview of what we've come up with - the detail will come later. What we decided to do was to work up to a coherent statement of what Web 2.0 is from the ground up. Rather than trying to cover every commentator's pet definition, we thought it would be more realistic to come up with a functional definition of the term. In other words, forget formal definitions - Web 2.0 is best described by what people actually do with it.


O'Reilly's original outline talks about lots of characteristics of Web 2.0 firms, like "harnessing collective intelligence", having a "light" approach to development and so on. At its core, though, O'Reilly's characteristics boil down to the shift from a consumer model to a collaboration model: the community is not a group of passive consumers; it fundamentally contributes to the Web 2.0 site. O'Reilly talks explicitly about two kinds of contributions the community makes: contribution of content and contribution to the design of the platform. We took that as our starting point and built up a functional framework for Web 2.0.


In addition to O'Reilly's two types of contributions, though, we added a third. The key difference between Web 1.0 and Web 2.0 is the social nature of the latter (we reckon the term social media is a much better descriptor). Beyond the technology platform and the content hosted on that platform, we reckon that the social network itself is something that people contribute to as well. So, at the highest level, we reckon Web 2.0 can be described functionally with the following categories of use:


  1. Contributions to content (see this post)
  2. Contributions to the social network
  3. Contributions to the platform


Where are we headed with this? I'll flesh out each of these three in three separate posts, and perhaps wrap up with a fourth that summarises it all. But our end game really is to take this framework and use it to look at BI: If BI 2.0 is treated as a social media platform, what features might it have? How might it work? And ultimately, does the idea of applying social media concepts to BI hold any water?

Friday, May 30, 2008

IT Folks are Luddites

A while ago in a previous post, I referred to an interview with Neil Raden entitled "Is Business Intelligence Stuck in the Past?" where he talked about how most business users today are more tech-savvy than the IT department. Peter O'Donnell just sent me through a link to a Gartner Voice interview with Peter Keen (one of the founding fathers of DSS/BI) where he makes essentially the same point:

IT is in danger of becoming the technology laggard.

Tune in here to listen to the interview. You can also subscribe to the Gartner Voice podcast in iTunes (the interview with Keen is the most recent episode).

Wednesday, March 19, 2008

Trends in Data Warehousing


Last year I co-authored a book chapter with two colleagues, Peter O'Donnell and David Arnott, on the use of data warehouses for decision support, and it has just recently been published. The book is called Handbook on Decision Support Systems, edited by Frada Burstein (another Monash colleague) and Clyde Holsapple. One section of the chapter that I wrote looked at current trends in DW practice, and I thought, as I wrote it in late 2006, that it would probably work better as a blog post than as part of a chapter in a (hopefully long-lived) book. Here's the excerpt. I'd be interested to hear what other people think are the big trends in DW and where it's headed.

Current Trends and the Future of Data Warehousing Practice

Forecasting future trends in any area of technology is always an exercise in inaccuracy, but there are a number of noticeable trends which will have a significant impact in the short-to-medium term. Many of these are a result of improvements and innovations in the underlying hardware and database management system (DBMS) software. The most obvious is the steady increase in the size and speed of data warehouses, driven by the increasing processing power of CPUs, improvements in parallel processing technologies for databases, and decreasing prices for data storage. This trend can be seen in the results of Winter Corporation's "Top Ten Program," which surveys companies and reports on the top ten transaction-processing and data warehouse databases, according to several different measures. Figure 11 depicts the increase in reported data warehouse sizes from the 2003 and 2005 surveys (2007 data has not yet been released):


Ten Largest Global Data Warehouses by Database Size, 2003/2005. From Winter Corporation.

The data warehousing industry has seen a number of recent changes that will continue to have an impact on data warehouse deployments in the short-to-medium term. One of these is the introduction by several vendors, such as Teradata, Netezza and DATAllegro, of the concept of a data warehouse 'appliance' (Russom, 2005). The idea of an appliance is a scalable, plug-and-play combination of hardware and DBMS that an organization can purchase and deploy with minimal configuration. The concept is not uncontroversial (see Gaskell, 2005 for instance), but is marketed heavily by some vendors nevertheless.

Another controversial current trend is the concept of 'active' data warehousing. Traditionally, the refresh of data in a data warehouse occurs at regular, fixed points of time in a batch mode. This means that data in the data warehouse is always out of date by a small amount of time (since the last execution of the ETL process). Active data warehousing is an attempt to approach real-time, constant refreshing of the data in the warehouse: as transactions are processed in source systems, new data flows through immediately to the warehouse. To date, however, there has been very limited success in achieving this, as it depends not just on the warehouse itself, but on the ability of the source systems to handle the increased processing load. Many ETL processes are scheduled to execute at times of minimal load (eg. overnight or on weekends), but active warehousing shifts this processing to peak times for transaction-processing systems. Added to this are the minimal benefits of having up-to-the-second data in the data warehouse: most uses of the data are not so time-sensitive that decisions made would be any different. As a result, the rhetoric of active data warehousing has shifted to "right-time" data warehousing (see Linstedt, 2006 for instance), which relaxes the real-time requirement to a more achievable 'data when it's needed' standard. How this right-time approach differs significantly in practice from standard scheduling of ETL processing is unclear.
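As a rough sketch of what a 'right-time' load might look like in practice, the following uses a watermark table so that each scheduled run moves only the rows that have arrived since the previous run - smaller and more frequent than an overnight batch, but far less demanding than transaction-by-transaction feeds. The schema and table names are hypothetical, and SQLite stands in for the source and warehouse databases.

import sqlite3

source = sqlite3.connect(":memory:")
warehouse = sqlite3.connect(":memory:")
source.execute("CREATE TABLE transactions (id INTEGER, amount REAL, txn_time INTEGER)")
source.executemany("INSERT INTO transactions VALUES (?, ?, ?)",
                   [(1, 9.50, 100), (2, 4.00, 105), (3, 7.20, 130)])
warehouse.execute("CREATE TABLE fact_sales (id INTEGER, amount REAL, txn_time INTEGER)")
warehouse.execute("CREATE TABLE etl_watermark (loaded_through INTEGER)")

def incremental_load():
    # The watermark records how far the previous run got, so each run moves
    # only the rows that have arrived since then.
    (mark,) = warehouse.execute(
        "SELECT COALESCE(MAX(loaded_through), 0) FROM etl_watermark").fetchone()
    rows = source.execute(
        "SELECT id, amount, txn_time FROM transactions WHERE txn_time > ?",
        (mark,)).fetchall()
    if rows:
        warehouse.executemany("INSERT INTO fact_sales VALUES (?, ?, ?)", rows)
        warehouse.execute("INSERT INTO etl_watermark VALUES (?)",
                          (max(r[2] for r in rows),))
    warehouse.commit()

incremental_load()  # scheduled every few minutes, this approximates 'right time'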

Other than issues of hardware and software, a number of governance issues are introducing change to the industry. One of these is the prevalence of outsourcing information systems - in particular, the transaction-processing systems that provide the source data for warehouse projects. With many of these systems operated by third-party vendors, governed by service level agreements that do not cover extraction of data for warehouses, data warehouse developers are facing greater difficulties in getting access to source systems. Arnott (2006) describes one such project where the client organization had no IT staff at all, and all 13 source systems were operated off-site. The outsourcing issue is compounded by data quality problems, which are a common occurrence. Resolving data quality problems is difficult even when source systems are operated in-house: political confrontations over who should pay for rectifying them, and even recognition of data quality as a problem at all (in many cases, it is only a problem for the data warehouse developers, as the transaction-processing system that provides the source data copes with the prevailing level of data quality), can be difficult to overcome. When the system is operated off-site, under a contractual service level agreement that may not have anticipated the development of a data warehouse, these problems become even more difficult to resolve.

In addition to the issues of outsourcing, alternative software development and licensing approaches are becoming more commonplace. In particular, a number of open source vendors have released data warehousing products, such as Greenplum's Bizgres DBMS (also sold as an appliance) based on the Postgres relational DBMS. Other open source tools such as MySQL have also been used as the platform for data warehousing projects (Ashenfelter, 2006). The benefits of the open source model are not predominantly to do with the licensing costs (the most obvious difference to proprietary licensing models), but rather have more to do with increased flexibility, freedom from a relentless upgrade cycle, and varied support resources that are not deprecated when a new version of the software is released (Wheatley, 2004). Hand-in-hand with alternative licensing models is the use of new approaches to software development, such as Agile methodologies (see http://www.agilealliance.org) (Ashenfelter, 2006). The adaptive, prototyping oriented approaches of the Agile methods are probably well suited to the adaptive and changing requirements that drive data warehouse development.

The increased use of enterprise resource planning (ERP) systems is also having an impact on the data warehousing industry at present. Although ERP systems have quite different design requirements to data warehouses, vendors such as SAP are producing add-on modules (SAP Business Warehouse) that aim to provide business intelligence-style reporting and analysis services without the need for a separate data warehouse. The reasoning behind such systems is obvious: since an ERP system is an integrated tool capturing transaction data in a single location, the database resembles a data warehouse, insofar as it's a centralized, integrated repository. However, the design aims of a data warehouse that dictate the radically different approach to data design described above in Sections 3.1 and 4 mean that adequate support for management decision-making requires something other than simply adding a reporting module to an ERP system. Regardless, the increased usage of ERP systems means that data warehouses will need to interface with these tools more and more. This will further drive the market for employees with the requisite skill set to work with the underlying data models and databases driving common ERP systems.

Finally, Microsoft's continued development of the Microsoft SQL Server database engine has had a major impact on Business Intelligence vendors. Because of Microsoft's domination of end-users' desktops, it is able to integrate its BI tools with other productivity applications such as Microsoft Excel, Microsoft Word and Microsoft PowerPoint with more ease than its competitors. The dominance of Microsoft on the desktop, combined with the pricing of SQL Server and the bundling of BI tools with the DBMS, means that many business users already have significant BI infrastructure available to them, without purchasing expensive software from other BI vendors. Although SQL Server has traditionally been regarded as a mid-range DBMS, not suitable for large-scale data warehouses, Microsoft is actively battling this perception. It recently announced a project to develop very large data warehouse applications for an external and an internal client, handling data volumes up to 270 terabytes (Computerworld, 2006). If Microsoft is able to dispel the perception that SQL Server is only suited to mid-scale applications, it will put the company into direct competition with large-scale vendors such as Oracle, IBM and Teradata, with significantly lower license fees. Even if this is not achieved, the effect that Microsoft has had on business intelligence vendors will flow through to data warehousing vendors, with many changes being driven by perceptions of what Microsoft will do with forthcoming product releases.

Monday, April 30, 2007

HP: "Ooh look! We're a data warehousing vendor too!"

It looks like HP, while hunting around in their software portfolio, discovered that they do data warehousing too. HP have long provided server hardware for Oracle-based databases with their NonStop line of servers. Turns out that the Tandem hardware design that HP used to build the NonStop server line was originally developed to support OLAP-oriented databases as well as OLTP. Now HP want to compete with Teradata. I wonder what Teradata think of that...

HP claim 200,000 BI implementations per year. Um, yeah, ok. Ben Barnes talks about it all in the video below. Be warned though (if the IDG statistic above wasn't enough) - Ben claims that HP's product suite is "next generation" BI, because it provides an enterprise-wide information resource rather than siloed information stores. Teradata in particular would be raising more than an eyebrow there, since that's been their bread-and-butter for decades. It's what data warehousing has been about for a long, long time (often with disastrous results when the enterprise approach has been naively adopted). If it wasn't for the (c) 2007 text superimposed on the video, you'd swear it was 1989. As for Ben's use of parallel querying and data loading as a selling point, well, try telling that to users as the data in their reports changes before their eyes...



Here are the main points of Ben's pitch for HP flavoured BI:

  • Cost-effectiveness, based, as far as I can tell, on the same argument that other DW appliance vendors use.
  • No need for a batch window - data is uploaded as people query it (see above).
  • Reliability for a large userbase - fine, but HP aren't the only ones selling hardware/DBMS for warehouses with large userbases in a reliable way.
  • Minimal need for tuning - again, standard appliance pitch.

Nothing at all revolutionary or "next generation" here, and certainly some worrying evidence that HP don't know much about BI beyond the hardware. Very little that's tricky about BI (and by the way, 50,000 users using a DW is not BI) has anything to do with the hardware or software platform.


Thanks to Craig for passing the Yahoo! article along.

Friday, March 2, 2007

A Giant On The Move

The news is out, and the speculation has been confirmed: Oracle is going to buy Hyperion for a cool US$3.3 billion. The purpose of the move is to let Oracle have a crack at toppling SAP from its enterprise systems pedestal, and so is only partly to do with the BI industry. Oracle President Charles Phillips must have struggled to keep the smirk off his face when he announced:

Thousands of SAP customers rely on Hyperion as their financial consolidation, analysis and reporting system of record... Now Oracle's Hyperion software will be the lens through which SAP's most important customers view and analyze their underlying SAP ERP data.
Indeed, Cognos and Business Objects seem to be sitting back and enjoying the show a bit - they apparently think they'll pick up a few rats jumping ship as Hyperion/SAP customers re-evaluate their software license portfolios. Or maybe it's more to do with the anticipation that they'll be looked at as potential marriage partners for SAP.

So what does this mean for the BI landscape? Probably not a lot. SAP may pick up one of the other BI vendors. Hyperion customers will probably get crappier service, but then apparently that's already been happening as Hyperion has grown. Fewer players in the marketplace will also lessen the likelihood of any fundamental innovation in the kinds of BI products available - but then the current crop of players don't really seem to be doing anything earth-shattering in that respect either. In reality, this is an ERP industry story that will encourage the view that BI is just another module of an enterprise system that provides enterprise reporting.

Monday, November 27, 2006

SQL Server to run 270TB Multi-node Data Warehouse

Computerworld are reporting that Microsoft are working on a massive data warehousing project for an external client, in an obvious aim to dispel the idea that SQL Server is a lightweight platform. From the article:
At the annual conference of the Professional Association for SQL Server (PASS) user group, Microsoft said it is designing a 270TB multinode data warehouse for a foreign (ie. non-US) government that it declined to identify. The software vendor is also working on a 162TB single-node installation for its own marketing department.
Of course, this doesn't explain how Microsoft are doing this (any tweaks under the hood?), or whether the warehouse itself will be of any use (no metrics on reporting, data mining, etc. throughput - and we all know how Microsoft are on benchmarking *wink*). Still, I quite like SQL Server (and the MS BI tool suite) as a product, and it has singlehandedly been responsible for the biggest shakeup in the BI industry in the past several decades (a much needed one at that). If this kind of data volume can be handled well by SQL Server 2005, as configured by your average corporate DBA, then Oracle and IBM are really going to be looking over their shoulders.

Tuesday, November 7, 2006

IBM and Business Objects form Strategic Global Alliance



Good news for those companies with both platforms - two of the largest software vendors, Business Objects (BO) and IBM, have just announced a global strategic alliance. So what does this mean? The word is that the new agreement will provide enhanced support for companies running both vendors' software. What it really means is that it will put BO and IBM in a position to capture even greater market share.

Thursday, October 19, 2006

Nicholas Carr at it again.

Former executive editor of Harvard Business Review, business writer, speaker, and unfailing trouble-maker, Nicholas Carr is at it again. Carr, famous (or should I say infamous) as the author of the 2003 HBR article "IT Doesn't Matter", appears poised to stir fresh controversy in the IT industry, warning organizations to stop spending on technology.

Thursday, October 12, 2006

BI Doesn't Matter?

As Marcus has pointed out, Nicholas Carr is at it again, as reported on ZDNet.com. I agree with Marcus' assessment over at 401 Percent that there are some holes in Carr's original article, but I tend to agree with Carr's main point as reported in the ZDNet article: many companies spend too much on technology in the hope of a silver bullet, without considering alternative, even low-tech, ways of achieving the same end. In the ZDNet article, Bob McDowell of Microsoft was quoted as saying "There was over-hype in the 90s and there was overspend... [we're] still paying the price now." The problem is, McDowell's assessment is too limited - the IT industry is still making the same mistake, driven by vendor marketing departments that oversell their products whilst under-delivering. Nowhere is this more evident than in the BI sector.

Time and again I go to vendor presentations and seminars, and the sales pitch is the same: buy our product, and all of the analytic and reporting (read decision-making) needs of your organisation will be solved. Let us tell you about our volume licensing arrangements!

The problem is, few of these products solve any of the really thorny decision-making needs of organisations - especially highly strategic, novel, 'wicked' decision problems that require creative, lateral thinking for their resolution. The needs of decision-makers facing these problems are wide, varied and don't submit to the usual requirements elicitation techniques used during system analysis and design. The tricky part about supporting these needs is not the technology - it's understanding the human aspects of decision-making, including their limitations.

Carr's central point, that companies spend money on IT for a strategic advantage that isn't there, holds for BI too. Maybe, rather than forking out for the new, you-beaut dashboard addon in the latest version of a reporting tool, a manager would be better served by spending that money on some people to help with the decision process. Imagine, instead of spending millions on a BI package, an organisation spending that money on the salaries of a flying-squad of decision/business analysts, skilled in using IT as well as in strategic decision making - a kind of internal consulting company, able to build personalised decision support tools for individual decision-makers.

As an IT person, I find Carr's attitude bothersome, but that doesn't mean the bugger is wrong. Maybe BI folk should be arguing, too, for organisations to spend less on BI technology, and more on people who know how to get the most out of what's already there.

Friday, October 6, 2006

The inmates and the asylum.

Listening to your customers (or end-users) is a must for BI systems, or any system for that matter, right? If the users tell you they want a feature, you should give it to them - and quickly if possible. Well, no. We often say in our lectures that the role of an analyst in a decision support setting should be what we call an active one. The analyst shouldn't just be a passive interpreter of the users' requirements, translating everything the users ask for into system designs. They need to take into account what the user community wants, but every now and again they need to intervene and change something. They might need to explain a better way of doing things, or explain that what the users are asking for is silly in some way or another. All this takes great skill and care not to seem like a typical know-it-all IT person.

Here is an unusual example of taking what your user community wants too far - and sadly this is going to have a major impact on just about everybody working in BI. There is a feature in the upcoming version of Excel that is just plain wrong.

Microsoft has gone out of its way with the next release of Office to take usability seriously, embarking on a brave remake of the product set. One of the potentially great new features included in Excel is the ability to quickly create bar charts that sit (à la Tufte's sparklines) in the spreadsheet cells, next to the numbers and text, not as a separate chart object. Great idea. However, the feedback from focus groups was that users don't like the look of these charts when one of the cells is 0 or very low compared to the others, so Microsoft adjusted the display so that a minimum bar height shows even when the quantity is 0. Oh dear. For some example charts and a discussion, visit the (excellent) blog at Juice Analytics.
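To see why the focus-group fix is a problem, consider a toy version of the bar-height calculation (the numbers and scaling here are made up for illustration): once a minimum height is enforced, a cell containing 0 is drawn as though it held a real quantity, and the chart stops telling the truth.

def bar_heights(values, max_px=20, floor_px=0):
    """Scale values to pixel heights, optionally enforcing a minimum bar."""
    top = max(values)
    return [max(floor_px, round(v / top * max_px)) for v in values]

data = [12, 0, 3, 9]
print(bar_heights(data))               # [20, 0, 5, 15] - zero stays at zero
print(bar_heights(data, floor_px=2))   # [20, 2, 5, 15] - zero looks like 'a bit'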