Your reporting, budgeting and forecasting reek of arrogance and you don’t even know it!

When did you last formally test any of the reporting, budgeting and forecasting spreadsheets you and your team have been using for the last few weeks, months or years? Oh, every time you use them, huh? So, do you have check totals? Do you drill into the data and confirm the numbers add up from the source? Are the allocations that live only in your spreadsheets run through automated AND manual tests each time you enter data into the report? Is the data you enter correct? Who checks that? And what about the development of the spreadsheet itself – what process did it go through: Agile, Waterfall, or just ad hoc, built up over time? Huh (imagine I am poking you in the shoulder)?

Get the picture?
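To make “check totals” concrete, here is a minimal sketch of the kind of automated reconciliation test you could run before a report goes out. Every file, sheet and column name below is a placeholder assumed for illustration – none of them come from this post.

```python
# Minimal sketch of an automated check-total test for a report workbook.
# All file, sheet and column names are illustrative placeholders.
import pandas as pd

SOURCE_FILE = "source_transactions.xlsx"   # raw source-data export
REPORT_FILE = "monthly_report.xlsx"        # the finished report

def check_totals(tolerance: float = 0.01) -> None:
    # Re-add the source data the report claims to summarise.
    source = pd.read_excel(SOURCE_FILE, sheet_name="Transactions")
    source_totals = source.groupby("CostCentre")["Amount"].sum()

    # Read the totals the report actually shows.
    report = pd.read_excel(REPORT_FILE, sheet_name="Summary", index_col="CostCentre")
    report_totals = report["Amount"]

    # Compare line by line and fail loudly on any mismatch.
    diff = (source_totals - report_totals).abs()
    bad = diff[diff > tolerance]
    if not bad.empty:
        raise AssertionError(f"Report does not reconcile to source:\n{bad}")
    print("All check totals reconcile to source.")

if __name__ == "__main__":
    check_totals()
```

Run on a schedule or as part of a review step, a test like this catches the “I only changed one formula” class of error before the board sees the numbers.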

Did you know that up to 88% of reporting, budgeting and forecasting spreadsheets contain errors, and that the larger the workbook, you guessed it, the larger the number of errors? Why? Overconfidence, which leads to poor testing or none at all. Check out this paper by Ray Panko – a study of errors in spreadsheets. He likens spreadsheet creation to software programming, and while the two are similar in many respects, they are approached completely differently!

So why are we so arrogant? Well, that’s part of being human. Research has given us the notion of the “hard-easy effect”: overconfidence is highest where performance is lowest (Clarke, 1960; Dunning, Griffin, Milojkovic, & Ross, 1990; Lichtenstein, Fischhoff, & Phillips, 1982; Panko, 1998; Plous, 1993; Wagenaar & Keren, 1986). As complexity increases, confidence does fall, but not in proportion – accuracy falls faster, so the gap between what we can do and what we think we can do keeps widening. For example, make a “simple” change to a formula in a spreadsheet and hit print. The reports are ready for the board. With no testing, you don’t discover until later that the formula was linked to another workbook, which was linked to the budget. You just changed this year’s budget!
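That linked-workbook trap bites because external references are invisible unless you go looking for them. As a purely illustrative sketch (the file name and the bracket heuristic are assumptions, not anything from this post), here is one way to scan an .xlsx workbook for formulas that point at other workbooks using openpyxl:

```python
# Illustrative sketch: list formulas that appear to reference other workbooks.
# openpyxl exposes external references inside formulas as bracketed indices,
# e.g. "=[1]Budget!B2", so searching for "[" in a formula string is a rough
# but useful heuristic.
from openpyxl import load_workbook

def find_external_links(path: str) -> list[tuple[str, str, str]]:
    wb = load_workbook(path, data_only=False)  # keep formulas, not cached values
    hits = []
    for ws in wb.worksheets:
        for row in ws.iter_rows():
            for cell in row:
                if cell.data_type == "f" and "[" in str(cell.value):
                    hits.append((ws.title, cell.coordinate, str(cell.value)))
    return hits

if __name__ == "__main__":
    for sheet, ref, formula in find_external_links("board_reports.xlsx"):
        print(f"{sheet}!{ref}: {formula}")
```

Even a crude scan like this tells you which cells a “simple” change could ripple through.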

So how do you fix this massive (and it is massive) problem?

You can’t stop using Excel, can you? The good news is that you can fix this and still use Excel.

Here's how we do it. When we implement CALUMO at clients, we start by putting your source data into a secure database, using cubes and tables. We then help you – or give you the tools – to build the structures you need for your reports, budgets and forecasts. You are building a solid structure, and you’re doing it through a planned process, not ad hoc. This can take minutes or days, depending on the complexity of your organization. As the structure building takes place, you check your structures and compare the aggregated and calculated results from CALUMO against your reports. Initially you’ll be changing structures in CALUMO to get the numbers right, but very soon you will start finding errors in your reports – often reports that come from other applications, not just Excel.

We take you through a rapid yet formal approach to building your reporting structures and reports. There’s testing and checking throughout, and all the while you can reconcile the data with a few clicks, because you can drill to source.
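To show what “drill to source” buys you, here is a generic illustration against a made-up table – it is not CALUMO’s actual mechanism or schema, just the principle that a report figure is only trustworthy when the detail behind it can be pulled up and re-added on demand.

```python
# Generic illustration of drill-to-source (hypothetical table, not CALUMO's schema):
# the report-level figure and the underlying detail come from the same data,
# so the aggregate can always be re-derived and checked.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE fact_expense (cost_centre TEXT, account TEXT, amount REAL);
    INSERT INTO fact_expense VALUES
        ('Sales', 'Travel',   1200.00),
        ('Sales', 'Travel',    800.00),
        ('Sales', 'Software',  450.00);
""")

# The number shown on the report...
(total,) = conn.execute(
    "SELECT SUM(amount) FROM fact_expense WHERE cost_centre = 'Sales'"
).fetchone()
print(f"Report figure for Sales: {total:,.2f}")

# ...and the drill-down that proves where it came from.
for account, amount in conn.execute(
    "SELECT account, amount FROM fact_expense WHERE cost_centre = 'Sales'"
):
    print(f"  {account}: {amount:,.2f}")
```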

So, would you rather have data you can really trust, or continue to live with that false sense of confidence?

References:
Clarke, F. R. (1960). Confidence Ratings, Second-Choice Responses, and Confusion Matrices in Intelligibility Tests. Journal of the Acoustical Society of America, 32, 35-46.

Dunning, D., Griffin, D. W., Milojkovic, J. D., & Ross, L. (1990). The Overconfidence Effect in Social Prediction. Journal of Personality and Social Psychology, 58(4), 568-581.

Lichtenstein, S., Fischhoff, B., & Phillips, L. D. (1982). Calibration of Probabilities: The State of the Art to 1980. In D. Kahneman, P. Slovic, & A. Tversky (Eds.), Judgment Under Uncertainty: Heuristics and Biases (pp. 306-334). Cambridge, England: Cambridge University Press.

Panko, R. R. (1998). What We Know About Spreadsheet Errors. Journal of End User Computing, 10(2), 15-21.

Plous, S. (1993). The Psychology of Judgment and Decision Making. Philadelphia: Temple University Press.

Wagenaar, W. A., & Keren, G. B. (1986). Does the Expert Know? The Reliability of Predictions and Confidence Ratings of Experts. In E. Hollnagel, G. Mancini, & D. D. Woods (Eds.), Intelligent Decision Support in Process Environments (pp. 87-103). Berlin: Springer-Verlag.
