5th Module: Evaluating Information Systems

Structure:
1. Reasons for IS Evaluation
2. Difficulty of IS Evaluation
3. Evaluation Techniques
4. Evaluation in an Organisational Context
Reasons for IS Evaluation IT costs organisations a lot of money. Evaluation is necessary to determine whether that money was well spent, and to ensure that the right investments are made in the future. In the 1990s, for example, the average UK company spent 5 per cent of its total costs on IT; in the financial sector, IT consumed up to 20 per cent of total costs.
Typically, there are two main occasions for evaluation:
1. Before the system is operational: there may be a choice of alternative designs, software packages, etc. Assessment of these alternatives, e.g. as part of the business analysis or feasibility study, can be seen as evaluating a number of investment options.
2. Once the system is operational: evaluation of whether the expected benefits have been achieved and whether the investment was worthwhile (= summative evaluation at the end).
There are other occasions when evaluation is needed, e.g. to review an information system while it is still under development (continuous checks = formative evaluation), to establish the total IS costs of a particular application, or to assess the value added by a particular IS application. For most organisations, the central aim is that the benefits of an IS application outweigh its costs.
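The benefits-versus-costs comparison above can be sketched as a simple calculation. This is an illustrative sketch only; the figures, categories, and the `net_benefit` function are hypothetical, not taken from the module:

```python
def net_benefit(costs, benefits):
    """Return total benefits minus total costs for an IS application."""
    return sum(benefits) - sum(costs)

# Hypothetical figures, in thousands of pounds:
costs = [120, 30, 15]     # e.g. development, training, annual support
benefits = [90, 60, 45]   # e.g. staff time saved, faster reporting, fewer errors

print(net_benefit(costs, benefits))  # 30: benefits outweigh costs
```

The arithmetic is trivial; as the next slide explains, the real difficulty lies in deciding which numbers to put into the lists in the first place.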
The Difficulty of IS Evaluation The difficulty arises from the fact that costs and benefits are very difficult to identify and to measure. Costs can be classified more easily than benefits, because what was spent to get a new IS application up and running can be added up. Even so, there are difficulties, because IS resources are often shared, and accountants do not always agree on how the costs of shared resources should be allocated.
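One common allocation rule is to split a shared cost in proportion to each department's usage. The sketch below is hypothetical (the departments, figures, and `allocate_shared_cost` function are illustrative), and proportional allocation is only one of several possible rules, which is exactly why accountants disagree:

```python
def allocate_shared_cost(total_cost, usage):
    """Split a shared IS cost across departments in proportion to usage."""
    total_usage = sum(usage.values())
    return {dept: total_cost * u / total_usage for dept, u in usage.items()}

# Hypothetical usage of a shared server (e.g. CPU hours per department):
usage = {"Sales": 50, "Finance": 30, "HR": 20}
print(allocate_shared_cost(10000, usage))
# {'Sales': 5000.0, 'Finance': 3000.0, 'HR': 2000.0}
```

Choosing a different basis (headcount, transactions, flat shares) would produce different department-level costs from the same total, so the allocation rule itself becomes a point of negotiation.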
Assessing the benefits is even more difficult. Benefits may take the form of improvements in efficiency and effectiveness. Effectiveness is a subjective criterion, evaluated differently by different people, and is therefore qualitative: it cannot be captured by quantitative measures such as monetary costs or gains. Other examples of qualitative criteria are "beautiful", "nice", or "rude"; quantitative criteria include "6 foot tall" or "200 pounds".
Because of the difficulty of evaluating the success of information systems, IS professionals have in the past avoided performing detailed evaluations. According to Ginzberg and Zmud (1988), the main difficulties are:
- systems do not have adequate initial descriptions of success and failure;
- evaluation must also take social aspects into account, which may benefit the organisation on other levels;
- evaluation is subjective, because individuals judge benefits differently;
- even if system objectives could be set at the outset, they would differ from the final objectives, because IT is a rapidly evolving field.
Evaluation Techniques In spite of these difficulties, an increasing number of organisations ask for a formal evaluation of information systems. Many of the techniques discussed in IS Strategy and Planning (Chapters 2 and 3) can be considered evaluative techniques (e.g. assessing the survival of individual pharmacies on the market).
Evaluation in an Organisational Context IS evaluation is complex and time-consuming. The outcome is often a report submitted to senior management, which must authorise IS investment but does not necessarily understand much about it. Any decision thus remains a political one (does senior management have a good attitude towards its IS staff?). Traditional cost/benefit analyses, by contrast, are generally well understood by management.
It is often hard to tell management that traditional cost/benefit analyses are very difficult to carry out in an IS environment. Other traditional approaches often fail to take knowledge or information into account at all, and some authors consider information practically impossible to quantify (e.g. Kaplan and Norton, 1992). Consequently, there is often a gap between senior management (who rarely have adequate IS knowledge) and the people responsible for IS.
Because the relationship between senior management and IS experts is based on trust, the probability of a senior manager accepting an evaluation often depends on how favourably inclined s/he is towards the system in principle.
References:
Duncan, W.M. (1997). Information Systems Management. London: University of London Press.
Ginzberg, M.J. & Zmud, R.W. (1988). Evolving Criteria for Information System Assessment. In N. Bjorn-Andersen & G.B. Davis (Eds.), Information Systems Assessment: Issues and Challenges. New York: North-Holland.
Kaplan, R.S. & Norton, D.P. (1992). The Balanced Scorecard: Measures that Drive Performance. Harvard Business Review, 70(1), 71-79.
References for students who are interested in further reading can be found on page 17 of the study guide.