
1 Week 16 Technology Adoption Model
CO5021 Systems Development Week 16 Technology Adoption Model

2 How did you feel about this proposed change?
Often, people in organisations are put through poorly applied change. Change can be good, but week 14's lecture showed that it must be managed correctly. In fact, we identified that different groups of people existed within our teaching cohort, each with different behavioural characteristics. As a Systems/Business Analyst, your job is to identify these groups and work up deployment strategies to get them on board. This lecture looks at identifying different groups within a culture, using a number of different techniques.

3 Determining different attitudes and behaviours
We all have different opinions on various matters. The key ambition is to identify who is likely to behave differently from others. This is particularly important for systems analysts, so that development, training and support strategies can be amended to meet the needs of the various groups that are identified. This lecture will introduce academic views on why different groups exist and how we can look out for them.

4 Lecture coverage Frameworks to evaluate users of IT to support development. Framework 1: Rogers' Diffusion of Innovations. Diffusion within social systems. Problematic diffusion – Moore's Chasm. Successful v unsuccessful diffusion – what can make the difference? Framework 2: The Technology Adoption Model. Framework 3: Socio-demographic profiling. Bringing it all together – measuring people's propensity to adopt new innovations.

5 Framework 1: Rogers' Diffusion of Innovations
All products have a lifecycle, from the time they launch to the time they mature and are made obsolete by newer approaches and technologies. For example, B&W TV was made obsolete by colour TV, the dial-up modem by broadband, and MS Office 2007 by Office 2010. In 1962, Everett Rogers identified that new innovations (e.g. radio, TV, electronic calculators) share a common 'diffusion curve'. He published his findings in 'Diffusion of Innovations'.

6 The Diffusion Curve is S-shaped
Diffusion is about innovators, early adopters and the early majority. When operating correctly, communication between these adopter groups yields a full and expansive adoption cycle. When operating poorly, communication between groups is not full enough to encourage adoption by the early majority and later adopters – Moore's Chasm emerges and the new innovation fails to be adopted. Systems analysts don't want this scenario on their hands!

When a new innovation is brought into the workplace or society, provided it yields useful benefits to users, the innovators (nerds!) will examine and learn how to use the new product. Often, at this stage, the product may be quite 'shaky'. For example, the first digital cameras produced awful images, yet innovators, true to form, still kept pressing buttons and enjoying the shaky experience.

Innovators have a clear role to play in the game of diffusion: to identify with the product and sing its praises to the other potential users we'd call early adopters. If things go well, communication between existing adopters reaches out to the wider social system via the early adopters, the late majority and even the notorious laggards. If things go badly, however, even new and good innovations fail to meet the market. Moore described this as Moore's Chasm, the process of technology adoption failure that occurs quite frequently.

A key takeaway for systems analysts is that implementation of a new and innovative business solution requires identification of the innovators and early adopters, so they may be encouraged to communicate their findings to the remaining, and possibly sceptical, population.
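As a rough illustration of the S-shape, the sketch below draws a logistic cumulative adoption curve and lists Rogers' canonical adopter-category shares. The logistic functional form, the rate and midpoint values, and the time scale are illustrative assumptions, not figures from the lecture.

```python
import numpy as np

def cumulative_adoption(t, rate=1.0, midpoint=5.0):
    """Logistic (S-shaped) cumulative adoption curve: fraction adopted by time t."""
    return 1.0 / (1.0 + np.exp(-rate * (t - midpoint)))

t = np.linspace(0, 10, 11)
adopted = cumulative_adoption(t)

# Rogers' canonical adopter categories as shares of the whole population
categories = {
    "innovators": 0.025,
    "early adopters": 0.135,
    "early majority": 0.34,
    "late majority": 0.34,
    "laggards": 0.16,
}

for year, share in zip(t, adopted):
    print(f"year {year:>4.1f}: {share:5.1%} of the population has adopted")
```

Printing the curve year by year makes the slow start, rapid middle and flattening tail of the S-shape easy to see without a chart.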

7 Dealing with Moore’s Chasm
Avoidance is the best solution. Lack of communication is seen as the most significant driver of innovation failure. Within a business context, failure to communicate change to all employees may limit the success of a new and expensive business innovation (e.g. an intranet). To be successful, projects must have (from the prior lecture) good change management: top-down support; communication with users at all stages of the project; identification of user needs (different users perceive things differently, and this must be considered); training; and support for the new technology once it is deployed.

Moore's Chasm is best avoided by ensuring that communication with those impacted by the change is adequate to, firstly, identify users and their various concerns. Nothing annoys people more than feeling 'out of the loop' on the change at hand. When this happens they tend to feel undervalued and lacking in motivation. For example, they may fear the change and reject it due to possibly misguided feelings about the impact it may have on their job. This suggests that we should determine approaches, embedded into the project, that keep people within the sphere of communication. Otherwise people have a habit of 'dreaming up' stories about the change, which may create a problem for the critical innovators and early adopters that we should harness to quicken diffusion within the organisation.

We as analysts must also consider who we are dealing with from a training and support perspective. If we try to understand the needs of the users, they are likely to respond in kind by using the toolset that we hope will raise their productivity.

8 How does good change management affect diffusion?
Faster diffusion – better change management. Slower diffusion – poor change management. Good change management doesn't cost a great deal; in fact, it can save your project. When a project goes well and the product is adopted quickly, it amounts to a leftward shift in the curve.

In an ideal world, when a new innovation launches, if the solution actually works and the innovators and early adopters are sufficiently motivated (e.g. encouraged by managers), they provide positive murmurings to the remaining groups that have not yet adopted. Proper communication of the advantages of the innovation may also have this effect. For example, if people become convinced that their job will become easier if they work with the new system, it may also increase the rate of uptake. If the murmurings are not positive, or for other reasons non-existent, then Moore's Chasm may await your project, which is something we must avoid.

By the way, Rogers' diffusion framework is mostly used to forecast the adoption of new technologies in a large population. It can certainly be seen working in very large companies where a new technology is deployed in an almost voluntary fashion. In cases like this, diffusion can be quickened or slowed by a variety of factors. A key factor, as explored during last week's lecture, was that every project should have top-down support. I was once told of a case in British Telecom where the new CEO wanted to encourage more fluid communication by deploying an all-singing, all-dancing intranet. The system was to run in parallel with the old system, and gradually people were expected to 'diffuse' across to the new system. Unusually for a large company, diffusion was rapid. It was mentioned that the CEO and the project team had encouraged people to 'diffuse' by using the system extensively themselves. You could guess that the old communication system was rather poor too!

I think the key takeaway is to identify with the idea that different groups of people behave differently when something new happens. Interestingly, innovators and early adopters tend to stay that way, and laggards (possibly called technophobes) will tend to be the last ones to drink from the font of technology. That is the way humanity tends to be. Your goal, when developing systems, is to understand the spread of these groups among the users you are developing for. Rogers' diffusion model is just an interesting way of describing it.
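To make the 'leftward shift' concrete, here is a minimal sketch that compares how quickly two scenarios reach majority adoption, using the same logistic curve as before. The rate and midpoint values for each scenario are invented for illustration, not measurements of any real project.

```python
import numpy as np

def time_to_reach(target, rate, midpoint):
    """Invert the logistic adoption curve: time at which the target share is reached."""
    return midpoint - np.log(1.0 / target - 1.0) / rate

# Illustrative parameters only: good change management = faster rate, earlier midpoint
scenarios = {"good change management": (1.5, 3.0), "poor change management": (0.6, 6.0)}

for name, (rate, midpoint) in scenarios.items():
    t50 = time_to_reach(0.50, rate, midpoint)   # innovators through to half the majority
    t84 = time_to_reach(0.84, rate, midpoint)   # everyone except the laggards
    print(f"{name}: 50% adoption at t={t50:.1f}, 84% adoption at t={t84:.1f}")
```

The well-managed scenario hits both milestones earlier, which is exactly what a leftward shift in the diffusion curve represents.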

9 Approaches to understanding the user
During the 1980s a multitude of technology developments took place that did not meet with user approval. Fred Davis in 1989 published a paper on the psychology of technology adoption – the Technology Adoption Model (herein TAM). He identified a key problem in software design: developers wrote packages for businesses assuming high technical competency among users. This was not at all true, especially at a time when household adoption of computers was virtually nil. He proposed TAM as a way of identifying user capabilities during application design work. As it turns out, the model is useful in many circumstances where you want to identify groups with different behavioural characteristics – that is, the innovators, early majority and so on.

As students of computer science, you need to be aware of one item of importance: computer science as an academic discipline is actually very young. When we consider that economics and the sciences have roots spanning hundreds of years, it makes computer science look quite juvenile indeed. And, as with all of our young, mistakes are made. When computing began going mainstream in larger businesses during the 1980s, many software houses emerged which began developing business packages, such as word processors, accounting packages and so on. The SDLC as we know it today didn't really exist, and most programmers had long beards and were truly nerd-like (apologies to nerds, who are now pleasantly cool!). The tools they developed tended to be built around their own persona, which meant systems were overly complex and difficult to use. Typical users were much less enthusiastic about the new innovations and, as a result, many products hit Moore's Chasm, where the general business population failed to use the technology.

During the 1980s researchers began looking at the problem of adoption failure as an academic study. One person, Professor Fred Davis, published a seminal paper on his undertaking to explain how people behave around technology. Even then, the paper's keywords were: user acceptance, end user computing and user measurement – something systems analysts apply today. TAM was proposed as a way of identifying user capabilities during design work. Since 1989 there have been many variations on Davis's initial model. I'll introduce you to one such model and how we can measure user characteristics based on its outcomes.

10 Apologies for the old slide; I've not had time to remake it with our branding. The TAM model above contains three key features – Usefulness, Ease of Use and Enjoyment – and was focused on a study of the adoption and use of the internet. The basic idea underlying the framework is that the average person will tend to adopt and use a system (e.g. the internet) provided it is easy to use (which is fundamental!). The only time this may be violated is when additional complexity is warranted by the increase in usefulness that may arise. In a workplace setting, someone may take time to learn a new system if it improves their productivity; in this case, even highly complex systems may be adopted. Also, if the ease of use of a system is high, then a person is likely to enjoy using it to a greater extent, thereby improving the likelihood of adoption and use. These findings are not astounding – they are common sense – but they have been tested in the field and, to a good extent, they work in isolating people's perceptions towards various technologies. The question we now address is how we may measure user perceptions of usefulness, ease of use and enjoyment. Actually, it isn't too complex at all.
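Purely to illustrate the direction of these relationships – not Davis's actual model or its published weights – a toy scoring sketch is shown below. The weights, the threshold and the example scores are all invented for demonstration.

```python
# Hypothetical perception scores on a 1-5 scale (illustrative values, not survey data)
user = {"usefulness": 4.2, "ease_of_use": 2.5, "enjoyment": 3.0}

def likely_to_adopt(p, usefulness_weight=0.5, ease_weight=0.3, enjoyment_weight=0.2):
    """Naive weighted score standing in for 'intention to use'; weights are assumptions."""
    score = (usefulness_weight * p["usefulness"]
             + ease_weight * p["ease_of_use"]
             + enjoyment_weight * p["enjoyment"])
    return score, score >= 3.5  # arbitrary cut-off for illustration only

score, adopt = likely_to_adopt(user)
print(f"intention score: {score:.2f}, likely adopter: {adopt}")
```

Notice that high usefulness can partly offset low ease of use in this toy scoring, echoing the point that complex systems may still be adopted when they clearly improve productivity.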

11 Measuring TAM perceptions – The survey approach
Let's see TAM embedded in a survey – please complete the following short questionnaire. Results: The survey approach to measuring TAM perceptions is applied when you envisage a large sample (say, 150 or more responses). Both sets of questions provided above, one for computers and one for the internet, were applied to a survey in 2005, and from the results we were able to identify various user groups ranging from those with an absolute distaste for technology (laggards) to those with an absolute love of technology (innovators). The style of question used here is called a Likert scale question, in that people were asked to respond on a scale from Strongly Disagree to Strongly Agree. This is the same question style used by TAM researchers the world over, so it's a good starting point. Below the question images you'll see a link. This targets a short survey on your use or non-use of Facebook. Please complete it now, as I hope (although it's experimental!) to highlight the value of these tools for understanding different people's needs. Once you have completed the survey we'll have a discussion of the results. By the way, the results will be a combination of Manx and Chester students and staff from both institutions. [Allow time to take survey – then look at results for any interesting patterns and discuss]
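As a sketch of how such responses might be turned into TAM construct scores once collected, the snippet below averages hypothetical 1-5 Likert items per construct. The item names, groupings and values are invented; they are not the wording or data of the 2005 survey.

```python
import pandas as pd

# Hypothetical responses on a 1-5 Likert scale (1 = strongly disagree, 5 = strongly agree);
# column names are assumptions, not the wording of the actual questionnaire items.
responses = pd.DataFrame({
    "useful_1":    [5, 2, 4, 1],
    "useful_2":    [4, 1, 5, 2],
    "ease_1":      [4, 2, 3, 1],
    "ease_2":      [5, 1, 4, 2],
    "enjoyment_1": [3, 1, 4, 1],
})

constructs = {
    "usefulness":  ["useful_1", "useful_2"],
    "ease_of_use": ["ease_1", "ease_2"],
    "enjoyment":   ["enjoyment_1"],
}

# Average the items belonging to each TAM construct for every respondent
scores = pd.DataFrame({name: responses[cols].mean(axis=1) for name, cols in constructs.items()})
print(scores)
```

Each row of the resulting table is one respondent's usefulness, ease-of-use and enjoyment score, which is the kind of per-person summary the later slides group into cohorts.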

12 Measuring using TAM – Focus group or Interview based
Very often we work in relatively small groups, which makes surveying much less useful. In this situation you can try to 'gently uncover' perceptions towards the measures during discussions. Be careful not to upset anyone… older generations don't like to be thought of as tech-illiterate by "young upstarts". The TAM perceptions were initially designed by Davis to be used in large company settings where surveying was feasible. More normally this is not the case, and then only interviews or JADs (a type of focus group) are realistic options for you. You should build in the questions gently, however, as some people can become upset if they feel they've been identified as a 'laggard'. Nevertheless, some questions around the themes of usefulness, ease of use and enjoyment may lead to some interesting results. A typical finding from older generations is that they find computerised tools useful, but not easy to use. Hence, they'll try the innovation and then walk away from it.

13 The role of demographics to identify user characteristics
A second approach to use when evaluating a user group for attitudes towards technology is to explore their socio-demographic profile. Key drivers of household technology choice are educational attainment, income and age. For example, if your target users are from a low-income, lower educational attainment background and are generally middle-aged, then you need development and deployment strategies suited to them: easy-to-use software, solid training and solid post-deployment support.

For many years social scientists have used socio-demographic profiling to identify people with different behavioural traits. We can take advantage of this as systems analysts by identifying groups that we know may be hard to deal with and who may require specialist development to make the technology work for them. It has been published widely that educational attainment, income (i.e. wages) and age are strong players in defining many aspects of us; in particular, they define us as technology users. Much of the academic literature identifies that educational attainment drives use of technology, as those who have spent more time in education are more likely to have come into contact with complex computing during their studies, and their jobs are likely to be more technology-intensive than those of people without degrees. Income is also known to influence the propensity to adopt technology, as those with more money tend to add more technology to their lives. Income is also intertwined with educational attainment, as those who are better qualified tend to earn more. Finally, and rather carefully, age. Age is a known strong factor in technology adoption, as older generations did not grow up with computers the way younger generations do today. This means that they are not as native to the keyboard as we may be and will need more time to develop skills around a new system. In some cases it would be wise to design systems around older generations if they are your predominant user base.

The example on the slide highlights that if you come across a development aimed at a low-income, lower educational attainment, middle-aged group, you would most certainly need to adjust the development to meet their needs. You should also consider extensive training and post-deployment support. If, however, the group the technology was targeted at was average age 25, higher earning and well educated, then you could adjust the development accordingly and reduce the intensity of training and post-deployment support. Fail to get this right and Moore's Chasm awaits!

14 Educational attainment and the propensity to adopt internet
The next few slides provide evidence of how strong the socio-demographic factors are in identifying different groups with differing propensities to adopt technologies generally. The chart here shows the percentage of households with internet (split by dial-up and broadband) against educational attainment. It can readily be seen that internet adoption is heavily influenced by educational attainment, with approximately 85% of degree-educated households having internet at home versus just 15% of households with no formal education. Source:

15 Income and propensity to adopt internet
In this slide we see that households without internet have a much lower income than those with it (that is, in those days, those with either a dial-up or broadband account). This implies that if you are dealing with a low-income group, you may need to account for a low propensity to adopt technology as guidance on (a) the design of the technology (that is, it must be easy to use) and (b) the training and support levels to be provided.

16 Bringing TAM and Socio-demographics together
There is a good reason why you may opt to evaluate both the socio-demographics of the population and TAM simultaneously: it helps us as systems analysts work out 'who is who' within our population. For example, it is rather a blunt instrument to say that all people from low-income, low educational attainment backgrounds are laggards. Generally it may be the case, but we will always find individuals who buck the trend. We call this variation heterogeneity, and we should examine heterogeneity in larger populations as it can guide the deployment strategy.

In table 2, to make the cross-tab feasible, we grouped TAM using a process called cluster analysis (you won't have covered this in Data Analysis, so just assume the process works for now) on the three computer perception questions: usefulness, ease of use and enjoyment. This allows us to determine four cohorts within the survey data who had significantly different psychological attitudes towards ICT. TAM = 1 is the group that neither enjoyed computer technology, found it useful, nor found it easy to use – these can be thought of as the laggards. At the upper end, TAM = 3 & 4 are a combination of innovators (TAM = 4) and early adopters (TAM = 3). There were very few TAM = 4 respondents, so we grouped them with TAM = 3 to make the Chi-square test reliable.

The table highlights a cross-tabulation between income and TAM and how household computer adoption was affected. So, for TAM = 1 and income less than £10,000, computer adoption was 17%. In the same TAM category, but for the highest income group, this rises to 68%. This shows that even those with low TAM outcomes can still be attracted into the 'technology game' providing they have the finances to afford it. More interestingly, however, the punch line is that even among the lowest-income households, some are still innovators and early adopters (TAM = 3 & 4), of whom 77% have a computer in the household. This group may be important for your deployment: something you can use as a seed to encourage the rest into adopting and using your system.

Note that for educational attainment (not unrelated to income) and age, the same pattern is likely to apply, meaning that even some of the older generation will be innovators and early adopters; you just need to evaluate this possibility when determining the feasibility of your project. The following slide provides the evidence for this, although age is not considered. For example, if we find that among our low-income, low educational attainment, middle-aged group, 10% of them love using computerised systems, we may be able to take advantage of them by using them for initial training, which they may then cascade into the wider cohort.
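For readers curious how such a grouping and cross-tabulation might be produced, here is a minimal sketch on synthetic data: it clusters three TAM perception scores into four cohorts, cross-tabulates cohort against an income band, and runs a Chi-square test of association. All data, column names and band labels are invented; this is not the 2005 dataset or the lecture's actual analysis.

```python
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)
n = 300

# Synthetic stand-in data: three TAM perception scores plus an income band per respondent
df = pd.DataFrame({
    "usefulness":  rng.integers(1, 6, n),
    "ease_of_use": rng.integers(1, 6, n),
    "enjoyment":   rng.integers(1, 6, n),
    "income_band": rng.choice(["<10k", "10-25k", "25k+"], n),
})

# Group respondents into four TAM cohorts from their perception scores
km = KMeans(n_clusters=4, n_init=10, random_state=0)
df["tam_group"] = km.fit_predict(df[["usefulness", "ease_of_use", "enjoyment"]])

# Cross-tabulate TAM cohort against income band and test for association
table = pd.crosstab(df["tam_group"], df["income_band"])
chi2, p, dof, _ = chi2_contingency(table)
print(table)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")
```

On real survey data the interesting output is the cross-tab itself: it lets you see whether, say, a low-income band still contains a visible pocket of high-TAM respondents to seed your deployment.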

17 Source: http://eprints.lancs.ac.uk/48884/1/Document.pdf
TAM and Socio-demographics together. The table above provides evidence of the usefulness of both the socio-demographics (that is, educational attainment, household income and the presence of children in the household) and TAM. The model presented above is generated in much the same way as regression (remember data analysis last year?), but its output is the probability of a given event occurring. In this case the model is attempting to determine how the socio-demographics and the TAM measure together impact whether a household adopts a personal computer. Big numbers mean a big impact.

So the marginal effect on income at '£25,000 or over' (0.2986) means that a household within this income bracket, even with only a basic education, would have a probability of about 30% of being found to own a computer. Should they come from a highly educated background, we add the marginal effects of university education (0.2859) and of the £25,000 or over income category (0.2986), which implies that this type of household would have a probability of about 59% of being found to own a computer.

Now enter TAM. This variable was created from the TAM model shown before. Four different types of person were identified, ranging from low TAM (that is, people who tend to dislike technology) to high TAM, those that LOVE technology. This means that for those who love technology the TAM contribution is 0.04 × 4 = 0.16. Or, put another way, for our rich and highly educated household the probability of finding a computer in their household rises from 59% to 75%. Given that this data was collected during 2005, and that household adoption of computers was around 60%, these values don't appear to be too far wrong.

Question: What is the probability of a household with income of £10 to £14.9K, general normal schooling and a high TAM rating of 4 (loves technology) owning a computer? Answer: approximately 31%, found by adding the relevant marginal effects from the table to the TAM contribution of 0.16.

Again, this shows the value of socio-demographic and psychological factors in evaluating technology environments. The question we now face is how we can gather information, as systems analysts, that would enable us either to better design computer systems (i.e. built around the user) or at least to evaluate those users who'll need more support should we implement a new computer system. Now, I'm not saying you have to generate a big survey and estimate a fancy model – you most certainly don't – but you do need to know how to ask the right questions of the right group.
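To show the arithmetic of the worked example in code, here is a rough back-of-envelope sketch that simply sums the marginal effects quoted above, treating the omitted baseline categories as contributing roughly zero (as the slide's own arithmetic implies). It reproduces the 59% and 75% figures up to rounding; the function name and this additive simplification are mine, not the published model.

```python
# Marginal effects quoted on the slide (probability points added on top of the baseline)
marginal_effects = {
    "income_25k_or_over": 0.2986,
    "university_education": 0.2859,
    "tam_per_level": 0.04,   # multiplied by the TAM group (0 ... 4, 4 = loves technology)
}

def p_owns_computer(income_25k_plus: bool, university: bool, tam_level: int) -> float:
    """Rough back-of-envelope probability: sum the relevant marginal effects."""
    p = 0.0
    if income_25k_plus:
        p += marginal_effects["income_25k_or_over"]
    if university:
        p += marginal_effects["university_education"]
    p += marginal_effects["tam_per_level"] * tam_level
    return p

# Reproduces the worked example up to rounding: ~58% without the TAM effect, ~74% with TAM = 4
print(f"{p_owns_computer(True, True, 0):.0%}")
print(f"{p_owns_computer(True, True, 4):.0%}")
```

A real probability model (e.g. probit or logit) does not add marginal effects quite this literally, but the sketch captures how the slide walks from the income effect to the education effect and finally to the TAM contribution.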

18 Summary Rogers' diffusion model is a healthy way to think about new innovations being deployed. Innovators and early adopters are critical to successful deployment. To identify, say, innovators and early adopters, TAM and socio-demographic profiling are useful methods, with a strong academic background in their use. Above all, the research provided here might give you an interest in conducting dissertation research in this area.

