Software Quality Management Dr.S.Sridhar, Ph.D.(JNUD), RACI(Paris, NICE), RMR(USA), RZFM(Germany) Which of these would be the better way to do a project:
- Do it right as quickly as possible
- Do it quickly, as right as time allows
Setting out to do it quickly, as right as time allows, not only provides a perfect excuse for not bothering too much about quality, it also tends to deliver exactly that: poor quality. Set out to do it right as quickly as possible - that should mean the project delivers better quality and gets done more quickly.
Poor quality in systems:
- errors in live running
- system crashes
- floods of improvement requests
- unhappy users
- overtime during testing
- mushiness
Mushiness? Nothing seems to be firm, pinned down or disciplined. It can be an indication that quality, amongst other things, isn't being managed.
What causes errors in systems?
- continuously changing requirements
- no clear "manufacturing process": no-one is quite sure what should be done at each stage
- lack of testing
- no process for removing errors
- and many more
Even in some large projects, there are huge numbers of errors left to be found at the end. And some project teams never ask the question: what causes us to make errors?
When building anything complicated, the errors probably won't be evenly distributed. So often people say they want to deliver good quality, but in practice, on the ground, they are doing little or nothing to achieve it. What is quality in the project context? Delivering what the customer/user asked for in terms of functionality, auditability, enhanceability, performance, etc., and meeting the requirement to deliver on time and on budget.
If the customer wants a system with 1000 bugs in it and we deliver one with 1000 bugs in it we have met the customer's requirement - a quality job has been done. If the customer, by implication, wanted a system with few or no bugs and we deliver one with 1000 bugs we have failed. Can we guarantee zero defects in software? On the contrary, we can be almost certain there will be some defects - we can almost guarantee we will fail to meet the zero defect target.
Quality should not be something we hope for vaguely, and we should not only measure it after delivery - that's rather too late. Quality should have objective, even numerical, targets set at the outset of the project, and a plan for achieving those targets. Quality should be planned, predicted and measured.
The user functions design (UFD) is sort of right, but not complete or correct, and not fully understood or agreed by the users. Then during the build stage all the problems start coming up. And sometimes there is a huge panic peak in testing, where a very large number of errors have to be tested out.
When people get the quality message they invest a lot more during the requirements step: rigorous, in-house sessions to identify business processes fully and in detail, and then time and money to check that the requirements are right. At the UFD stage it is very hard to over-invest in getting it right. The result: no panic in testing, and many fewer problems when the system goes live. What you invest up front in getting the requirements and design right you will almost certainly more than get back in build and test - and the real benefit comes when the system goes live.
Unfortunately, the notion that going for quality will make the project cost less is not how it feels at the beginning: all people perceive is this 'enormous' cost of being meticulous and of checking, checking, checking. The easy option is not to fight to incur that upfront cost, and instead to fire-fight the 'inevitable' problems at the end of the project. When talking to business users, a typical saying is: "We should improve quality, we are getting far too many defects, far too many bugs, far too many crashes." And the users have no idea what you're talking about - they don't know what these words mean, and so they aren't interested. How might we interest the sponsor in quality? Talk about money. Some bugs found in live running are cheaply and easily fixed. Others can be very expensive indeed, not so much for IT to fix, but expensive for the business. We are all aiming for no errors in live running - they can have expensive consequences.
Finding errors in system test and user acceptance test isn't quite as expensive. Ideally there would be no errors left to be found in system test or user test. If programmers can find errors when testing units of code, it's cheaper still. Finding errors by inspecting requirements and design documents is very cost-effective, but it certainly isn't free. The ultimate aim of a quality programme: no error is ever made at any stage of any project.
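The cost gradient described above can be sketched as a simple calculation. The multipliers below are illustrative assumptions, not figures from this presentation; the only point they make is that the same error gets more expensive the later it is found.

```python
# Assumed relative cost of fixing one error, by the phase in which it is
# found (illustrative multipliers only - real ratios vary by project).
RELATIVE_COST = {
    "requirements/design inspection": 1,
    "unit test": 5,
    "system/acceptance test": 10,
    "live running": 50,
}

def fix_cost(phase, base_cost=100.0):
    """Estimated cost of fixing one error found in `phase`, given the
    cost (base_cost) of fixing the same error at inspection time."""
    return base_cost * RELATIVE_COST[phase]
```

With these assumed numbers, an error that costs 100 to remove at inspection costs 5,000 if it survives into live running - which is the whole argument for investing up front.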
Quality planning In the PDD or stage agreement, under the heading 'quality plan' we could say: 'in this project we will not check anything, we will not do any testing and the errors will be found by our customers.' That is a perfectly valid quality plan. It's not a very good one, but it is a very clear statement of what we intend to do, or rather not to do, quality-wise. But we hope you'd be saying something more like this: 'we will check the requirements and UFD documents like this... we will analyse error causes like this... we'll measure quality like this... and these are the team's quality related targets...'.
In a small project, deciding what should be checked by whom is not too difficult. In a large project, deciding who should check what and building quality checking tasks into the plan can be complicated, which is why some project managers get a specialist to do it. One idea: measure the time gap between when errors are introduced and when they are found. If, on average, it's a short gap, financially reward the project team; if it's a long gap (and all the errors are found near or after the end of the project), financially penalise the team.
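The time-gap measure just described could be computed along these lines. This is a hedged sketch: the error-log format and the dates in it are hypothetical.

```python
from datetime import date

# Hypothetical error log: when each error was introduced and when found.
errors = [
    {"introduced": date(2024, 1, 10), "found": date(2024, 1, 17)},
    {"introduced": date(2024, 1, 12), "found": date(2024, 3, 1)},
    {"introduced": date(2024, 2, 1),  "found": date(2024, 2, 3)},
]

def average_detection_gap_days(errors):
    """Average days between an error being introduced and being found.
    The shorter this gap, the better the team is doing."""
    gaps = [(e["found"] - e["introduced"]).days for e in errors]
    return sum(gaps) / len(gaps)
```

In practice the hard part is not the arithmetic but recording, for each error, which stage introduced it - which is exactly the error-cause analysis the presentation argues for.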
Quality in all projects Within a company it could be that some projects deliver excellent quality while others deliver terrible quality. Should we insist that all projects include appropriate quality checking activities in their plans? Or should we leave it to project managers and hope they plan quality tasks? One rule is that at the start of each stage the project manager must review his plan with project support, so that in a complex project the quality tasks are properly planned.
Error cost in business case The project manager must estimate, perhaps based upon past projects, how many errors might be found in live running, and multiply by an average cost per error. Now the sponsor is interested in quality. The project manager then describes how he could make fewer errors. And you will need the sponsor's support for this, because much of that up-front cost will fall upon his people - the users.
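The sponsor's arithmetic above is simple enough to sketch. The figures used here are assumptions for illustration only, not data from the presentation:

```python
def live_error_cost(expected_errors, avg_cost_per_error):
    """Business-case estimate: expected number of errors in live running
    multiplied by an average business cost per error (both figures
    estimated, perhaps from past projects)."""
    return expected_errors * avg_cost_per_error

# e.g. an assumed 200 live errors at an assumed average of 2,000 each
estimate = live_error_cost(200, 2000.0)  # 400000.0
```

A single number like this - the expected business cost of live errors - is what turns "too many bugs" into something the sponsor will act on.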
Team quality briefing It is a good idea to have a kick-off meeting at the beginning of each project stage. The sponsor is interested in quality, not just any old thing delivered on the right date. The ground is now fertile for the project manager to run through the quality plan: "... we are going to measure and analyse errors like this... we are going to analyse and address error causes like this..." The team knows that the sponsor wants good quality, and also that it is in their personal interest to deliver good quality.
Inspections An inspection is a meeting to identify errors and omissions in a piece of work. Each section is inspected as soon as it is produced. Because the teams are essentially checking each other's work, inspections are also a great aid to understanding what's going on elsewhere in the project. The inspection is attended by the author, a moderator to chair the meeting, and a number of inspectors.
There is then a meeting - another task in each attendee's work plan. The moderator asks those present: "how long did you spend preparing for this inspection?" If they all say "no time at all", the inspection is cancelled. If they have prepared, you might compliment them and ask them to quickly run through all the errors they found. The moderator has to make sure inspections are done properly, not just gone through as a formality.
Sometimes moderators will invite someone to read the material out loud. This is a meeting to record errors and possible errors, not a meeting to have long discussions about how things should be corrected. The author makes corrections after the meeting and, strictly speaking, the moderator should check that they have. Again, this correction task will be in the author's work plan. Occasionally there are so many errors that major rework is needed and the moderator decides a re-inspection is required.
The moderator records the total cost of the inspection - preparation hours plus meeting hours - and the number of errors found. Only real errors are counted. As a rule of thumb, it's five times cheaper to find and fix errors via UFD inspection than via system testing. In principle any project deliverable can be inspected, but the biggest task will be inspecting the requirements and UFD documents.
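The moderator's bookkeeping above amounts to a cost-per-error figure, which might be computed like this. The hourly rate and the example numbers are hypothetical:

```python
def cost_per_error_found(prep_hours, meeting_hours, hourly_rate, errors_found):
    """Total inspection cost (preparation plus meeting time, priced at an
    assumed hourly rate) divided by the number of real errors found."""
    total_cost = (prep_hours + meeting_hours) * hourly_rate
    return total_cost / errors_found

# e.g. 10 hours' preparation, a 2-hour meeting, 50/hour, 12 real errors
per_error = cost_per_error_found(10, 2, 50.0, 12)  # 50.0
```

Tracked over many inspections, this figure is what lets a project compare inspection against system testing as a way of removing errors.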
And finally:
- Everyone in a project team should have a written quality objective upon which they know their next pay rise partly depends.
- Set out with the aim of getting it right first time, every time, but have a secondary aim of finding any errors as soon as possible after they are made.
- Build good quality in. No manufacturer ultimately succeeds by throwing bad-quality things together and then trying to test out the bad quality at the end of the production line.
- Plan quality checking tasks into the project plan. It may feel expensive at the outset but will make the project cost less in the end: rework is expensive.
- Identify error causes and take action to reduce them.
There is only one reason for driving for quality: it saves money. Crosby said quality is free - it's better than that!