
1 An alternative approach to software testing to enable SimShip for the localisation market, using Shadow. Dr. K Arthur, D Hannan, M Ward. Brandt Technologies.

2 Abstract Our approach to automated software testing is novel: we test several language instances of an application simultaneously, either through direct engineer interaction or through a record/playback script. We also examine the effect of separating the functions of a test engineer into a product specialist and a QA specialist.

3 Shadow Developed by Brandt Technologies. Different from other remote control applications: it allows control of many machines simultaneously. Two control modes: Mimic mode and Exact match mode.

4 Shadow setup

5 SimShip SimShip is the process of shipping the localised product to customers at the same time as the original language product. Delta: the time difference between shipping the original language product and the localised products.

6 Why SimShip? To increase revenue. To avoid loss of revenue. To maintain/increase market share.

7 SimShip issues Unrecoverable costs: – Localised software not taking off in its markets. – Localised software being substandard. – Localised software having been created inefficiently. – A changing build, which means additional content or a different user interface. We can address some of these issues with Shadow.

8 What is quality? Crosby defines quality as conformance to requirements; another common definition is fitness for purpose. ISO 9126 defines characteristics of software quality that can be used in evaluation, including: – Functionality, Usability, Maintainability, Portability. For our purposes, software quality is defined as software that conforms to customer-driven requirements and design specifications.

9 Software testing Software testing involves operating a system or application under controlled conditions and evaluating the results. Testing is performed: – to find defects. – to ensure that the code matches its specification. – to estimate the reliability of the code. – to improve or maintain consistency in quality.

10 Software testing Testing costs, in both schedule and budget. Not performing testing also costs: in 2002 a US government study (NIST) estimated that software bugs cost the US economy $59.5 billion annually.

11 Software testing Testing should be an integral part of the software development process. It is fundamental to eXtreme Programming: never write a line of code without a failing test. The quality of the test process and effort will have an impact on the outcome.
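As a minimal illustration of the test-first rule, a sketch in Python; the function add_vat and its expected behaviour are invented for the example. The test is written first and fails until the production code is filled in.

    import unittest

    def add_vat(net):
        # Hypothetical production function; left as a stub so the test fails first.
        raise NotImplementedError  # replaced with real code only after seeing the test fail

    class TestAddVat(unittest.TestCase):
        def test_adds_21_percent_vat(self):
            self.assertEqual(add_vat(100.0), 121.0)  # red first, then implement

    if __name__ == "__main__":
        unittest.main()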

12 Software testing There are two types of software testing: – Manual testing. – Automated testing. Within these types there are many subdivisions. Either process runs a set of tests designed to assess the quality of the application.

13 Manual testing Running manual tests requires a team of engineers to write and execute a test script. Advantages: – Infrequently run test cases can be cheaper to perform manually. – Ad hoc testing can be very valuable. – Engineers can perform variations on test cases. – It tests product usability.

14 Manual testing Disadvantages: – Time consuming. – Tedious. – Some issues are difficult to reproduce manually on a consistent basis.

15 Automated testing Requires a test script to be written, and a team of specialised engineers to code it. Advantages: – If tests have to be repeated, it is cheaper to reuse them. – Useful for a large testing matrix. – Consistency in producing results. – High productivity: tests can run 24 x 7.

16 Automated testing Disadvantages: – Can be expensive to code from scratch. – Can require specialised skills to code. – It takes longer to write, test, document and automate tests. – Test automation is a software development activity, with all of the implications that carries. – Test cases are fixed. – Automation cannot perform ad hoc testing. – Difficulty in keeping in sync with a changing build.
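To make the "test cases are fixed" point concrete, a small Python sketch (the word_count function and its cases are invented): an automated run replays exactly the listed cases and nothing else, and the script itself is code that must be written, reviewed and maintained.

    import unittest

    # Fixed test data: an automated run replays exactly these cases.
    # An engineer testing ad hoc would also probe inputs outside this list.
    CASES = [
        ("empty string", "", 0),
        ("single word", "Shadow", 1),
        ("two words", "Shadow test", 2),
    ]

    def word_count(text):
        # Hypothetical function under test.
        return len(text.split())

    class TestWordCount(unittest.TestCase):
        def test_fixed_cases(self):
            for name, text, expected in CASES:
                with self.subTest(case=name):
                    self.assertEqual(word_count(text), expected)

    if __name__ == "__main__":
        unittest.main()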

17 Which to use? Neither mode of testing is a silver bullet. Successful software development and localisation should use both manual and automated testing methodologies. The balance between the two can be decided based on budgetary and schedule constraints.

18 Shadow Shadow is a software-testing tool for performing automated and manual tests on original or localised software applications. It allows the user to simultaneously test localised applications running on different language operating systems, or original language products running in different configurations. It is quick to set up and use.

19 Shadow In the following slide we see the Shadow setup. One PC is running 3 VMware machines; each is shown displaying the Start menu.

20 Shadow remote control

21 Shadow interface Shadow is shown in the following slide in control of 4 VMware machines, each running the Catalyst application with a TTK open.

22 Shadow interface

23 Shadow architecture Shadow consists of 3 pieces of software: – Dispatcher server. – Client Viewer. – Client Target.
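Shadow's internals are not published here, so the sketch below is only a guess at the general pattern the three components imply, with every name invented: a Client Viewer sends one input event to the Dispatcher server, which fans it out to every registered Client Target.

    from dataclasses import dataclass

    @dataclass
    class InputEvent:
        kind: str       # e.g. "click" or "keypress"
        x: int = 0
        y: int = 0

    class ClientTarget:
        # Stands in for a machine under test that replays received events.
        def __init__(self, name):
            self.name = name
            self.log = []

        def apply(self, event):
            # A real target would inject the event into its operating system.
            self.log.append(event)

    class Dispatcher:
        # Fans one viewer event out to every registered target.
        def __init__(self):
            self.targets = []

        def register(self, target):
            self.targets.append(target)

        def dispatch(self, event):
            for target in self.targets:
                target.apply(event)

    dispatcher = Dispatcher()
    for lang in ("EN", "DE", "JA"):
        dispatcher.register(ClientTarget(lang))

    # One click from the viewer is mirrored on all three language machines.
    dispatcher.dispatch(InputEvent(kind="click", x=120, y=45))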

24 Shadow architecture

25 Demonstration This demonstration shows Shadow connecting to three Windows XP clients. Shadow can connect to a PC running Windows XP Professional in exactly the same way as it connects to VMware clients running Windows XP Pro.

26 Connection Demo

27 Screenshot demo

28 Shadow differentiators Makes automated testing easier to use, with less programming. Separates the test engineer role into a Product Specialist and a Quality Assurance Specialist. Makes software testing more like the actions of a human user. Accelerates the manual testing process through the unique Shadow user interface. Records screenshot data by default.
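To suggest why default screenshot recording matters for LQA, a small sketch (directory layout and naming entirely invented): captures are filed per language and per step, so a linguist can compare the same dialog across languages side by side.

    import os
    import time

    def screenshot_path(root, language, step):
        # One file per language per test step; the layout is invented
        # but makes cross-language comparison of a given step trivial.
        stamp = time.strftime("%Y%m%d-%H%M%S")
        return os.path.join(root, language, "step%03d-%s.png" % (step, stamp))

    for lang in ("EN", "DE", "JA"):
        print(screenshot_path("lqa_run", lang, step=7))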

29 Case study: Client A Client A produces ERP software. Task list: – Write test scripts. – Update test scripts. – Set up the hardware and software. – Execute the test script on the machines, using both Shadow and WinRunner. – LQA, performed by linguists using the screenshots. – Localisation functional QA using Shadow and WinRunner.

30 Case study: Client A - results (40 screenshots; all times in days)

Task               Shadow   WinRunner   Comment
Write LQA script   3 – 4    3 – 4       Tool independent
Update LQA script  1 – 2    1 – 2       Tool independent
Write TSL script   0        1 – 2       WinRunner only
Execute script     2        1           Shadow and WinRunner
LQA                1 – 2    1 – 2       Tool independent
Functional QA      1 – 2    1 – 2       Tool independent
Total              8 – 12   8 – 13

31 Case study: Client A - results (400 screenshots; all times in days)

Task               Shadow    WinRunner   Comment
Write LQA script   20 – 25   20 – 25     Tool independent
Update LQA script  10 – 15   10 – 15     Tool independent
Write TSL script   0         25 – 30     WinRunner only
Execute script     16        8           Shadow and WinRunner
LQA                5 – 6     5 – 6       Tool independent
Functional QA      9 – 10    9 – 10      Tool independent
Total              60 – 72   77 – 94

32 Case study: Client A - conclusions Shadow and WinRunner take approximately the same time to set up for a small number of screenshots; for a larger number, Shadow is faster. The client setup had 3 machines; this could be improved. WinRunner requires build preparation and use of its "wait for" feature.

33 Case study: Brandt Translations Project: localisation of multimedia tours in several languages. Recent projects include: – 6 tours x 5 languages. – 3 tours x 7 languages. This type of project occurs often, so we need to be efficient, as schedules are tight.

34 Case study: Brandt Brandt uses Shadow both for testing and as an automation tool to perform tasks that need to be repeated frequently. Tasks in the localisation of Captivate tours: – Audio integration. – Text integration. – Font assignment. Recorded scripts perform these tasks.

35 Case study: Brandt Text integration: localised text is placed on every slide in the tour. Audio integration: a single WAV file is imported per slide, in which a voice reads the text. Font assignment: the localised text font has to be consistent across all slides.
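A sketch of the shape such recorded tasks take when replayed over every slide and language; the helper functions are invented stand-ins for recorded Shadow actions, and a real run would drive the Captivate UI on each target machine instead of printing.

    # Hypothetical stand-ins for recorded Shadow actions.
    def apply_localised_text(language, slide):
        print("[%s] slide %d: paste localised text" % (language, slide))

    def import_audio(language, slide):
        print("[%s] slide %d: import %s_%02d.wav" % (language, slide, language, slide))

    def assign_font(language, slide):
        print("[%s] slide %d: set the localised font" % (language, slide))

    LANGUAGES = ("DE", "FR", "JA")   # illustrative subset
    SLIDES = range(1, 31)            # a 30-slide tour, as in the results table

    for slide in SLIDES:
        for lang in LANGUAGES:
            apply_localised_text(lang, slide)
            import_audio(lang, slide)
            assign_font(lang, slide)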

36 Case study: Brandt - results (minutes per 30-slide tour)

Task               Automated   Manual
Audio integration  10          15
Text integration   15          20
Font assignment    10          15

The efficiency is in the parallelism: one automated run drives all language instances at once. Automating repetitive tasks also reduces issues due to human error.
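A back-of-the-envelope calculation from the table above, assuming the automated pass drives all language instances in parallel while manual work scales with the number of languages (seven languages, as in the "3 tours x 7 languages" project):

    AUTOMATED = {"audio": 10, "text": 15, "font": 10}   # minutes per 30-slide tour
    MANUAL = {"audio": 15, "text": 20, "font": 15}

    languages = 7
    manual_total = sum(MANUAL.values()) * languages     # manual scales per language
    auto_total = sum(AUTOMATED.values())                # one parallel pass covers all

    print("manual: %d min, automated in parallel: %d min" % (manual_total, auto_total))
    # -> manual: 350 min, automated in parallel: 35 min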

37 Brandt Demo

38 Case study: Brandt - conclusions Shadow was used as an automation tool for this project. The characteristics that made Shadow efficient here were repeated tasks that are easily done in parallel. Shadow was essential to the effectiveness of the engineering team.

39 Conclusions We looked at the advantages and disadvantages of the different modes of testing; a mix of manual and automated testing is essential. Shadow allows QA to be separated from specialist product knowledge and hardware setup. For localisation, Shadow can take screenshots of the application for linguistic review. Shadow can also be used by an engineer with specialist product knowledge to walk through the different language versions of an application simultaneously.

40 Shadow: Future developments Addition/integration of an OCR module. Enhanced AI modules.
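The OCR module is listed as future work; the sketch below only indicates the kind of check it could enable, using the real pytesseract and Pillow libraries to read text off a saved screenshot and compare it with the expected localised string (paths and strings are invented).

    from PIL import Image
    import pytesseract  # requires the Tesseract OCR engine to be installed

    def screenshot_contains(path, expected, lang="deu"):
        # OCR the saved screenshot and check the localised string is visible.
        text = pytesseract.image_to_string(Image.open(path), lang=lang)
        return expected in text

    # Invented example: verify the German Save button rendered correctly.
    if screenshot_contains("lqa_run/DE/step007.png", "Speichern"):
        print("OK: localised caption found")
    else:
        print("FAIL: caption missing or mis-rendered")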

41 Acknowledgements Gary Winterlich provided the Camtasia demonstrations. A bibliography is included in a forthcoming paper accompanying this presentation at LRC XII. Thank you for your time. Q&A

