Quality of System requirements


Quality of System requirements 1 Performance The performance of a Web service, and therefore of Solution 2, involves the speed at which a request can be processed and serviced. This requirement can be determined by measurements such as throughput and latency. How? Throughput is the measure of the number of requests serviced in a given time. Latency is the delay experienced from when a request is submitted to when a response is initiated. Both the response time and the throughput depend on the workload the Web server is experiencing at the time. Latency and throughput can both be measured using timestamps recorded at the request and response times.
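The timestamp-based measurement described above can be sketched as follows. This is a minimal illustration, not part of the original design; the timestamps and the two-second measurement window are hypothetical.

```python
from datetime import datetime, timedelta

def latency(request_time, response_time):
    """Latency: delay from when a request is submitted to when the response starts."""
    return (response_time - request_time).total_seconds()

def throughput(request_times, window_seconds):
    """Throughput: number of requests serviced in a given time window."""
    return len(request_times) / window_seconds

# Hypothetical timestamps: three requests within a 2-second window,
# each answered 120 ms after submission.
t0 = datetime(2024, 1, 1, 12, 0, 0)
requests = [t0, t0 + timedelta(seconds=0.5), t0 + timedelta(seconds=1.5)]
responses = [t + timedelta(milliseconds=120) for t in requests]

latencies = [latency(rq, rs) for rq, rs in zip(requests, responses)]
print(latencies)                  # per-request latency in seconds
print(throughput(requests, 2.0))  # requests per second
```

In a real deployment the timestamps would come from server logs rather than being constructed in code.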

Quality of System requirements 2 Reliability Reliability measures the quality of Solution 2 in terms of performance, given an amount of time and the network conditions, while maintaining a high service quality. It can also be defined by the number of failures per day and the medium of delivery. Reliability shows the percentage of times a request is completed by Solution 2 successfully, or the percentage of times a request has failed. The count of failures can be based on the number of dropped deliveries, duplicate deliveries, faulty message deliveries, and out-of-order deliveries. An event either succeeds or fails and therefore takes the value 1 or 0. How? Web Service Reliability (WS-Reliability) is a recent specification for open, reliable Web service messaging. WS-Reliability is embedded into SOAP as an extension, rather than being tied to a transport-level protocol. The specification provides reliability in addition to interoperability, allowing communication in a platform- and vendor-independent manner [2].
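Since each delivery event takes the value 1 (success) or 0 (failure), the reliability percentage described above reduces to a success ratio over the event log. A minimal sketch, with a hypothetical one-day event log:

```python
def reliability(outcomes):
    """Fraction of requests completed successfully.
    Each outcome is 1 (success) or 0 (failure: a dropped, duplicate,
    faulty, or out-of-order delivery)."""
    if not outcomes:
        raise ValueError("no events recorded")
    return sum(outcomes) / len(outcomes)

# Hypothetical log for one day: 8 successes, 2 failures
events = [1, 1, 1, 0, 1, 1, 1, 0, 1, 1]
print(f"reliability = {reliability(events):.0%}")
```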

Quality of System requirements 3 Scalability Scalability defines how expandable Solution 2 is. Solution 2 can be introduced to new interfaces and techniques, which makes keeping the service up to date a necessity. Solution 2 should be able to handle heavy load while ensuring that the performance, in terms of the response time experienced by its clients, is not objectionable. How? The Performance Non-Scalability Likelihood (PNL) metric is used to predict whether the system will be able to withstand higher loads of traffic without its performance being affected. The metric calculates the intensity of the loads at which the system cannot perform without degrading the response time and throughput. The calculation of PNL involves generating potential workloads and studying the behaviour of the system, which approximates how the system would react to such varying workloads. If the system crashes, that shows it is not scalable enough to accommodate potential future workloads.
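The source does not give the PNL formula, so the following is only a generic sketch of the workload-generation idea: sweep a range of synthetic loads and find the lowest load at which response time degrades past an acceptable limit. The response-time model, capacity, and threshold are all invented for illustration.

```python
def simulate_response_time(load, capacity=100, base_ms=50):
    """Toy queueing-style model: response time grows sharply as load
    approaches capacity. A real study would measure a live system."""
    utilisation = min(load / capacity, 0.99)
    return base_ms / (1 - utilisation)

def degradation_load(max_acceptable_ms, capacity=100):
    """Smallest generated workload at which the (simulated) response
    time exceeds the acceptable limit; None if it never does."""
    for load in range(1, capacity + 1):
        if simulate_response_time(load, capacity) > max_acceptable_ms:
            return load
    return None

# With a 200 ms limit, find where the modelled system stops coping.
print(degradation_load(200))
```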

Quality of System requirements 4 Accuracy Accuracy is defined as the degree to which Solution 2 gives correct results for the requests it receives. How? An experiment can be conducted to measure the accuracy of the system by calculating the standard deviation of the reliability. The number of errors generated by Solution 2, the number of fatal errors, and their frequency determine the accuracy of the system. The closer the value is to zero, the more accurate the measurement is considered.
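The "standard deviation of the reliability" experiment can be sketched directly: take repeated reliability measurements and compute their spread. The sample values below are hypothetical.

```python
from statistics import pstdev

def accuracy_score(reliability_samples):
    """Population standard deviation of repeated reliability measurements.
    The closer the value is to zero, the more consistent, and hence
    more accurate, the system is considered."""
    return pstdev(reliability_samples)

# Hypothetical reliability readings from five measurement runs
samples = [0.97, 0.98, 0.97, 0.96, 0.97]
print(round(accuracy_score(samples), 4))
```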

Quality of System requirements 5 Integrity Integrity guarantees that any modifications to the system based on Semantic Web Services are performed in an authorized manner. Data integrity assures that data is not corrupted during transfer and, if it is corrupted, that there are enough mechanisms in the design to detect such modifications. How? Data integrity is the measure of a Web service’s accurate transactional and data delivery abilities. Received data messages are verified to confirm that they have not been modified in transit. There are a number of tools on the market, such as SIFT, that can collect and monitor the data being sent and received between the communicating parties. These tools can be used to monitor the number of faulty transactions that go unidentified and the data messages received whose checksum or hash does not tally. Data integrity can only take the value 0 or 1: data either has integrity or it does not; there is no middle ground.
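The checksum check underlying this 0-or-1 verdict can be sketched with a standard hash. This is a generic illustration (SHA-256 via Python's `hashlib`), not the mechanism of any specific tool mentioned above; the message payload is hypothetical.

```python
import hashlib

def checksum(payload: bytes) -> str:
    """Hash computed by the sender and attached to the message."""
    return hashlib.sha256(payload).hexdigest()

def has_integrity(payload: bytes, received_checksum: str) -> int:
    """1 if the received message's hash tallies with the attached
    checksum (intact), 0 if it does not (modified in transit)."""
    return 1 if checksum(payload) == received_checksum else 0

message = b'{"order_id": 42, "amount": 19.99}'
sent = checksum(message)
print(has_integrity(message, sent))         # intact message
print(has_integrity(message + b"x", sent))  # tampered message
```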

Quality of System requirements 6 Availability Availability is the probability that Solution 2 is up and in a ready-to-use state. High availability assures that there are as few system or server failures as possible, even during peak times when there is heavy traffic to and from the server, and that the service is available at all times. How? Since the system is either available or unavailable, the time remaining after subtracting the down time can be termed the “up time”, the time that the system is available. Because down time is smaller than up time and therefore easier to measure, calculating down time helps us measure the availability of the system. Keeping track of all events that failed during operation can reveal the down time.
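The down-time-first calculation described above can be sketched as follows; the outage durations are hypothetical.

```python
def availability(total_seconds, outage_durations):
    """Availability = up time / total time, where up time is the total
    period minus the recorded down time (easier to measure, since
    down time is much smaller than up time)."""
    down = sum(outage_durations)
    return (total_seconds - down) / total_seconds

# Hypothetical day with two outages revealed by the failure log:
# 90 seconds and 210 seconds of down time.
day = 24 * 60 * 60
print(f"availability = {availability(day, [90, 210]):.4%}")
```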

Quality of System requirements 7 Accessibility Accessibility is a measure of the success rate of a service instantiation at a given time. Solution 2, for example, might not be accessible even though it is still available: it may be up and running but unable to process a request due to a high workload. Accessibility in turn depends on how scalable the Solution 2 system design is, because a highly scalable system serves requests irrespective of their volume. How? Accessibility is the ratio of the number of successful responses received from the server to the number of request messages sent by the clients.
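The ratio defined above is straightforward to compute; the request counts below are hypothetical.

```python
def accessibility(successful_responses, requests_sent):
    """Ratio of successful responses received from the server to the
    number of request messages sent by the clients."""
    if requests_sent == 0:
        raise ValueError("no requests sent")
    return successful_responses / requests_sent

# Hypothetical run: 970 of 1000 client requests received a
# successful response; the rest timed out under heavy load.
print(accessibility(970, 1000))
```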

Quality of System requirements 8 Interoperability Web services are accessed by thousands of clients around the world using different system architectures and different operating systems. Interoperability within Solution 2 therefore means that the solution can be used by any system, irrespective of operating system or system architecture, and that accurate and identical results are rendered in every environment. How? Interoperability can be calculated as the ratio of the total number of environments in which the Web service runs to the total number of possible environments that could be used. This interoperability value measures the successful execution of Solution 2 in different environments such as operating systems, programming languages, and hardware types.
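A minimal sketch of the environment-ratio calculation; the environment names are invented for illustration.

```python
def interoperability(environments_passed, environments_tested):
    """Ratio of environments in which the service executed correctly
    to the total number of environments tested."""
    if not environments_tested:
        raise ValueError("no environments tested")
    return len(environments_passed) / len(environments_tested)

# Hypothetical matrix of operating system / platform combinations
tested = ["linux/jvm", "windows/.net", "macos/python", "linux/go"]
passed = ["linux/jvm", "windows/.net", "linux/go"]
print(interoperability(passed, tested))
```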

Quality of System requirements 9 Unit testing The primary goal of unit testing is to take the smallest piece of testable software in the application, isolate it from the remainder of the code, and determine whether it behaves exactly as expected. How? Each unit is tested separately before being integrated into modules, at which point the interfaces between modules are tested. The effectiveness of unit testing lies in the large percentage of defects that are identified during its use.
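A minimal unit-test sketch of the idea above: one small function (the hypothetical `parse_amount`, invented for illustration) is tested in isolation, with no other modules involved.

```python
import unittest

def parse_amount(text):
    """Smallest testable unit: convert a decimal amount string to cents."""
    value = float(text)
    if value < 0:
        raise ValueError("amount must be non-negative")
    return round(value * 100)

class ParseAmountTest(unittest.TestCase):
    def test_converts_to_cents(self):
        self.assertEqual(parse_amount("19.99"), 1999)

    def test_rejects_negative(self):
        with self.assertRaises(ValueError):
            parse_amount("-1")

# Run the unit tests for this one isolated unit.
runner = unittest.TextTestRunner(verbosity=0)
result = runner.run(unittest.TestLoader().loadTestsFromTestCase(ParseAmountTest))
print("all unit tests passed:", result.wasSuccessful())
```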

Quality of System requirements 10 Integration/Interaction testing Integration testing is the phase of software testing in which individual software modules are combined and tested as a group. It occurs after unit testing and before system testing. How? Integration testing takes as its input the modules that have been unit tested, groups them into larger aggregates, applies the tests defined in an integration test plan to those aggregates, and delivers as its output an integrated system ready for system testing.
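The aggregate-and-test flow can be sketched with two tiny modules. Both functions (`authenticate` and `fetch_report`) are hypothetical stand-ins for unit-tested modules; the integration test exercises the interface between them rather than either one in isolation.

```python
# Module A: authentication (assumed already unit tested in isolation)
def authenticate(token):
    """Accepts only the known token."""
    return token == "secret"

# Module B: reporting, which depends on module A's interface
def fetch_report(token, auth=authenticate):
    """Returns a report only for authenticated callers."""
    if not auth(token):
        return {"status": 403}
    return {"status": 200, "body": "report"}

# Integration test: the two modules grouped and exercised together,
# per the (hypothetical) integration test plan.
assert fetch_report("secret")["status"] == 200
assert fetch_report("wrong")["status"] == 403
print("integration tests passed")
```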

Quality of System requirements 11 Usability and accessibility testing The user will be involved in the evaluation process beginning with the early stages of development. How? The usability and accessibility of the application will be evaluated through the following methods: - Heuristic evaluation – a theoretical stage, based on the heuristics developed by Jakob Nielsen, needed to ensure that most of the usability problems have been taken into account.

Quality of System requirements 11 Usability and accessibility testing - Formative evaluation – implemented along the entire development process, from the early stages until the final solution. Part of this evaluation procedure is already taking place through the Metamorphosis platform, which is considered a testbed application for mEducator developments (e.g. the metadata scheme implementation). - Summative evaluation – will take place at the end of the development process, using the final version. At this stage, users outside the consortium will also be involved in the evaluation process.

Quality of System requirements Thank You