
1 DOMAIN 3

2 INTEGRATED MANUFACTURING SYSTEMS
These are applications traditionally used in the manufacturing sector to automate common operations. They integrate the manufacturing process end to end: recording raw materials, work-in-progress and finished-goods transactions, inventory adjustments, purchases, supplier management, sales, accounts payable, accounts receivable, goods received, inspection, invoices, cost accounting and maintenance. An Integrated Manufacturing System (IMS), or Manufacturing Resource Planning (MRP), is a typical module of most ERP packages such as SAP, Oracle, J.D. Edwards and Navision, and it is usually integrated with modern CRM and SCM systems.

3 Some examples of IMS
Bill of Materials (BOM)
Bill of Materials Processing (BOMP)
Manufacturing Resources Planning (MRP)
Computer-Assisted Design (CAD)
Computer-Integrated (or Computer-Intensive) Manufacturing (CIM)
Manufacturing Accounting and Production (MAP)

4 What is Lean Manufacturing?
It focuses on the ELIMINATION of WASTE (non-value-added activities) through CONTINUOUS IMPROVEMENT! It is not about eliminating people. It is about expanding capacity by reducing costs and shortening cycle times between order and ship date.

5 Bill of materials Bill of materials (BOM) is a list of the raw materials, sub-assemblies, intermediate assemblies, sub-components, components, parts and the quantities of each needed to manufacture an end item (final product). It may be used for communication between manufacturing partners, or confined to a single manufacturing plant.
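The quantity roll-up a BOM supports can be sketched in a few lines of Python. The product structure below is invented purely for illustration; the point is the recursive "explosion" of an end item into total base-part quantities.

```python
# Hypothetical bill of materials: item -> list of (component, quantity per parent)
BOM = {
    "bicycle": [("frame", 1), ("wheel", 2)],
    "wheel": [("rim", 1), ("spoke", 32), ("tyre", 1)],
}

def explode(item, qty=1, totals=None):
    """Recursively roll up base-part quantities needed to build `qty` of `item`."""
    if totals is None:
        totals = {}
    for component, per_parent in BOM.get(item, []):
        needed = qty * per_parent
        if component in BOM:          # sub-assembly: recurse into its own BOM
            explode(component, needed, totals)
        else:                         # base part: accumulate the requirement
            totals[component] = totals.get(component, 0) + needed
    return totals

# 10 bicycles -> 10 frames, 20 rims, 640 spokes, 20 tyres
print(explode("bicycle", 10))
```

The same traversal, run against a real ERP item master, is what produces the material requirements that feed purchasing.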

6 Manufacturing Resources Planning
Manufacturing resource planning, also known as MRP II, is a method for the effective planning of a manufacturer's resources. MRP II is composed of several linked functions, such as business planning, sales and operations planning, capacity requirements planning, and all related support systems. The earliest form of manufacturing resource planning was known as material requirements planning (MRP). Material requirements planning (MRP) is a computer-based, time-phased system for planning and controlling the production and inventory function of a firm from the purchase of materials to the shipment of finished goods.
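The time-phased netting at the heart of MRP can be illustrated with a toy calculation (the figures are invented): planned orders in each period are the gross requirements left uncovered by on-hand inventory and scheduled receipts.

```python
def net_requirements(gross, on_hand, scheduled_receipts):
    """Period-by-period MRP netting: returns planned order quantities."""
    planned_orders = []
    available = on_hand
    for period, demand in enumerate(gross):
        available += scheduled_receipts[period]   # receipts arrive first
        if available >= demand:
            available -= demand
            planned_orders.append(0)              # demand fully covered
        else:
            planned_orders.append(demand - available)  # order the shortfall
            available = 0
    return planned_orders

# Demand over 4 periods, 50 units on hand, one receipt of 20 in period 2
print(net_requirements([30, 40, 25, 60], 50, [0, 0, 20, 0]))  # [0, 20, 5, 60]
```

Real MRP II systems add lead-time offsetting and lot-sizing on top of this basic netting logic.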

7 Computer Integrated Manufacturing
(CIM) in engineering is a method of manufacturing in which the entire production process is controlled by computer. Traditionally separate process methods are joined through a computer by CIM. This integration allows the processes to exchange information with one another and to initiate actions.

8 CAD/CAM The heart of computer integrated manufacturing is CAD/CAM. Computer-aided design (CAD) and computer-aided manufacturing (CAM) systems are essential to reducing cycle times in the organization. CAD/CAM is a high technology integrating tool between design and manufacturing. CAD techniques make use of group technology to create similar geometries for quick retrieval. Electronic files replace drawing rooms.


10 Continuity planning What is a BCP?
It is a plan that gives a recovery team the information it needs to:
Recover from a disaster
Continue the business operations
Return to normal operations
RTO vs. RPO


12 ELECTRONIC FUND TRANSFER
The underlying goal of the automated environment is to wring out the costs inherent in business processes. EFT generally refers to the transfer of money from one account to another without the physical exchange of money. EFT allows parties to move money from one account to another, replacing traditional check-writing and cash-collection procedures.

13 In the settlement between parties, EFT transactions usually function via an internal bank transfer from one party's account to another or via a clearinghouse network. Usually, transactions originate from a computer at one institution (location) and are transmitted to a computer at another institution (location) with the monetary amount recorded in the respective organization's account.
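The core control implied above, verifying that funds are available before both sides of the transfer are recorded, can be sketched as follows. The account names and the in-memory ledger are invented for illustration; a real EFT system would do this inside a database transaction across institutions or via a clearinghouse.

```python
class InsufficientFunds(Exception):
    pass

def transfer(accounts, from_acct, to_acct, amount):
    """Debit one balance and credit the other only if funds are available."""
    if accounts[from_acct] < amount:
        raise InsufficientFunds(f"{from_acct} holds only {accounts[from_acct]}")
    accounts[from_acct] -= amount
    accounts[to_acct] += amount
    # returning a record of the movement stands in for the audit trail
    # that EFT must provide in place of paper
    return {"from": from_acct, "to": to_acct, "amount": amount}

accounts = {"alice": 100, "bob": 25}
audit_log = [transfer(accounts, "alice", "bob", 40)]
print(accounts)   # {'alice': 60, 'bob': 65}
```

Note that the debit and credit happen together: recording only one side would leave the respective organizations' accounts out of balance.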

14 Because of its sensitivity, access security and authorisation are important controls.
An EFT switch network is also an audit concern. The IS auditor should review backup arrangements for continuity of operations. Central bank requirements should be reviewed for applicability to these processes.

15 CONTROLS IN AN EFT ENVIRONMENT
Because of the potentially high volume of money being exchanged, these systems may be in an extremely high-risk category, and security in an EFT environment becomes extremely critical. Security includes the methods used by the customer to gain access to the system, the communications network and the host or application processing site. Individual consumer access to the EFT system is generally controlled by a plastic card and a PIN; both items are required to initiate a transaction. The IS auditor should review the physical security of unissued plastic cards and the procedures used to generate PINs. Access to commercial EFT systems generally does not require a plastic card, but the IS auditor should ensure that reasonable identification methods are required. The communications network should be designed to provide maximum security. Data encryption is recommended for all transactions.

16 An EFT switch involved in the network is also an audit concern.
The IS auditor should review the contract with the switch and the third-party audit of the switch operations. If a third-party audit has not been performed, the auditor should consider visiting the switch location. At the application processing level, the IS auditor should review the interface between the EFT system and the application systems that process the accounts from which funds are transferred. Availability of funds or adequacy of credit limits should be verified before funds are transferred. Because of the penalties for failure to make a timely transfer, the IS auditor should review backup arrangements or other methods used to ensure continuity of operations. Since EFT reduces the output of paper and consequently reduces normal audit trails, the IS auditor should determine that alternative audit trails are available.

17 INTEGRATED CUSTOMER FILE
ICF provides details and history about all business relationships a customer maintains with an organisation. ICF aids in customer profiling for the purpose of marketing and tailoring of customized services.

18 OFFICE AUTOMATION This basically refers to a variety of electronic devices and techniques used to aid the conduct of business. Good examples are common office packages like Word, Excel, PowerPoint, etc. A local area network can equally be considered one.

19 AUTOMATED TELLER MACHINE
This is basically a form of point-of-sale terminal. It is designed as an unmanned terminal used by a customer of a financial institution, and it customarily allows a range of banking credit and debit operations. ATMs are usually located in uncontrolled areas to facilitate easy access for customers after hours. Controls must be in place for the issuance and delivery of PINs, exception reporting and restriction of accounts after a small number of unsuccessful attempts; PINs should not be stored unencrypted, etc. Wait a minute! What is the first step in establishing controls?
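Two of the controls listed above can be sketched concretely: storing only a salted hash of the PIN (never the PIN itself) and locking the card after a small number of unsuccessful attempts. The record layout, attempt limit and PIN values here are invented for illustration.

```python
import hashlib, os

MAX_ATTEMPTS = 3   # illustrative limit before the account is restricted

def enroll(pin: str):
    """Create a card record holding a salted PBKDF2 hash, not the PIN."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 100_000)
    return {"salt": salt, "digest": digest, "failures": 0, "locked": False}

def verify(record, pin: str) -> bool:
    """Check an entered PIN; lock the record after repeated failures."""
    if record["locked"]:
        return False
    attempt = hashlib.pbkdf2_hmac("sha256", pin.encode(), record["salt"], 100_000)
    if attempt == record["digest"]:
        record["failures"] = 0
        return True
    record["failures"] += 1
    if record["failures"] >= MAX_ATTEMPTS:
        record["locked"] = True   # restrict the account, per the control above
    return False

card = enroll("4321")
print(verify(card, "4321"))   # True
for bad in ("0000", "1111", "2222"):
    verify(card, bad)
print(card["locked"])         # True: three failures locked the card
```

A production system would add a constant-time comparison and hardware security modules, but the control objective is the same: the clear-text PIN never sits in storage.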

20 Recommended internal control guidelines for ATMs
Audit of ATMs

21 TEASER Automated teller machines (ATMs) are a specialized form of a point of sale terminal which: A. allow for cash withdrawal and financial deposits only. B. are usually located in populous areas to deter theft or vandalism. C. utilize protected telecommunication lines for data transmissions. D. must provide high levels of logical and physical security.

22 EXPLANATION Automated teller machines (ATMs) are a specialized form of a point of sale terminal, and their systems must provide high levels of logical and physical security for both the customer and the machinery. ATMs allow for a variety of transactions, including cash withdrawal and financial deposits; they are usually located in unattended areas and utilize unprotected telecommunication lines for data transmissions.

23 COOPERATIVE PROCESSING SYSTEMS
These are systems divided into segments, with different parts running on different independent computer devices. The system divides the problem into units that are processed in a number of environments and communicates the results among them to produce a solution to the total problem. The system must be designed to minimize, and maintain the integrity of, communication between the component parts.

24 Parallel computing: solving a problem with multiple computers or computers made up of multiple processors. It is an umbrella term for a variety of architectures, including symmetric multiprocessing (SMP), clusters of SMP systems, massively parallel processors (MPPs) and grid computing. Grid computing: the concurrent application of the processing and data storage resources of many computers in a network to a single problem. It can also be used for load balancing as well as high availability by employing multiple computers (typically personal computers and workstations) that are remote from one another, multiple data storage devices, and redundant network connections. Grid computing requires the use of parallel-processing software that can divide a program among as many as several thousand computers and restructure the results into a single solution to the problem. Primarily for security reasons, grid computing is typically restricted to multiple computers within the same enterprise.
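The divide-and-restructure idea described above can be shown with a toy problem: summing the squares of 1 to 1000 is split among four worker processes and the partial results are combined into the single answer. The chunking scheme and worker count are arbitrary choices for illustration.

```python
from multiprocessing import Pool

def partial_sum(chunk):
    """Each worker solves its slice of the problem independently."""
    return sum(n * n for n in chunk)

if __name__ == "__main__":
    numbers = list(range(1, 1001))
    # divide the problem: a strided split into 4 roughly equal chunks
    chunks = [numbers[i::4] for i in range(4)]
    with Pool(processes=4) as pool:
        partials = pool.map(partial_sum, chunks)
    # restructure the partial results into a single solution
    print(sum(partials))   # 333833500, same as the serial computation
```

Grid computing applies the same pattern, but the "workers" are machines scattered across a network rather than processes on one host.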

25 VOICE RESPONSE ORDERING SYSTEMS
VROS are systems in which the user interacts with the computer over a telephone connection, responding to verbal instructions given by the computer system. Interactive voice response (IVR) systems are well suited to handling large call volumes.

26 PURCHASE ACCOUNTING SYSTEM
This basically refers to a set of integrated systems usually triggered when purchases are made. In a departmental store, for example, a customer purchase triggers the following processes:
Sales accounting processes
Accounts receivable processes (if payment is by credit card)
Cash or bank processes (if payment is in cash)
Inventory processes
Purchase accounting processes to initiate replacement of inventory
Ultimately, the transaction is recorded in the general ledger.

27 3 basic functions
Accounts payable processing - recording transactions in the accounts payable records
Goods received processing - recording details of goods received but not yet invoiced
Order processing - recording goods ordered but not yet received
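A common control that ties these three records together is the "three-way match" (this term and the record layouts are an illustrative addition, not from the slide): the purchase order, the goods-received note and the supplier invoice must agree before the payable is approved for payment.

```python
def three_way_match(order, goods_received, invoice):
    """Approve payment only when item, quantity and price all agree."""
    return (order["item"] == goods_received["item"] == invoice["item"]
            and order["qty"] == goods_received["qty"] == invoice["qty"]
            and order["unit_price"] == invoice["unit_price"])

# hypothetical records from the three functions above
po  = {"item": "widget", "qty": 100, "unit_price": 2.50}   # order processing
grn = {"item": "widget", "qty": 100}                       # goods received
inv = {"item": "widget", "qty": 100, "unit_price": 2.50}   # accounts payable
print(three_way_match(po, grn, inv))   # True: safe to pay
```

A mismatch in any of the three fields (say, an invoice priced above the order) blocks the payment and flags the transaction for review.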

28 IMAGE PROCESSING Image processing refers to computer manipulation of images. It is the replacement of paper documents with electronic documents. An imaging system stores, retrieves and processes graphic data, such as pictures, charts and graphs, either in addition to text data or instead of it. Such a system usually requires enormous storage capacity and is, by implication, costly. It includes techniques that identify levels of shades and colors that cannot be differentiated by the human eye.

29 ADVANTAGES OF IMAGE PROCESSING
Merits include: Item processing (e.g. signature storage & retrieval) Immediate retrieval Increased productivity Improved control over paper files Reduced deterioration due to handling Enhanced disaster recovery procedure.

30 ISSUES WITH IMAGE PROCESSING
Risk areas that management should address when installing imaging systems, and that IS auditors should be aware of when reviewing an institution's controls over imaging systems, include:
Planning - critical issues include converting existing paper storage files, integrating the imaging system into the organization's workflow, and electronic media storage that meets audit and document-retention legal requirements
Audit - imaging may change or eliminate the traditional controls as well as the checks and balances inherent in paper-based systems
Redesign of workflow - institutions generally redesign or reengineer workflow processes to benefit from imaging technology
Scanning devices
Software security - unauthorized access and modifications
Training

31 TEASER Which of the following is NOT an advantage of image processing?
A. Verifies signatures B. Improves service C. Relatively inexpensive to use D. Reduces deterioration due to handling

32 ARTIFICIAL INTELLIGENCE
The study and design of intelligent agents, where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success. The field was founded on the claim that a central property of human beings, intelligence, can be so precisely described that it can be simulated by a machine.

33 ARTIFICIAL INTELLIGENCE
Artificial intelligence is the study and application of the following principles: Knowledge acquisition and usage; Goal generation and achievement; Information communication; Achievement of collaboration; Concept formation; Language development.

34 The two main programming languages for AI :
LISP PROLOG

35 Major Branches of AI
Perceptive system - a system that approximates the way a human sees, hears, and feels objects
Vision system - captures, stores, and manipulates visual images and pictures
Robotics - mechanical and computer devices that perform tedious tasks with high precision
Expert system - stores knowledge and makes inferences
Learning system - the computer changes how it functions or reacts to situations based on feedback
Natural language processing - computers understand and react to statements and commands made in a "natural" language, such as English
Neural network - a computer system that can act like or simulate the functioning of the human brain

36 Artificial intelligence
Vision systems Learning systems Robotics Expert systems Neural networks Natural language processing

37 EXPERT SYSTEMS Expert systems are an area of artificial intelligence that perform a specific function or are prevalent in certain industries. This branch of AI allows users to specify certain basic assumptions or formulas and then uses these assumptions or formulas to analyze events; based on the information used as input to the system, a conclusion is produced. An expert system is a computer program that simulates the thought process of a human expert to solve complex decision problems in a specific domain.

38 BENEFITS OF EXPERT SYSTEM
Capturing the knowledge and experience of individuals before they leave the organisation. Sharing knowledge and experience in areas where there is limited expertise Facilitating consistent and efficient quality decisions Enhancing personnel productivity and performance Automating highly repetitive tasks (help desk) Operating in environment where a human expert is not available (e.g. medical assistance on board of a ship, satellites)

39 COMPONENTS OF AN EXPERT SYSTEM
Database Knowledge base (decision tree, Rules & semantic nets) Inference engine Explanation module These are called shells when they are not populated with particular data


41 COMPONENTS contd… Knowledge base represents the key to the system.
It contains information or fact patterns associated with a particular subject matter and the rules for interpreting these facts. The knowledge base interfaces with a database to obtain the data needed to analyze a particular problem and derive an expert conclusion.

42 KNOWLEDGE BASE contd… The information in the KB can be expressed in several ways:
Decision trees - these use a questionnaire to lead the user through a series of choices until a conclusion is reached. Flexibility is compromised because the user must answer the questions in an exact manner and sequence.
Rules - expressing declarative knowledge through the use of 'if-then' relationships, e.g. if the temperature is over 39°C and the pulse is under 60, then the patient suffers from OMO-LARIA!
Semantic nets - consist of graphs. They resemble dataflow diagrams and make use of an inheritance mechanism to prevent data duplication.

43 INFERENCE ENGINE The inference engine is a program that uses the KB and determines the most appropriate outcome based on the information supplied by the user. It seeks information and relationships from the knowledge base and provides answers, predictions and suggestions in the way a human expert would.
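A toy forward-chaining inference engine makes the rules idea concrete. The rule names and facts below echo the tongue-in-cheek example from the knowledge-base slide and are invented for illustration; real shells use far richer rule languages.

```python
# Each rule: (set of conditions that must all be known facts, conclusion to add)
rules = [
    ({"temperature_over_39", "pulse_under_60"}, "suspect_omo_laria"),
    ({"suspect_omo_laria"}, "refer_to_specialist"),
]

def infer(facts):
    """Repeatedly fire rules against the fact set until nothing new is derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)   # rule fires: record the conclusion
                changed = True
    return facts

print(infer({"temperature_over_39", "pulse_under_60"}))
```

Note how the second rule fires only because the first one added its conclusion; this chaining of intermediate conclusions is what an explanation module later traces back for the user.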

44 In addition, an expert system includes the following components:
Knowledge interface - allows the expert to enter knowledge into the system without the traditional mediation of a software engineer
Data interface - enables the expert system to collect data from nonhuman sources, such as measurement instruments in a power plant

45 An explanation module, which is user-oriented, explains how the problem was analyzed and how the expert conclusion was reached. This module allows the system to explain its conclusions and its reasoning process; this ability comes from the AND/OR trees created during the production system's reasoning process. Expert systems are gaining acceptance and popularity as audit tools, in areas such as operating systems, online software environments, access control products and microcomputer environments. These tools can take the form of a series of well-designed questionnaires, or actual software that integrates and reports on system parameters and data sets.

46 Expert Systems in Action
Medical management Telephone network maintenance Credit evaluation Tax planning Detection of insider securities trading Detection of common metals Mineral exploration Irrigation and pest management Diagnosis and prediction of mechanical failure Class selection for students

47 Stringent change control procedures should be followed, since the basic assumptions and formulas may need to be changed as more expertise is gained. As with other systems, access should be on a need-to-know basis. The IS auditor needs to be concerned with the controls relevant to these systems when they are used as an integral part of an organization's business processes or mission-critical functions, and with the level of experience or intelligence used as a basis for developing the software.

48 TEASER The use of expert systems:
A. facilitates consistent and efficient quality decisions. B. captures the knowledge and experience of industry experts. C. cannot be used by IS auditors since they deal with system specific controls. D. improves system efficiency and effectiveness, not personal productivity and performance.

49 BUSINESS INTELLIGENCE
Business intelligence is a broad field of IT that encompasses the collection and dissemination of information to assist decision making and assess organizational performance. Business intelligence basically assists in the understanding of a wide range of business questions.

50 BI contd...... Business intelligence (BI) refers to skills, technologies, applications and practices used to help a business acquire a better understanding of its commercial context. Business intelligence may also refer to the collected information itself.

51 Business intelligence (BI) is a set of theories, methodologies, architectures, and technologies that transform raw data into meaningful and useful information for business purposes. BI can handle enormous amounts of unstructured data to help identify, develop and otherwise create new opportunities. BI, in simple words, makes interpreting voluminous data friendly. Making use of new opportunities and implementing an effective strategy can provide a competitive market advantage and long-term stability

52 BUSINESS INTELLIGENCE contd…
Some of the business questions include: Process cost, efficiency and quality Customer satisfaction with product and service Customer profitability Staff and business unit achievement of key performance indicators Risk management e.g. by identifying unusual transaction patterns and accumulation of incident and loss statistics.

53 BUSINESS INTELLIGENCE contd…
Reasons to buy business intelligence:
Increasing size and complexity of organisations
Pursuit of competitive advantage
Legal requirements - SOX (Sarbanes-Oxley Act), CBN's directive on KYC and their transactions
BI vs. competitive intelligence

54 BI uses technologies, processes and applications to analyze mostly internal, structured data and business processes, while competitive intelligence gathers, analyzes and disseminates information with a topical focus on company competitors. Understood broadly, business intelligence can include competitive intelligence as a subset.

55 Do you need Business Intelligence?
Companies continuously create data, whether they store it in flat files, spreadsheets or databases. This data is extremely valuable to your company; it is more than just a record of what was sold yesterday, last week or last month.
1. It should be used to look at sales trends in order to plan marketing campaigns.
2. It should be used to decide what resources to allocate to specific sales teams.
3. It should be used to analyse market trends to ensure that your products are viable in today's marketplace.
4. It should be used to plan for future expansion of your business.
5. It should be used to analyse customer behaviour.
The bottom line is that your data should be used to maximize revenue and increase profit.

56 BUSINESS INTELLIGENCE contd…
In order to deliver effective BI, a company needs to design and implement a data architecture. A complete data architecture consists of two components: • The enterprise data flow architecture (EDFA) • A logical data architecture

57 Data Architecture Data Architecture in enterprise architecture is the design of data for use in defining the target state and the subsequent planning needed to hit the target state. A data architecture describes the data structures used by a business and/or its applications. There are descriptions of data in storage and data in motion; descriptions of data stores, data groups and data items; and mappings of those data artifacts to data qualities, applications, locations etc.


59 DATA FLOW ARCHITECTURE
Presentation/desktop access layer - this is where end users directly deal with information. This layer includes familiar desktop tools like MS Access, MS Excel and other direct querying tools.
Data mart layer - this represents a subset of the information contained in the core data warehouse, selected and organized to meet the needs of a particular business unit or business line. It may take the form of a relational DB or OLAP (online analytical processing).

60 OLAP Short for online analytical processing, a category of software tools that provides analysis of data stored in a database. OLAP tools enable users to analyze different dimensions of multidimensional data; for example, they provide time series and trend analysis views. OLAP is often used in data mining.
Data mining: a class of database applications that look for hidden patterns in a group of data that can be used to predict future behavior. For example, data mining software can help retail companies find customers with common interests. The term is commonly misused to describe software that presents data in new ways; true data mining software doesn't just change the presentation, but actually discovers previously unknown relationships among the data.

61 DATA FLOW ARCHITECTURE contd…
Data feed/data mining/indexing layer - otherwise called the data preparation layer, it is concerned with the assembly and preparation of data for loading into data marts. Only presorted and pre-calculated values should be loaded into the data repository, to increase access speed.
Data warehouse layer - this is where all the data (or at least the majority) of interest to an organisation is captured and organized to assist reporting and analysis. A properly constituted data warehouse should support three basic forms of inquiry:

62 DATA WAREHOUSE contd…
Drilling up and drilling down - this implies flexibility in data aggregation, e.g. drilling up: sum store sales to get region sales and ultimately national sales; drilling down: break store sales down to computer sales.
Drilling across - use of common attributes to access a cross-section of information in the warehouse, e.g. sum sales across all product lines by customer, and group customers according to any attribute of interest.
Historical analysis - the warehouse should be capable of holding historical, time-variant data.
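Drilling up can be sketched with a handful of rows: the same sales facts are aggregated at store level, then at region level, then nationally. The store names, regions and amounts below are invented for illustration.

```python
# Hypothetical sales facts at the most detailed (store) level
sales = [
    {"store": "Lagos-1", "region": "South", "amount": 120},
    {"store": "Lagos-2", "region": "South", "amount": 80},
    {"store": "Kano-1",  "region": "North", "amount": 50},
]

def roll_up(facts, key):
    """Aggregate the amount measure along the chosen dimension attribute."""
    totals = {}
    for row in facts:
        totals[row[key]] = totals.get(row[key], 0) + row["amount"]
    return totals

print(roll_up(sales, "store"))                 # most detailed level
print(roll_up(sales, "region"))                # drill up to region totals
print(sum(row["amount"] for row in sales))     # national total: 250
```

Drilling down is just the reverse direction: starting from the national figure and breaking it out by region, then by store.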

63 DATA FLOW ARCHITECTURE contd…
Data staging & quality layer - this layer is responsible for data copying, transformation into data warehouse format and quality control.
Data access layer - this layer connects the data staging and quality layer with the data stores in the data source layer.
Data source layer - this basically depicts data and information sources. It includes:
Operational data - data captured and maintained by an organization's existing systems
External data - data provided to an organization by external sources
Non-operational data - information needed by end users that is not currently maintained in a computer-accessible format

64 DATA FLOW ARCHITECTURE contd…
Metadata repository layer - metadata is data about data.
Warehouse management layer - the function of this layer is the scheduling of the tasks necessary to build and maintain the data warehouse and populate data marts.
Application messaging layer - this layer is concerned with transporting information between the various layers.
Internet/intranet layer - this layer is concerned with basic data communication. It includes browser-based user interfaces and TCP/IP networks.

65 BUSINESS INTELLIGENCE GOVERNANCE
Governance determines how an organization is controlled and directed. An important part of the governance process involves determining: which BI initiatives to fund; what priority to assign to each initiative; and how to measure their return on investment (ROI). In the area of BI funding governance, it is advisable to establish a business/IT advisory team that allows different functional perspectives to be represented. Final funding decisions should rest with a technology steering committee that comprises senior management.

66 GOVERNANCE contd… Another important part is data governance, which includes:
Establishing standard definitions of data, business rules and metrics
Identifying approved data sources
Establishing standards for data reconciliation and balancing

67 DECISION SUPPORT SYSTEM
A DSS is an interactive system that provides the user with easy access to decision models and data from a wide range of sources, to support semi-structured decision-making tasks typically for business purposes. It assists in making decisions through data provided by business intelligence tools. A decision support system (DSS) is a computer-based information system that supports business or organizational decision-making activities. DSSs serve the management, operations, and planning levels of an organization and help to make decisions, which may be rapidly changing and not easily specified in advance.

68 DSS contd… Typical information that a DSS might gather and present would be: Comparative sales figures between one week and the next; Projected revenue figures based on new product sales assumptions Consequence of different decision alternatives, given past experience in the described context.

69 DSS contd… Characteristics of DSS include:
Aims at solving less structured, under specified problems that senior managers face; Combines the use of models or analytic techniques with traditional data access and retrieval functions; Emphasizes flexibility and adaptability to accommodate changes in the environment and the decision-making approach of the users. The degree to which a problem or decision is structured corresponds roughly to the extent to which it can be automated or programmed.

70 DSS IMPLEMENTATION & USE
The main challenge is to get the users to accept the use of DSS. The following are the steps involved in changing user behaviors: Unfreezing- Altering the forces acting on individuals such that they are distracted sufficiently to change. (increasing the pressure/ reducing the threats to change) Moving – this step presents a direction of change & process of learning new attitudes Refreezing- this step integrates the changed attitudes into the individual’s personality

71 DSS RISK FACTOR There are basically eight implementation factors:
Nonexistent or unwilling users; Multiple users or implementers; Disappearing users, implementers or maintainers; Inability to specify purpose or usage patterns in advance; Inability to predict and cushion impact on all parties; Lack or loss of support; Lack of experience with similar systems; Technical problems and cost-effectiveness issues.

72 CUSTOMER RELATIONSHIP MGT
For competitive reasons, companies are shifting their focus from products to customers. CRM emphasizes the importance of focusing on information relating to: Transaction data Customer preferences Customer purchase patterns Customer status Contact history Demographic information.

73 CRM contd Customer relationship management (CRM) consists of the processes a company uses to track and organize its contacts with its current and prospective customers. CRM software is used to support these processes; information about customers and customer interactions can be entered, stored and accessed by employees in different company departments. Typical CRM goals are to improve services provided to customers, and to use customer contact information for targeted marketing.

74 CRM contd… CRM centers all business processes around the customer rather than marketing, sales or any other function. This business model makes use of telephony, web and database technologies and enterprise integration technologies. It also extends to other business partners who can share information, communicate and collaborate with the organization with the seamless integration of web-enabled applications.

75 SUPPLY CHAIN MGT This is about linking the business processes between related entities, e.g. the buyer and the seller. The link could cover: Managing logistics and the exchange of information Exchange of goods and services between supplier, consumer, warehouse, wholesale/retail distributors and the manufacturer of goods.

76 SCM Contd Supply chain management (SCM) is the management of a network of interconnected businesses involved in the ultimate provision of product and service packages required by end customers . Supply Chain Management spans all movement and storage of raw materials, work-in-process inventory, and finished goods from point of origin to point of consumption (supply chain).

77 SUPPLY CHAIN MGT EDI, which is extensively used in SCM, aids data interchange between business entities. SCM is all about managing the flow of goods, services and information among stakeholders. SCM shifts the focus: all entities in the supply chain can work collaboratively and in real time, reducing inventory to a great extent. A JIT inventory approach becomes more feasible and the cycle becomes shorter, with the objective of reducing unwanted inventory.

78 INFRASTRUCTURE DEVELOPMENT/ ACQUISITION PRACTICES

79 The analysis of the physical architecture, the definition of a new one and the road map needed to move from one to the other are critical tasks for an IT department. Their impact is not only economic but also technological, since they determine many other choices downstream, such as operational procedures, training needs, installation issues and total cost of ownership (TCO). Thus, physical architecture analysis cannot be based solely on price or isolated features; a formal, reasoned choice must be made.

80 INFRASTRUCTURAL ACQUISITION PRACTICES
Factors that might render a legacy system obsolete: Deficits in functionality; Endangered future reliability; Increases in cost; Handicapped product development; Deficits in information supply; Future business requirements not fulfilled; Insufficient process support

81 Goals of migrating technical architecture to a new one
To successfully analyze the existing architecture
To design a new architecture that takes into account the existing architecture and the company's particular constraints/requirements, such as:
- Reduced costs
- Increased functionality
- Minimum impact on daily work
- Security and confidentiality issues
- Progressive migration to the new architecture
To write the functional requirements of this new architecture
To develop a proof of concept based on these functional requirements:
- To characterize price, functionality and performance
- To identify additional requirements that will be used later
The resulting requirements will be documents and drawings describing the reference infrastructure that will be used by all projects downstream. The requirements are validated using a proof of concept.

82

83 PROJECT PHASES OF PHYSICAL ARCHITECTURE ANALYSIS
1. Review of Existing Architecture – To start the process, the latest documents about the existing architecture must be reviewed. Participants in the first workshop will be specialists of the ICT department in all areas directly impacted by physical architecture. The output of the first workshop is a list of components of the current infrastructure and constraints defining the target physical architecture. 2. Analysis and Design – After reviewing the existing architecture, the analysis and design of the actual physical architecture has to be undertaken, adhering to best practices and meeting business requirements. 3. Draft Functional Requirements – With the first physical architecture design in hand, the first (draft) version of the functional requirements is composed. This material is the input for the next step and the vendor selection process.

84 4. Vendor and Product Selection .
While the draft functional requirements are being written, the vendor selection process proceeds in parallel. 5. Writing Functional Requirements – After finishing the draft functional requirements and feeding the second part of this project, the functional requirements document is written; it will be introduced at the second architecture workshop with staff from all affected parties. The results will be discussed and a list of the requirements that need to be refined or added will be composed. This is the last checkpoint before the sizing and the proof of concept (POC) start, although the planning of the POC starts after the second workshop. With the finished functional requirements, the proof of concept phase begins.

85 Proof of Concept Establishing a POC is highly recommended to prove that the selected hardware and software are able to meet all expectations, including security requirements. The deliverable of the POC should be a running prototype, including the associated documents and test protocols describing the tests and their results.

86 HARDWARE ACQUISITION Selection of a computer hardware and software environment frequently requires the preparation of specifications for distribution to hardware/software (HW/SW) vendors and criteria for evaluating vendor proposals. The specifications are sometimes presented to vendors in the form of an invitation to tender (ITT), also known as a request for proposal (RFP). The specifications must define, as completely as possible, the usage, tasks and requirements for the equipment needed, and a description of the environment where that equipment will be used.

87 Acquisition Steps When purchasing (acquiring) hardware and software from a vendor, consideration should be given to the following: • Testimonials or visits with other users • Provisions for competitive bidding • Analysis of bids against requirements • Comparison of bids against each other using predefined evaluation criteria • Analysis of the vendor's financial condition • Analysis of the vendor's capability to provide maintenance and support (including training) • Review of delivery schedules against requirements • Analysis of hardware and software upgrade capability • Analysis of security and control facilities • Evaluation of performance against requirements • Review and negotiation of price • Review of contract terms (including right-to-audit clauses) • Preparation of a formal written report summarizing the analysis for each of the alternatives and justifying the selection based on benefits and cost

88 The criteria used for evaluating vendor proposals
Turnaround time – The time that the help desk or vendor takes to fix a problem from the moment it is logged. Response time – The time a system takes to respond to a specific query by the user. System reaction time – The time taken to log into a system or get connected to a network. Throughput – The quantity of useful work done by the system per unit of time. Workload – The capacity to handle the required volume of work, or the volume of work that the vendor's system can handle in a given time frame. Compatibility – The capability of an existing application to run successfully on the newer system supplied by the vendor. Capacity – The capability of the newer system to handle a number of simultaneous requests from the network for the application, and the volume of data that it can handle from each of the users. Utilization – The system availability time vs. the system downtime.
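The comparison of bids against predefined evaluation criteria can be operationalized as a weighted scoring matrix. A minimal sketch, assuming hypothetical criteria weights and per-vendor ratings:

```python
# Hypothetical weighted scoring of vendor bids against predefined criteria.
# Criteria names, weights (summing to 1.0) and ratings (0-10) are illustrative.
CRITERIA_WEIGHTS = {
    "turnaround_time": 0.15,
    "response_time":   0.15,
    "throughput":      0.20,
    "compatibility":   0.25,
    "capacity":        0.25,
}

def score_bid(ratings: dict) -> float:
    """Combine per-criterion ratings into a single weighted score."""
    return sum(CRITERIA_WEIGHTS[c] * ratings[c] for c in CRITERIA_WEIGHTS)

bids = {
    "Vendor A": {"turnaround_time": 7, "response_time": 8, "throughput": 6,
                 "compatibility": 9, "capacity": 7},
    "Vendor B": {"turnaround_time": 9, "response_time": 6, "throughput": 8,
                 "compatibility": 6, "capacity": 8},
}

# Rank vendors by weighted score, highest first.
ranked = sorted(bids, key=lambda v: score_bid(bids[v]), reverse=True)
```

Publishing the weights before bids are opened is what makes the comparison defensible in an audit: the same predefined criteria are applied to every proposal.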

89 IS AUDITOR'S CONCERN When performing an audit of this area, the IS auditor should: • Determine if the acquisition process began with a business need and whether the hardware requirements for this need were considered in the specifications. • Determine if several vendors were considered and whether the comparison between them was done according to the aforementioned criteria.

90 SYSTEM SOFTWARE ACQUISITION
It is IS management's responsibility to be aware of HW/SW capabilities, since they may improve business processes and provide expanded application services to businesses and customers in a more effective way. It is important that organizations stay current by applying the latest version or release and updates/patches of system software to remain protected and competitive. If the version or release is not current, the organization risks being dependent on software that may have known vulnerabilities or may become obsolete and no longer supported by the software vendor. Short- and long-term plans should document IS management's plan for migrating to newer, more efficient and more effective operating systems and related systems software.

91 When selecting new system software, a number of business and technical issues must be considered, including: Business functional and technical needs and specifications; Cost and benefits; Obsolescence; Compatibility with existing systems; Security; Demands on existing staff; Training and hiring requirements; Future growth needs; Impact on system and network performance; Open source code vs. proprietary code.

92 SYSTEM SOFTWARE IMPLEMENTATION
System software implementation involves identifying features, configuration options and controls for standard configurations to apply across the organization. Additionally, implementation involves testing the software in a nonproduction environment and obtaining some form of certification and accreditation to place the approved operating system software into production.

93 SYSTEM SOFTWARE CHANGE CONTROL PROCEDURES
All test results should be documented, reviewed and approved by technically qualified subject matter experts prior to production use. Change control procedures are designed to ensure that changes are authorized and do not disrupt processing. This requires that IS management and personnel are aware of and involved in the system software change process. Change control procedures should ensure that changes impacting the production systems (particularly in relation to the impact of failure during installation) have been assessed appropriately, and that appropriate recovery/backout (rollback) procedures exist, e.g., a configuration management system in place for maintaining prior OS versions or prior states when applying security patches related to high-risk security issues. Change control procedures should also ensure that all appropriate members of the management team who could be affected by the change have been properly informed and have made a prior assessment of the impact of the change in each area.

94 INFORMATION SYSTEM MAINTENANCE PRACTICES
This primarily refers to the process of managing changes to application systems while maintaining the integrity of both the production source and executable code. Once a system is moved into production, it seldom remains static. Change is an expected event in all systems, regardless of whether they are vendor-developed or internally developed.

95 SYSTEM MAINTENANCE contd..
Reasons for change in normal operations include: IT changes; Business changes; Changes in classification related to either sensitivity or criticality; Audits; Adverse incidents such as intrusions and viruses.

96 System changes must: be appropriate to the needs of the organization;
be appropriately authorized; be documented; and be thoroughly tested and approved by management. The process typically is established in the design phase of the application, when application system requirements are baselined.

97 CHANGE MANAGEMENT Change management begins with authorizing changes to occur. Change requests are initiated by end users as well as operational staff and system development/maintenance staff. For purchased systems, a vendor may distribute periodic updates, patches or new releases of the software. User and system management should review such changes.

98 CHANGE PROCEDURE The user department should decide whether the changes are appropriate for the organization. Change requests should be in a format that is trackable, with a unique serial number. All requests for changes and related information should be maintained by the system maintenance staff as part of the system's documentation.

99 CHANGE PROCEDURE contd…
Maintenance records of all changes should be kept. Maintenance information usually consists of the programmer ID, time and date of the change, project or request number associated with the change, and before-and-after images of the lines of code that were changed. In lieu of the manual process of management approving changes before the programmer can submit them into production, management could have automated change control software installed to prevent unauthorized program changes. In this way, the programmer is no longer responsible for migrating changes into production; the change control software becomes the operator that migrates programmer changes into production based on approval by management. Programmers should not have write, modify or delete access to production data.
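A maintenance record of this kind can be sketched as a simple structure; the field names and sample values here are illustrative, not a prescribed format:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class MaintenanceRecord:
    """One maintenance log entry: who changed what, when, under which
    change request, with before-and-after images of the changed code."""
    programmer_id: str
    request_number: str
    before_image: list   # lines of code before the change
    after_image: list    # lines of code after the change
    changed_at: datetime = field(default_factory=datetime.now)

# A maintenance log is simply an append-only list of such records.
maintenance_log = [
    MaintenanceRecord(
        programmer_id="PRG042",
        request_number="CR-0117",
        before_image=["RATE = 0.05"],
        after_image=["RATE = 0.07"],
    )
]
```

Keeping both images lets a reviewer reconstruct exactly what changed without access to the source repository.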

100 Deploying Changes Documentation
After the end user is satisfied with the system test results and the adequacy of the system documentation, approval should be obtained from user management. User approval could be documented on the original change request or in some other fashion (memo or e-mail). Documentation – To ensure the effective utilization and future maintenance of a system, it is important that all relevant system documentation be updated. Procedures should be in place to ensure that documentation stored offsite for disaster recovery purposes is also updated.

101 TEASER Which of the following is MOST effective in controlling application maintenance? A. Informing users of the status of changes B. Establishing priorities on program changes C. Obtaining user approval of program changes D. Requiring documented user specifications for changes User approvals of program changes will ensure that changes are correct as specified by the user and that they are authorized. Therefore, erroneous or unauthorized changes are less likely to occur, minimizing system downtime and errors.

102 CHANGE TESTING Changed programs should be tested and certified with the same discipline as newly developed system to ensure that the changes perform the functions intended. Effort must be made to verify that existing functionality is not damaged by the change; existing performance is not degraded because of the change; No security exposures have been created because of the change

103 EMERGENCY CHANGES There may be times when emergency changes must be carried out to resolve system problems and enable critical production jobs to run. This is typically done through special logon IDs (emergency IDs) that grant the programmer/analyst temporary access to the production environment. Special logon IDs possess powerful privileges; their use should be logged and carefully monitored.

104 EMERGENCY CHANGES contd…
Changes done in this fashion are held in a special emergency library, from where they should be moved into normal production libraries in a controlled manner, and through the change management process. Management should ensure that all normal change management controls are retroactively applied even after effecting the emergency change.

105 MIGRATING CHANGES TO PRODUCTION
Once user management has approved the change, the changed or modified programs can be moved into the production environment. It must be noted that a group independent of the programmer/analyst who maintained the system should move changes to production. Such a group could include computer operations, quality assurance or a change control group designated for that purpose. To ensure that only authorized individuals have the ability to migrate programs to production, access control software could be implemented.

106 Change Exposures (Unauthorized Changes)
An unauthorized change to application system programs can occur for several reasons: The programmer has access to production libraries containing programs and data, including object code. The user responsible for the application was not aware of the change (no user signed the maintenance change request approving the start of the work). A change request form and procedures were not formally established. The appropriate management official did not sign the change form approving the start of the work, etc.

107 CONFIGURATION MANAGEMENT
Because of the difficulties associated with exercising control over program maintenance activities, some organizations implement configuration management systems. Configuration management involves procedures throughout the software life cycle (from requirements analysis to maintenance) to identify, define and baseline software items in the system and thus provide a basis for problem management, change management and release management. The process involves identification of the items that are likely to change (called configuration items). These include things such as programs, documentation and data. Once an item is developed and approved, it is handed over to a configuration management team for safekeeping and assigned a reference number. Once baselined in this way, an item should only be changed through a formal change control process.
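The baselining step can be sketched as follows; the class, the reference-number scheme and the hashing choice are all hypothetical illustrations, not a prescribed design:

```python
import hashlib

class ConfigurationLibrary:
    """Holds baselined configuration items under assigned reference numbers."""

    def __init__(self):
        self._items = {}     # reference number -> (item name, content hash)
        self._next_ref = 1

    def baseline(self, name: str, content: str) -> str:
        """Accept an approved item for safekeeping and assign a reference number.
        A content hash is recorded so later tampering is detectable."""
        ref = f"CI-{self._next_ref:04d}"
        self._next_ref += 1
        digest = hashlib.sha256(content.encode()).hexdigest()
        self._items[ref] = (name, digest)
        return ref

    def is_unchanged(self, ref: str, content: str) -> bool:
        """Detect any change made outside the formal change control process."""
        return self._items[ref][1] == hashlib.sha256(content.encode()).hexdigest()

lib = ConfigurationLibrary()
ref = lib.baseline("payroll.cbl", "COMPUTE PAY = HOURS * RATE.")
```

After baselining, any discrepancy between the stored hash and the current content signals an unauthorized modification that must go back through change control.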

108 SYSTEM DEVELOPMENT TOOLS AND PRODUCTIVITY AIDS
This includes Code generators; CASE applications; Fourth-generation languages

109 CODE GENERATORS Code generators are tools that generate program code based upon parameters defined by a systems analyst or on data/entity flow diagrams developed by the design module of a CASE product. They allow developers to implement software programs more efficiently. An IS auditor should be aware of such nontraditional origins of source code.

110 COMPUTER AIDED SOFTWARE ENGINEERING -- (CASE)
CASE is the use of automated tools to aid in the software development process. Its use may include the application of software tools for software requirement analysis, software design, testing, document generation and other software development activities. CASE products are generally divided into three categories: Upper CASE Middle CASE Lower CASE

111 CASE CATEGORIES Upper CASE – These products are used to describe and document an application requirement. Middle CASE – These products are used for developing the detailed design. Lower CASE – These products are involved with the generation of program code and database definitions.

112 TEASER Which of the following computer aided software engineering (CASE) products is used for developing detailed designs, such as screen and report layouts? A. Super CASE B. Upper CASE C. Middle CASE D. Lower CASE

113 FOURTH-GENERATION LANGUAGES
Often abbreviated 4GL, fourth-generation languages are programming languages closer to human languages than typical high-level programming languages. Most 4GLs are used to access databases. For example, a typical 4GL command is FIND ALL RECORDS WHERE NAME IS "SMITH" The other four generations of computer languages are first generation: machine language second generation: assembly language third generation: high-level programming languages, such as C, C++, and Java. fifth generation: languages used for artificial intelligence and neural networks.
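The contrast can be illustrated in Python with its built-in sqlite3 module: the declarative SQL query plays the role of the 4GL statement, stating what is wanted rather than how to find it (the table and data are made up for the example):

```python
import sqlite3

# Build a small in-memory table to query against.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (name TEXT)")
conn.executemany("INSERT INTO people VALUES (?)",
                 [("SMITH",), ("JONES",), ("SMITH",)])

# 3GL style: a procedural loop spelling out HOW to examine every record.
matches_3gl = [row[0] for row in conn.execute("SELECT name FROM people")
               if row[0] == "SMITH"]

# 4GL style: a declarative query stating WHAT is wanted -- the SQL
# analogue of FIND ALL RECORDS WHERE NAME IS "SMITH".
matches_4gl = [row[0] for row in
               conn.execute("SELECT name FROM people WHERE name = 'SMITH'")]
```

Both produce the same result; the difference is that the 4GL-style query leaves the record-by-record mechanics to the database engine.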

114 Machine languages are the only languages understood by computers
Machine languages are the only languages understood by computers. While easily understood by computers, machine languages are almost impossible for humans to use because they consist entirely of numbers. An assembly language contains the same instructions as a machine language, but the instructions and variables have names instead of being just numbers. Programs written in high-level languages are translated into assembly language or machine language by a compiler. Assembly language programs are translated into machine language by a program called an assembler.

115 Characteristics include:
4GL is identified by its characteristics and does not have a standard definition. Characteristics include: Non-procedural – Most 4GLs do not obey the procedural paradigm of continuous statement execution and subroutine call and control instructions. Instead, they are event-driven and make extensive use of object-oriented programming concepts such as objects, properties and methods. Environmental independence (portability) – Many 4GLs are portable across computer architectures, operating systems and telecommunications monitors. Software facilities – The ability to design and paint retrieval screen formats, develop training routines and help screens, and produce graphical output.

116 Programmer workbench concept – The programmer has access through the terminal to easy filing facilities, temporary storage, text editing and operating system commands (an IDE). Simple language subset – A simple language concept that can be used by less-skilled users in an information centre. Care should be taken when using 4GLs. Unlike traditional languages, 4GLs can lack the lower-level detail commands necessary to perform certain types of data-intensive or online operations. These operations are usually required when developing major applications. For this reason, the use of 4GLs as development languages should be weighed carefully against the traditional languages already discussed.

117 4GL classifications: Query and report generators – These specialized languages can extract and produce reports (audit software). More recently, more powerful languages have been produced that can access database records, produce complex online outputs and be developed in an almost natural language. Embedded database 4GLs – These depend on self-contained database management systems. This characteristic often makes them more user-friendly, but it may also lead to applications that are not well integrated with other production applications. Examples include FOCUS, RAMIS II and NOMAD 2. Relational database 4GLs – These high-level language products are usually an optional feature of a vendor's DBMS product line. They allow the applications developer to make better use of the DBMS product, but they often are not end-user oriented. Examples include SQL+, MANTIS and NATURAL. Application generators – These development tools generate lower-level programming languages (3GLs) such as COBOL and C. The application can be further tailored and customized. Data processing development personnel, not end users, use application generators.

118 VERY IMPORTANT! Wait a minute!!!
What is the most common demerit of a fourth-generation language? It can lack the lower-level detail commands necessary for certain data-intensive or online operations/calculations.

119 BUSINESS PROCESS REENGINEERING
A business process can be seen as a set of interrelated work activities characterized by specific inputs and value-added tasks that produce specific customer-focused outputs. Business processes consist of horizontal work flows that cut across several departments or functions. BPR is the process of responding to competitive and economic pressures, and customer demands, to survive in the current business environment. This is usually done by automating system processes so that there are fewer manual interventions and manual controls. BPR achieved with the help of implementing an ERP system is often referred to as package-enabled reengineering (PER).

120 BPR contd… Benefits of BPR are usually experienced where the reengineered process appropriately suits the business needs. BPR has increased in popularity as a method for achieving the goal of cost savings, through streamlining operations and gaining the advantages of centralization within the same process.

121 BPR STAGES Define the areas to be reviewed; Develop a project plan;
Gain understanding of the process under review; Redesign and streamline the process; Implement and monitor the new process; Establish a continuous improvement process.

122 BPR contd… The newly designed business processes inevitably involve changes in the ways of doing business and could impact the philosophy, finances and personnel of the organization, its business partners and customers. Throughout the change process, the change management team must be sensitive to the organization's culture, structure, direction and components of change. They must also be able to predict and/or anticipate issues and problems, and offer appropriate resolutions that will accelerate the change process.

123 BPR contd… A major concern in Business Process Reengineering is that key controls may be reengineered out of a business process. The IS Auditor should identify the existing controls and evaluate the impact of removing them. If the controls are key preventive controls, the IS Auditor must ensure management is aware of the removal of the control and that management is willing to accept the potential material risk of not having that preventive control.

124 BENCHMARKING This is about improving business processes. It is defined as a continuous, systematic process for evaluating the products, services and work processes of organizations recognized as representing best practices, for the purpose of organizational improvement.

125 BENCHMARKING PROCESS
Plan; Research; Observe; Analyze; Adapt; Improve.

126 BENCHMARKING PROCESS Plan – critical processes are identified for the benchmarking exercise. Benchmarking team should understand the kind of data needed. Research – benchmarking team should collect data about its own processes, before collecting this data about others. Benchmarking partners are identified through media sources; Observe – next step is to collect data and visit benchmarking partners. There should be an agreement with the partner organization, a data collection plan and methods to facilitate proper observation

127 BENCHMARKING PROCESS Analyze – the data collected so far are analyzed and interpreted for the purpose of identifying gaps between the organization's and the partner's processes. Converting key findings into new operational goals is the aim of this stage. Adapt – the results of the process are adapted to the organization's processes. This involves translating the findings into core principles. Continuous improvement – this is the key focus in a benchmarking exercise.

128 BPR Audit and Evaluation
When reviewing an organization's business process change (reengineering) efforts, IS auditors must determine whether: The organization's change efforts are consistent with the overall culture and strategic plan of the organization; The reengineering team is making an effort to minimize any negative impact the change might have on the organization's staff; The BPR team has documented lessons to be learned after the completion of the BPR/process change project. The IS auditor would also provide a statement of assurance or conclusion with respect to the objectives of the audit.

129 ISO 9126 ISO 9126 is an international standard for assessing the quality of software products. This standard provides the definition of the characteristics and associated quality evaluation process to be used when specifying the requirements for, and evaluating the quality of, software products throughout their life cycle. Attributes evaluated include: Functionality; Reliability; Usability; Efficiency; Maintainability; Portability.

130 ISO 9126 Functionality – existence of a set of functions and their specified properties; Reliability – capability of software to maintain its level of performance under stated conditions for a stated period of time; Usability – effort needed to use the software, and individual assessment of such use; Efficiency – amount of resources needed by the software to maintain a given level of performance; Maintainability – effort needed to make modifications (closely related to module cohesion and coupling); Portability – ability of the software to be transferred from one environment to another.

131 TEASER Functionality is a characteristic associated with evaluating the quality of software product throughout their life cycle, and is best described as the set of attributes that bear on the: A. The existence of a set of functions and their specified properties B. Ability of the software to be transferred from one environment to another C. Capability of the software to maintain its level of performance under stated conditions D. Relationship between the level of performance of the software and the amount of resource used

132 TEASER Various standards have emerged to assist IS organizations in achieving an operational environment that is predictable, measurable and repeatable. The standard that provides the definition of the characteristics and associated quality evaluation process to be used when specifying the requirements for and evaluating the quality of software products throughout their life cycle is: A. ISO 9126 B. ISO 9001 C. ISO 9002 D. ISO 9003 Explanation: ISO 9126 is the standard that focuses on the end result of good software processes, i.e., the quality of the actual software product. ISO 9001 contains guidelines about design, development, production, installation or servicing. ISO 9002 contains guidelines about production, installation or servicing, and ISO 9003 contains guidelines about final inspection and testing.

133 CAPABILITY MATURITY MODEL INTEGRATION
CMM was originally adopted for software; other models were developed for disciplines such as systems engineering. CMMI was conceived as a means of combining the various models into a set of integrated models. CMMI is a means of improving processes and rules. CMMI offers practices in the form of activities and tasks.

134 CMMI is less directly aligned with the waterfall/SDLC/traditional approach, but aligns directly with contemporary software development practices such as iterative development. CMMI is useful for evaluating the management of a computer centre, the development function's management processes, and for implementing and measuring the IT change management process.

135

136 ISO/IEC 15504 (SPICE) This internationally standardises maturity models. It is a series of documents that provide guidance on process improvement, benchmarking and assessment. It includes detailed guidance that can be leveraged to create enterprise best practices. See page …….211/212

137 APPLICATION CONTROLS Application controls can be manual or programmed. The objective is to ensure the completeness, accuracy and validity of the entries made into a system from both manual and programmed processing. Application controls are controls over input, processing and output aimed at ensuring that: only complete, accurate and valid data are entered and updated in a computer system; processing accomplishes the correct task; and processing results meet expectations.

138 The IS auditor's tasks include:
Identifying the significant application components and the flow of transactions through the system, gaining a detailed understanding of the application by reviewing the available documentation and interviewing appropriate personnel; Identifying the application control strengths, and evaluating the impact of the control weaknesses; Developing a testing strategy; Testing the controls to ensure their functionality and effectiveness by applying appropriate audit procedures; Evaluating the control environment by analyzing the test results and other audit evidence to determine that control objectives were achieved.

139 INPUT/ORIGINATING CONTROLS
This ensures that every transaction to be processed is received, processed and recorded accurately and completely. It ensures that only valid and authorized information is input and that these transactions are processed only once. Therefore, a system receiving the output of another system as its input/origination must in turn apply edit checks, validations and access controls to those data.

140 INPUT AUTHORIZATION This helps ensure that only authorized data are entered into the computer system for processing by applications. Authorization can be performed online; a computer-generated report listing items requiring manual authorization may also be generated. Types of authorization include: Signatures on batch forms or source documents; Online access controls – ensure that only authorized individuals may access data or perform sensitive functions; Unique passwords – necessary to ensure that access authorization cannot be compromised through use of another individual's authorized data access; Terminal or client workstation identification – used to limit input to specific terminals or workstations as well as to individuals; Source documents – a well-designed source document increases the speed and accuracy with which data can be recorded, supports reference checking, controls work flow, etc.

Ideally, source documents should be pre-printed forms to provide consistency, accuracy and legibility. Source documents should include standard headings, titles, notes and instructions. Source document layouts should: • Emphasize ease of use and readability • Group similar fields together to facilitate input • Provide predetermined input codes to reduce errors • Contain appropriate cross-reference numbers or a comparable identifier to facilitate research and tracing • Use boxes to prevent field-size errors • Include an appropriate area for management to document authorization. All source documents should be appropriately controlled. Procedures should be established to ensure that all source documents have been input and taken into account. Prenumbering source documents facilitates this control.

142 BATCH CONTROLS AND BALANCING
Batch controls group input transactions to provide control totals. They include: Total monetary amount – verification that the total monetary value of items processed equals the total monetary value of the batch documents; Total items – verification that the total number of items included on each document in the batch agrees with the total number of items processed; Total documents – verification that the total number of documents in the batch equals the total number of documents processed; Hash totals – verification that a total of a nonmonetary field (e.g., account numbers) in the batch agrees with the total calculated by the system.
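The four batch control totals can be sketched in Python; the batch documents, field names and header values below are illustrative:

```python
# Illustrative batch of payment documents; field names and values are made up.
batch = [
    {"doc_no": 101, "account": "4711", "amount": 250.00, "items": 3},
    {"doc_no": 102, "account": "4712", "amount": 100.00, "items": 1},
    {"doc_no": 103, "account": "4713", "amount": 75.50,  "items": 2},
]

# Control totals recalculated by the system from the batch contents.
total_amount    = sum(d["amount"] for d in batch)        # total monetary amount
total_items     = sum(d["items"] for d in batch)         # total items
total_documents = len(batch)                             # total documents
hash_total      = sum(int(d["account"]) for d in batch)  # hash total: sum of a
                                                         # field with no intrinsic
                                                         # monetary meaning

# Batch header totals recorded when the batch was prepared; the system
# accepts the batch only if every recalculated total agrees with the header.
header = {"amount": 425.50, "items": 6, "documents": 3, "hash": 14136}
batch_accepted = (total_amount == header["amount"]
                  and total_items == header["items"]
                  and total_documents == header["documents"]
                  and hash_total == header["hash"])
```

Any lost, duplicated or altered document disturbs at least one of the four totals, which is exactly why they are computed independently at preparation and at processing.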

143 Batch Balancing Batch balancing can be performed through manual or automated reconciliation. It ensures that all documents are included in a batch, all batches are submitted for processing, and all batches are accepted by the computer. It includes: Batch registers – these registers enable manual recording of batch totals and subsequent comparison with system-reported totals; Control accounts – performed through an initial edit file to determine batch totals. The data are then processed to the master file, and a reconciliation is performed between the totals processed during the initial edit file and the master file; Computer agreement – computer agreement with batch totals is performed through the input of batch header details that record the batch totals; the system compares these to calculated totals, either accepting or rejecting the batch. In case of errors, please review the possible actions for error reporting on page 214.

144 INPUT CONTROL TECHNIQUES
Reconciliation of data Error correction procedures Anticipation Documentation Transaction log Transmittal log Cancellation of source documents Page ………212

145 Data Validation and Editing Procedure
This is a process of ensuring that input data are validated and edited as close to the time and point of origination as possible. If input procedures allow supervisor overrides of data validation and editing, automatic logging should occur. A management individual who did not initiate the override should review this log. Above all, please note that data validation and edit procedures are PREVENTIVE CONTROLS that are applied before data are processed.

146 DATA VALIDATION AND EDIT CONTROL
1. Check digit
2. Completeness check
3. Limit check
4. Logical relationship check
5. Sequence check
6. Range check
7. Reasonableness check
8. Duplicate check
9. Validity check
10. Table look-ups
11. Existence check
12. Key verification
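A few of these edits can be sketched in Python. The Luhn algorithm is used here as a representative check-digit scheme, and the field values are illustrative.

```python
def luhn_valid(number: str) -> bool:
    """Check digit: validate the trailing Luhn check digit of a keyed number."""
    digits = [int(d) for d in number]
    # Double every second digit from the right, subtracting 9 when the result > 9.
    for i in range(len(digits) - 2, -1, -2):
        digits[i] *= 2
        if digits[i] > 9:
            digits[i] -= 9
    return sum(digits) % 10 == 0

def completeness_check(field: str) -> bool:
    """Completeness check: field must contain data, not blanks or all zeros."""
    stripped = field.strip()
    return bool(stripped) and set(stripped) != {"0"}

def range_check(value: int, low: int, high: int) -> bool:
    """Range check: value must fall within predetermined lower and upper limits."""
    return low <= value <= high

print(luhn_valid("79927398713"))   # True - a standard Luhn test number
print(completeness_check("0000"))  # False - zeros are not data
print(range_check(13, 1, 12))      # False - e.g. an invalid month number
```

Each routine rejects bad data at the point of entry, which is what makes these edits preventive rather than detective controls.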

147 PROCESSING CONTROLS Processing procedures and controls ensure the reliability of application program processing. The following are processing control techniques:
Manual recalculation: a sample of transactions may be recalculated manually to ensure that processing is accomplishing the anticipated task
Editing: an edit check is a program instruction or subroutine that tests the accuracy, completeness and validity of data. It may be used to control input or later processing of data
Run-to-run totals: run-to-run totals provide the ability to verify data values through the stages of application processing
Programmed controls: software can be used to detect and initiate corrective action for errors in data and processing

148 Reasonableness verification of calculated amounts: application programs can verify the reasonableness of calculated amounts, testing them for appropriateness against predetermined criteria
Limit checks on calculated amounts: an edit check can provide assurance, through the use of predetermined limits, that amounts have been keyed or calculated correctly
Reconciliation of file totals: reconciliations may be performed through the use of a manually maintained account, a file control record or an independent control file
Exception reports: an exception report is generated by a program that identifies transactions or data that appear to be incorrect. These items may be outside a predetermined range or may not conform to specified criteria
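A minimal sketch of run-to-run totals in Python. The transaction values are invented, and the trivial pass-through stages stand in for real edit and update programs:

```python
# Each processing stage recomputes the control total and compares it with
# the total carried forward from the previous run.
transactions = [120.00, 80.00, 45.50, 54.50]
carried_total = sum(transactions)  # control total established at input: 300.00

def edit_stage(txns):
    # Placeholder validation stage: passes records through unchanged here.
    return list(txns)

def update_stage(txns):
    # Placeholder master-file update stage.
    return list(txns)

for stage in (edit_stage, update_stage):
    transactions = stage(transactions)
    run_total = sum(transactions)
    # A mismatch would mean records were lost or altered between runs.
    assert run_total == carried_total, f"control break at {stage.__name__}"
    carried_total = run_total  # carry the total forward to the next run

print("run-to-run totals agree:", carried_total)
```

In a real system the carried-forward total would be written to a control record alongside the intermediate file, not held in memory.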

149 TEASER The editing/validation of data entered at a remote site would be performed MOST effectively at the: A. central processing site after running the application system. B. central processing site during the running of the application system. C. remote processing site after transmission of the data to the central processing site. D. remote processing site prior to transmission of the data to the central processing site.

150 Data File Control Procedure
Data files, or indeed database tables, generally fall into four categories:
System control parameters: the entries in these files change the workings of the system and may alter controls exercised by the system
Standing data: these master files include data, such as supplier/customer names and addresses, that do not change frequently and are referred to during processing
Master data/balance data: running balances and totals that are updated by transactions should not be capable of adjustment except under strict approval and review controls
Transaction files: these are controlled using validation checks, control totals, exception reports, etc.

151 METHODS OF DATA FILE CONTROL
1. File updating & maintenance authorization
2. Transaction log
3. Pre-recorded input
4. Parity checking
5. Before & after image reporting
6. Maintenance error reporting and handling
7. Source documentation retention
8. Data file security
9. One-for-one checking
10. Version usage
11. Internal & external labeling


153 TEASER As updates to an online order entry system are processed, the updates are recorded on a transaction tape and a hard copy transaction log. At the end of the day, the order entry files are backed up on tape. During the backup procedure, a drive malfunctions and the order entry files are lost. Which of the following is necessary to restore these files? A. The previous day's backup file and the current transaction tape B. The previous day's transaction file and the current transaction tape C. The current transaction tape and the current hard copy transaction log D. The current hard copy transaction log and the previous day's transaction file

154 OUTPUT CONTROLS Output controls provide assurance that the data delivered to users will be presented, formatted and delivered in a consistent and secure manner. They include: Logging and storage of sensitive and critical forms; Computer generation of critical and sensitive forms; Report distribution; Balancing and reconciling; Output error handling; Output report retention; Verification of receipt of reports. Page 214

155 DATA INTEGRITY TESTING
This is a set of substantive tests that examines the accuracy, completeness, consistency and authorization of data presently held in the system. Data integrity tests indicate failures in input or processing controls. The integrity of accumulated data in a file can be checked against authorized source documentation. When this checking is done, it is common to check only a portion of the file at a time. Since the whole file is regularly checked in cycles, the control technique is often referred to as cyclical checking.

156 TYPES OF DATA INTEGRITY TEST
Relational integrity test: relational integrity tests are performed at the data element and record-based levels and usually involve calculating and verifying various calculated fields, such as control totals. Relational integrity is enforced through data validation routines built into the application, by defining input condition constraints and data characteristics at the table definition stage in the database itself, or by a combination of both.

157 DATA INTEGRITY TEST contd…
Referential integrity test: referential integrity tests verify the existence relationships between entities in a database that need to be maintained by the DBMS. Referential integrity is required for maintaining inter-relation integrity in the relational data model. Whenever two or more relations are related through referential constraints (primary and foreign keys), it is necessary that references be kept consistent in the event of insertions, deletions and updates to these relations.
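The behaviour described above, where the DBMS blocks a deletion that would orphan dependent rows, can be demonstrated with SQLite's foreign key support. Table and column names are illustrative.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled
con.execute("CREATE TABLE customer (cust_no INTEGER PRIMARY KEY, name TEXT)")
con.execute("""CREATE TABLE orders (
    order_no INTEGER PRIMARY KEY,
    cust_no INTEGER NOT NULL REFERENCES customer(cust_no))""")

con.execute("INSERT INTO customer VALUES (1, 'Acme Ltd')")
con.execute("INSERT INTO orders VALUES (100, 1)")  # a live order for customer 1

# Deleting the customer row would orphan order 100, so the DBMS rejects it.
try:
    con.execute("DELETE FROM customer WHERE cust_no = 1")
except sqlite3.IntegrityError as e:
    print("delete rejected:", e)
```

This mirrors the teaser that follows: it is the foreign key on the orders table, not the primary key alone, that forces the DBMS to keep the reference consistent.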

158 TEASER In a relational database with referential integrity, the use of which of the following keys would prevent deletion of a row from a customer table as long as the customer number of that row is stored with live orders on the orders table? A. Foreign key B. Primary key C. Secondary key D. Public key

159 ANSWER In a relational database with referential integrity, the use of foreign keys would prevent events such as primary key changes and record deletions from resulting in orphaned relations within the database. It should not be possible to delete a row from a customer table when the customer number (primary key) of that row is stored with live orders on the orders table (the foreign key to the customer table). A primary key works within one table, so it cannot ensure referential integrity by itself. Secondary keys that are not foreign keys are not subject to referential integrity checks. A public key is related to encryption and is not linked in any way to referential integrity.

160 TEASER Which of the following controls would provide the GREATEST assurance of database integrity? A. Audit log procedures B. Table link/reference checks C. Query/table access time checks D. Rollback and rollforward database features

161 DATA INTEGRITY TEST contd…
Domain integrity test: used to confirm that data validation and edit controls and procedures are working appropriately, and that data fall within their correct domains.

162 DATA INTEGRITY IN ONLINE TRANSACTION PROCESSING SYSTEM
In multi-user transaction systems, it is necessary to manage parallel user access to stored data, typically controlled by a DBMS. Of particular importance are four online data integrity requirements, known as the ACID principles:
Atomicity: from a user perspective, a transaction is either completed in its entirety (i.e., all relevant database tables are updated) or not at all. If an error or interruption occurs, all changes made up to that point are backed out
Consistency: all integrity conditions in the database are maintained with each transaction, taking the database from one consistent state into another consistent state
Isolation: each transaction is isolated from other transactions, and hence each transaction only accesses data that are part of a consistent database state
Durability: if a transaction has been reported back to a user as complete, the resulting changes to the database survive subsequent hardware or software failures
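Atomicity can be demonstrated with SQLite: a transfer that violates a constraint is backed out in its entirety, leaving the database in its prior consistent state. The account table and amounts are invented for the sketch.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE account "
            "(id INTEGER PRIMARY KEY, balance INTEGER CHECK (balance >= 0))")
con.execute("INSERT INTO account VALUES (1, 100), (2, 50)")
con.commit()

# A funds transfer must update both rows or neither (atomicity).
try:
    with con:  # the connection context manager commits on success, rolls back on error
        con.execute("UPDATE account SET balance = balance - 200 WHERE id = 1")
        con.execute("UPDATE account SET balance = balance + 200 WHERE id = 2")
except sqlite3.IntegrityError:
    pass  # the debit violated the CHECK constraint, so the whole transfer was backed out

balances = dict(con.execute("SELECT id, balance FROM account"))
print(balances)  # {1: 100, 2: 50} - the prior consistent state survives
```

The CHECK constraint also illustrates consistency: the DBMS refuses any transaction that would leave an integrity condition violated.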

163 TESTING APPLICATION SYSTEM
Review the full list of application system testing techniques on page 213. Also review the last paragraph after Application System Testing.

164 CONTINUOUS ONLINE AUDITING
Continuous online auditing is becoming increasingly important in today's e-business world because it provides a method for the IS auditor to collect evidence on system reliability while normal processing takes place. The continuous audit approach cuts down on needless paperwork and leads to an essentially paperless audit. In this sense, an IS auditor can report directly through the microcomputer on significant errors or other irregularities that may require immediate management attention.

165 TYPES OF ONLINE AUDITING TECHNIQUES
SCARF/EAM
Snapshots
Audit hooks
ITF
Continuous and intermittent simulations
Also discuss the foregoing in order of complexity.

166 BROAD CLASSIFICATIONS
Broadly, any concurrent audit technique falls into one of the following classes:
Those that can be used to evaluate application systems with test/live data during normal production processing runs; an example is the Integrated Test Facility (ITF)
Those that can be used to select transactions for audit review during normal production processing runs; examples are Snapshot and Extended Records
Those that can be used to trace or map changing states of application systems during normal production processing runs; examples are the System Control Audit Review File (SCARF) and Continuous and Intermittent Simulation (CIS)

167 Integrated Test Facility (ITF)
In this technique, dummy entities are set up and included in an auditee's production files. The IS auditor can make the system either process live transactions or test transactions during regular processing runs, and have these transactions update the records of the dummy entity. The operator enters the test transactions simultaneously with the live transactions that are entered for processing. The auditor then compares the output with the data that have been independently calculated to verify the correctness of the computer processed data.

168 ITF
[Diagram: live transactions and tagged test data both flow through the application system into a database that contains the ITF dummy entity.]

169 Audit hooks: This technique involves embedding hooks in application systems to function as red flags and to induce IS auditors to act before an error or irregularity gets out of hand.

170 SNAPSHOT For application systems that are large or complex, tracing the different execution paths through the system can be difficult. If auditors wish to perform transaction walkthroughs, therefore, they could face a difficult or impossible task. A simple solution to the problem is to use the computer to assist with performing transaction walkthroughs.

171 SNAPSHOT The Snapshot technique involves having software take "pictures" of a transaction as it flows through an application system. Typically, auditors embed the software in the application system at those points where they deem material processing occurs. The embedded software then captures images of a transaction as it progresses through these various processing points, taking a before-image and an after-image of the transaction to show the transformation that has occurred.
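A minimal Python sketch of the snapshot idea: an embedded routine records before- and after-images of a transaction at a material processing point. The discount step and field names are hypothetical.

```python
import json

snapshot_log = []  # stands in for the auditor's snapshot file

def snapshot(point, before, after):
    """Record before- and after-images of a transaction at a processing point."""
    snapshot_log.append({"point": point, "before": dict(before), "after": dict(after)})

def apply_discount(txn):
    """A material processing step with an embedded snapshot routine."""
    before = dict(txn)                               # capture the before-image
    txn["amount"] = round(txn["amount"] * 0.9, 2)    # the transformation under audit
    snapshot(point=1, before=before, after=txn)      # capture the after-image
    return txn

txn = {"txn_id": 7, "amount": 200.00}
apply_discount(txn)
print(json.dumps(snapshot_log, indent=2))
```

Comparing the two images shows exactly what the processing point did to the transaction, which is the audit trail the snapshot report provides.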

172 SNAPSHOT
[Diagram: transactions pass through an input validation program (snapshot points 1-3) and an update program (snapshot points 4-8); outputs include error files and reports, the updated master file, snapshot reports and a management report.]

173 EXTENDED RECORD TECHNIQUE
This is a modification of the Snapshot technique. As opposed to having the software write one record for each snapshot point, auditors can have it construct a single record that is built up from the images captured at each snapshot point. Extended records have the merit of collecting all the snapshot data related to a transaction in one place, thereby facilitating audit evaluation work.

174 EXTENDED RECORD TECHNIQUE
[Diagram: an extended record concatenates the before- and after-images from snapshot points 1 through n into a single record.]
The Snapshot and Extended Record techniques can be used in conjunction with the ITF technique to provide an extensive audit trail.

175 SCARF This is the most complex of all the concurrent auditing techniques. It involves embedding audit software modules within a host application to provide continuous monitoring of the system's transactions. These audit modules are placed at predetermined points to gather information about transactions or events. Information collected is written to a special file, the SCARF master file.

176 SCARF In many ways, the SCARF technique is like the snapshot/extended record technique. Indeed, the SCARF embedded software can be used to capture snapshots and to create extended records. It must, however, be noted that the SCARF technique uses a more complex reporting system than the snapshot and extended record techniques.
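A SCARF-style embedded audit module can be sketched as a hook inside the update program that writes transactions meeting predetermined audit criteria to a SCARF file. The materiality threshold and record layout are invented for the example.

```python
SCARF_FILE = []  # stands in for the SCARF master file

def scarf_hook(txn):
    """Embedded audit routine: log transactions that meet audit criteria."""
    if txn["amount"] > 1000:  # hypothetical predetermined materiality threshold
        SCARF_FILE.append(txn)

def update_program(txns):
    """Host application update program with the SCARF module embedded in it."""
    for txn in txns:
        scarf_hook(txn)  # audit module runs at a predetermined processing point
        # ...the normal master-file update would happen here...

update_program([{"id": 1, "amount": 500}, {"id": 2, "amount": 2500}])
print(SCARF_FILE)  # only the material transaction is captured for audit reporting
```

A separate SCARF reporting system would later read this file and produce the audit reports, which is where the extra reporting complexity mentioned above comes in.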

177 SCARF
[Diagram: an update program containing SCARF embedded audit routines writes exceptions to the SCARF file; a SCARF reporting system then produces audit reports alongside the normal update report and output master file.]

178 CONTINUOUS AND INTERMITTENT SIMULATION
This is a variation of SCARF. It can be used whenever an application system uses a DBMS. Whereas SCARF requires embedding audit modules within an application to trap exceptions, CIS uses the DBMS to trap these exceptions; this way, the application system is left intact. When the application system invokes the services provided by the DBMS, the DBMS in turn indicates to CIS that a service is required.

179 CIS
[Diagram: transactions flow from the application through the DBMS to the database; CIS sits alongside the DBMS, using working storage and writing discrepancies to an exception log.]

180 CIS TECHNIQUE CIS then determines whether it wants to examine the activities to be carried out by the DBMS on behalf of the application. The DBMS provides CIS with all the data required by the application system to process the selected transaction. Using these data, CIS also processes the transaction; in other words, CIS replicates the application system's processing logic. Every update to the database that arises from processing the selected transaction is checked by CIS to determine whether discrepancies exist between its results and those of the application system.
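The CIS approach of replicating the application's processing logic and comparing results can be sketched as a parallel simulation. The pricing logic here is hypothetical; in practice CIS would independently reimplement the real application rules.

```python
def application_logic(txn):
    """Production processing: computes the amount to be posted."""
    return txn["qty"] * txn["unit_price"]

def cis_replica(txn):
    """CIS independently replicates the application's processing logic."""
    return txn["qty"] * txn["unit_price"]

exception_log = []  # discrepancies between CIS and the application
for txn in [{"id": 1, "qty": 3, "unit_price": 9.5},
            {"id": 2, "qty": 2, "unit_price": 4.0}]:
    app_result = application_logic(txn)
    cis_result = cis_replica(txn)
    if app_result != cis_result:  # any mismatch is evidence of a processing fault
        exception_log.append((txn["id"], app_result, cis_result))

print("discrepancies:", exception_log)  # empty when the two calculations agree
```

A non-empty exception log is what CIS would surface to the auditor; agreement on every selected transaction is evidence that the application processed them correctly.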


182 TEASER Which of the following online auditing techniques is MOST effective for the early detection of errors or irregularities? A. Embedded audit module B. Integrated test facility C. Snapshots D. Audit hooks

183 EXPLANATION The audit hook technique involves embedding code in application systems for the examination of selected transactions. This helps the IS auditor to act before an error or an irregularity gets out of hand. An embedded audit module involves embedding specially written software in the organization's host application system so that application systems are monitored on a selective basis. An integrated test facility is used when it is not practical to use test data, and snapshots are used when an audit trail is required.

184 TEASER Which of the following audit tools is MOST useful to an IS auditor when an audit trail is required? A. Integrated test facility (ITF) B. Continuous and intermittent simulation (CIS) C. Audit hooks D. Snapshots

185 TEASER Which of the following would BEST ensure the proper updating of critical fields in a master record? A. Field checks B. Control totals C. Reasonableness checks D. Before and after maintenance report. The before and after maintenance report is the best answer because a visual review would provide the most positive verification that updating was proper.

186 TEASER An IS auditor reviewing an organization's data file control procedure finds that transactions are applied to the most current files, while restart procedures use earlier versions. The IS auditor should recommend the implementation of: A. source documentation retention B. data file security C. version usage control D. one for one checking

187 TEASER Which of the following types of data validation and editing are used to determine if a field contains data, and not zeros or blanks? A. Check digit B. Existence check C. Completeness check D. Reasonableness check

188 TEASER Edit controls are considered to be:
A. preventive controls. B. detective controls. C. corrective controls. D. compensating controls.

189 TEASER Which of the following provides the ability to verify data values through the stages of application processing? A. Programmed controls B. Run-to-run totals C. Limit checks on calculated amounts D. Exception reports

190 TEASER Which of the following is intended to reduce the amount of lost or duplicated input? A. Hash totals B. Check digits C. Echo checks D. Transaction codes Hash totaling involves totaling specified fields in a series of transactions or records. If later checks do not result in the same number, then records are either lost, entered or transmitted incorrectly, or are duplicated.

191 TEASER Which of the following is NOT an objective of application controls? A. Detection of the cause of exposure B. Analysis of the cause of exposure C. Correction of the cause of exposure D. Prevention of the cause of exposure Controls are usually classified in three categories; preventive, corrective or detective. No control is gained by a routine that analyzes an exposure.

192 TEASER Procedures for controls over processing include:
A. hash totals. B. reasonableness checks. C. online access controls. D. before and after image reporting Reasonableness checks are a form of processing controls that can be used to ensure that data conforms to predetermined criteria. Before and after image reporting is essentially a control over data files that makes it possible to trace the impact transactions have on computer records. Online access controls prevent unauthorized access to the system and data. Hash totals are a form of batch control that are used to verify a predetermined numeric field for all documents in a batch to the agreed number of documents processed.

193 TEASER Parity bits are a control used to validate: A. Data accuracy B. Data completeness C. Data authentication D. Data source

194 TEASER Which of the following BEST describes an integrated test facility? A. A technique that enables the IS auditor to test a computer application for the purpose of verifying correct processing B. The utilization of hardware and/or software to review and test the functioning of a computer system C. A method of using special programming options to permit the printout of the path through a computer program taken to process a specific transaction D. A procedure for tagging and extending transactions and master records that are used by an IS auditor for tests

195 ANSWER The correct answer is: A. A technique that enables the IS auditor to test a computer application for the purpose of verifying correct processing. Explanation: Answer A best describes an integrated test facility, which is a specialized computer-assisted audit process that allows an IS auditor to test an application on a continuous basis. Answer B is an example of a systems control audit review file; answers C and D are examples of snapshots.


