Human Resources Security Part 1 of 3


1 Human Resources Security Part 1 of 3
This chapter covers a number of topics that, for want of a better term, we categorize as human resources security. The subject is a broad one, and a full discussion is well beyond the scope of this book. In this chapter, we look at some important issues in this area.

2 Benefits to Organizations
The topic of security awareness, training, and education is mentioned prominently in a number of standards and standards-related documents, including ISO 27002 (Code of Practice for Information Security Management). Security awareness, training, and education programs provide four major benefits to organizations: • Improving employee behavior • Increasing the ability to hold employees accountable for their actions • Mitigating liability of the organization for an employee's behavior • Complying with regulations and contractual obligations

3 Human Factors & the Learning Continuum
Employee behavior is a critical concern in ensuring the security of computer systems and information assets. A number of recent surveys show that employee actions, both malicious and unintentional, cause considerable computer-related loss and security compromises (e.g., [CSI10], [VERI11]). The principal problems associated with employee behavior are errors and omissions, fraud, and actions by disgruntled employees. Security awareness, training, and education programs can reduce the problem of errors and omissions. Such programs can serve as a deterrent to fraud and actions by disgruntled employees by increasing employees' knowledge of their accountability and of potential penalties. Employees cannot be expected to follow policies and procedures of which they are unaware. Further, enforcement is more difficult if employees can claim ignorance when caught in a violation. Ongoing security awareness, training, and education programs are also important in limiting an organization's liability. Such programs can bolster an organization's claim that a standard of due care has been taken in protecting information. Finally, security awareness, training, and education programs may be needed to comply with regulations and contractual obligations. For example, companies that have access to information from clients may have specific awareness and training obligations that they must meet for all employees with access to client data.

4 Table 17.1 Comparative Framework
Table 17.1 illustrates some of the distinctions among awareness, training, and education. We look at each of these categories in turn.

5 Awareness
In general, a security awareness program seeks to inform and focus an employee's attention on issues related to security within the organization. The hoped-for benefits from security awareness include the following: 1. Employees are aware of their responsibilities for maintaining security and the restrictions on their actions in the interests of security and are motivated to act accordingly. 2. Users understand the importance of security for the well-being of the organization. 3. Because there is a constant barrage of new threats, user support, IT staff enthusiasm, and management buy-in are critical and can be promoted by awareness programs. An awareness program must continually promote the security message to employees in a variety of ways. A wide range of activities and material can be used in such a program. This can include publicity material such as posters, memos, newsletters, and flyers that detail key aspects of security policies and act to generally raise awareness of the issues from day to day. It can also include various workshops and training sessions for groups of staff, providing information relevant to their needs. These may often be incorporated into more general training programs on organizational practices and systems. The standards encourage the use of examples of good practice that are related to the organization's systems and IT usage. The more relevant and easy to follow the procedures are, the more likely it is that a greater level of compliance and hence security will be achieved. Suitable security awareness sessions should be incorporated into the process used to introduce new staff to the organization and its processes. Security awareness sessions should also be repeated regularly to help staff members refresh their knowledge and understanding of security issues. To emphasize the importance of security awareness, an organization should have a security awareness policy document that is provided to all employees. The policy should establish three things: 1. Participation in an awareness program is required for every employee. This will include an orientation program for new employees as well as periodic awareness activities. 2. Everyone will be given sufficient time to participate in awareness activities. 3. Responsibility for managing and conducting awareness activities is clearly spelled out. An excellent, detailed list of considerations for security awareness is provided in The Standard of Good Practice for Information Security, from the Information Security Forum [ISF11]. This material is reproduced in Appendix H.3.

6 NIST SP 800-100 (Information Security Handbook: A Guide for Managers) describes the content of awareness programs, in general terms, as follows: "Awareness tools are used to promote information security and inform users of threats and vulnerabilities that impact their division or department and personal work environment by explaining the what but not the how of security, and communicating what is and what is not allowed. Awareness not only communicates information security policies and procedures that need to be followed, but also provides the foundation for any sanctions and disciplinary actions imposed for noncompliance. Awareness is used to explain the rules of behavior for using an agency's information systems and information and establishes a level of expectation on the acceptable use of the information and information systems." [SZUB98] provides a useful list of goals for a security awareness program, as follows: Goal 1: Raise staff awareness of information technology security issues in general. Goal 2: Ensure that staff are aware of local, state, and federal laws and regulations governing confidentiality and security. Goal 3: Explain organizational security policies and procedures. Goal 4: Ensure that staff understand that security is a team effort and that each person has an important role to play in meeting security goals and objectives. Goal 5: Train staff to meet the specific security responsibilities of their positions. Goal 6: Inform staff that security activities will be monitored. Goal 7: Remind staff that breaches in security carry consequences. Goal 8: Assure staff that reporting of potential and realized security breakdowns and vulnerabilities is responsible and necessary behavior (and not trouble-making behavior). Goal 9: Communicate to staff that the goal of creating a trusted system is achievable.

7 Training
A security training program is designed to teach people the skills to perform their IS-related tasks more securely. Training teaches what people should do and how they should do it. Depending on the role of the user, training encompasses a spectrum ranging from basic computer skills to more advanced specialized skills. For general users, training focuses on good computer security practices, including the following: • Protecting the physical area and equipment (e.g., locking doors, caring for CD-ROMs and DVDs) • Protecting passwords (if used) or other authentication data or tokens (e.g., never divulge PINs) • Reporting security violations or incidents (e.g., whom to call if a virus is suspected) Programmers, developers, and system maintainers require more specialized or advanced training. This category of employees is critical to establishing and maintaining computer security. However, it is the rare programmer or developer who understands how the software that he or she is building and maintaining can be exploited. Typically, developers don't build security into their applications and may not know how to do so, and they resist criticism from security analysts. The training objectives for this group include the following: • Develop a security mindset in the developer. • Show the developer how to build security into the development life cycle, using well-defined checkpoints. • Teach the developer how attackers exploit software and how to resist attack. • Provide analysts with a toolkit of specific attacks and principles with which to interrogate systems. Management-level training should teach development managers how to make trade-offs among risks, costs, and benefits involving security. The manager needs to understand the development life cycle and the use of security checkpoints and security evaluation techniques. Executive-level training must explain the difference between software security and network security and, in particular, the pervasiveness of software security issues. Executives need to develop an understanding of security risks and costs. Executives need training on the development of risk management goals, means of measurement, and the need to lead by example in the area of security awareness.

8 Education
The most in-depth program is security education. This is targeted at security professionals and those whose jobs require expertise in security. Security education is normally outside the scope of most organizations' awareness and training programs. It more properly fits into the category of employee career development programs. Often, this type of education is provided by outside sources such as college courses or specialized training programs.

9 Employment Practices and Policies
This section deals with personnel security: hiring, training, monitoring behavior, and handling departure. [SADO03] reports that a large majority of perpetrators of significant computer crime are individuals who have legitimate access now, or who have recently had access. Thus, managing personnel with potential access is an essential part of information security. Employees can be involved in security violations in one of two ways. Some employees unwittingly aid in the commission of a security violation by failing to follow proper procedures, by forgetting security considerations, or by not realizing that they are creating a vulnerability. Other employees knowingly violate controls or procedures to cause or aid a security violation. Threats from internal users include the following: • Gaining unauthorized access or enabling others to gain unauthorized access • Altering data • Deleting production and backup data • Crashing systems • Destroying systems • Misusing systems for personal gain or to damage the organization • Holding data hostage • Stealing strategic or customer data for corporate espionage or fraud schemes

10 Security in the Hiring Process
ISO 27002 lists the following security objective of the hiring process: to ensure that employees, contractors, and third-party users understand their responsibilities and are suitable for the roles they are considered for, and to reduce the risk of theft, fraud, or misuse of facilities. Although we are primarily concerned in this section with employees, the same considerations apply to contractors and third-party users. From a security viewpoint, hiring presents management with significant challenges. [KABA09] points out that growing evidence suggests that many people inflate their resumes with unfounded claims. Compounding this problem is the increasing reticence of former employers. Employers may hesitate to give bad references for incompetent, underperforming, or unethical employees for fear of a lawsuit if their comments become known and an employee fails to get a new job. On the other hand, a favorable reference for an employee who subsequently causes problems at his or her new job may invite a lawsuit from the new employer. As a consequence, a significant number of employers have a corporate policy that forbids discussing a former employee's performance in any way, positive or negative. The employer may limit information to the dates of employment and the title of the position held. Despite these obstacles, employers must make a significant effort to do background checks and screening of applicants. Of course, such checks are to assure that the prospective employee is competent to perform the intended job and poses no security risk. Additionally, employers need to be cognizant of the concept of "negligent hiring" that applies in some jurisdictions. In essence, an employer may be held liable for negligent hiring if an employee causes harm to a third party (individual or company) while acting as an employee. General guidelines for checking applicants include the following: • Ask for as much detail as possible about employment and educational history. The more detail that is available, the more difficult it is for the applicant to lie consistently. • Investigate the accuracy of the details to the extent reasonable. • Arrange for experienced staff members to interview candidates and discuss discrepancies. For highly sensitive positions, more intensive investigation is warranted. [SADO03] gives the following examples of what may be warranted in some circumstances: • Have an investigation agency do a background check. • Get a criminal record check of the individual. • Check the applicant's credit record for evidence of large personal debt and the inability to pay it. Discuss problems, if you find them, with the applicant. People who are in debt should not be denied jobs: if they are, they will never be able to regain solvency. At the same time, employees who are under financial strain may be more likely to act improperly. • Consider conducting a polygraph examination of the applicant (if legal). Although polygraph exams are not always accurate, they can be helpful if you have a particularly sensitive position to fill. • Ask the applicant to obtain bonding for his or her position.
For many employees, these steps are excessive. However, the employer should conduct extra checks of any employee who will be in a position of trust or privileged access—including maintenance and cleaning personnel.

11 Employment Agreements
As part of their contractual obligation, employees should agree to and sign the terms and conditions of their employment contract, which should state their and the organization's responsibilities for information security. The agreement should include a confidentiality and nondisclosure agreement spelling out specifically that the organization's information assets are confidential unless classified otherwise and that the employee must protect that confidentiality. The agreement should also reference the organization's security policy and indicate that the employee has reviewed and agrees to abide by the policy.

12 During Employment
ISO 27002 lists the following security objective with respect to current employees: to ensure that employees, contractors, and third-party users are aware of information security threats and concerns and their responsibilities and liabilities with regard to information security, are equipped to support organizational security policy in the course of their normal work, and to reduce the risk of human error. Two essential elements of personnel security during employment are a comprehensive security policy document and an ongoing awareness and training program for all employees. These are covered in Section 17.1 and elsewhere in this book. In addition to enforcing the security policy in a fair and consistent manner, there are certain principles that should be followed for personnel security: • Least privilege: Give each person the minimum access necessary to do his or her job. This restricted access is both logical (access to accounts, networks, programs) and physical (access to computers, backup tapes, and other peripherals). If every user has accounts on every system and has physical access to everything, then all users are roughly equivalent in their level of threat. • Separation of duties: Carefully separate duties so that people involved in checking for inappropriate use are not also capable of making such inappropriate use. Thus, having all the security functions and audit responsibilities reside in the same person is dangerous. This practice can lead to a case in which the person may violate security policy and commit prohibited acts, yet in which no other person sees the audit trail to be alerted to the problem. • Limited reliance on key employees: No one in an organization should be irreplaceable. If your organization depends on the ongoing performance of a key employee, then your organization is at risk. Organizations cannot help but have key employees. To be secure, organizations should have written policies and plans established for unexpected illness or departure. As with systems, redundancy should be built into the employee structure. There should be no single employee with unique knowledge or skills.

13 Termination of Employment
ISO 27002 lists the following security objective with respect to termination of employment: to ensure that employees, contractors, and third-party users exit an organization or change employment in an orderly manner, and that the return of all equipment and the removal of all access rights are completed. The termination process is complex and depends on the nature of the organization, the status of the employee in the organization, and the reason for departure. From a security point of view, the following actions are important: • Removing the person's name from all lists of authorized access • Explicitly informing guards that the ex-employee is not allowed into the building without special authorization by named employees • Removing all personal access codes • If appropriate, changing lock combinations, reprogramming access card systems, and replacing physical locks • Recovering all assets, including employee ID, disks, documents, and equipment • Notifying, by memo or e-mail, appropriate departments so that they are aware of the departure
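These critical actions amount to a checklist, and larger organizations often script the checklist so that no step is skipped and each completed step is time-stamped for later audit. The sketch below illustrates that idea only: the step names mirror the list above, and any real deprovisioning would call the organization's actual directory, badge, and asset-tracking systems, which are not shown and are assumed here.

# Illustrative offboarding checklist; all system interfaces are hypothetical placeholders.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class OffboardingRecord:
    employee_id: str
    completed: dict = field(default_factory=dict)

    def mark(self, step: str) -> None:
        # Record when each termination step was completed, for audit purposes.
        self.completed[step] = datetime.utcnow().isoformat()

def offboard(employee_id: str) -> OffboardingRecord:
    record = OffboardingRecord(employee_id)
    # Each step mirrors the critical actions listed above.
    for step in (
        "remove from authorized access lists",
        "notify guards and reception",
        "revoke personal access codes and badges",
        "reprogram card systems / change lock combinations",
        "recover ID, documents, and data storage devices",
        "notify affected departments by memo or e-mail",
    ):
        record.mark(step)  # in practice, each step would call the relevant system
    return record

print(offboard("E1234").completed)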

14 Email and Internet Use Policies
E-mail and Internet access for most or all employees is common in office environments and is typically provided for at least some employees in other environments, such as a factory. A growing number of companies incorporate specific e-mail and Internet use policies into the organization's security policy document. This section examines some important considerations for these policies. Widespread use of e-mail and the Internet by employees raises a number of concerns for employers, including the following: 1. Significant employee work time may be consumed in non-work-related activities, such as surfing the Web, playing games on the Web, shopping on the Web, chatting on the Web, and sending and reading personal e-mail. 2. Significant computer and communications resources may be consumed by such non-work-related activity, compromising the mission that the IS resources are designed to support. 3. Excessive and casual use of the Internet and e-mail unnecessarily increases the risk of introduction of malicious software into the organization's IS environment. 4. The non-work-related employee activity could result in harm to other organizations or individuals outside the organization, thus creating a liability for the organization. 5. E-mail and the Internet may be used as tools of harassment by one employee against another. 6. Inappropriate online conduct by an employee may damage the reputation of the organization.

15 Suggested Policies
The development of a comprehensive e-mail and Internet use policy raises a number of policy issues. The following is a suggested set of policies, based on [KING06]. • Business use only: Company-provided e-mail and Internet access are to be used by employees only for the purpose of conducting company business. • Policy scope: Policy covers e-mail access; contents of e-mail messages; Internet and intranet communications; and records of e-mail, Internet, and intranet communications. • Content ownership: Electronic communications, files, and data remain company property even when transferred to equipment not owned by the company. • Privacy: Employees have no expectation of privacy in their use of company-provided e-mail or Internet access, even if the communication is personal in nature. • Standard of conduct: Employees are expected to use good judgment and act courteously and professionally when using company-provided e-mail and Internet access. • Reasonable personal use: Employees may make reasonable personal use of company-provided e-mail and Internet access provided that such use does not interfere with the employee's duties, violate company policy, or unduly burden company facilities. • Unlawful activity prohibited: Employees may not use company-provided e-mail and Internet access for any illegal purpose. • Security policy: Employees must follow the company's security policy when using e-mail and Internet access. • Company policy: Employees must follow all other company policies when using e-mail and Internet access. Company policy prohibits viewing, storing, or distributing pornography; making or distributing harassing or discriminatory communications; and unauthorized disclosure of confidential or proprietary information. • Company rights: The company may access, monitor, intercept, block access, inspect, copy, disclose, use, destroy, recover using computer forensics, and/or retain any communications, files, or other data covered by this policy. Employees are required to provide passwords upon request. • Disciplinary action: Violation of this policy may result in immediate termination of employment or other discipline deemed appropriate by the company. A useful document to consult when developing an e-mail and Internet use policy is Guidelines to Assist Agencies in Developing Email and Internet Use Policies, from the Office of e-Government, Government of Western Australia. A copy is available at this book's Web site.

16 Security Incident Response
The development of procedures to respond to computer incidents is regarded as an essential control for most organizations. Most organizations will experience some form of security incident sooner rather than later. Typically, most incidents relate to risks with lesser impacts on the organization, but occasionally a more serious incident can occur. The incident handling and response procedures need to reflect the range of possible consequences of an incident on the organization and allow for a suitable response. By developing suitable procedures in advance, an organization can avoid the panic that occurs when personnel realize that bad things are happening and are not sure of the best response. NIST SP 800-61 [SCAR08] lists the following benefits of having an incident response capability: • Responding to incidents systematically so that the appropriate steps are taken • Helping personnel to recover quickly and efficiently from security incidents, minimizing loss or theft of information and disruption of services • Using information gained during incident handling to better prepare for handling future incidents and to provide stronger protection for systems and data • Dealing properly with legal issues that may arise during incidents

17 Computer Security Incident Response Team (CSIRT)
For large and medium-sized organizations, a computer security incident response team (CSIRT) is responsible for rapidly detecting incidents, minimizing loss and destruction, mitigating the weaknesses that were exploited, and restoring computing services.

18 Security Incident
There is a range of events that can be regarded as a security incident. Indeed, any action that threatens one or more of the classic security services of confidentiality, integrity, availability, accountability, authenticity, and reliability in a system constitutes an incident. These include various forms of unauthorized access to a system, and unauthorized modification of information on the system. Unauthorized access to a system by a person includes • Accessing information that person is not authorized to see • Accessing information and passing it on to another person who is not authorized to see it • Attempting to circumvent the access mechanisms implemented on a system • Using another person's password and user id for any purpose • Attempting to deny use of the system to any other person without authorization to do so Unauthorized modification of information on a system by a person includes • Attempting to corrupt information that may be of value to another person • Attempting to modify information and/or resources without authority • Processing information in an unauthorized manner

19 Detecting Incidents incidents may be detected by users or administration staff staff should be encouraged to make reports of system malfunctions or anomalous behaviors automated tools system integrity verification tools log analysis tools network and host intrusion detection systems (IDS) intrusion prevention systems Security incidents may be detected by users or administration staff who report a system malfunction or anomalous behavior. Staff should be encouraged to make such reports. Staff should also report any suspected weaknesses in systems. The general security training of staff in the organization should include details of who to contact in such cases. Security incidents may also be detected by automated tools, which analyze information gathered from the systems and connecting networks. We discuss a range of such tools in Chapter 8 . These tools may report evidence of either a precursor to a possible future incident or indication of an actual incident occurring. Tools that can detect incidents include the following: • System integrity verification tools: Scan critical system files, directories, and services to ensure they have not been changed without proper authorization. • Log analysis tools: Analyze the information collected in audit logs using some form of pattern recognition to identify potential security incidents. • Network and host intrusion detection systems (IDS): Monitor and analyze network and host activity and usually compare this information with a collection of attack signatures to identify potential security incidents. • Intrusion prevention systems: Augment an intrusion detection system with the ability to automatically block detected attacks. Such systems need to be used with care, because they can cause problems if they respond to a misidentified attack and reduce system functionality when not justified. We discuss such systems in Chapter 9 . The effectiveness of such automated tools depends greatly on the accuracy of their configuration, and the correctness of the patterns and signatures used. The tools need to be updated regularly to reflect new attacks or vulnerabilities. They also need to distinguish adequately between normal, legitimate behavior and anomalous attack behavior. This is not always easy to achieve and depends on the work patterns of specific organizations and their systems. However, a key advantage of automated systems that are regularly updated is that they can track changes in known attacks and vulnerabilities. It is often difficult for security administrators to keep pace with the rapid changes to the security risks to their systems and to respond with patches or other changes needed in a timely manner. The use of automated tools can help reduce the risks to the organization from this delayed response. The decision to deploy automated tools should result from the organization’s security goals and objectives and specific needs identified in the risk assessment process. Deploying these tools usually involves significant resources, both monetary and personnel time. This needs to be justified by the benefits gained in reducing risks. Whether or not automated tools are used, the security administrators need to monitor reports of vulnerabilities and to respond with changes to their systems if necessary.
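As an illustration of the first category of automated tool, a system integrity verification check can be as simple as hashing critical files into a baseline and later re-hashing them to detect unauthorized change. The following is a minimal sketch, assuming SHA-256 hashes and a JSON baseline file; production tools add protected baselines, scheduling, and alerting.

# Minimal sketch of a system integrity verification tool: hash critical files into a
# baseline, then re-hash later and report anything that changed. Paths are examples only.
import hashlib, json, os

def hash_file(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_baseline(paths, baseline_file="baseline.json"):
    baseline = {p: hash_file(p) for p in paths if os.path.isfile(p)}
    with open(baseline_file, "w") as f:
        json.dump(baseline, f, indent=2)

def verify(baseline_file="baseline.json"):
    with open(baseline_file) as f:
        baseline = json.load(f)
    for path, expected in baseline.items():
        current = hash_file(path) if os.path.isfile(path) else None
        if current != expected:
            print(f"ALERT: {path} missing or modified")  # candidate security incident

# Example use: build_baseline(["/etc/passwd", "/etc/hosts"]) once, then verify() regularly.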

20 Triage Function
The goal of this function is to ensure that all information destined for the incident handling service is channeled through a single focal point regardless of the method by which it arrives (e.g., by e-mail, hotline, helpdesk, IDS) for appropriate redistribution and handling within the service. This goal is commonly achieved by advertising the triage function as the single point of contact for the whole incident handling service. The triage function responds to incoming information in one or more of the following ways: 1. The triage function may need to request additional information in order to categorize the incident. 2. If the incident relates to a known vulnerability, the triage function notifies the various parts of the enterprise or constituency about the vulnerability and shares information about how to fix or mitigate the vulnerability. 3. The triage function identifies the incident as either new or part of an ongoing incident and passes this information on to the incident handling response function in priority order.
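A minimal sketch of this triage behavior follows. The categories, priorities, and routing decisions are illustrative assumptions; a real CSIRT would draw them from its own incident classification scheme.

# Hedged sketch of a triage function: a single intake point that requests more detail,
# links reports to open incidents, and forwards new ones in priority order.
from dataclasses import dataclass

PRIORITIES = {"malware": 1, "unauthorized-access": 1, "denial-of-service": 2, "unknown": 3}

@dataclass
class Report:
    source: str          # e-mail, hotline, helpdesk, IDS
    description: str
    category: str = "unknown"

def triage(report: Report, open_incidents: dict) -> str:
    if report.category == "unknown":
        return "request additional information from reporter"
    if report.description in open_incidents:
        return f"attach to ongoing incident {open_incidents[report.description]}"
    priority = PRIORITIES.get(report.category, 3)
    return f"open new incident, priority {priority}, pass to incident handling"

print(triage(Report("IDS", "port scan from internal host", "unauthorized-access"), {}))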

21 Responding to Incidents
Once a potential incident is detected, there must be documented procedures to respond to it. [CARN03] lists the following potential response activities: • Taking action to protect systems and networks affected or threatened by intruder activity • Providing solutions and mitigation strategies from relevant advisories or alerts • Looking for intruder activity on other parts of the network • Filtering network traffic • Rebuilding systems • Patching or repairing systems • Developing other response or workaround strategies Response procedures must detail how to identify the cause of the security incident, whether accidental or deliberate. The procedures then must describe the action taken to recover from the incident in a manner that minimizes the compromise or harm to the organization. It is clearly impossible to detail every possible type of incident. However, the procedures should identify typical categories of such incidents and the approach taken to respond to them. Ideally, these should include descriptions of possible incidents and typical responses. They should also identify the management personnel responsible for making critical decisions affecting the organization's systems and how to contact them at any time when an incident is occurring. This is particularly important in circumstances such as a mass worm infection, when the response involves trading off major loss of functionality against further significant systems compromise. Such decisions will clearly affect the organization's operations and must be made very quickly. NIST SP 800-61 lists the following broad categories of security incidents that should be addressed in incident response policies: • Denial-of-service attacks that prevent or impair normal use of systems • Malicious code that infects a host • Unauthorized access to a system • Inappropriate usage of a system in violation of acceptable use policies • Multiple-component incidents, which involve two or more of the above categories in a single incident In determining the appropriate responses to an incident, a number of issues should be considered. These include how critical the system is to the organization's function, and the current and potential technical effect of the incident in terms of how significantly the system has been compromised. The response procedures should also identify the circumstances when security breaches should be reported to third parties such as the police or relevant CERT (computer emergency response team) organization. There is a high degree of variance among organizational attitudes to such reports. Making such reports clearly helps third parties monitor the overall level of activity and trends in computer crimes. However, particularly if legal action could be instituted, it may be a liability for the organization to gather and present suitable evidence. While the law may require reporting in some circumstances, there are many other types of security incidents when the response is not prescribed.
Hence, it must be determined in advance when such reports would be regarded as appropriate for the organization. There is also a chance that if an incident is reported externally, it might be reported in the public media. An organization should identify how it would respond in general to such reports.
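One way to make such procedures concrete is to keep a small, documented playbook keyed by the broad NIST SP 800-61 categories. The entries below are illustrative assumptions, not prescribed responses; an organization would substitute its own actions, contacts, and reporting thresholds (including when to involve the police or a CERT).

# Illustrative playbook mapping incident categories to first-response actions and an
# escalation contact; all actions and contacts here are assumptions for the sketch.
PLAYBOOK = {
    "denial-of-service": {
        "actions": ["filter offending network traffic", "notify upstream provider"],
        "escalate_to": "network operations manager",
    },
    "malicious-code": {
        "actions": ["isolate infected hosts", "apply vendor cleanup tools", "patch"],
        "escalate_to": "desktop support manager",
    },
    "unauthorized-access": {
        "actions": ["disable compromised accounts", "preserve logs as evidence"],
        "escalate_to": "CISO / legal (consider police or CERT report)",
    },
    "inappropriate-usage": {
        "actions": ["document the violation", "refer to HR per acceptable use policy"],
        "escalate_to": "human resources",
    },
}

def respond(category: str):
    entry = PLAYBOOK.get(category)
    if entry is None:
        # Unlisted or multiple-component incidents go straight to the CSIRT lead.
        return ["treat as multiple-component incident; engage CSIRT lead"]
    return entry["actions"] + [f"escalate to: {entry['escalate_to']}"]

print(respond("unauthorized-access"))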

22 Incident Handling Life Cycle
Once an incident is opened, it may transition through many different states, with all the information relating to the incident (its change of state and associated actions) being recorded, until no further action is required from the team's perspective (the "circle" portion of Figure 17.2) and the incident is finally closed. It is also important to note that an incident (or event) can cycle through the analysis portion multiple times during the activity's life cycle.

23 Documenting Incidents
should immediately follow a response to an incident identify what vulnerability led to its occurrence how this might be addressed to prevent the incident in the future details of the incident and the response taken impact on the organization’s systems and their risk profile Following the immediate response to an incident, there is a need to identify what vulnerability led to its occurrence and how this might be addressed to prevent the incident in the future. Details of the incident and the response taken are recorded for future reference. The impact on the organization’s systems and their risk profile must also be reconsidered as a result of the incident. This typically involves feeding the information gathered as a result of the incident back to an earlier phase of the IT security management process. It is possible that the incident was an isolated rare occurrence and the organization was simply unlucky for it to occur. More generally, though, a security incident reflects a change in the risk profile of the organization that needs to be addressed. This could involve reviewing the risk assessment of the relevant systems and either changing or extending this analysis. It could involve reviewing controls identified for some risks, strengthening existing controls, and implementing new controls. This reflects the cyclic process of IT security management.

24 Table 17.3 Examples of Possible Information Flow To and From the Incident Handling Service
A number of services are either a part of or interact with the incident handling function. Table 17.3 [CARN03] provides examples of the information flow to and from an incident handling service. This type of breakdown is useful in organizing and optimizing the incident handling service and in training personnel on the requirements for incident handling and response.

25 Security Auditing Part 2 of 3
Security auditing is a form of auditing that focuses on the security of an organization's information system (IS) assets. This function is a key element in computer security. Security auditing can • Provide a level of assurance concerning the proper operation of the computer with respect to security. • Generate data that can be used in after-the-fact analysis of an attack, whether successful or unsuccessful. • Provide a means of assessing inadequacies in the security service. • Provide data that can be used to define anomalous behavior. • Maintain a record useful in computer forensics.

26 Table 18.1 Security Audit Terminology
Two key concepts are audits and audit trails, defined in Table 18.1, which is based on RFC 2828 (Internet Security Glossary). The process of generating audit information yields data that may be useful in real time for intrusion detection; this aspect is discussed in Chapter 8. In the present chapter, our concern is with the collection, storage, and analysis of data related to IS security. We begin with an overall look at the security auditing architecture and how this relates to the companion activity of intrusion detection. Next, we discuss the various aspects of audit trails, also known as audit logs. We then discuss the analysis of audit data.

27 Security Audit and Alarms Model
We begin our discussion of security auditing by looking at the elements that make up a security audit architecture. First, we examine a model that shows security auditing in its broader context. Then, we look at a functional breakdown of security auditing. ITU-T Recommendation X.816 develops a model that shows the elements of the security auditing function and their relationship to security alarms. Figure 18.1 depicts the model. The key elements are as follows: • Event discriminator: This is logic embedded into the software of the system that monitors system activity and detects security-related events that it has been configured to detect. • Audit recorder: For each detected event, the event discriminator transmits the information to an audit recorder. The model depicts this transmission as being in the form of a message. The audit could also be done by recording the event in a shared memory area. • Alarm processor: Some of the events detected by the event discriminator are defined to be alarm events. For such events an alarm is issued to an alarm processor. The alarm processor takes some action based on the alarm. This action is itself an auditable event and so is transmitted to the audit recorder. • Security audit trail: The audit recorder creates a formatted record of each event and stores it in the security audit trail. • Audit analyzer: The security audit trail is available to the audit analyzer, which, based on a pattern of activity, may define a new auditable event that is sent to the audit recorder and may generate an alarm. • Audit archiver: This is a software module that periodically extracts records from the audit trail to create a permanent archive of auditable events. • Archives: The audit archives are a permanent store of security-related events on this system. • Audit provider: The audit provider is an application and/or user interface to the audit trail. • Audit trail examiner: The audit trail examiner is an application or user who examines the audit trail and the audit archives for historical trends, for computer forensic purposes, and for other analysis. • Security reports: The audit trail examiner prepares human-readable security reports. This model illustrates the relationship between audit functions and alarm functions. The audit function builds up a record of events that are defined by the security administrator to be security related. Some of these events may in fact be security violations or suspected security violations. Such events feed into an intrusion detection or firewall function by means of alarms.
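The relationships among the event discriminator, audit recorder, alarm processor, and audit trail can be compressed into a few lines of code. The sketch below is only an illustration of the X.816 data flow, not an implementation of the Recommendation; the event types and alarm rules are invented for the example.

# A compressed sketch of the X.816 flow: an event discriminator decides which events are
# auditable and which raise alarms, the audit recorder formats and stores records, and
# the alarm processor's own action is fed back as another auditable event.
import json, time

AUDITABLE = {"login-failure", "policy-violation", "file-access"}
ALARMS = {"policy-violation"}
audit_trail = []  # stands in for the security audit trail

def audit_recorder(event: dict) -> None:
    record = {"time": time.time(), **event}  # formatted audit record
    audit_trail.append(record)

def alarm_processor(event: dict) -> None:
    action = {"type": "alarm-raised", "cause": event["type"]}
    audit_recorder(action)  # the alarm action is itself auditable

def event_discriminator(event: dict) -> None:
    if event["type"] in AUDITABLE:
        audit_recorder(event)
    if event["type"] in ALARMS:
        alarm_processor(event)

event_discriminator({"type": "policy-violation", "user": "alice"})
print(json.dumps(audit_trail, indent=2))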

28 Security Auditing Functions
It is useful to look at another breakdown of the security auditing function, developed as part of the Common Criteria specification [CCPS04a]. Figure 18.3 shows a breakdown of security auditing into six major areas, each of which has one or more specific functions: • Data generation: Identifies the level of auditing, enumerates the types of auditable events, and identifies the minimum set of audit-related information provided. This function must also deal with the conflict between security and privacy and specify for which events the identity of the user associated with an action is included in the data generated as a result of an event. • Event selection: Inclusion or exclusion of events from the auditable set. This allows the system to be configured at different levels of granularity to avoid the creation of an unwieldy audit trail. • Event storage: Creation and maintenance of the secure audit trail. The storage function includes measures to provide availability and to prevent loss of data from the audit trail. • Automatic response: Defines reactions taken following detection of events that are indicative of a potential security violation. • Audit analysis: Provided via automated mechanisms to analyze system activity and audit data in search of security violations. This component identifies the set of auditable events whose occurrence or accumulated occurrence indicates a potential security violation. For such events, an analysis is done to determine if a security violation has occurred; this analysis uses anomaly detection and attack heuristics. • Audit review: Available to authorized users to assist in audit data review. The audit review component may include a selectable review function that provides the ability to perform searches based on a single criterion or multiple criteria with logical (i.e., and/or) relations, sort audit data, and filter audit data before audit data are reviewed. Audit review may be restricted to authorized users.

29 Event Definition
Reviewing the functionality suggested by Figures 18.1 and 18.3, we can develop a set of requirements for security auditing. The first requirement is event definition. The security administrator must define the set of events that are subject to audit. We go into more detail in the next section, but we include here a list suggested in [CCPS04a]: • Introduction of objects within the security-related portion of the software into a subject's address space • Deletion of objects • Distribution or revocation of access rights or capabilities • Changes to subject or object security attributes • Policy checks performed by the security software as a result of a request by a subject • The use of access rights to bypass a policy check • Use of identification and authentication functions • Security-related actions taken by an operator and/or authorized user (e.g., suppression of a protection mechanism) • Import/export of data from/to removable media (e.g., printed output, tapes, disks)
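One way to turn such a list into configuration is to enumerate the auditable event classes and let the site's audit policy select a subset. The sketch below paraphrases the [CCPS04a] list; the class names and the example policy are assumptions made for illustration.

# Enumeration of auditable event classes (paraphrased from [CCPS04a]) that an event
# discriminator could be configured against; the policy subset at the end is an example.
from enum import Enum, auto

class AuditableEvent(Enum):
    OBJECT_INTRODUCTION = auto()        # object brought into a subject's address space
    OBJECT_DELETION = auto()
    ACCESS_RIGHT_CHANGE = auto()        # distribution or revocation of rights/capabilities
    SECURITY_ATTRIBUTE_CHANGE = auto()
    POLICY_CHECK = auto()               # checks performed at a subject's request
    POLICY_BYPASS = auto()              # use of rights to bypass a policy check
    IDENTIFICATION_AUTHENTICATION = auto()
    OPERATOR_SECURITY_ACTION = auto()   # e.g., suppression of a protection mechanism
    REMOVABLE_MEDIA_TRANSFER = auto()   # import/export to tapes, disks, printed output

# A site-specific audit policy is then just the subset of classes actually logged:
AUDIT_POLICY = {AuditableEvent.POLICY_BYPASS, AuditableEvent.IDENTIFICATION_AUTHENTICATION}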

30 Implementation Guidelines
agree on requirements with management scope of checks agreed and controlled checks limited to read-only access to software and data other access only for isolated copies of system files, then erased or given appropriate protection resources for performing the checks should be explicitly identified and made available requirements for special or additional processing should be defined all access should be monitored and logged document procedures, requirements, responsibilities person(s) doing audit independent of activities The ISO standard Code of Practice for Information Security Management (ISO 27002) provides a useful set of guidelines for implementation of an auditing capability: 1. Audit requirements should be agreed with appropriate management. 2. The scope of the checks should be agreed and controlled. 3. The checks should be limited to read-only access to software and data. 4. Access other than read-only should only be allowed for isolated copies of system files, which should be erased when the audit is completed or given appropriate protection if there is an obligation to keep such files under audit documentation requirements. 5. Resources for performing the checks should be explicitly identified and made available. 6. Requirements for special or additional processing should be identified and agreed. 7. All access should be monitored and logged to produce a reference trail; the use of time-stamped reference trails should be considered for critical data or systems. 8. All procedures, requirements, and responsibilities should be documented. 9. The person(s) carrying out the audit should be independent of the activities audited.

31 What to Collect
events related to the security mechanisms on the system events that are collected for use by the various security detection and prevention mechanisms events related to system management and operation operating system access application access for selected applications remote access The choice of data to collect is determined by a number of requirements. One issue is the amount of data to collect, which is determined by the range of areas of interest and by the granularity of data collection. There is a trade-off here between quantity and efficiency. The more data are collected, the greater is the performance penalty on the system. Larger amounts of data may also unnecessarily burden the various algorithms used to examine and analyze the data. Further, the presence of large amounts of data creates a temptation to generate security reports excessive in number or length. With these cautions in mind, the first order of business in security audit trail design is the selection of data items to capture. These may include • Events related to the use of the auditing software (i.e., all the components of Figure 18.1 ). • Events related to the security mechanisms on the system. • Any events that are collected for use by the various security detection and prevention mechanisms. These include items relevant to intrusion detection (e.g., Table 8.2 ) and items related to firewall operation (e.g., Tables 9.3 and 9.4 ). • Events related to system management and operation. • Operating system access (e.g., via system calls). • Application access for selected applications. • Remote access.

32 Table 18.2 Auditable Items Suggested in X.816
One example is a suggested list of auditable items in X.816, shown in Table 18.2. The standard points out that both normal and abnormal conditions may need to be audited; for instance, each connection request, such as a TCP connection request, may be a subject for a security audit trail record, whether or not the request was abnormal and irrespective of whether the request was accepted or not. This is an important point. Data collection for auditing goes beyond the need to generate security alarms or to provide input to a firewall module. Data representing behavior that does not trigger an alarm can be used to identify normal versus abnormal usage patterns and thus serve as input to intrusion detection analysis. Also, in the event of an attack, an analysis of all the activity on a system may be needed to diagnose the attack and arrive at suitable countermeasures for the future.

33 Monitoring Areas Suggested in ISO 27002
Another useful list of auditable events (Table 18.3) is contained in ISO 27002. As with X.816, the ISO standard details both authorized and unauthorized events, as well as events affecting the security functions of the system. As the security administrator designs an audit data collection policy, it is useful to organize the audit trail into categories for purposes of choosing data items to collect. In what follows, we look at useful categories for audit trail design.

34 Figure 18.4 - Examples of Audit Trails
System-level audit trails are generally used to monitor and optimize system performance but can serve a security audit function as well. The system enforces certain aspects of security policy, such as access to the system itself. A system-level audit trail should capture data such as login attempts, both successful and unsuccessful, devices used, and OS functions performed. Other system-level functions may be of interest for auditing, such as system operation and network performance indicators. Figure 18.4a, from [NIST95], is an example of a system-level audit trail on a UNIX system. The shutdown command terminates all processes and takes the system down to single-user mode. The su command creates a UNIX shell. Application-level audit trails may be used to detect security violations within an application or to detect flaws in the application's interaction with the system. For critical applications, or those that deal with sensitive data, an application-level audit trail can provide the desired level of detail to assess security threats and impacts. For example, for an e-mail application, an audit trail can record sender and receiver, message size, and types of attachments. An audit trail for a database interaction using SQL (Structured Query Language) queries can record the user, type of transaction, and even individual tables, rows, columns, or data items accessed. Figure 18.4b is an example of an application-level audit trail for a mail delivery system. A user-level audit trail traces the activity of individual users over time. It can be used to hold a user accountable for his or her actions. Such audit trails are also useful as input to an analysis program that attempts to define normal versus anomalous behavior. A user-level audit trail can record user interactions with the system, such as commands issued, identification and authentication attempts, and files and resources accessed. The audit trail can also capture the user's use of applications. Figure 18.4c is an example of a user-level audit trail on a UNIX system.
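For application-level trails such as the mail delivery example, a common approach is to emit each audit record as one structured line that later analysis tools can parse. The following sketch assumes a JSON-per-line format and invented field names; it is illustrative, not a standard log format.

# Sketch of an application-level audit record: each event is written as one structured
# line to an application audit log. Field names and the log file path are illustrative.
import json, logging, datetime

audit_log = logging.getLogger("app.audit")
logging.basicConfig(filename="app_audit.log", level=logging.INFO, format="%(message)s")

def audit_event(user: str, action: str, **details) -> None:
    audit_log.info(json.dumps({
        "timestamp": datetime.datetime.utcnow().isoformat(),
        "user": user,
        "action": action,
        **details,
    }))

# e.g., a mail application recording sender, receiver, size, and attachment types:
audit_event("alice", "mail-delivered", sender="alice@example.com",
            receiver="bob@example.com", size_bytes=18432, attachments=["pdf"])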

35 Physical Access Audit Trails
generated by equipment that controls physical access card-key systems, alarm systems sent to central host for analysis and storage data of interest: date/time/location/user of access attempt both valid and invalid access attempts attempts to add/modify/delete physical access privileges may send violation messages to personnel Audit trails can be generated by equipment that controls physical access and then transmitted to a central host for subsequent storage and analysis. Examples are card-key systems and alarm systems. [NIST95] lists the following as examples of the type of data of interest: • The date and time the access was attempted or made should be logged, as should the gate or door through which the access was attempted or made, and the individual (or user ID) making the attempt to access the gate or door. • Invalid attempts should be monitored and logged by non-computer audit trails just as they are for computer system audit trails. Management should be made aware if someone attempts to gain access during unauthorized hours. • Logged information should also include attempts to add, modify, or delete physical access privileges (e.g., granting a new employee access to the building or granting transferred employees access to their new office [and, of course, deleting their old access, as applicable]). • As with system and application audit trails, auditing of non-computer functions can be implemented to send messages to security personnel indicating valid or invalid attempts to gain access to controlled spaces. In order not to desensitize a guard or monitor, not every access should result in a message being sent to a screen. Only exceptions, such as failed access attempts, should be highlighted to those monitoring access.

36 Protecting Audit Trail Data
read/write file on host easy, least resource intensive, instant access vulnerable to attack by intruder write-once/read-many device more secure but less convenient need steady supply of recordable media access may be delayed and not available immediately write-only device provides paper trail impractical for capturing detailed audit data on large or networked systems useful when a permanent, immediately available log is required must protect both integrity and confidentiality encryption, digital signatures, access controls RFC 2196 ( Site Security Handbook ) lists three alternatives for storing audit records: • Read/write file on a host • Write-once/read-many device (e.g., CD-ROM or DVD-ROM) • Write-only device (e.g., a line printer) File system logging is relatively easy to configure and is the least resource intensive. Records can be accessed instantly, which is useful for countering an ongoing attack. However, this approach is highly vulnerable. If an attacker gains privileged access to a system, then the audit trail is vulnerable to modification or deletion. A CD-ROM or similar storage method is far more secure but less convenient. A steady supply of recordable media is needed. Access may be delayed and not available immediately. Printed logs do provide a paper trail, but are impractical for capturing detailed audit data on large systems or networked systems. RFC 2196 suggests that the paper log can be useful when a permanent, immediately available log is required even with a system crash. Protection of the audit trail involves both integrity and confidentiality. Integrity is particularly important because an intruder may attempt to remove evidence of the intrusion by altering the audit trail. For file system logging, perhaps the best way to ensure integrity is the digital signature. Write-once devices, such as CD-ROM or paper, automatically provide integrity. Strong access control is another measure to provide integrity. Confidentiality is important if the audit trail contains user information that is sensitive and should not be disclosed to all users, such as information about changes in a salary or pay grade status. Strong access control helps in this regard. An effective measure is symmetric encryption (e.g., using AES [Advanced Encryption Standard] or triple DES [Data Encryption Standard]). The secret key must be protected and only available to the audit trail software and subsequent audit analysis software. Note that integrity and confidentiality measures protect audit trail data not only in local storage but also during transmission to a central repository.
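The integrity protection described above can be illustrated with a short, hedged sketch. The following C program (not from the text) appends an HMAC-SHA-256 tag, computed with OpenSSL, to each audit record before it is written, so later modification of a record can be detected. The key, record format, and function name are illustrative assumptions; as the text notes, a real deployment would keep the key accessible only to the audit trail and audit analysis software.

/* Minimal sketch: append an HMAC-SHA-256 tag to each audit record so that
 * tampering can be detected later. Build with: cc hmac_log.c -lcrypto
 * The key handling shown here is a simplification for illustration.
 */
#include <stdio.h>
#include <string.h>
#include <openssl/evp.h>
#include <openssl/hmac.h>

/* Write one audit record followed by its hex-encoded HMAC tag. */
static int write_protected_record(FILE *log, const unsigned char *key,
                                  size_t keylen, const char *record)
{
    unsigned char mac[EVP_MAX_MD_SIZE];
    unsigned int maclen = 0;

    if (HMAC(EVP_sha256(), key, (int)keylen,
             (const unsigned char *)record, strlen(record),
             mac, &maclen) == NULL)
        return -1;

    fprintf(log, "%s|", record);
    for (unsigned int i = 0; i < maclen; i++)
        fprintf(log, "%02x", mac[i]);
    fputc('\n', log);
    return 0;
}

int main(void)
{
    const unsigned char key[] = "example-audit-key";   /* hypothetical key */
    write_protected_record(stdout, key, sizeof key - 1,
                           "2024-05-01T10:03:11Z login user=alice result=fail");
    return 0;
}

A digital signature over each record, or storage on write-once media, would serve the same integrity goal; the HMAC variant is shown only because it is compact.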

37 Implementing Logging foundation of security auditing facility is the initial capture of the audit data software must include hooks (capture points) that trigger data collection and storage as preselected events occur dependent on the nature of the software varies depending on operating system and applications involved The foundation of a security auditing facility is the initial capture of the audit data. This requires that the software include hooks, or capture points, that trigger the collection and storage of data as preselected events occur. Such an audit collection or logging function is dependent on the nature of the software and will vary depending on the underlying operating system and the applications involved. In this section, we look at approaches to implementing the logging function for system-level and user-level audit trails on the one hand and application-level audit trails on the other.
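As a purely hypothetical illustration of a capture point, the sketch below defines a small audit_log() helper that application or system code could call wherever a preselected event occurs. The function name, record format, and destination (standard error) are assumptions for illustration only; a real facility would write to protected storage or hand the record to a logging daemon.

/* Hypothetical "capture point" helper: called at each preselected event so
 * the event is recorded in the audit trail. Names and format are illustrative. */
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static void audit_log(const char *event, const char *detail)
{
    char ts[32];
    time_t now = time(NULL);
    strftime(ts, sizeof ts, "%Y-%m-%dT%H:%M:%SZ", gmtime(&now));
    fprintf(stderr, "%s pid=%d event=%s detail=%s\n",
            ts, (int)getpid(), event, detail);
}

int main(void)
{
    audit_log("app_start", "demo application");              /* capture point */
    /* ... application work ... */
    audit_log("file_open", "path=/etc/passwd mode=read");    /* capture point */
    return 0;
}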

38 Windows Event Log event is an entity that describes some interesting occurrence contains: a numeric identification code a set of attributes optional user-supplied data three types of event logs: system: system related apps and drivers application: user-level apps security: Windows LSA An event in Windows Event Log is an entity that describes some interesting occurrence in a computer system. Events contain a numeric identification code, a set of attributes (task, opcode, level, version, and keywords), and optional user-supplied data. Windows is equipped with three types of event logs: • System event log: Used by applications running under system service accounts (installed system services), drivers, or a component or application that has events that relate to the health of the computer system. • Application event log: Events for all user-level applications. This log is not secured and it is open to any application. Applications that log extensive information should define an application-specific log. • Security event log: The Windows Audit Log. This event log is for exclusive use of the Windows Local Security Authority. User events may appear as audits if supported by the underlying application.
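For the application log, a user-level program can add entries through the classic Event Logging API. The C sketch below is a minimal, hedged example: the source name SampleAuditApp and the event ID are hypothetical, and a production application would also register a message file so Event Viewer can render the event text properly. Build, for example, with MinGW: gcc winlog.c -ladvapi32

/* Minimal sketch of writing an entry to the Windows Application event log. */
#include <windows.h>

int main(void)
{
    /* "SampleAuditApp" is a hypothetical event source name. */
    HANDLE h = RegisterEventSourceA(NULL, "SampleAuditApp");
    if (h == NULL)
        return 1;

    LPCSTR strings[1] = { "Configuration file reloaded by administrator" };
    ReportEventA(h,
                 EVENTLOG_INFORMATION_TYPE, /* level */
                 0,                         /* category */
                 0x1000,                    /* event ID (hypothetical) */
                 NULL,                      /* user SID */
                 1, 0,                      /* one string, no raw data */
                 strings, NULL);

    DeregisterEventSource(h);
    return 0;
}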

39 Windows Event Categories
account logon events account management directory service access logon events object access policy changes privilege use Windows allows the system user to enable auditing in nine different categories: • Account logon events: User authentication activity from the perspective of the system that validated the attempt. Examples: authentication granted; authentication ticket request failed; account mapped for logon; account could not be mapped for logon. Individual actions in this category are not particularly instructive, but large numbers of failures may indicate scanning activity, brute-force attacks on individual accounts, or the propagation of automated exploits. • Account management: Administrative activity related to the creation, management, and deletion of individual accounts and user groups. Examples: user account created; change password attempt; user account deleted; security enabled global group member added; domain policy changed. • Directory service access: User-level access to any Active Directory object that has a System Access Control List defined. An SACL creates a set of users and user groups for which granular auditing is required. • Logon events: User authentication activity, either to a local machine or over a network, from the system that originated the activity. Examples: successful user logon; logon failure, unknown username, or bad password; logon failure, because account is disabled; logon failure, because account has expired; logon failure, user not allowed to logon at this computer; user logoff; logon failure, account locked out. • Object access: User-level access to file system and registry objects that have System Access Control Lists defined. Provides a relatively easy way to track read access, as well as changes, to sensitive files, integrated with the operating system. Examples: object open; object deleted. • Policy changes: Administrative changes to the access policies, audit configuration, and other system-level settings. Examples: user right assigned; new trusted domain; audit policy changed. • Privilege use: Windows incorporates the concept of a user right, granular permission to perform a particular task. If you enable privilege use auditing, you record all instances of users exercising their access to particular system functions (creating objects, debugging executable code, or backing up the system). Examples: specified privileges were added to a user’s access token (during logon); a user attempted to perform a privileged system service operation. • Process tracking: Generates detailed audit information when processes start and finish, programs are activated, or objects are accessed indirectly. Examples: new process was created; process exited; auditable data was protected; auditable data was unprotected; user attempted to install a service. • System events: Records information on events that affect the availability and integrity of the system, including boot messages and the system shutdown message. Examples: system is starting; Windows is shutting down; resource exhaustion in the logging subsystem; some audits lost; audit log cleared.

40 UNIX Syslog UNIX's general-purpose logging mechanism
found on all UNIX / Linux variants elements: syslog() API referenced by several standard system utilities and available to application programs logger command used to add single-line entries to the system log /etc/syslog.conf configuration file used to control the logging and routing of system log events syslogd daemon to receive/route log events Syslog is UNIX’s general-purpose logging mechanism found on all UNIX variants and Linux. It consists of the following elements: • syslog() : An application program interface (API) referenced by several standard system utilities and available to application programs • logger: A UNIX command used to add single-line entries to the system log • /etc/syslog.conf: The configuration file used to control the logging and routing of system log events • syslogd: The system daemon used to receive and route system log events from syslog() calls and logger commands. Different UNIX implementations will have different variants of the syslog facility, and there are no uniform system log formats across systems. Chapter 25 examines the Linux syslog facility. Here, we provide a brief overview of some syslog-related functions and look at the syslog protocol.

41 Syslog Service
Syslog Service basic service provides: a means of capturing relevant events a storage facility a protocol for transmitting syslog messages from other machines to a central machine that acts as a syslog server extra add-on features may include: robust filtering log analysis event response alternative message formats log file encryption database storage rate limiting The basic service offered by UNIX syslog is a means of capturing relevant events, a storage facility, and a protocol for transmitting syslog messages from other machines to a central machine that acts as a syslog server. In addition to these basic functions, other services are available, often as third-party packages and in some cases as built-in modules. [KENT06] lists the following as being the most common extra features: • Robust filtering: Original syslog implementations allowed messages to be handled differently based on their facility and priority only; no finer-grained filtering was permitted. Some current syslog implementations offer more robust filtering capabilities, such as handling messages differently based on the host or program that generated a message, or a regular expression matching content in the body of a message. Some implementations also allow multiple filters to be applied to a single message, which provides more complex filtering capabilities. • Log analysis: Originally, syslog servers did not perform any analysis of log data; they simply provided a framework for log data to be recorded and transmitted. Administrators could use separate add-on programs for analyzing syslog data. Some syslog implementations now have limited log analysis capabilities built-in, such as the ability to correlate multiple log entries. • Event response: Some syslog implementations can initiate actions when certain events are detected. Examples of actions include sending SNMP traps, alerting administrators through pages or emails, and launching a separate program or script. It is also possible to create a new syslog message that indicates that a certain event was detected. • Alternative message formats: Some syslog implementations can accept data in non-syslog formats, such as SNMP traps. This can be helpful for getting security event data from hosts that do not support syslog and cannot be modified to do so. • Log file encryption: Some syslog implementations can be configured to encrypt rotated log files automatically, protecting their confidentiality. This can also be accomplished through the use of OS or third-party encryption programs. • Database storage for logs: Some implementations can store log entries in both traditional syslog files and a database. Having the log entries in a database format can be very helpful for subsequent log analysis. • Rate limiting: Some implementations can limit the number of syslog messages or TCP connections from a particular source during a certain period of time. This is useful in preventing a denial of service for the syslog server and the loss of syslog messages from other sources. Because this technique is designed to cause the loss of messages from a source that is overwhelming the syslog server, it can cause some log data to be lost during an adverse event that generates an unusually large number of messages.

42 Syslog Facilities and Severity Levels
(a) Syslog Facilities All messages sent to syslogd have a facility and a severity ( Table 18.5 ). The facility identifies the application or system component that generates the message. The severity, or message level, indicates the relative severity of the message and can be used for some rudimentary filtering. (b) Syslog Severity Levels

43 Logging at Application Level
privileged applications present security issues may not be captured by system/user-level audit data constitute a large percentage of reported vulnerabilities vulnerabilities exploited: lack of dynamic checks on input data errors in application logic may be necessary to capture behavior of application beyond its access to system services and file systems two approaches to collecting audit data: interposable libraries dynamic binary rewriting Applications, especially applications with a certain level of privilege, present security problems that may not be captured by system-level or user-level auditing data. Application-level vulnerabilities constitute a large percentage of reported vulnerabilities on security mailing lists. One type of vulnerability that can be exploited is the all-too-frequent lack of dynamic checks on input data, which make possible buffer overflow (see Chapter 10 ) and format string attacks. Other vulnerabilities exploit errors in application logic. For example, a privileged application may be designed to read and print a specific file. An error in the application might allow an attacker to exploit an unexpected interaction with the shell environment to force the application to read and print a different file, which would result in a security compromise. Auditing at the system level does not provide the level of detail to catch application logic error behavior. Further, intrusion detection systems look for attack signatures or anomalous behavior that would fail to appear with attacks based on application logic errors. For both detection and auditing purposes, it may be necessary to capture in detail the behavior of an application, beyond its access to system services and file systems. The information needed to detect application-level attacks may be missing or too difficult to extract from the low-level information included in system call traces and in the audit records produced by the operating system. In the remainder of this section, we examine two approaches to collecting audit data from applications: interposable libraries and dynamic binary rewriting.

44 Interposable Libraries
allows the generation of audit data without needing to recompile either the system libraries or the application audit data can be generated without changing the system’s shared libraries or needing access to the source code for the executable exploits the use of dynamic libraries in UNIX statically linked libraries a separate copy of the linked library function is loaded into the program’s virtual memory statically linked shared libraries referenced shared object is incorporated into the target executable at link time by the link loader each object is assigned a fixed virtual address link loader connects external referenced objects by assigning their virtual addresses when the executable is created dynamically linked shared libraries the linking to shared library routines is deferred until load time if changes are made to the library prior to load time any program that references the library is unaffected A technique described in [KUPE99] and [KUPE04] provides for application-level auditing by creating new procedures that intercept calls to shared library functions in order to instrument the activity. Interposition allows the generation of audit data without needing to recompile either the system libraries or the application of interest. Thus, audit data can be generated without changing the system’s shared libraries or needing access to the source code for the executable on which the interposition is to be performed. This approach can be used on any UNIX or Linux variant and on some other operating systems. The technique exploits the use of dynamic libraries in UNIX. Before examining the technique, we provide a brief background on shared libraries. The OS includes hundreds of C library functions in archive libraries. Each library consists of a set of variables and functions that are compiled and linked together. The linking function resolves all memory references to data and program code within the library, generating logical, or relative, addresses. A function can be linked into an executable program, on demand, at compilation. If a function is not part of the program code, the link loader searches a list of libraries and links the desired object into the target executable. On loading, a separate copy of the linked library function is loaded into the program’s virtual memory. This scheme is referred to as statically linked libraries . A more flexible scheme, first introduced with UNIX System V Release 3, is the use of statically linked shared libraries . As with statically linked libraries, the referenced shared object is incorporated into the target executable at link time by the link loader. However, each object in a statically linked shared library is assigned a fixed virtual address. The link loader connects external referenced objects to their definition in the library by assigning their virtual addresses when the executable is created. Thus, only a single copy of each library function exists. Further, the function can be modified and remains in its fixed virtual address. Only the object needs to be recompiled, not the executable programs that reference it. However, the modification generally must be minor; the changes must be made in such a way that the start address and the address of any variables, constants, or program labels in the code are not changed. UNIX System V Release 4 introduced the concept of dynamically linked shared libraries . With dynamically linked libraries, the linking to shared library routines is deferred until load time. 
At this time, the desired library contents are mapped into the process's virtual address space. Thus, if changes are made to the library prior to load time, any program that references the library is unaffected. For both statically and dynamically linked shared libraries, the memory pages of the shared library must be marked read-only. The system uses a copy-on-write scheme if a program performs a memory update on a shared page: The system assigns a copy of the page to the process, which it can modify without affecting other users of the page.

45 Use of an Interposable Library
Figure 18.7a indicates the normal mode of operation when a program invokes a routine in dynamically linked shared libraries. At load time, the reference to routine foo in the program is resolved to the virtual memory address of the start of foo in the shared library. With library interposition, a special interposable library is constructed so that at load time, the program links to the interposable library instead of the shared library. For each function in the shared library for which auditing is to be invoked, the interposable library contains a function with the same name. If the desired function is not contained in the interposed library, the loader continues its search in the shared library and links directly with the target function. The interposed module can perform any auditing-related function, such as recording the fact of the call, the parameters passed and returned, the return address in the calling program, and so forth. Typically, the interposed module will call the actual shared function ( Figure 18.7b ) so that the application's behavior is not altered, just instrumented. This technique allows the interception of certain function calls and the storage of state between such calls without requiring the recompilation of the calling program or shared objects.

46 Example of Function in the Interposed Library
[KUPE99] gives an example of an interposable library function written in C ( Figure 18.8 ). The function can be described as follows: 1. AUDIT_CALL_START (line 8) is placed at the beginning of every interposed function. This makes it easy to insert arbitrary initialization code into each function. 2. AUDIT_LOOKUP_COMMAND (line 10 in Figure 18.8a , detail in Figure 18.8b ) performs the lookup of the pointer to the next definition of the function in the shared libraries using the dlsym(3x) function. The special flag RTLD_NEXT ( Figure 18.8b , line 2) indicates that the next reference along the library search path used by the run-time loader will be returned. The function pointer is stored in fptr if a reference is found, or the error value is returned to the calling program. 3. Line 12 contains the commands that are executed before the function is called. 4. In this case, the interposed function executes the original function call and returns the value to the user (line 14). Other possible actions include the examination, recording, or transformation of the arguments; the prevention of the actual execution of the library call; and the examination, recording, or transformation of the return value. 5. Additional code could be inserted before the result is returned (line 16), but this example has none inserted. A comparable minimal sketch appears below.
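Because Figure 18.8 is not reproduced in this transcript, the following is a comparable, minimal sketch (an illustration under stated assumptions, not the code from [KUPE99]) of an interposed wrapper for the C library open() call. It locates the next definition of open with dlsym(RTLD_NEXT), records the call, and then forwards to the real implementation; building it as a shared object and activating it with LD_PRELOAD corresponds to linking the program against the interposable library at load time. The logging performed here, a line on stderr, stands in for the audit actions described in the text.

/* Build and use (illustrative):
 *   cc -shared -fPIC -o libaudit.so audit_open.c -ldl
 *   LD_PRELOAD=./libaudit.so ls /etc
 */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdarg.h>
#include <stdio.h>
#include <fcntl.h>
#include <sys/types.h>

int open(const char *path, int flags, ...)
{
    static int (*real_open)(const char *, int, ...) = NULL;
    mode_t mode = 0;

    if (real_open == NULL) {
        /* Look up the next definition of open() along the loader's search path. */
        real_open = (int (*)(const char *, int, ...))dlsym(RTLD_NEXT, "open");
        if (real_open == NULL)
            return -1;
    }

    if (flags & O_CREAT) {               /* open() takes a mode only with O_CREAT */
        va_list ap;
        va_start(ap, flags);
        mode = va_arg(ap, mode_t);
        va_end(ap);
    }

    /* Audit action performed before the real call. */
    fprintf(stderr, "audit: open(\"%s\", 0x%x)\n", path, flags);

    /* Call the original function and return its result to the caller. */
    return (flags & O_CREAT) ? real_open(path, flags, mode)
                             : real_open(path, flags);
}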

47 Dynamic Binary Rewriting
can be used with both statically and dynamically linked programs postcompilation technique that directly changes the binary code of executables change is made at load time and modifies only the memory image of a program does not require recompilation of the application binary implemented on Linux using two modules: loadable kernel module monitoring daemon loadable modules can be automatically loaded and unloaded on demand The interposition technique is designed to work with dynamically linked shared libraries. It cannot intercept function calls of statically linked programs unless all programs in the system are re-linked at the time that the audit library is introduced. [ZHOU04] describes a technique, referred to as dynamic binary rewriting, that can be used with both statically and dynamically linked programs. Dynamic binary rewriting is a postcompilation technique that directly changes the binary code of executables. The change is made at load time and modifies only the memory image of a program, not the binary program file on secondary storage. As with the interposition technique, dynamic binary rewriting does not require recompilation of the application binary. Audit module selection is postponed until the application is invoked, allowing for flexible selection of the auditing configuration. The technique is implemented on Linux using two modules: a loadable kernel module and a monitoring daemon. Linux is structured as a collection of modules, a number of which can be automatically loaded and unloaded on demand. These relatively independent blocks are referred to as loadable modules [GOYE99]. In essence, a module is an object file whose code can be linked to and unlinked from the kernel at run time. Typically, a module implements some specific function, such as a file system, a device driver, or some other feature of the kernel’s upper layer. A module does not execute as its own process or thread, although it can create kernel threads for various purposes as necessary. Rather, a module is executed in kernel mode on behalf of the current process.

48 Run-Time Environment for Application Auditing
Figure 18.9 shows the structure of this approach. The kernel module ensures non-bypassable instrumentation by intercepting the execve() system call. The execve() function loads a new executable into a new process address space and begins executing it. By intercepting this call, the kernel module stops the application before its first instruction is executed and can insert the audit routines into the application before its execution starts. The actual instrumentation of an application is performed by the monitoring daemon, which is a privileged user-space process. The daemon manages two repositories: a patch repository and an audit repository. The patch repository contains the code for instrumenting the monitored applications. The audit repository contains the auditing code to be inserted into an application. The code in both the audit and the patch repositories is in the form of dynamic libraries. By using dynamic libraries, it is possible to update the code in the libraries while the daemon is still running. In addition, multiple versions of the libraries can exist at the same time. The sequence of events is as follows: 1. A monitored application is invoked by the execve() system call. 2. The kernel module intercepts the call, stops the application, and sets the process’s parent to the monitoring daemon. Then the kernel module notifies the user-space daemon that a monitored application has started. 3. The monitoring daemon locates the patch and audit library functions appropriate for this application. The daemon loads the audit library functions into the application’s address space and inserts audit function calls at certain points in the application’s code. 4. Once the application has been instrumented, the daemon enables the application to begin execution. A special language was developed to simplify the process of creating audit and patch code. In essence, patches can be inserted at any point of function call to a shared library routine. The patch can invoke audit routines and also invoke the shared library routine, in a manner logically similar to the interposition technique described earlier.

49 Audit Trail Analysis analysis programs and procedures vary widely
must understand context of log entries relevant information may reside in other entries in the same logs, other logs, and nonlog sources audit file formats contain mix of plain text and codes must decipher manually / automatically ideally regularly review entries to gain understanding of baseline Programs and procedures for audit trail analysis vary widely, depending on the system configuration, the areas of most concern, the software available, the security policy of the organization, and the behavior patterns of legitimate users and intruders. This section provides some observations concerning audit trail analysis. To perform useful audit analysis, the analyst or security administrator needs an understanding of the information available and how it can be used. NIST SP 800-92 [KENT06] offers some useful advice in this regard, which we summarize in this subsection. The security administrator (or other individual reviewing and analyzing logs) needs to understand the context surrounding individual log entries. Relevant information may reside in other entries in the same log, entries in other logs, and nonlog sources such as configuration management entries. The administrator should understand the potential for unreliable entries, such as from a security package that is known to generate frequent false positives when looking for malicious activity. Most audit file formats contain a mixture of plain language plus cryptic messages or codes that are meaningful to the software vendor but not necessarily to the administrator. The administrator must make the effort to decipher as much as possible the information contained in the log entries. In some cases, log analysis software performs a data reduction task that reduces the burden on the administrator. Still, the administrator should have a reasonable understanding of the raw data that feeds into analysis and review software in order to be able to assess the utility of these packages. The most effective way to gain a solid understanding of log data is to review and analyze portions of it regularly (e.g., every day). The goal is to eventually gain an understanding of the baseline of typical log entries, likely encompassing the vast majority of log entries on the system. To perform effective reviews and analysis, administrators should have a solid understanding of each of the following from training or hands-on experience: • The organization's policies regarding acceptable use, so that administrators can recognize violations of the policies. • The security software used by their hosts, including the types of security-related events that each program can detect and the general detection profile of each program (e.g., known false positives). • The operating systems and major applications (e.g., email, Web) used by their hosts, particularly each OS's and major application's security and logging capabilities and characteristics. • The characteristics of common attack techniques, especially how the use of these techniques might be recorded on each system. • The software needed to perform analysis, such as log viewers, log reduction scripts, and database query tools.

50 Types of Audit Trail Analysis
audit trails can be used in multiple ways this depends in part on when done possibilities include: audit trail review after an event triggered by event to diagnose cause and remediate focuses on the audit trail entries that are relevant to the specific event periodic review of audit trail data review bulk data to identify problems and behavior real-time audit analysis part of an intrusion detection function Audit trails can be used in multiple ways. The type of analysis depends, at least in part, on when the analysis is to be done. The possibilities include the following: • Audit trail review after an event: This type of review is triggered by an observed event, such as a known system or application software problem, a known violation of existing security policy by a user, or some unexplained system or user problem. The review can gather information to elaborate on what is known about the event, to diagnose the cause or the problem, and to suggest remedial action and future countermeasures. This type of review focuses on the audit trail entries that are relevant to the specific event. • Periodic review of audit trail data: This type of review looks at all of the audit trail data or at defined subsets of the data and has many possible objectives. Examples of objectives include looking for events or patterns that suggest a security problem, developing a profile of normal behavior and searching for anomalous behavior, and developing profiles by individual user to maintain a permanent record by user. • Real-time audit analysis: Audit analysis tools can also be used in a real-time or near-real-time fashion. Real-time analysis is part of the intrusion detection function.

51 Audit Review audit review capability provides administrator with information from selected audit records actions of one or more users actions on a specific object or resource all or a specified set of audited exceptions actions on a specific system / security attribute may be filtered by time / source / frequency used to provide system activity baseline level of security related activity Distinct from an analysis of audit trail data using data reduction and analysis tools is the concept of audit review. An audit review capability enables an administrator to read information from selected audit records. The Common Criteria specification [CCPS04a] calls for a capability that allows prestorage or poststorage audit selection and includes the ability to selectively review the following: • The actions of one or more users (e.g., identification, authentication, system entry, and access control actions) • The actions performed on a specific object or system resource • All or a specified set of audited exceptions • Actions associated with a specific system or security attribute Audit review can be focused on records that match certain attributes, such as user or user group, time window, type of record, and so forth. One automated tool that can be useful in audit review is a prioritization of audit records based on input from the administrator. Records can be prioritized based on a combination of factors. Examples include the following: • Entry type (e.g., message code 103, message class CRITICAL) • Newness of the entry type (i.e., has this type of entry appeared in the logs before?) • Log source • Source or destination IP address (e.g., source address on a blacklist, destination address of a critical system, previous events involving a particular IP address) • Time of day or day of the week (e.g., an entry might be acceptable during certain times but not permitted during others) • Frequency of the entry (e.g., x times in y seconds) There may be a number of possible purposes for this type of audit review. Audit review can enable an administrator to get a feel for the current operation of the system and the profile of the users and applications on the system, the level of attack activity, and other usage and security-related events. Audit review can be used to gain an understanding after the fact of an attack incident and the system’s response to it, leading to changes in software and procedures.

52 Approaches to Data Analysis
basic alerting indicate interesting type of event has occurred baselining define normal versus unusual events / patterns compare with new data to detect changes thresholding is the identification of data that exceed a particular baseline value windowing detection of events within a given set of parameters correlation seeks relationships among events The spectrum of approaches and algorithms used for audit data analysis is far too broad to be treated effectively here. Instead, we give a feeling for some of the major approaches, based on the discussion in [SING04]. BASIC ALERTING The simplest form of analysis is for the software to give an indication that a particular interesting event has occurred. If the indication is given in real time, it can serve as part of an intrusion detection system. For events that may not rise to the level of triggering an intrusion alert, an after-the-fact indication of suspicious activity can lead to further analysis. Baselining is the process of defining normal versus unusual events and patterns. The process involves measuring a set of known data to compute a range of normal values. These baseline values can then be compared to new data to detect unusual shifts. Examples of activity to baseline include the following: • Amount of network traffic per protocol: total HTTP, email, FTP, and so on. • Logins/logouts • Accesses of admin accounts • Dynamic Host Configuration Protocol (DHCP) address management, DNS requests • Total amount of log data per hour/day • Number of processes running at any time
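A compact illustration of baselining and thresholding follows: the C sketch computes the mean and standard deviation of historical hourly login-failure counts and flags a new observation that exceeds the mean by more than three standard deviations. The data values and the three-sigma rule are assumptions chosen for the example; link with -lm.

/* Illustrative baselining and thresholding over hourly event counts. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    double history[] = { 4, 6, 5, 7, 3, 6, 5, 4, 6, 5 };  /* failures per hour */
    size_t n = sizeof history / sizeof history[0];

    double sum = 0.0, sumsq = 0.0;
    for (size_t i = 0; i < n; i++) { sum += history[i]; sumsq += history[i] * history[i]; }
    double mean = sum / n;
    double stddev = sqrt(sumsq / n - mean * mean);

    double observed = 42;                       /* new hourly count */
    double threshold = mean + 3.0 * stddev;     /* simple thresholding rule */

    if (observed > threshold)
        printf("ALERT: %.0f failures/hour exceeds baseline threshold %.1f\n",
               observed, threshold);
    else
        printf("within baseline (mean %.1f, stddev %.1f)\n", mean, stddev);
    return 0;
}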

53 Integrated Approaches
volume of audit data means manual analysis and baselining is impractical need a Security Information and Event Management (SIEM) system a centralized logging and analysis package agentless or agent-based normalizes a variety of log formats analyzes combined data correlates events among the log entries identifies and prioritizes significant events can initiate responses [KELL06] is a report by an information security officer at a government agency on her attempts to get a handle on the vast amount of security audit data generated by her agency’s networks, servers, and hosts. The systems are configured to generate audit data, including security-related audit data, for management, auditors, and attorneys. So much data is generated that it makes it difficult for the security officer to extract timely and useful information. She needs to get and analyze security-related data from hosts, servers, routers, intrusion detection systems, firewalls, and a multitude of other security tools. The load is so great that one large server is dedicated solely to housing security analysis software and audit files. The problem came to a head when the security officer realized that it had become impossible to perform one of the basic tasks of security audit analysis: baselining. The security officer needs to be able to characterize normal activity and thresholds so that the system will generate alerts when anomalies or malicious patterns are detected. Because of the volume of data, a human-generated or even human-assisted baseline generation was impractical. And with the broad mix of audit data sources and formats, there seemed to be no obvious way to develop automated baseline generation. The type of product that can address these issues has been referred to as a security information management (SIM) system or a security information and event management (SIEM) system. As these products move into the third and fourth generations, a number of other names have proliferated, with none commonly accepted across product lines. Before looking at the specific solution adopted by this security officer, we provide a brief general overview of SIEM systems.

54 Example: Cisco MARS example of SIEM product
support a wide variety of systems agentless with central dedicated server wide array of analysis packages an effective GUI server collects, parses, normalizes, correlates and assesses events to then check for false positives, vulnerabilities, and profiling After reviewing several alternatives, the security officer chose the Cisco Systems’ MARS product as being the most cost-effective. The MARS product supports a variety of systems. Of course, all of the Cisco products on site were compatible with the product, including NetFlow and syslog data from Cisco routers, firewalls, switches, concentrators, IDSs, and so on. In addition, MARS can pull data from almost any SNMP- and syslog-enabled device, as well as from a wide range of vulnerability and antivirus systems, host operating systems, Web servers, Web proxy devices, and database servers. The following is a list of the devices and software packages supported at that time by MARS: • Network: Cisco IOS Software; Cisco Catalyst OS; Cisco NetFlow; and Extreme Extremeware • Firewall/VPN: Cisco ASA Software; Cisco PIX Security Appliance; Cisco IOS Firewall; Cisco Firewall Services Module (FWSM); Cisco VPN 3000 Concentrator; Checkpoint Firewall-1 NG and VPN-1 versions; NetScreen Firewall; and Nokia Firewall • Intrusion detection: Cisco IDS; Cisco IDS Module; Cisco IOS IPS; Enterasys Dragon NIDS; ISS RealSecure Network Sensor; Snort NIDS; McAfee Intrushield NIDS; NetScreen IDP; OS; and Symantec ManHunt • Vulnerability assessment: eEye REM, Qualys QualysGuard, and FoundStone FoundScan • Host security: Cisco Security Agent; McAfee Entercept; and ISS RealSecure Host Sensor • Antivirus: Symantec Antivirus, Cisco Incident Control System (Cisco ICS), Trend Micro Outbreak Prevention Service (OPS), Network Associates VirusScan, and McAfee ePO • Authentication servers: Cisco Secure ACS • Host log: Windows NT, 2000, and 2003 (agent and agentless); Solaris; and Linux • Application: Web servers (Internet Information Server, iPlanet, and Apache); Oracle audit logs; and Network Appliance NetCache • Universal device support: To aggregate and monitor any application syslog MARS works in an agentless configuration, with a centralized dedicated server. In general terms, the server performs the following steps: 1. Events come into the MARS server from devices and software modules throughout the network. 2. Events are parsed to locate and identify each field in the entry. 3. MARS normalizes each entry into a uniform audit trail entry format. 4. MARS performs a correlation function to find events that are related and defines sessions. Each session is a related set of events. For example, if a worm is detected, the detected occurrences across all devices are correlated into a single session for this worm attack. 5. Sessions and uncorrelated events are run against a rule engine and each is assessed. Some events and sessions are dropped as irrelevant. The others are reclassified as incidents to be logged in the incident database. 6. A false-positive analysis is run on the data to catch known false positive reports for IDS and other systems in the network. 7. A vulnerability assessment is performed against suspected hosts to determine the urgency of the data. 8. Traffic profiling and statistical anomaly detection programs are run against the data. MARS provides a wide array of analysis packages and an effective graphical user interface. Preliminary indications are that this product will meet the needs of the security officer.

55 Table 18.6 Suggested List of Events to Be Audited

56 Legal and Ethical Aspects Part 3 of 3
The legal and ethical aspects of computer security encompass a broad range of topics, and a full discussion is well beyond the scope of this book. In this chapter, we touch on a few important topics in this area. The bulk of this book examines technical approaches to the detection, prevention, and recovery from computer and network attacks. Chapters 16 and 17 examine physical and human-factor approaches, respectively, to strengthening computer security. All of these measures can significantly enhance computer security but cannot guarantee complete success in detection and prevention. One other tool is the deterrent factor of law enforcement. Many types of computer attacks can be considered crimes and, as such, carry criminal sanctions. This section begins with a classification of types of computer crime and then looks at some of the unique law enforcement challenges of dealing with computer crime. Legal and Ethical Aspects Part 3 of 3

57 Types of Computer Crime
the U.S. Department of Justice categorizes computer crime based on the role that the computer plays in the criminal activity: computers as targets involves an attack on data integrity, system integrity, data confidentiality, privacy, or availability computers as storage devices using the computer to store stolen password lists, credit card or calling card numbers, proprietary corporate information, pornographic image files, or pirated commercial software computers as communications tools crimes that are committed online, such as fraud, gambling, child pornography, and the illegal sale of prescription drugs, controlled substances, alcohol, or guns The U.S. Department of Justice [DOJ00] categorizes computer crime based on the role that the computer plays in the criminal activity, as follows: • Computers as targets: This form of crime targets a computer system, to acquire information stored on that computer system, to control the target system without authorization or payment (theft of service), or to alter the integrity of data or interfere with the availability of the computer or server. Using the terminology of Chapter 1 , this form of crime involves an attack on data integrity, system integrity, data confidentiality, privacy, or availability. • Computers as storage devices: Computers can be used to further unlawful activity by using a computer or a computer device as a passive storage medium. For example, the computer can be used to store stolen password lists, credit card or calling card numbers, proprietary corporate information, pornographic image files, or “warez” (pirated commercial software). • Computers as communications tools: Many of the crimes falling within this category are simply traditional crimes that are committed online. Examples include the illegal sale of prescription drugs, controlled substances, alcohol, and guns; fraud; gambling; and child pornography.

58 Table 19.1 Cybercrimes Cited in the Convention on Cybercrime (page 1 of 2)
A more specific list of crimes, shown in Table 19.1 , is defined in the international Convention on Cybercrime. This is a useful list because it represents an international consensus on what constitutes computer crime, or cybercrime, and what crimes are considered important.

59 Table 19.1 - Cybercrimes Cited in the Convention on Cybercrime (page 2 of 2)

60 Table 19.2 CERT 2007 E-Crime Watch Survey Results
Yet another categorization is used in the CERT 2007 E-Crime Watch Survey, the results of which are shown in Table 19.2. The figures in the second column indicate the percentage of respondents who reported at least one incident in the corresponding row category. Entries in the remaining three columns indicate the percentage of respondents who reported a given source for an attack.

61 Law Enforcement Challenges
The deterrent effect of law enforcement on computer and network attacks correlates with the success rate of criminal arrest and prosecution. The nature of cybercrime is such that consistent success is extraordinarily difficult. To see this, consider what [KSHE06] refers to as the vicious cycle of cybercrime, involving law enforcement agencies, cybercriminals, and cybercrime victims ( Figure 19.1 ). For law enforcement agencies , cybercrime presents some unique difficulties. Proper investigation requires a fairly sophisticated grasp of the technology. Although some agencies, particularly larger agencies, are catching up in this area, many jurisdictions lack investigators knowledgeable and experienced in dealing with this kind of crime. Lack of resources represents another handicap. Some cybercrime investigations require considerable computer processing power, communications capacity, and storage capacity, which may be beyond the budget of individual jurisdictions. The global nature of cybercrime is an additional obstacle: Many crimes will involve perpetrators who are remote from the target system, in another jurisdiction or even another country. A lack of collaboration and cooperation with remote law enforcement agencies can greatly hinder an investigation. Initiatives such as the international Convention on Cybercrime are a promising sign. The Convention at least introduces a common terminology for crimes and a framework for harmonizing laws globally. The relative lack of success in bringing cybercriminals to justice has led to an increase in their numbers, boldness, and the global scale of their operations. It is difficult to profile cybercriminals in the way that is often done with other types of repeat offenders. The cybercriminal tends to be young and very computer-savvy, but the range of behavioral characteristics is wide. Further, there exist no cybercriminal databases that can point investigators to likely suspects. The success of cybercriminals, and the relative lack of success of law enforcement, influence the behavior of cybercrime victims . As with law enforcement, many organizations that may be the target of attack have not invested sufficiently in technical, physical, and human-factor resources to prevent attacks. Reporting rates tend to be low because of a lack of confidence in law enforcement, a concern about corporate reputation, and a concern about civil liability. The low reporting rates and the reluctance to work with law enforcement on the part of victims feed into the handicaps under which law enforcement works, completing the vicious cycle. Executive management and security administrators need to look upon law enforcement as another resource and tool, alongside technical, physical, and human-factor resources. The successful use of law enforcement depends much more on people skills than technical skills. Management needs to understand the criminal investigation process, the inputs that investigators need, and the ways in which the victim can contribute positively to the investigation.

62 Intellectual Property
intellectual property is defined as "any intangible asset that consists of human knowledge and ideas" infringement is "the invasion of the rights secured by copyrights, trademarks, and patents" Intellectual Property • Intellectual property : Any intangible asset that consists of human knowledge and ideas. Examples include software, data, novels, sound recordings, the design of a new type of mousetrap, or a cure for a disease. There are three main types of intellectual property for which legal protection is available: copyrights, trademarks, and patents. The legal protection is against infringement , which is the invasion of the rights secured by copyrights, trademarks, and patents. The right to seek civil recourse against anyone infringing his or her property is granted to the IP owner. Depending upon the type of IP, infringement may vary ( Figure 19.2 ).

63 Copyright protects tangible or fixed expression of an idea but not the idea itself creator can claim and file copyright at a national government copyright office if: proposed work is original creator has put original idea in concrete form Copyright law protects the tangible or fixed expression of an idea, not the idea itself. A creator can claim copyright, and file for the copyright at a national government copyright office, if the following conditions are fulfilled: • The proposed work is original. • The creator has put this original idea into a concrete form, such as hard copy (paper), software, or multimedia form.

64 Copyright Rights examples of items that can be copyrighted include:
literary works musical works dramatic works pantomimes and choreographic works pictorial, graphic, and sculptural works motion pictures and other audiovisual works sound recordings architectural works software-related works copyright owner has these exclusive rights, protected against infringement: reproduction right modification right distribution right public-performance right public-display right The copyright owner has the following exclusive rights, protected against infringement: • Reproduction right: Lets the owner make copies of a work • Modification right: Also known as the derivative-works right; concerns modifying a work to create a new or derivative work • Distribution right: Lets the owner publicly sell, rent, lease, or lend copies of the work • Public-performance right: Applies mainly to live performances • Public-display right: Lets the owner publicly show a copy of the work directly or by means of a film, slide, or television image Examples of items that may be copyrighted include the following [BRAU01]: • Literary works: Novels, nonfiction prose, poetry, newspaper articles and newspapers, magazine articles and magazines, catalogs, brochures, ads (text), and compilations such as business directories • Musical works: Songs, advertising jingles, and instrumentals • Dramatic works: Plays, operas, and skits • Pantomimes and choreographic works: Ballets, modern dance, jazz dance, and mime works • Pictorial, graphic, and sculptural works: Photographs, posters, maps, paintings, drawings, graphic art, display ads, cartoon strips and cartoon characters, stuffed animals, statues, paintings, and works of fine art • Motion pictures and other audiovisual works: Movies, documentaries, travelogues, training films and videos, television shows, television ads, and interactive multimedia works • Sound recordings: Recordings of music, sound, or words • Architectural works: Building designs, whether in the form of architectural plans, drawings, or the constructed building itself • Software-related works: Computer software, software documentation and manuals, training manuals, other manuals

65 Patent grant a property right to the inventor
“the right to exclude others from making, using, offering for sale, or selling” the invention in the United States or “importing” the invention into the United States types: utility any new and useful process, machine, article of manufacture, or composition of matter design new, original, and ornamental design for an article of manufacture plant discovers and asexually reproduces any distinct and new variety of plant A patent for an invention is the grant of a property right to the inventor. The right conferred by the patent grant is, in the language of the U.S. statute and of the grant itself, “the right to exclude others from making, using, offering for sale, or selling” the invention in the United States or “importing” the invention into the United States. Similar wording appears in the statutes of other nations. There are three types of patents: • Utility patents: May be granted to anyone who invents or discovers any new and useful process, machine, article of manufacture, or composition of matter, or any new and useful improvement thereof; • Design patents: May be granted to anyone who invents a new, original, and ornamental design for an article of manufacture; and • Plant patents: May be granted to anyone who invents or discovers and asexually reproduces any distinct and new variety of plant. An example of a patent from the computer security realm is the RSA public-key cryptosystem. From the time it was granted in 1983 until the patent expired in 2000, the patent holder, RSA Security, was entitled to receive a fee for each implementation of RSA.

66 Trademark a word, name, symbol, or device
used in trade with goods indicates source of goods distinguishes them from goods of others trademark rights may be used to: prevent others from using a confusingly similar mark but not to prevent others from making the same goods or from selling the same goods or services under a clearly different mark A trademark is a word, name, symbol, or device that is used in trade with goods to indicate the source of the goods and to distinguish them from the goods of others. A servicemark is the same as a trademark except that it identifies and distinguishes the source of a service rather than a product. The terms trademark and mark are commonly used to refer to both trademarks and servicemarks. Trademark rights may be used to prevent others from using a confusingly similar mark, but not to prevent others from making the same goods or from selling the same goods or services under a clearly different mark.

67 U.S. Digital Millennium Copyright Act (DMCA)
signed into law in 1998 implements WIPO treaties to strengthen protections of digital copyrighted materials encourages copyright owners to use technological measures to protect their copyrighted works measures that prevent access to the work measures that prevent copying of the work prohibits attempts to bypass the measures both criminal and civil penalties apply to attempts to circumvent The U.S. Digital Millennium Copyright Act (DMCA) has had a profound effect on the protection of digital content rights in both the United States and worldwide. The DMCA, signed into law in 1998, is designed to implement World Intellectual Property Organization (WIPO) treaties, signed in 1996. In essence, DMCA strengthens the protection of copyrighted materials in digital format. The DMCA encourages copyright owners to use technological measures to protect copyrighted works. These measures fall into two categories: measures that prevent access to the work and measures that prevent copying of the work. Further, the law prohibits attempts to bypass such measures. Specifically, the law states that "no person shall circumvent a technological measure that effectively controls access to a work protected under this title." Among other effects of this clause, it prohibits almost all unauthorized decryption of content. The law further prohibits the manufacture, release, or sale of products, services, and devices that can crack encryption designed to thwart either access to or copying of material unauthorized by the copyright holder. Both criminal and civil penalties apply to attempts to circumvent technological measures and to assist in such circumvention.

68 DMCA Exemptions certain actions are exempted from the provisions of the DMCA and other copyright laws, including: fair use reverse engineering encryption research security testing personal privacy considerable concern exists that the DMCA inhibits legitimate security and encryption research many feel that innovation and academic freedom are stifled and open source software development is threatened Certain actions are exempted from the provisions of the DMCA and other copyright laws, including the following: • Fair use: This concept is not tightly defined. It is intended to permit others to perform, show, quote, copy, and otherwise distribute portions of the work for certain purposes. These purposes include review, comment, and discussion of copyrighted works. • Reverse engineering: Reverse engineering of a software product is allowed if the user has the right to use a copy of the program and if the purpose of the reverse engineering is not to duplicate the functionality of the program but rather to achieve interoperability. • Encryption research: “Good faith” encryption research is allowed. In essence, this exemption allows decryption attempts to advance the development of encryption technology. • Security testing: This is the access of a computer or network for the good faith testing, investigating, or correcting a security flaw or vulnerability, with the authorization of the owner or operator. • Personal privacy: It is generally permitted to bypass technological measures if that is the only reasonable way to prevent the access from resulting in the revealing or recording of personally identifying information. Despite the exemptions built into the Act, there is considerable concern, especially in the research and academic communities, that the Act inhibits legitimate security and encryption research. These parties feel that the DMCA stifles innovation and academic freedom and is a threat to open source software development [ACM04].

69 Digital Rights Management (DRM)
systems and procedures that ensure that holders of digital rights are clearly identified and receive stipulated payment for their works may impose further restrictions such as inhibiting printing or prohibiting further distribution no single DRM standard or architecture objective is to provide mechanisms for the complete content management life cycle provide persistent content protection for a variety of digital content types / platforms / media Digital Rights Management (DRM) refers to systems and procedures that ensure that holders of digital rights are clearly identified and receive the stipulated payment for their works. The systems and procedures may also impose further restrictions on the use of digital objects, such as inhibiting printing or prohibiting further distribution. There is no single DRM standard or architecture. DRM encompasses a variety of approaches to intellectual property management and enforcement by providing secure and trusted automated services to control the distribution and use of content. In general, the objective is to provide mechanisms for the complete content management life cycle (creation, subsequent contribution by others, access, distribution, use), including the management of rights information associated with the content. DRM systems should meet the following objectives: 1. Provide persistent content protection against unauthorized access to the digital content, limiting access to only those with the proper authorization. 2. Support a variety of digital content types (e.g., music files, video streams, digital books, images). 3. Support content use on a variety of platforms (e.g., PCs, PDAs, iPods, mobile phones). 4. Support content distribution on a variety of media, including CD-ROMs, DVDs, and flash memory.

70 DRM Components Figure 19.3, based on [LIU03], illustrates a typical DRM model in terms of the principal users of DRM systems: • Content provider: Holds the digital rights of the content and wants to protect these rights. Examples are a music record label and a movie studio. • Distributor: Provides distribution channels, such as an online shop or a Web retailer. For example, an online distributor receives the digital content from the content provider and creates a Web catalog presenting the content and rights metadata for content promotion. • Consumer: Uses the system to access the digital content by retrieving downloadable or streaming content through the distribution channel and then paying for the digital license. The player/viewer application used by the consumer takes charge of initiating license requests to the clearinghouse and enforcing the content usage rights. • Clearinghouse: Handles the financial transaction for issuing the digital license to the consumer and pays royalty fees to the content provider and distribution fees to the distributor accordingly. The clearinghouse is also responsible for logging license consumption for every consumer. In this model, the distributor need not enforce the access rights. Instead, the content provider protects the content in such a way (typically encryption) that the consumer must purchase a digital license and access capability from the clearinghouse. The clearinghouse consults usage rules provided by the content provider to determine what access is permitted and the fee for a particular type of access. Having collected the fee, the clearinghouse credits the content provider and distributor appropriately.
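To make the clearinghouse role more concrete, the following is a minimal sketch, in Python, of how a clearinghouse might check a provider-supplied usage rule, issue a license record, and split the collected fee between the content provider and the distributor. All names and values here (UsageRule, issue_license, the 70/30 revenue split) are illustrative assumptions, not part of any actual DRM product or standard.

```python
# Hypothetical clearinghouse logic for the DRM model described above.
# UsageRule, issue_license, and the revenue split are illustrative only.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class UsageRule:
    content_id: str
    access_type: str       # e.g., "stream" or "download"
    fee: float             # fee charged to the consumer for this use
    provider_share: float  # fraction of the fee credited to the content provider

def issue_license(rule: UsageRule, consumer_id: str, payment: float) -> dict:
    """Check the usage rule supplied by the content provider, record the
    license consumption, and return a license describing the granted access."""
    if payment < rule.fee:
        raise PermissionError("payment does not cover the requested use")
    license_record = {
        "consumer": consumer_id,
        "content": rule.content_id,
        "access": rule.access_type,
        "issued": datetime.now(timezone.utc).isoformat(),
    }
    # Royalty accounting: credit the provider and the distributor from the fee.
    provider_royalty = payment * rule.provider_share
    distributor_fee = payment - provider_royalty
    print(f"credit provider {provider_royalty:.2f}, distributor {distributor_fee:.2f}")
    return license_record

# Example: a consumer pays 1.99 to stream one track.
rule = UsageRule("track-042", "stream", fee=1.99, provider_share=0.7)
print(issue_license(rule, "consumer-17", payment=1.99))
```

In a real deployment the issued license would also be cryptographically bound to the consumer's player/viewer application, which is the component the model relies on to enforce the usage rights.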

71 DRM System Architecture
Figure 19.4 shows a generic system architecture to support DRM functionality. The system is accessed by parties in three roles. Rights holders are the content providers, who either created the content or have acquired rights to the content. Service providers include distributors and clearinghouses. Consumers are those who purchase the right to access content for specific uses. There is a system interface to the services provided by the DRM system: • Identity management: Mechanisms to uniquely identify entities, such as parties and content • Content management: Processes and functions needed to manage the content life cycle • Rights management: Processes and functions needed to manage rights, rights holders, and associated requirements Below these management modules are a set of common functions. The security/encryption module provides functions to encrypt content and to sign license agreements. The identity management service makes use of the authentication and authorization functions to identify all parties in the relationship. Using these functions, the identity management service includes the following: • Allocation of unique party identifiers • User profile and preferences • User’s device management • Public-key management Billing/payments functions deal with the collection of usage fees from consumers and the distribution of payments to rights holders and distributors. Delivery functions deal with the delivery of content to consumers.
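As a rough illustration of two of the common functions in this architecture, the sketch below allocates unique party identifiers and signs a license agreement so the player/viewer can verify it before granting access. The keyed-MAC scheme, key handling, and field names are assumptions made for the example; production DRM systems would more likely use public-key signatures and managed key storage.

```python
# Illustrative identity-management and security/encryption functions for the
# generic DRM architecture above; the scheme and names are assumptions.
import hashlib
import hmac
import json
import uuid

SIGNING_KEY = b"clearinghouse-demo-key"  # would come from key management

def allocate_party_id() -> str:
    """Identity management: allocate a unique identifier for a party."""
    return str(uuid.uuid4())

def sign_license(license_record: dict) -> str:
    """Security module: produce a keyed MAC over the serialized license."""
    payload = json.dumps(license_record, sort_keys=True).encode()
    return hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()

def verify_license(license_record: dict, signature: str) -> bool:
    """Player/viewer side: check the license signature before granting access."""
    return hmac.compare_digest(sign_license(license_record), signature)

record = {"party": allocate_party_id(), "content": "track-042", "access": "stream"}
sig = sign_license(record)
print(verify_license(record, sig))                            # True
print(verify_license({**record, "access": "download"}, sig))  # False: tampered
```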

72 Privacy overlaps with computer security
dramatic increase in scale of information collected and stored motivated by law enforcement, national security, economic incentives individuals have become increasingly aware of access and use of personal information and private details about their lives concerns about extent of privacy compromise have led to a variety of legal and technical approaches to reinforcing privacy rights An issue with considerable overlap with computer security is that of privacy. On one hand, the scale and interconnectedness of personal information collected and stored in information systems has increased dramatically, motivated by law enforcement, national security, and economic incentives. The last mentioned has been perhaps the main driving force. In a global information economy, it is likely that the most economically valuable electronic asset is aggregations of information on individuals [HAYE09]. On the other hand, individuals have become increasingly aware of the extent to which government agencies, businesses, and even Internet users have access to their personal information and private details about their lives and activities. Concerns about the extent to which personal privacy has been and may be compromised have led to a variety of legal and technical approaches to reinforcing privacy rights.

73 European Union (EU) Data Protection Directive
adopted in 1998 to: ensure member states protect fundamental privacy rights when processing personal information prevent member states from restricting the free flow of personal information within EU organized around principles of: notice consent consistency access security onward transfer enforcement A number of international organizations and national governments have introduced laws and regulations intended to protect individual privacy. We look at two such initiatives in this subsection. In 1998, the EU adopted the Directive on Data Protection to both (1) ensure that member states protected fundamental privacy rights when processing personal information, and (2) prevent member states from restricting the free flow of personal information within the EU. The Directive is not itself a law, but requires member states to enact laws encompassing its terms. The Directive is organized around the following principles of personal information use: • Notice: Organizations must notify individuals what personal information they are collecting, the uses of that information, and what choices the individual may have. • Consent: Individuals must be able to choose whether and how their personal information is used by, or disclosed to, third parties. They have the right not to have any sensitive information collected or used without express permission, including race, religion, health, union membership, beliefs, and sex life. • Consistency: Organizations may use personal information only in accordance with the terms of the notice given the data subject and any choices with respect to its use exercised by the subject. • Access: Individuals must have the right and ability to access their information and correct, modify, or delete any portion of it. • Security: Organizations must provide adequate security, using technical and other means, to protect the integrity and confidentiality of personal information. • Onward transfer: Third parties receiving personal information must provide the same level of privacy protection as the organization from whom the information is obtained. • Enforcement: The Directive grants a private right of action to data subjects when organizations do not follow the law. In addition, each EU member has a regulatory enforcement agency concerned with privacy rights enforcement.

74 United States Privacy Initiatives
Privacy Act of 1974 dealt with personal information collected and used by federal agencies permits individuals to determine records kept permits individuals to forbid records being used for other purposes permits individuals to obtain access to records and to correct and amend records as appropriate ensures agencies properly collect, maintain, and use personal information creates a private right of action for individuals The first comprehensive privacy legislation adopted in the United States was the Privacy Act of 1974, which dealt with personal information collected and used by federal agencies. The Act is intended to 1. Permit individuals to determine what records pertaining to them are collected, maintained, used, or disseminated. 2. Permit individuals to forbid records obtained for one purpose to be used for another purpose without consent. 3. Permit individuals to obtain access to records pertaining to them and to correct and amend such records as appropriate. 4. Ensure that agencies collect, maintain, and use personal information in a manner that ensures that the information is current, adequate, relevant, and not excessive for its intended use. 5. Create a private right of action for individuals whose personal information is not used in accordance with the Act. As with all privacy laws and regulations, there are exceptions and conditions attached to this Act, such as criminal investigations, national security concerns, and conflicts between competing individual rights of privacy. While the 1974 Privacy Act covers government records, a number of other U.S. laws have been enacted that cover other areas, including the following: • Banking and financial records: Personal banking information is protected in certain ways by a number of laws, including the recent Financial Services Modernization Act. • Credit reports: The Fair Credit Reporting Act confers certain rights on individuals and obligations on credit reporting agencies. • Medical and health insurance records: A variety of laws have been in place for decades dealing with medical records privacy. The Health Insurance Portability and Accountability Act (HIPAA) created significant new rights for patients to protect and access their own health information. • Children’s privacy: The Children’s Online Privacy Protection Act places restrictions on online organizations in the collection of data from children under the age of 13. • Electronic communications: The Electronic Communications Privacy Act generally prohibits unauthorized and intentional interception of wire and electronic communications during the transmission phase and unauthorized accessing of electronically stored wire and electronic communications. A range of other privacy laws also applies.

75 ISO 27002 states . . . “An organizational data protection and privacy policy should be developed and implemented. This policy should be communicated to all persons involved in the processing of personal information. Compliance with this policy and all relevant data protection legislation and regulations requires appropriate management structure and control. Often this is best achieved by the appointment of a person responsible, such as a data protection officer, who should provide guidance to managers, users, and service providers on their individual responsibilities and the specific procedures that should be followed. Responsibility for handling personal information and ensuring awareness of the data protection principles should be dealt with in accordance with relevant legislation and regulations. Appropriate technical and organizational measures to protect personal information should be implemented.” Organizations need to deploy both management controls and technical measures to comply with laws and regulations concerning privacy as well as to implement corporate policies concerning employee privacy. ISO 27002 ( Code of Practice for Information Security Management ) states the requirement as follows: ISO 27002: Data protection and privacy of personal information: An organizational data protection and privacy policy should be developed and implemented. This policy should be communicated to all persons involved in the processing of personal information. Compliance with this policy and all relevant data protection legislation and regulations requires appropriate management structure and control. Often this is best achieved by the appointment of a responsible person, such as a data protection officer, who should provide guidance to managers, users, and service providers on their individual responsibilities and the specific procedures that should be followed. Responsibility for handling personal information and ensuring awareness of the data protection principles should be dealt with in accordance with relevant legislation and regulations. Appropriate technical and organizational measures to protect personal information should be implemented. An excellent, detailed list of considerations for organizational implementation of privacy controls is provided in The Standard of Good Practice for Information Security , from the Information Security Forum [ISF11]. This material is reproduced in Appendix H.4.

76 Privacy and Data Surveillance
The demands of homeland security and counterterrorism have imposed new threats to personal privacy. Law enforcement and intelligence agencies have become increasingly aggressive in using data surveillance techniques to fulfill their mission. In addition, private organizations are exploiting a number of trends to increase their ability to build detailed profiles of individuals, including the spread of the Internet, the increase in electronic payment methods, near-universal use of cellular phone communications, ubiquitous computation, sensor webs, and so on. Both policy and technical approaches are needed to protect privacy when both government and nongovernment organizations seek to learn as much as possible about individuals. In terms of technical approaches, the requirements for privacy protection for information systems can be addressed in the context of database security. That is, the approaches that are appropriate for privacy protection involve technical means that have been developed for database security. These are discussed in detail in Chapter 5 . A specific proposal for a database security approach to privacy protection is outlined in [POPP06] and illustrated in Figure 19.6. The privacy appliance is a tamper-resistant, cryptographically protected device that is interposed between a database and the access interface, analogous to a firewall or intrusion prevention device. The device implements privacy protection functions, including verifying the user’s access permissions and credentials and creating an audit log. Some of the specific functions of the appliance are as follows: • Data transformation: This function encodes or encrypts portions of the data so as to preserve privacy but still allow data analysis functions needed for effective use. An example of such data analysis functions is the detection of terrorist activity patterns. • Anonymization: This function removes specific identifying information from query results, such as last name and telephone number, but creates some sort of anonymized unique identifier so that analysts can detect connections between queries. • Selective revelation: This is a method for minimizing exposure of individual information while enabling continuous analysis of potentially interconnected data. The function initially reveals information to the analyst only in sanitized form, that is, in terms of statistics and categories that do not reveal (directly or indirectly) anyone’s private information. If the analyst sees reason for concern, he or she can follow up by seeking permission to get more precise information. This permission would be granted if the initial information provides sufficient cause to allow the revelation of more information, under appropriate legal and policy guidelines. • Immutable audit: A tamper-resistant method that identifies where data go and who has seen the data. The audit function automatically and permanently records all data accesses, with strong protection against deletion, modification, and unauthorized use. • Associative memory: This is a software module that can recognize patterns and make connections between pieces of data that the human user may have missed or didn’t know existed. With this method, relationships between data points found in massive amounts of data can be discovered quickly. As Figure 19.6 indicates, the owner of a database installs a privacy appliance tailored to the database content and structure and to its intended use by outside organizations.
An independently operated privacy appliance can interact with multiple databases from multiple organizations to collect and interconnect data for their ultimate use by law enforcement, an intelligence user, or other appropriate user.
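The anonymization and data transformation functions described above can be sketched in a few lines. The fragment below replaces identifying fields in a query result with keyed pseudonyms, so an analyst can still recognize that two records refer to the same person without seeing the underlying values. The key, the chosen fields, and the token length are assumptions for illustration; an actual privacy appliance would wrap this step in access-permission checks and immutable audit logging.

```python
# Hypothetical anonymization step for a privacy appliance; names are illustrative.
import hashlib
import hmac

PSEUDONYM_KEY = b"privacy-appliance-demo-key"   # held inside the appliance
IDENTIFYING_FIELDS = {"last_name", "phone"}

def pseudonym(value: str) -> str:
    """Deterministic keyed hash: the same input always yields the same token,
    so matching records remain linkable across queries after anonymization."""
    return hmac.new(PSEUDONYM_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

def anonymize(record: dict) -> dict:
    """Replace identifying fields in a query result with pseudonyms."""
    return {key: (pseudonym(value) if key in IDENTIFYING_FIELDS else value)
            for key, value in record.items()}

row = {"last_name": "Doe", "phone": "555-0100", "city": "Springfield"}
print(anonymize(row))   # the same person yields the same tokens in later queries
```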

77 Ethical Issues ethics:
many potential misuses and abuses of information and electronic communication that create privacy and security problems basic ethical principles developed by civilizations apply unique considerations surrounding computers and information systems scale of activities not possible before creation of new types of entities for which no agreed ethical rules have previously been formed ethics: “a system of moral principles that relates to the benefits and harms of particular actions, and to the rightness and wrongness of motives and ends of those actions.” Because of the ubiquity and importance of information systems in organizations of all types, there are many potential misuses and abuses of information and electronic communication that create privacy and security problems. In addition to questions of legality, misuse and abuse raise concerns of ethics. Ethics refers to a system of moral principles that relates to the benefits and harms of particular actions, and to the rightness and wrongness of motives and ends of those actions. In this section, we look at ethical issues as they relate to computer and information system security. To a certain extent, a characterization of what constitutes ethical behavior for those who work with or have access to information systems is not unique to this context. The basic ethical principles developed by civilizations apply. However, there are some unique considerations surrounding computers and information systems. First, computer technology makes possible a scale of activities not possible before. This includes a larger scale of recordkeeping, particularly on individuals, with the ability to develop finer-grained personal information collection and more precise data mining and data matching. The expanded scale of communications and the expanded scale of interconnection brought about by the Internet magnify the power of an individual to do harm. Second, computer technology has involved the creation of new types of entities for which no agreed ethical rules have previously been formed, such as databases, Web browsers, chat rooms, cookies, and so on.

78 Ethical Hierarchy Further, it has always been the case that those with special knowledge or special skills have additional ethical obligations beyond those common to all humanity. We can illustrate this in terms of an ethical hierarchy ( Figure 19.7 ), based on one discussed in [GOTT99]. At the top of the hierarchy are the ethical values professionals share with all human beings, such as integrity, fairness, and justice. Being a professional with special training imposes additional ethical obligations with respect to those affected by his or her work. General principles applicable to all professionals arise at this level. Finally, each profession has associated with it specific ethical values and obligations related to the specific knowledge of those in the profession and the powers that they have to affect others. Most professions embody all of these levels in a professional code of conduct, a subject discussed subsequently.

79 Ethical Issues Related to Computers and Information Systems
some ethical issues from computer use: repositories and processors of information producers of new forms and types of assets instruments of acts symbols of intimidation and deception those who understand, exploit technology, and have access permission, have power over these Let us turn now more specifically to the ethical issues that arise from computer technology. Computers have become the primary repository of both personal information and negotiable assets, such as bank records, securities records, and other financial information. Other types of databases, both statistical and otherwise, are assets with considerable value. These assets can only be viewed, created, and altered by technical and automated means. Those who can understand and exploit the technology, plus those who have obtained access permission, have power related to those assets. A classic paper on computers and ethics [PARK88] points out that ethical issues arise as the result of the roles of computers, such as the following: • Repositories and processors of information: Unauthorized use of otherwise unused computer services or of information stored in computers raises questions of appropriateness or fairness. • Producers of new forms and types of assets: For example, computer programs are entirely new types of assets, possibly not subject to the same concepts of ownership as other assets. • Instruments of acts: To what degree must computer services and users of computers, data, and programs be responsible for the integrity and appropriateness of computer output? • Symbols of intimidation and deception: The images of computers as thinking machines, absolute truth producers, infallible, subject to blame, and as anthropomorphic replacements of humans who err should be carefully considered.

80 Table 19.3 Potential Ethical Dilemmas
Another listing of ethical issues, from [HARR90], is shown in Table 19.3. Both of these lists are concerned with balancing professional responsibilities with ethical or moral responsibilities.

81 Ethical Question Examples
whistle-blower when professional ethical duty conflicts with loyalty to employer e.g. inadequately tested software product organizations and professional societies should provide alternative mechanisms potential conflict of interest e.g. consultant has financial interest in vendor which should be revealed to client We cite two areas here of the types of ethical questions that face a computing or IS professional. The first is that IS professionals may find themselves in situations where their ethical duty as professionals comes into conflict with loyalty to their employer. Such a conflict may lead an employee to consider “blowing the whistle,” or exposing a situation that can harm the public or a company’s customers. For example, a software developer may know that a product is scheduled to ship with inadequate testing to meet the employer’s deadlines. The decision of whether to blow the whistle is one of the most difficult that an IS professional can face. Organizations have a duty to provide alternative, less extreme opportunities for the employee, such as an in-house ombudsperson coupled with a commitment not to penalize employees for exposing problems in-house. Additionally, professional societies should provide a mechanism whereby society members can get advice on how to proceed. Another example of an ethical question concerns a potential conflict of interest. For example, if a consultant has a financial interest in a certain vendor, this should be revealed to any client if that vendor’s products or services might be recommended by the consultant.

82 Codes of Conduct ethics are not precise laws or sets of facts
many areas may present ethical ambiguity many professional societies have adopted ethical codes of conduct which can: 1 be a positive stimulus and instill confidence 2 be educational 3 provide a measure of support 4 be a means of deterrence and discipline 5 enhance the profession's public image Unlike scientific and engineering fields, ethics cannot be reduced to precise laws or sets of facts. Although an employer or a client of a professional can expect that the professional has an internal moral compass, many areas of conduct may present ethical ambiguities. To provide guidance to professionals and to articulate what employers and customers have a right to expect, a number of professional societies have adopted ethical codes of conduct. A professional code of conduct can serve the following functions [GOTT99]: 1. A code can serve two inspirational functions: as a positive stimulus for ethical conduct on the part of the professional, and to instill confidence in the customer or user of an IS product or service. However, a code that stops at just providing inspirational language is likely to be vague and open to an abundance of interpretations. 2. A code can be educational. It informs professionals about what should be their commitment to undertake a certain level of quality of work and their responsibility for the well-being of users of their product and the public, to the extent the product may affect nonusers. The code also serves to educate managers on their responsibility to encourage and support employee ethical behavior and on their own ethical responsibilities. 3. A code provides a measure of support for a professional whose decision to act ethically in a situation may create conflict with an employer or customer. 4. A code can be a means of deterrence and discipline. A professional society can use a code as a justification for revoking membership or even a professional license. An employer can use a code as a basis for a disciplinary action. 5. A code can enhance the profession’s public image, if it is seen to be widely honored.

83 ACM Code of Ethics and Professional Conduct
We illustrate the concept of a professional code of ethics for computer professionals with three specific examples. The ACM (Association for Computing Machinery) Code of Ethics and Professional Conduct ( Figure 19.8 ) applies to computer scientists.

84 Revision

