Operating System Security


1 Operating System Security
Chapter 12 Operating System Security Computer client and server systems are central components of the IT infrastructure for most organizations. The client systems provide access to organizational data and applications, supported by the servers housing those data and applications. However, given that most large software systems will almost certainly have a number of security weaknesses, as we discussed in Chapter 6 and in the previous two chapters, it is currently necessary to manage the installation and continuing operation of these systems to provide appropriate levels of security despite the expected presence of these vulnerabilities. In some circumstances we may be able to use systems designed and evaluated to provide security by design. We examine some of these possibilities in the next chapter. In this chapter we discuss how to provide systems security as a hardening process that includes planning, installation, configuration, update, and maintenance of the operating system and the key applications in use, following the general approach detailed in [NIST08]. We consider this process for the operating system, and then key applications in general, and then discuss some specific aspects in relation to Linux and Windows systems in particular. We conclude with a discussion on securing virtualized systems, where multiple virtual machines may execute on the one physical system.

2 Operating System each layer of code needs measures in place to provide appropriate security services each layer is vulnerable to attack from below if the lower layers are not secured appropriately We view a system as having a number of layers, with the physical hardware at the bottom; the base operating system above including privileged kernel code, APIs, and services; and finally user applications and utilities in the top layer, as shown in the accompanying figure. This figure also shows the presence of BIOS and possibly other code that is external to, and largely not visible from, the operating system kernel, but which is used when booting the system or to support low-level hardware control. Each of these layers of code needs appropriate hardening measures in place to provide appropriate security services. And each layer is vulnerable to attack from below, should the lower layers not also be secured appropriately.

3 Measures the 2010 Australian Defense Signals Directorate (DSD) list of the “Top 35 Mitigation Strategies” over 70% of the targeted cyber intrusions investigated by DSD in 2009 could have been prevented the top four measures for prevention are: patch operating systems and applications using auto-update patch third-party applications restrict admin privileges to users who need them white-list approved applications A number of reports note that the use of a small number of basic hardening measures can prevent a large proportion of the attacks seen in recent years. The 2010 Australian Defense Signals Directorate (DSD) list of the “Top 35 Mitigation Strategies” notes that implementing just the top four of these would have prevented over 70% of the targeted cyber intrusions investigated by DSD in 2009. These top four measures are: 1. patch operating systems and applications using auto-update 2. patch third-party applications 3. restrict admin privileges to users who need them 4. white-list approved applications We discuss all four of these measures, and many others in the DSD list, in this chapter. Note that these measures largely align with those in the “20 Critical Controls” developed by DHS, NSA, the Department of Energy, SANS, and others in the United States.

4 Operating System Security
possible for a system to be compromised during the installation process before it can install the latest patches building and deploying a system should be a planned process designed to counter this threat process must: assess risks and plan the system deployment secure the underlying operating system and then the key applications ensure any critical content is secured ensure appropriate network protection mechanisms are used ensure appropriate processes are used to maintain security As we noted above, computer client and server systems are central components of the IT infrastructure for most organizations, may hold critical data and applications, and are a necessary tool for the function of an organization. Accordingly, we need to be aware of the expected presence of vulnerabilities in operating systems and applications as distributed, and the existence of worms scanning for such vulnerabilities at high rates, such as we discussed previously. Thus, it is quite possible for a system to be compromised during the installation process before it can install the latest patches or implement other hardening measures. Hence building and deploying a system should be a planned process designed to counter such a threat, and to maintain security during its operational lifetime. [NIST08] states that this process must: • assess risks and plan the system deployment • secure the underlying operating system and then the key applications • ensure any critical content is secured • ensure appropriate network protection mechanisms are used • ensure appropriate processes are used to maintain security While we address the selection of network protection mechanisms in Chapter 9, we examine the other items in the rest of this chapter.

5 System Security Planning Process
the purpose of the system, the type of information stored, the applications and services provided, and their security requirements the categories of users of the system, the privileges they have, and the types of information they can access how the users are authenticated how access to the information stored on the system is managed what access the system has to information stored on other hosts, such as file or database servers, and how this is managed who will administer the system, and how they will manage the system (via local or remote access) any additional security measures required on the system, including the use of host firewalls, anti-virus or other malware protection mechanisms, and logging [NIST08] provides a list of items that should be considered during the system security planning process. While its focus is on secure server deployment, much of the list applies equally well to client system design. This list includes consideration of: • the purpose of the system, the type of information stored, the applications and services provided, and their security requirements • the categories of users of the system, the privileges they have, and the types of information they can access • how the users are authenticated • how access to the information stored on the system is managed • what access the system has to information stored on other hosts, such as file or database servers, and how this is managed • who will administer the system, and how they will manage the system (via local or remote access) • any additional security measures required on the system, including the use of host firewalls, anti-virus or other malware protection mechanisms, and logging

6 Operating Systems Hardening
first critical step in securing a system is to secure the base operating system basic steps install and patch the operating system harden and configure the operating system to adequately address the identified security needs of the system install and configure additional security controls, such as anti-virus, host-based firewalls, and intrusion detection systems (IDS) test the security of the basic operating system to ensure that the steps taken adequately address its security needs The first critical step in securing a system is to secure the base operating system upon which all other applications and services rely. A good security foundation needs a properly installed, patched, and configured operating system. Unfortunately, the default configuration for many operating systems often maximizes ease of use and functionality, rather than security. Further, since every organization has its own security needs, the appropriate security profile, and hence configuration, will also differ. What is required for a particular system should be identified during the planning phase, as we have just discussed. While the details of how to secure each specific operating system differ, the broad approach is similar. Appropriate security configuration guides and checklists exist for most common operating systems, and these should be consulted, though always informed by the specific needs of each organization and their systems. In some cases, automated tools may be available to further assist in securing the system configuration. [NIST08] suggests the following basic steps should be used to secure an operating system: • install and patch the operating system • harden and configure the operating system to adequately address the identified security needs of the system by: • removing unnecessary services, applications, and protocols • configuring users, groups, and permissions • configuring resource controls • install and configure additional security controls, such as anti-virus, host-based firewalls, and intrusion detection systems (IDS), if needed • test the security of the basic operating system to ensure that the steps taken adequately address its security needs

7 Initial Setup and Patching
system security begins with the installation of the operating system ideally new systems should be constructed on a protected network full installation and hardening process should occur before the system is deployed to its intended location initial installation should install the minimum necessary for the desired system overall boot process must also be secured the integrity and source of any additional device driver code must be carefully validated critical that the system be kept up to date, with all critical security-related patches installed should stage and validate all patches on the test systems before deploying them in production System security begins with the installation of the operating system. As we have already noted, a network-connected, unpatched system is vulnerable to exploit during its installation or continued use. Hence it is important that the system not be exposed while in this vulnerable state. Ideally new systems should be constructed on a protected network. This may be a completely isolated network, with the operating system image and all available patches transferred to it using removable media such as DVDs or USB drives. Given the existence of malware that can propagate using removable media, as we discuss in Chapter 6, care is needed to ensure the media used here is not itself infected. Alternatively, a network with severely restricted access to the wider Internet may be used. Ideally it should have no inbound access, and have outbound access only to the key sites needed for the system installation and patching process. In either case, the full installation and hardening process should occur before the system is deployed to its intended, more accessible, and hence vulnerable, location. The initial installation should install the minimum necessary for the desired system, with additional software packages included only if they are required for the function of the system. We explore the rationale for minimizing the number of packages on the system shortly. The overall boot process must also be secured. This may require adjusting options on, or specifying a password required for changes to, the BIOS code used when the system initially boots. It may also require limiting which media the system is normally permitted to boot from. This is necessary to prevent an attacker from changing the boot process to install a covert hypervisor, such as we discussed in Section 6.8, or to just boot a system of their choice from external media in order to bypass the normal system access controls on locally stored data. A cryptographic file system may also be used to address this threat, as we note later. Care is also required with the selection and installation of any additional device driver code, since this executes with full kernel-level privileges, but is often supplied by a third party. The integrity and source of such driver code must be carefully validated given the high level of trust it has. A malicious driver can potentially bypass many security controls to install malware. This was done in both the Blue Pill demonstration rootkit, which we discussed in Section 6.8, and the Stuxnet worm, which we described in Section 6.3. Given the continuing discovery of software and other vulnerabilities for commonly used operating systems and applications, it is critical that the system be kept as up to date as possible, with all critical security-related patches installed.
Indeed, doing this addresses the top two of the four key DSD mitigation strategies we listed previously. Nearly all commonly used systems now provide utilities that can automatically download and install security updates. These tools should be configured and used to minimize the time any system is vulnerable to weaknesses for which patches are available. Note that on change-controlled systems, you should not run automatic updates, because security patches can, on rare but significant occasions, introduce instability. For systems on which availability and uptime are of paramount importance, therefore, you should stage and validate all patches on test systems before deploying them in production.
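
To illustrate the kind of integrity check described above, the following Python sketch compares the SHA-256 digest of a downloaded installation image or patch file against a vendor-published value before it is used on the protected build network. The file name and expected digest are placeholders only; the same approach applies to validating driver packages or other software obtained from third parties.

# Sketch: verify a downloaded OS image or patch file against a published
# SHA-256 digest before using it on an isolated build network.
# The file name and expected digest below are placeholders, not real values.
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Compute the SHA-256 digest of a file without loading it all into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

expected = "0000000000000000000000000000000000000000000000000000000000000000"  # vendor-published value
actual = sha256_of("os-install-image.iso")      # hypothetical local file name
if actual != expected:
    raise SystemExit("Digest mismatch: do not install from this media")
print("Image digest verified")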

8 Remove Unnecessary Services, Applications, Protocols
when performing the initial installation the supplied defaults should not be used default configuration is set to maximize ease of use and functionality rather than security if additional packages are needed later they can be installed when they are required if fewer software packages are available to run the risk is reduced system planning process should identify what is actually required for a given system Because any of the software packages running on a system may contain software vulnerabilities, clearly if fewer software packages are available to run, then the risk is reduced. There is clearly a balance between usability, which favors providing all software that may be required at some time, and security, which favors limiting the amount of software installed. The range of services, applications, and protocols required will vary widely between organizations, and indeed between systems within an organization. The system planning process should identify what is actually required for a given system, so that a suitable level of functionality is provided, while eliminating software that is not required to improve security. The default configuration for most distributed systems is set to maximize ease of use and functionality, rather than security. When performing the initial installation, the supplied defaults should not be used, but rather the installation should be customized so that only the required packages are installed. If additional packages are needed later, they can be installed when they are required. [NIST08] and many of the security hardening guides provide lists of services, applications, and protocols that should not be installed if not required. [NIST08] also states a strong preference for not installing unwanted software, rather than installing and then later removing or disabling it. It argues for this preference because many uninstall scripts fail to completely remove all components of a package. It also notes that disabling a service means that while it is not available as an initial point of attack, should an attacker succeed in gaining some access to a system, then disabled software could be re-enabled and used to further compromise a system. It is better for security if unwanted software is not installed, and thus not available for use at all.
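
As a simple illustration of reviewing what is actually able to run, the following Python sketch compares the services enabled at boot with an approved baseline list. It assumes a systemd-based Linux distribution, and the baseline shown is purely illustrative; the real list should come from the system planning process.

# Sketch: compare the services enabled at boot against an approved baseline.
# Assumes a systemd-based Linux distribution; the baseline set is illustrative only.
import subprocess

APPROVED = {"sshd.service", "rsyslog.service", "crond.service"}  # hypothetical baseline

def enabled_services():
    out = subprocess.run(
        ["systemctl", "list-unit-files", "--type=service",
         "--state=enabled", "--no-legend", "--no-pager"],
        capture_output=True, text=True, check=True).stdout
    # The first column of each output line is the unit name.
    return {line.split()[0] for line in out.splitlines() if line.strip()}

unexpected = enabled_services() - APPROVED
for unit in sorted(unexpected):
    print("Review or disable:", unit)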

9 Configure Users, Groups, and Authentication
system planning process should consider: categories of users on the system privileges they have types of information they can access how and where they are defined and authenticated default accounts included as part of the system installation should be secured those that are not required should be either removed or disabled policies that apply to authentication credentials configured Configure Users, Groups, and Authentication not all users with access to a system will have the same access to all data and resources on that system elevated privileges should be restricted to only those users that require them, and then only when they are needed to perform a task Not all users with access to a system will have the same access to all data and resources on that system. All modern operating systems implement access controls to data and resources, as we discuss in Chapter 4 . Nearly all provide some form of discretionary access controls. Some systems may provide role-based or mandatory access control mechanisms as well. The system planning process should consider the categories of users on the system, the privileges they have, the types of information they can access, and how and where they are defined and authenticated. Some users will have elevated privileges to administer the system; others will be normal users, sharing appropriate access to files and other data as required; and there may even be guest accounts with very limited access. The third of the four key DSD mitigation strategies is to restrict elevated privileges to only those users that require them. Further, it is highly desirable that such users only access elevated privileges when needed to perform some task that requires them, and to otherwise access the system as a normal user. This improves security by providing a smaller window of opportunity for an attacker to exploit the actions of such privileged users. Some operating systems provide special tools or access mechanisms to assist administrative users to elevate their privileges only when necessary, and to appropriately log these actions. One key decision is whether the users, the groups they belong to, and their authentication methods are specified locally on the system or will use a centralized authentication server. Whichever is chosen, the appropriate details are now configured on the system. Also at this stage, any default accounts included as part of the system installation should be secured. Those which are not required should be either removed or at least disabled. System accounts that manage services on the system should be set so they cannot be used for interactive logins. And any passwords installed by default should be changed to new values with appropriate security. Any policy that applies to authentication credentials, and especially to password security, is also configured. This includes details of which authentication methods are accepted for different methods of account access. And it includes details of the required length, complexity, and age allowed for passwords. We discuss some of these issues in Chapter 3 .
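
The following Python sketch illustrates one small part of this account review: scanning the local account database for extra superuser accounts and for system accounts that still have an interactive login shell. The shell names and UID cut-off are typical of many Linux distributions but are assumptions that should be adjusted to the system at hand.

# Sketch: audit local accounts for common hardening concerns - extra UID-0
# accounts and system accounts left with an interactive login shell.
# The shell list and UID threshold vary between distributions.
import pwd

NOLOGIN_SHELLS = {"/sbin/nologin", "/usr/sbin/nologin", "/bin/false"}
SYSTEM_UID_MAX = 999   # typical cut-off for system accounts; check your distribution

for entry in pwd.getpwall():
    if entry.pw_uid == 0 and entry.pw_name != "root":
        print("Extra superuser account:", entry.pw_name)
    if (entry.pw_uid <= SYSTEM_UID_MAX and entry.pw_name != "root"
            and entry.pw_shell not in NOLOGIN_SHELLS):
        print("System account with a login shell:", entry.pw_name, entry.pw_shell)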

10 Install and Configure Additional Security Controls
once the users and groups are defined, appropriate permissions can be set on data and resources many of the security hardening guides provide lists of recommended changes to the default access configuration further security possible by installing and configuring additional security tools: anti-virus software host-based firewalls IDS or IPS software application white-listing Once the users and their associated groups are defined, appropriate permissions can be set on data and resources to match the specified policy. This may be to limit which users can execute some programs, especially those that modify the system state. Or it may be to limit which users can read or write data in certain directory trees. Many of the security hardening guides provide lists of recommended changes to the default access configuration to improve security. Further security improvement may be possible by installing and configuring additional security tools such as anti-virus software, host-based firewalls, IDS or IPS software, or application white-listing. Some of these may be supplied as part of the operating systems installation, but not configured and enabled by default. Others are third-party products that are acquired and used. Given the widespread prevalence of malware, as we discuss in Chapter 6 , appropriate anti-virus (which as noted addresses a wide range of malware types) is a critical security component on many systems. Anti-virus products have traditionally been used on Windows systems, since their high use made them a preferred target for attackers. However, the growth in other platforms, particularly smartphones, has led to more malware being developed for them. Hence appropriate anti-virus products should be considered for any system as part of its security profile. Host-based firewalls, IDS, and IPS software also may improve security by limiting remote network access to services on the system. If remote access to a service is not required, though some local access is, then such restrictions help secure such services from remote exploit by an attacker. Firewalls are traditionally configured to limit access by port or protocol, from some or all external systems. Some may also be configured to allow access from or to specific programs on the systems, to further restrict the points of attack, and to prevent an attacker installing and accessing their own malware. IDS and IPS software may include additional mechanisms such as traffic monitoring, or file integrity checking to identify and even respond to some types of attack. Another additional control is to white-list applications. This limits the programs that can execute on the system to just those in an explicit list. Such a tool can prevent an attacker installing and running their own malware, and was the last of the four key DSD mitigation strategies. While this will improve security, it functions best in an environment with a predictable set of applications that users require. Any change in software usage would require a change in the configuration, which may result in increased IT support demands. Not all organizations or all systems will be sufficiently predictable to suit this type of control.
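
As one example of checking that permissions match the specified policy, the following Python sketch walks a directory tree and reports any world-writable regular files, a condition most hardening guides recommend eliminating. The starting directory is only an example.

# Sketch: find world-writable regular files under a directory tree, a common
# check recommended by hardening guides when tightening default permissions.
import os
import stat

def world_writable(root="/etc"):
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                mode = os.lstat(path).st_mode
            except OSError:
                continue  # unreadable or vanished entries are skipped
            if stat.S_ISREG(mode) and (mode & stat.S_IWOTH):
                yield path

for path in world_writable():
    print("World-writable:", path)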

11 Test the System Security
checklists are included in security hardening guides there are programs specifically designed to: review a system to ensure that a system meets the basic security requirements scan for known vulnerabilities and poor configuration practices should be done following the initial hardening of the system repeated periodically as part of the security maintenance process Test the System Security final step in the process of initially securing the base operating system is security testing goal: ensure the previous security configuration steps are correctly implemented identify any possible vulnerabilities The final step in the process of initially securing the base operating system is security testing. The goal is to ensure that the previous security configuration steps are correctly implemented, and to identify any possible vulnerabilities that must be corrected or managed. Suitable checklists are included in many security hardening guides. There are also programs specifically designed to review a system to ensure that a system meets the basic security requirements, and to scan for known vulnerabilities and poor configuration practices. This should be done following the initial hardening of the system, and then repeated periodically as part of the security maintenance process.

12 Application Configuration
may include: creating and specifying appropriate data storage areas for application making appropriate changes to the application or service default configuration details some applications or services may include: default data scripts user accounts of particular concern with remotely accessed services such as Web and file transfer services risk from this form of attack is reduced by ensuring that most of the files can only be read, but not written, by the server Any application specific configuration is then performed. This may include creating and specifying appropriate data storage areas for the application, and making appropriate changes to the application or service default configuration details. Some applications or services may include default data, scripts, or user accounts. These should be reviewed, and only retained if required, and suitably secured. A well-known example of this is found with Web servers, which often include a number of example scripts, quite a few of which are known to be insecure. These should not be used as supplied. As part of the configuration process, careful consideration should be given to the access rights granted to the application. Again, this is of particular concern with remotely accessed services, such as Web and file transfer services. The server application should not be granted the right to modify files, unless that function is specifically required. A very common configuration fault seen with Web and file transfer servers is for all the files supplied by the service to be owned by the same “user” account that the server executes as. The consequence is that any attacker able to exploit some vulnerability in either the server software or a script executed by the server may be able to modify any of these files. The large number of “Web defacement” attacks is clear evidence of this type of insecure configuration. Much of the risk from this form of attack is reduced by ensuring that most of the files can only be read, but not written, by the server. Only those files that need to be modified, to store uploaded form data for example, or logging details, should be writeable by the server. Instead the files should mostly be owned and modified by the users on the system who are responsible for maintaining the information.
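
The following Python sketch illustrates this kind of review for a Web document tree: it flags files that are owned by the account the server runs as, or that are group- or world-writable, since such files may be modifiable through a compromised server. The document root and server account name are assumptions to be replaced with the local values.

# Sketch: flag content under a web document root that is owned by the server
# account or is loosely writable. WEB_ROOT and SERVER_USER are assumptions;
# adjust them for the platform and web server in use.
import os
import pwd
import stat

WEB_ROOT = "/var/www/html"          # hypothetical document root
SERVER_USER = "www-data"            # hypothetical account the server runs as
server_uid = pwd.getpwnam(SERVER_USER).pw_uid

for dirpath, dirnames, filenames in os.walk(WEB_ROOT):
    for name in filenames:
        path = os.path.join(dirpath, name)
        st = os.lstat(path)
        owned_by_server = st.st_uid == server_uid
        loosely_writable = bool(st.st_mode & (stat.S_IWGRP | stat.S_IWOTH))
        if owned_by_server or loosely_writable:
            print("Review write access:", path)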

13 Encryption Technology
is a key enabling technology that may be used to secure data both in transit and when stored must be configured and appropriate cryptographic keys created, signed, and secured if secure network services are provided using TLS or IPsec suitable public and private keys must be generated for each of them if secure network services are provided using SSH, appropriate server and client keys must be created cryptographic file systems are another use of encryption Encryption is a key enabling technology that may be used to secure data both in transit and when stored, as we discuss in Chapter 2 and in Parts Four and Five. If such technologies are required for the system, then they must be configured, and appropriate cryptographic keys created, signed, and secured. If secure network services are provided, most likely using either TLS or IPsec, then suitable public and private keys must be generated for each of them. Then X.509 certificates are created and signed by a suitable certificate authority, linking each service identity with the public key in use, as we discuss elsewhere in the text. If secure remote access is provided using Secure Shell (SSH), then appropriate server, and possibly client, keys must be created. Cryptographic file systems are another use of encryption. If desired, then these must be created and secured with suitable keys.
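
As a minimal sketch of key generation, the following Python code creates an RSA key pair and stores the private key encrypted under a passphrase, using the third-party cryptography package. The key size, passphrase, and file names are illustrative; for TLS the public key would still need to be placed in an X.509 certificate signed by a suitable certificate authority, and SSH deployments normally use the ssh-keygen tool and its own key formats.

# Sketch: generate an RSA key pair for a TLS-style service and write the
# private key encrypted with a passphrase. Requires the third-party
# "cryptography" package; sizes, passphrase, and file names are examples.
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=3072)

private_pem = key.private_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PrivateFormat.PKCS8,
    encryption_algorithm=serialization.BestAvailableEncryption(b"change-this-passphrase"),
)
public_pem = key.public_key().public_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PublicFormat.SubjectPublicKeyInfo,
)

with open("service_key.pem", "wb") as f:
    f.write(private_pem)     # keep this file readable only by the service owner
with open("service_key.pub", "wb") as f:
    f.write(public_pem)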

14 Security Maintenance process of maintaining security is continuous
security maintenance includes: monitoring and analyzing logging information performing regular backups recovering from security compromises regularly testing system security using appropriate software maintenance processes to patch and update all critical software, and to monitor and revise configuration as needed Once the system is appropriately built, secured, and deployed, the process of maintaining security is continuous. This results from the constantly changing environment, the discovery of new vulnerabilities, and hence exposure to new threats. [NIST08] suggests that this process of security maintenance includes the following additional steps: • monitoring and analyzing logging information • performing regular backups • recovering from security compromises • regularly testing system security • using appropriate software maintenance processes to patch and update all critical software, and to monitor and revise configuration as needed We have already noted the need to configure automatic patching and update where possible, or to have a process to manually test and install patches on configuration controlled systems, and that the system should be regularly tested using checklists or automated tools where possible. We discuss the process of incident response elsewhere in the text. We now consider the critical logging and backup procedures.

15 Logging can only inform you about bad things that have already happened
in the event of a system breach or failure, system administrators can more quickly identify what happened key is to ensure you capture the correct data and then appropriately monitor and analyze this data information can be generated by the system, network and applications range of data acquired should be determined during the system planning stage generates significant volumes of information and it is important that sufficient space is allocated for them automated analysis is preferred Logging [NIST08] notes that “logging is a cornerstone of a sound security posture.” Logging is a reactive control that can only inform you about bad things that have already happened. But effective logging helps ensure that in the event of a system breach or failure, system administrators can more quickly and accurately identify what happened and thus most effectively focus their remediation and recovery efforts. The key is to ensure you capture the correct data in the logs, and are then able to appropriately monitor and analyze this data. Logging information can be generated by the system, network and applications. The range of logging data acquired should be determined during the system planning stage, as it depends on the security requirements and information sensitivity of the server. Logging can generate significant volumes of information. It is important that sufficient space is allocated for them. A suitable automatic log rotation and archive system should also be configured to assist in managing the overall size of the logging information. Manual analysis of logs is tedious and is not a reliable means of detecting adverse events. Rather, some form of automated analysis is preferred, as it is more likely to identify abnormal activity. We discuss the process of logging further in Chapter 18 .
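
As a small example of the automated analysis preferred above, the following Python sketch scans a syslog-style authentication log and counts failed SSH password attempts per source address. The log path and message format are typical of many Linux systems but vary, so treat them as assumptions to be adapted locally.

# Sketch: count failed SSH password attempts per source address in a
# syslog-style auth log. Path and message format are typical but not universal.
import re
from collections import Counter

LOG_PATH = "/var/log/auth.log"      # e.g. /var/log/secure on Red Hat-style systems
PATTERN = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")

counts = Counter()
with open(LOG_PATH, errors="replace") as log:
    for line in log:
        match = PATTERN.search(line)
        if match:
            counts[match.group(1)] += 1

for addr, hits in counts.most_common(10):
    print(f"{hits:6d} failed logins from {addr}")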

16 Data Backup and Archive
performing regular backups of data is a critical control that assists with maintaining the integrity of the system and user data may be legal or operational requirements for the retention of data backup the process of making copies of data at regular intervals archive the process of retaining copies of data over extended periods of time in order to meet legal and operational requirements to access past data needs and policy relating to backup and archive should be determined during the system planning stage kept online or offline stored locally or transported to a remote site trade-offs include ease of implementation and cost versus greater security and robustness against different threats Performing regular backups of data on a system is another critical control that assists with maintaining the integrity of the system and user data. There are many reasons why data can be lost from a system, including hardware or software failures, or accidental or deliberate corruption. There may also be legal or operational requirements for the retention of data. Backup is the process of making copies of data at regular intervals, allowing the recovery of lost or corrupted data over relatively short time periods of a few hours to some weeks. Archive is the process of retaining copies of data over extended periods of time, being months or years, in order to meet legal and operational requirements to access past data. These processes are often linked and managed together, although they do address distinct needs. The needs and policy relating to backup and archive should be determined during the system planning stage. Key decisions include whether the backup copies are kept online or offline, and whether copies are stored locally or transported to a remote site. The trade-offs include ease of implementation and cost versus greater security and robustness against different threats. A good example of the consequences of poor choices here was seen in an attack on an Australian hosting provider. The attackers destroyed not only the live copies of thousands of customers’ sites, but also all of the online backup copies. As a result, many customers who had not kept their own backup copies lost all of their site content and data, with serious consequences for many of them, and for the hosting provider as well. In other examples, many organizations that only retained onsite backups have lost all their data as a result of fire or flooding in their IT center. These risks must be appropriately evaluated.
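
The following Python sketch shows the simplest form of backup: a timestamped compressed archive of selected directories, built with the standard tarfile module. The source paths and destination are examples; a real backup and archive regime also needs rotation, off-site or offline copies, and regular restore testing, for the reasons discussed above.

# Sketch: a simple timestamped backup of selected directories using the
# standard tarfile module. Paths are illustrative examples only.
import datetime
import tarfile

SOURCES = ["/etc", "/home"]                     # example directories to back up
STAMP = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
ARCHIVE = f"/backup/system-{STAMP}.tar.gz"      # hypothetical backup location

with tarfile.open(ARCHIVE, "w:gz") as archive:
    for path in SOURCES:
        archive.add(path)                       # recurses into directories
print("Wrote", ARCHIVE)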

17 Linux/Unix Security patch management
keeping security patches up to date is a widely recognized and critical control for maintaining security application and service configuration most commonly implemented using separate text files for each application and service generally located either in the /etc directory or in the installation tree for a specific application individual user configurations that can override the system defaults are located in hidden “dot” files in each user’s home directory most important changes needed to improve system security are to disable services and applications that are not required Ensuring that system and application code is kept up to date with security patches is a widely recognized and critical control for maintaining security. Modern Unix and Linux distributions typically include tools for automatically downloading and installing software updates, including security updates, which can minimize the time a system is vulnerable to known vulnerabilities for which patches exist. For example, Red Hat, Fedora, and CentOS include up2date or yum; SuSE includes yast; and Debian uses apt-get, though you must run it as a cron job for automatic updates. It is important to configure whichever update tool is provided on the distribution in use, to install at least critical security patches in a timely manner. As noted earlier, change-controlled systems should not run automatic updates, because they may possibly introduce instability. Such systems should validate all patches on test systems before deploying them to production systems. Configuration of applications and services on Unix and Linux systems is most commonly implemented using separate text files for each application and service. System-wide configuration details are generally located either in the /etc directory or in the installation tree for a specific application. Where appropriate, individual user configurations that can override the system defaults are located in hidden “dot” files in each user’s home directory. The name, format, and usage of these files are very much dependent on the particular system version and applications in use. Hence the systems administrators responsible for the secure configuration of such a system must be suitably trained and familiar with them. Traditionally, these files were individually edited using a text editor, with any changes made taking effect either when the system was next rebooted or when the relevant process was sent a signal indicating that it should reload its configuration settings. Current systems often provide a GUI interface to these configuration files to ease management for novice administrators. Using such a manager may be appropriate for small sites with a limited number of systems. Organizations with larger numbers of systems may instead employ some form of centralized management, with a central repository of critical configuration files that can be automatically customized and distributed to the systems they manage. The most important changes needed to improve system security are to disable services, especially remotely accessible services, and applications that are not required, and to then ensure that applications and services that are needed are appropriately configured, following the relevant security guidance for each. We provide further details on this later in the chapter.
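
A script can also be used to check whether patches are pending, for example as part of regular monitoring. The following Python sketch does this for a Red Hat-style system by calling yum check-update, which by convention exits with status 100 when updates are available; other distributions would use their own tool (apt, zypper, and so on).

# Sketch: query the package manager for pending updates on a Red Hat-style
# system. "yum check-update" conventionally exits with status 100 when
# updates are available, 0 when the system is current, and 1 on error.
import subprocess

result = subprocess.run(["yum", "check-update", "--quiet"],
                        capture_output=True, text=True)
if result.returncode == 100:
    print("Updates available:")
    print(result.stdout)
elif result.returncode == 0:
    print("System is up to date")
else:
    print("Check failed:", result.stderr)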

18 Linux/Unix Security users, groups, and permissions
access is specified as granting read, write, and execute permissions to each of owner, group, and others for each resource guides recommend changing the access permissions for critical directories and files local exploit software vulnerability that can be exploited by an attacker to gain elevated privileges remote exploit software vulnerability in a network server that could be triggered by a remote attacker As we describe in Sections 4.5 and 25.3, Unix and Linux systems implement discretionary access control to all file system resources. These include not only files and directories but devices, processes, memory, and indeed most system resources. Access is specified as granting read, write, and execute permissions to each of owner, group, and others, for each resource, as shown in the accompanying figure. These are set using the chmod command. Some systems also support extended file attributes with access control lists that provide more flexibility, by specifying these permissions for each entry in a list of users and groups. These extended access rights are typically set and displayed using the getfacl and setfacl commands. These commands can also be used to specify set user or set group permissions on the resource. Information on user accounts and group membership is traditionally stored in the /etc/passwd and /etc/group files, though modern systems also have the ability to import these details from external repositories queried using LDAP or NIS, for example. These sources of information, and indeed of any associated authentication credentials, are specified in the PAM (pluggable authentication module) configuration for the system, often using text files in the /etc/pam.d directory. In order to partition access to information and resources on the system, users need to be assigned to appropriate groups granting them any required access. The number and assignments to groups should be decided during the system security planning process, and then configured in the appropriate information repository, whether locally using the configuration files in /etc, or on some centralized database. At this time, any default or generic users supplied with the system should be checked, and removed if not required. Other accounts that are required, but are not associated with a user that needs to log in, should have login capability disabled, and any associated password or authentication credential removed. Guides to hardening Unix and Linux systems also often recommend changing the access permissions for critical directories and files, in order to further limit access to them. Programs that set user (setuid) to root or set group (setgid) to a privileged group are key targets for attackers. As we detail in Sections 4.5 and 25.3, such programs execute with superuser rights, or with access to resources belonging to the privileged group, no matter which user executes them. A software vulnerability in such a program can potentially be exploited by an attacker to gain these elevated privileges. This is known as a local exploit. A software vulnerability in a network server could be triggered by a remote attacker. This is known as a remote exploit. It is widely accepted that the number and size of setuid root programs in particular should be minimized. They cannot be eliminated, as superuser privileges are required to access some resources on the system. The programs that manage user login, and allow network services to bind to privileged ports, are examples.
However, other programs that were once setuid root for programmer convenience can function just as well if made setgid to a suitable privileged group that has the necessary access to some resource. Programs to display system state, or deliver mail, have been modified in this way. System hardening guides may recommend further changes, and indeed the removal of some such programs that are not required on a particular system.
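
The following Python sketch supports this kind of review by listing the setuid and setgid executables under selected directories so they can be checked against the hardening guide for the system. The starting directories are examples; a full audit would cover all local file systems.

# Sketch: list set-user-ID and set-group-ID executables under selected
# directories for review. The starting points are examples only.
import os
import stat

def special_bits(roots=("/usr/bin", "/usr/sbin")):
    for root in roots:
        for dirpath, dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    mode = os.lstat(path).st_mode
                except OSError:
                    continue
                if mode & (stat.S_ISUID | stat.S_ISGID):
                    kind = "setuid" if mode & stat.S_ISUID else "setgid"
                    yield kind, path

for kind, path in special_bits():
    print(f"{kind}: {path}")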

19 Linux/Unix Security remote access controls logging and log rotation
several host firewall programs may be used most systems provide an administrative utility to select which services will be permitted to access the system logging and log rotation should not assume that the default setting is necessarily appropriate Given that remote exploits are of concern, it is important to limit access to only those services required. This function may be provided by a perimeter firewall, as we discussed in Chapter 9. However, host-based firewall or network access control mechanisms may provide additional defenses. Unix and Linux systems support several alternatives for this. The TCP Wrappers library and tcpd daemon provide one mechanism that network servers may use. Lightly loaded services may be “wrapped” using tcpd, which listens for connection requests on their behalf. It checks that any request is permitted by configured policy before accepting it and invoking the server program to handle it. Requests that are rejected are logged. More complex and heavily loaded servers incorporate this functionality into their own connection management code, using the TCP Wrappers library, and the same policy configuration files. These files are /etc/hosts.allow and /etc/hosts.deny, which should be set as policy requires. There are several host firewall programs that may be used. Linux systems primarily now use the iptables program to configure the netfilter kernel module. This provides comprehensive, though complex, stateful packet filtering, monitoring, and modification capabilities. BSD-based systems (including Mac OS X) typically use the ipfw program with similar, though less comprehensive, capabilities. Most systems provide an administrative utility to generate common configurations and to select which services will be permitted to access the system. These should be used unless there are non-standard requirements, given the skill and knowledge needed to edit these configuration files directly. Most applications can be configured to log with levels of detail ranging from “debugging” (maximum detail) to “none.” Some middle setting is usually the best choice, but you should not assume that the default setting is necessarily appropriate. In addition, many applications allow you to specify either a dedicated file to write application event data to or a syslog facility to use when writing log data to /dev/log (see Section 25.5). If you wish to handle system logs in a consistent, centralized manner, it’s usually preferable for applications to send their log data to /dev/log. Note, however, that logrotate (also discussed in Section 25.5) can be configured to rotate any logs on the system, whether written by syslogd, Syslog-NG, or individual applications.

20 Linux/Unix Security chroot jail
restricts the server’s view of the file system to just a specified portion uses chroot system call to confine a process by mapping the root of the filesystem to some other directory file directories outside the chroot jail aren’t visible or reachable main disadvantage is added complexity Some network accessible services do not require access to the full file-system, but rather only need a limited set of data files and directories for their operation. FTP is a common example of such a service. It provides the ability to download files from, and upload files to, a specified directory tree. If such a server were compromised and had access to the entire system, an attacker could potentially access and compromise data elsewhere. Unix and Linux systems provide a mechanism to run such services in a chroot jail, which restricts the server’s view of the file system to just a specified portion. This is done using the chroot system call that confines a process to some subset of the file system by mapping the root of the filesystem “/” to some other directory (e.g., /srv/ftp/public). To the “chrooted” server, everything in this chroot jail appears to actually be in / (e.g., the “real” directory /srv/ftp/public/etc/myconfigfile appears as /etc/myconfigfile in the chroot jail). Files in directories outside the chroot jail (e.g., /srv/www or /etc) aren’t visible or reachable at all. Chrooting therefore helps contain the effects of a given server being compromised or hijacked. The main disadvantage of this method is added complexity: a number of files (including all executable libraries used by the server), directories, and devices that are needed must be copied into the chroot jail. Determining just what needs to go into the jail for the server to work properly can be tricky, though detailed procedures for chrooting many different applications are available. Troubleshooting a chrooted application can also be difficult. Even if an application explicitly supports this feature, it may behave in unexpected ways when run chrooted. Note also that if the chrooted process runs as root, it can “break out” of the chroot jail with little difficulty. Still, the advantages usually far outweigh the disadvantages of chrooting network services. The system hardening guides such as those provided by the “NSA—Security Configuration Guides” include security checklists for a number of Unix and Linux distributions that may be followed. There are also a number of commercial and open-source tools available to perform system security scanning and vulnerability testing. One of the best known is “Nessus.” This was originally an open-source tool, which was commercialized in 2005, though some limited free-use versions are available. “Tripwire” is a well-known file integrity checking tool that maintains a database of cryptographic hashes of monitored files, and scans to detect any changes, whether as a result of malicious attack, or simply accidental or incorrectly managed update. This again was originally an open-source tool, which now has both commercial and free variants available. The “Nmap” network scanner is another well-known and widely deployed assessment tool that focuses on identifying and profiling hosts on the target network, and the network services they offer.
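
The following Python sketch shows the basic sequence a service might use: confine itself with the chroot system call and then drop root privileges, in that order, since chroot itself requires root and a process that keeps root can escape the jail. The jail path and the unprivileged UID and GID are placeholders, and every file the process needs must already have been copied into the jail.

# Sketch: confine a process to a chroot jail, then drop root privileges.
# JAIL, UNPRIV_UID, and UNPRIV_GID are placeholders for local values.
import os

JAIL = "/srv/ftp/public"     # directory that becomes "/" for this process
UNPRIV_UID = 1001            # hypothetical unprivileged account
UNPRIV_GID = 1001

os.chroot(JAIL)              # requires root; remaps this process's filesystem root
os.chdir("/")                # ensure the working directory is inside the jail
os.setgroups([])             # clear supplementary groups before dropping privileges
os.setgid(UNPRIV_GID)        # drop group privileges first...
os.setuid(UNPRIV_UID)        # ...then user privileges, so the drop cannot be undone

# From here on, the process sees only the contents of the jail and cannot
# regain root, which limits the damage if it is later compromised.
print(os.listdir("/"))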

21 Windows Security users administration and access controls
patch management “Windows Update” and “Windows Server Update Service” assist with regular maintenance and should be used third party applications also provide automatic update support users administration and access controls systems implement discretionary access controls resources Vista and later systems include mandatory integrity controls objects are labeled as being of low, medium, high, or system integrity level system ensures the subject’s integrity is equal or higher than the object’s level implements a form of the Biba Integrity model We now consider some specific issues with the secure installation, configuration, and management of Microsoft Windows systems. These systems have for many years formed a significant portion of all “general purpose” system installations. Hence, they have been specifically targeted by attackers, and consequently security countermeasures are needed to deal with these challenges. The process of providing appropriate levels of security still follows the general outline we describe in this chapter. Beyond the general guidance in this section, we provide more detailed discussion of Windows security mechanisms later in Chapter 26. Again, there is a large range of resources available to assist administrators of these systems, including reports such as [SYMA07], online resources such as the “Microsoft Security Tools & Checklists,” and specific system hardening guides such as those provided by the “NSA—Security Configuration Guides.” The “Windows Update” service and the “Windows Server Update Services” assist with the regular maintenance of Microsoft software, and should be configured and used. Many other third-party applications also provide automatic update support, and these should be enabled for selected applications. Users and groups in Windows systems are defined with a Security ID (SID). This information may be stored and used locally, on a single system, in the Security Account Manager (SAM). It may also be centrally managed for a group of systems belonging to a domain, with the information supplied by a central Active Directory (AD) system using the LDAP protocol. Most organizations with multiple systems will manage them using domains. These systems can also enforce common policy on users on any system in the domain. We further explore the Windows security architecture in Chapter 26. Windows systems implement discretionary access controls to system resources such as files, shared memory, and named pipes. The access control list has a number of entries that may grant or deny access rights to a specific SID, which may be for an individual user or for some group of users. Windows Vista and later systems also include mandatory integrity controls. These label all objects, such as processes and files, and all users, as being of low, medium, high, or system integrity level. Then whenever data is written to an object, the system first ensures that the subject’s integrity is equal or higher than the object’s level. This implements a form of the Biba Integrity model we discuss in Section 13.2 that specifically targets the issue of untrusted remote code executing in, for example, Windows Internet Explorer, trying to modify local resources.
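
The write check behind these mandatory integrity controls can be summarized in a few lines. The following Python sketch models the rule described above; the integrity labels and their ordering are an illustrative model, not the actual Windows implementation.

# Sketch: the "no write up" rule of a Biba-style integrity model, as used by
# Windows mandatory integrity control. Labels and ordering are illustrative.
LEVELS = {"low": 1, "medium": 2, "high": 3, "system": 4}

def write_allowed(subject_level: str, object_level: str) -> bool:
    """Allow the write only if the subject's integrity level is at or above the object's."""
    return LEVELS[subject_level] >= LEVELS[object_level]

# An Internet-facing process labelled "low" cannot modify a "medium" user file,
# while an ordinary "medium" process can.
print(write_allowed("low", "medium"))    # False
print(write_allowed("medium", "medium")) # True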

22 Windows Security Users Administration and Access Controls
Windows systems also define privileges system wide and granted to user accounts combination of share and NTFS permissions may be used to provide additional security and granularity when accessing files on a shared resource User Account Control (UAC) provided in Vista and later systems assists with ensuring users with administrative rights only use them when required, otherwise accesses the system as a normal user Low Privilege Service Accounts used for long-lived service processes such as file, print, and DNS services Windows systems also define privileges, which are system wide and granted to user accounts. Examples of privileges include the ability to backup the computer (which requires overriding the normal access controls to obtain a complete backup), or the ability to change the system time. Some privileges are considered dangerous, as an attacker may use them to damage the system. Hence they must be granted with care. Others are regarded as benign, and may be granted to many or all user accounts. As with any system, hardening the system configuration can include further limiting the rights and privileges granted to users and groups on the system. As the access control list gives deny entries greater precedence, you can set an explicit deny permission to prevent unauthorized access to some resource, even if the user is a member of a group that otherwise grants access. When accessing files on a shared resource, a combination of share and NTFS permissions may be used to provide additional security and granularity. For example, you can grant full control to a share, but read-only access to the files within it. If access-based enumeration is enabled on shared resources, it can automatically hide any objects that a user is not permitted to read. This is useful with shared folders containing many users’ home directories, for example. You should also ensure users with administrative rights only use them when required, and otherwise access the system as a normal user. The User Account Control (UAC) provided in Vista and later systems assists with this requirement. These systems also provide Low Privilege Service Accounts that may be used for long-lived service processes, such as file, print, and DNS services that do not require elevated privileges.

23 Windows Security application and service configuration
much of the configuration information is centralized in the Registry forms a database of keys and values that may be queried and interpreted by applications registry keys can be directly modified using the “Registry Editor” more useful for making bulk changes Unlike Unix and Linux systems, much of the configuration information in Windows systems is centralized in the Registry, which forms a database of keys and values that may be queried and interpreted by applications on these systems. Changes to these values can be made within specific applications, setting preferences in the application that are then saved in the registry using the appropriate keys and values. This approach hides the detailed representation from the administrator. Alternatively, the registry keys can be directly modified using the “Registry Editor.” This approach is more useful for making bulk changes, such as those recommended in hardening guides. These changes may also be recorded in a central repository, and pushed out whenever a user logs in to a system within a network domain. Given the predominance of malware that targets Windows systems, it is essential that suitable anti-virus, anti-spyware, personal firewall, and other malware and attack detection and handling software packages are installed and configured on such systems. This is clearly needed for network connected systems, as shown by the high-incidence numbers in reports such as [SYMA11]. However, as the Stuxnet attacks in 2010 show, even isolated systems updated using removable media are vulnerable, and thus must also be protected.
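
Registry values can also be read programmatically, which is how many configuration-checking scripts work. The following Python sketch uses the standard winreg module (available in Windows builds of Python) to read one well-known value; the key and value names here are simply examples.

# Sketch: read a value from the Windows Registry with the standard winreg
# module, as a configuration-audit script might. Key and value names are examples.
import winreg

key_path = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion"
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
    product, value_type = winreg.QueryValueEx(key, "ProductName")
print("Installed product:", product)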

24 Windows Security other security controls
essential that anti-virus, anti-spyware, personal firewall, and other malware and attack detection and handling software packages are installed and configured current generation Windows systems include basic firewall and malware countermeasure capabilities important to ensure the set of products in use are compatible Windows systems also support a range of cryptographic functions: encrypting files and directories using the Encrypting File System (EFS) full-disk encryption with AES using BitLocker “Microsoft Baseline Security Analyzer” free, easy to use tool that checks for compliance with Microsoft’s security recommendations Current generation Windows systems include some basic firewall and malware countermeasure capabilities, which should certainly be used at a minimum. However, many organizations find that these should be augmented with one or more of the many commercial products available. One issue of concern is undesirable interactions between anti-virus and other products from multiple vendors. Care is needed when planning and installing such products to identify possible adverse interactions, and to ensure the set of products in use are compatible with each other. Windows systems also support a range of cryptographic functions that may be used where desirable. These include support for encrypting files and directories using the Encrypting File System (EFS), and for full-disk encryption with AES using BitLocker. The system hardening guides such as those provided by the “NSA—Security Configuration Guides” also include security checklists for various versions of Windows. There are also a number of commercial and open-source tools available to perform system security scanning and vulnerability testing of Windows systems. The “Microsoft Baseline Security Analyzer” is a simple, free, easy-to-use tool that aims to help small- to medium-sized businesses improve the security of their systems by checking for compliance with Microsoft’s security recommendations. Larger organizations are likely better served using one of the larger, centralized, commercial security analysis suites available.

25 Virtualization a technology that provides an abstraction of the resources used by some software which runs in a simulated environment called a virtual machine (VM) benefits include better efficiency in the use of the physical system resources provides support for multiple distinct operating systems and associated applications on one physical system raises additional security concerns Virtualization refers to a technology that provides an abstraction of the computing resources used by some software, which thus runs in a simulated environment called a virtual machine (VM). There are many types of virtualization; however, in this section we are most interested in full virtualization. This allows multiple full operating system instances to execute on virtual hardware, supported by a hypervisor that manages access to the actual physical hardware resources. Benefits arising from using virtualization include better efficiency in the use of the physical system resources than is typically seen using a single operating system instance. This is particularly evident in the provision of virtualized server systems. Virtualization can also provide support for multiple distinct operating systems and associated applications on the one physical system. This is more commonly seen on client systems. There are a number of additional security concerns raised in virtualized systems, as a consequence both of the multiple operating systems executing side by side and of the presence of the virtualized environment and hypervisor as a layer below the operating system kernels and the security services they provide. [CLEE09] presents a survey of some of the security issues arising from such a use of virtualization, a number of which we will discuss further.

26 Virtualization Alternatives
application virtualization allows applications written for one environment to execute on some other operating system full virtualization multiple full operating system instances execute in parallel virtual machine monitor (VMM) hypervisor coordinates access between each of the guests and the actual physical hardware resources There are many forms of creating a simulated, virtualized environment. These include application virtualization, as provided by the Java Virtual Machine environment. This allows applications written for one environment to execute on some other operating system. It also includes full virtualization, in which multiple full operating system instances execute in parallel. Each of these guest operating systems, along with its own set of applications, executes in its own VM on virtual hardware. These guest OSs are managed by a hypervisor, or virtual machine monitor (VMM), that coordinates access between each of the guests and the actual physical hardware resources, such as CPU, memory, disk, network, and other attached devices. The hypervisor provides a hardware interface similar to that seen by operating systems executing directly on the actual hardware. As a consequence, little if any modification is needed to the guest OSs and their applications. Recent generations of CPUs provide special instructions that improve the efficiency of hypervisor operation.

27 Native Virtualization Security Layers
Full virtualization systems may be further divided into native virtualization systems, in which the hypervisor executes directly on the underlying hardware, as we show in Figure 12.2, and hosted virtualization systems, in which the hypervisor executes as just another application on a host OS that is running on the underlying hardware, as we show in the next figure. Native virtualization systems are typically seen in servers, with the goal of improving the execution efficiency of the hardware. They are arguably also more secure, as they have fewer additional layers than the alternative hosted approach.

28 Hosted Virtualization Security Layers
Hosted virtualization systems are more common in clients, where they run alongside other applications on the host OS, and are used to support applications for alternate operating system versions or types. As this approach adds additional layers, with the host OS under, and other host applications beside, the hypervisor, it may result in increased security concerns. In virtualized systems, the available hardware resources must be appropriately shared between the various guest OSs. These include CPU, memory, disk, network, and other attached devices. CPU and memory are generally partitioned between these, and scheduled as required. Disk storage may be partitioned, with each guest having exclusive use of some disk resources. Alternatively, a “virtual disk” may be created for each guest, which appears to it as a physical disk with a full file-system, but is viewed externally as a single “disk image” file on the underlying file-system. Attached devices such as optical disks or USB devices are generally allocated to a single guest OS at a time. Several alternatives exist for providing network access. The guest OS may have direct access to distinct network interface cards on the system; the hypervisor may mediate access to shared interfaces; or the hypervisor may implement virtual network interface cards for each guest, routing traffic between guests as required. This last approach is quite common, and arguably the most efficient, since traffic between guests does not need to be relayed via external network links. It does have security consequences in that this traffic is not subject to monitoring by probes attached to external networks, such as we discussed in Chapter 9. Hence alternative, host-based probes would be needed in such a system if such monitoring is required.

29 Virtualization Security Issues
security concerns include: guest OS isolation ensuring that programs executing within a guest OS may only access and use the resources allocated to it guest OS monitoring by the hypervisor which has privileged access to the programs and data in each guest OS virtualized environment security particularly image and snapshot management which attackers may attempt to view or modify [CLEE09] and [NIST11] both detail a number of security concerns that result from the use of virtualized systems, including: • guest OS isolation, ensuring that programs executing within a guest OS may only access and use the resources allocated to it, and not covertly interact with programs or data either in other guest OSs or in the hypervisor. • guest OS monitoring by the hypervisor, which has privileged access to the programs and data in each guest OS, and must be trusted as secure from subversion and compromised use of this access. • virtualized environment security, particularly as regards image and snapshot management, which attackers may attempt to view or modify. These security concerns may be regarded as an extension of the concerns we have already discussed with securing operating systems and applications. If a particular operating system and application configuration is vulnerable when running directly on hardware in some context, it will most likely also be vulnerable when running in a virtualized environment. And should that system actually be compromised, it would be at least as capable of attacking other nearby systems, whether they are also executing directly on hardware or running as other guests in a virtualized environment. The use of a virtualized environment may improve security by isolating network traffic between guests more thoroughly than would be the case when such systems run natively, and through the ability of the hypervisor to transparently monitor activity on all guest OSs. However, the presence of the virtualized environment and the hypervisor may reduce security if vulnerabilities exist within it which attackers may exploit. Such vulnerabilities could allow programs executing in a guest to covertly access the hypervisor, and hence other guest OS resources. This is known as VM escape, and is a significant concern. Virtualized systems also often provide support for suspending an executing guest OS in a snapshot, saving that image, and then restarting execution at a later time, possibly even on another system. If an attacker can view or modify this image, they can compromise the security of the data and programs contained within it. Thus the use of virtualization adds additional layers of concern, as we have previously noted. Securing virtualized systems means extending the security process to secure and harden these additional layers. In addition to securing each guest operating system and applications, the virtualized environment and the hypervisor must also be secured.

30 Hypervisor Security should be
secured using a process similar to securing an operating system installed in an isolated environment configured so that it is updated automatically monitored for any signs of compromise accessed only by authorized administrators may support both local and remote administration so must be configured appropriately remote administration access should be considered and secured in the design of any network firewall and IDS capability in use access to VM images and snapshots must be carefully controlled The hypervisor should be secured using a process similar to that used to secure an operating system. That is, it should be installed in an isolated environment, from known clean media, and updated to the latest patch level in order to minimize the number of vulnerabilities that may be present. It should then be configured so that it is updated automatically, any unused services are disabled or removed, unused hardware is disconnected, appropriate introspection capabilities are used with the guest OSs, and the hypervisor is monitored for any signs of compromise. Access to the hypervisor should be limited to authorized administrators only, since these users would be capable of accessing and monitoring activity in any of the guest OSs. The hypervisor may support both local and remote administration. This must be configured appropriately, with suitable authentication and encryption mechanisms used, particularly when using remote administration. Remote administration access should also be considered and secured in the design of any network firewall and IDS capability in use. Ideally such administration traffic should use a separate network, with very limited, if any, access provided from outside the organization.

31 Linux Security Lecture slides prepared by Dr Lawrie Brown for “Computer Security: Principles and Practice”, 1/e, by William Stallings and Lawrie Brown, Chapter 23 “Linux Security”.

32 Linux Security Linux has evolved into one of the most popular and versatile operating systems many features mean broad attack surface can create highly secure Linux systems will review: Discretionary Access Controls typical vulnerabilities and exploits in Linux best practices for mitigating those threats new improvements to Linux security model Linux’s traditional security model is: people or processes with “root” privileges can do anything other accounts can do much less hence attackers want to get root privileges can run robust, secure Linux systems crux of problem is use of Discretionary Access Controls (DAC) Since Linus Torvalds created Linux in 1991, more or less on a whim, Linux has evolved into one of the world’s most popular and versatile operating systems. Linux is free and open-source, and is available in a wide variety of “distributions” targeted at almost every usage scenario imaginable. Like other general-purpose operating systems, Linux’s wide range of features presents a broad attack surface, but by leveraging native Linux security controls, carefully configuring Linux applications, and deploying certain add-on security packages, you can create highly secure Linux systems. The study and practice of Linux security therefore has wide-ranging uses and ramifications. New exploits against popular Linux applications affect many thousands of users around the world. New Linux security tools and techniques have just as profound an impact, albeit a much more constructive one! In this chapter we’ll examine the Discretionary Access Control-based security model and architecture common to all Linux distributions and to most other Unix-derived and Unix-like operating systems (and also, to a surprising degree, to Microsoft Windows). We’ll discuss the strengths and weaknesses of this ubiquitous model; typical vulnerabilities and exploits in Linux; best practices for mitigating those threats; and improvements to the Linux security model that are only slowly gaining popularity, but that hold the promise to correct decades-old shortcomings in this platform.

33 Linux Security Transactions
In the Linux DAC system, there are users, each of which belongs to one or more groups; and there are also objects: files and directories. Users read, write, and execute these objects, based on the objects’ permissions, of which each object has three sets: one each defining the permissions for the object’s user-owner, group-owner, and “other” (everyone else). These permissions are enforced by the Linux kernel, the “brain” of the operating system, as is illustrated in Figure 23.1 from the text. When running, a process normally “runs as” (with the identity of) the user and group of the person or process that executed it. Since processes “act as” users, if a running process attempts to read, write, or execute some other object, the kernel will first evaluate that object’s permissions against the process’ user and group identity, just as though the process were an actual human user. This basic transaction, wherein a subject (user or process) attempts some action (read, write, execute) against some object (file, directory, special file), is the fundamental operation that the Linux DAC governs. Whoever owns an object can set or change its permissions. Herein lies the Linux DAC model’s real weakness: the system superuser account, called “root,” has the ability to both take ownership and change the permissions of all objects in the system. And as it happens, it’s not uncommon for both processes and administrator-users to routinely run with root privileges, in ways that provide attackers with opportunities to hijack those privileges.

34 File System Security in Linux everything as a file
e.g. memory, device-drivers, named pipes, and other system resources hence why filesystem security is so important I/O to devices is via a “special” file e.g. /dev/cdrom have other special files like named pipes a conduit between processes / programs Linux treats everything as a file, including memory, device-drivers, named pipes, and other system resources. A device like a CDROM is a file to the Linux kernel: the "special" device-file /dev/cdrom (which is usually a symbolic link to /dev/hdb or some other special file). To read data from or write data to the CDROM drive, the Linux kernel actually reads from and writes to this special file. Other special files, such as named pipes, act as input/output (I/O) "conduits," allowing one process or program to pass data to another. Another special file commonly seen on Linux systems is /dev/urandom: when a program reads this file, the kernel returns random characters from its random-number generator. So in Linux/Unix, nearly everything is represented by a file. Once you understand this, it's much easier to understand why filesystem security is important, and how it works.

35 Users and Groups a user-account (user) a group-account (group)
represents someone capable of using files associated both with humans and processes a group-account (group) is a list of user-accounts users have a main group may also belong to other groups users & groups are not files Actually, there are two things on a Unix system that aren't represented by files: user-accounts and group-accounts (in short users and groups). Various files contain information about a system's users and groups, but none actually represents them. A user-account represents someone or something capable of using files. As we saw in the previous section, a user-account can be associated both with actual human beings and with processes. The standard Linux user-account "lp," for example, is used by the Line Printer Daemon (lpd): the lpd program runs as the user lp. A group-account is simply a list of user-accounts. Each user-account is defined with a main group membership, but may in fact belong to as many groups as you want or need it to. For example, the user "maestro" may have a main group membership in "conductors," and also belong to the group "pianists."

36 Users and Groups user's details are kept in /etc/passwd
maestro:x:200:100:Maestro Edward Hizzersands:/home/maestro:/bin/bash additional group details in /etc/group conductors:x:100: pianists:x:102:maestro,volodya use useradd, usermod, userdel to alter A user's main group membership is specified in the user's entry in /etc/passwd; additional groups are specified in /etc/group by adding the username to the end of the entry for each group the user needs to belong to. Listing 23.1 shows user "maestro"'s entry in the file /etc/passwd. We can see that the first field contains the name of the user-account, "maestro"; the second field ("x") is a placeholder for maestro's password (which is actually stored in /etc/shadow); the third field shows maestro's numeric userid (or "uid," in this case "200"); and the fourth field shows the numeric groupid (or "gid," in this case "100") of maestro's main group membership. The remaining fields specify a comment, maestro's home directory, and maestro's default login shell. Listing 23.2 shows part of the corresponding /etc/group file. Each line simply contains a group-name, a group-password (usually unused -- "x" is a placeholder), a numeric group-id (gid), and a comma-delimited list of users with "secondary" memberships in the group. Thus we see that the group "conductors" has a gid of "100", which corresponds to the gid specified as maestro's main group in Listing 23.1; and also that the group "pianists" includes the user "maestro" (plus another named "volodya") as a secondary member. The simplest way to modify /etc/passwd and /etc/group in order to create, modify, and delete user-accounts is via the commands useradd, usermod, and userdel, respectively. All three of these commands can be used to set and modify group-memberships.
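As a rough illustration of those commands (the group and account names follow the example above; exact option behavior can vary slightly between distributions):
groupadd conductors                     # create the primary group
groupadd pianists                       # create a secondary group
useradd -m -g conductors -G pianists -s /bin/bash maestro   # create maestro with a home directory, primary and secondary groups
usermod -aG drummers maestro            # later, append another secondary group without dropping existing ones
userdel -r maestro                      # remove the account and its home directory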

37 File Permissions files have two owners: a user & a group
each with its own set of permissions with a third set of permissions for other permissions are to read/write/execute in order user/group/other, cf. -rw-rw-r-- 1 maestro conductors Mar 25 01:38 baton.txt set using chmod command Each file on a Unix system (which, as we've seen, means "practically every single thing on a Unix system") has two owners: a user and a group, each with its own set of permissions that specify what the user or group may do with the file (read it, write to it or delete it, and execute it). A third set of permissions pertains to other, that is, user-accounts that don't own the file or belong to the group that owns it. Listing 23.3 shows a "long file-listing" for the file /home/maestro/baton.txt. Permissions are listed in the order "user-permissions, group-permissions, other-permissions." Thus we see that for the file shown in Listing 23.3 in the text, its user-owner ("maestro") may read and write/delete the file ("rw-"); its group-owner ("conductors") may also read and write/delete the file ("rw-"); but that other users (who are neither "maestro" nor members of "conductors") may only read the file. There's a third permission besides "read" and "write": "execute," denoted by "x" (when set). If maestro writes a shell script named "punish_bassoonists.sh", and if he sets its permissions to "-rwxrw-r--", then maestro will be able to execute his script by entering the name of the script at the command-line. If, however, he forgets to set the execute bit, he won't be able to run the script, even though he owns it. Permissions are usually set via the "chmod" command (short for "change mode").
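A brief sketch of typical chmod usage with the files from this example (the exact permissions chosen are illustrative):
chmod u+x punish_bassoonists.sh              # add execute permission for the owner so maestro can run his script
chmod u=rwx,g=rw,o=r punish_bassoonists.sh   # equivalent explicit form: -rwxrw-r--
chmod o-r baton.txt                          # remove read access for "other" users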

38 Directory Permissions
read = list contents write = create or delete files in directory execute = use anything in or change working directory to this directory e.g. $ chmod g+rx extreme_casseroles $ ls -ld extreme_casseroles drwxr-x--- 8 biff drummers 288 Mar 25 01:38 extreme_casseroles Directory-permissions work slightly differently than permissions on regular files. "Read" and "write" are similar; for directories these permissions translate to "list the directory's contents" and "create or delete files within the directory," respectively. "Execute" is a little less intuitive: for directories, "execute" translates to "use anything within or change working directory to this directory". That is, if a user or group has execute-permissions on a given directory, they can list that directory's contents, read that directory's files (assuming those individual files' own permissions include this), and change their working directory to that directory, as with the command "cd". If a user or group does not have execute-permissions on a given directory, they will be unable to list or read anything in it, regardless of the permissions set on the things inside. (Note that if you lack execute permissions on a directory but do have read permissions on it, and you try to list its contents with ls, you will receive an error message that, in fact, lists the directory's contents. But this doesn't work if you have neither read nor execute permissions on the directory.) Suppose our example system has a user named "biff" who belongs to the group "drummers." And suppose further that his home-directory contains a directory called "extreme_casseroles" that he wishes to share with his fellow percussionists. The listing above shows how biff might set that directory's permissions. With these permissions, only biff has the ability to create, change, or delete files inside extreme_casseroles. Other members of the group "drummers" may list its contents and cd to it. Everyone else on the system, however (except root, who is always all-powerful), is blocked from listing, reading, cd-ing, or doing anything else with the directory.

39 Sticky Bit originally used to lock file in memory
now used on directories to limit delete if set must own file or dir to delete other users cannot delete even if have write set using chmod command with +t flag, e.g. chmod +t extreme_casseroles directory listing includes t or T flag drwxrwx--T 8 biff drummers Mar 25 01:38 extreme_casseroles only apply to specific directory not child dirs In older Unix operating systems, the sticky bit was used to keep a file (program) in memory so it would load more quickly when invoked. On Linux, however, it serves a different function: when you set the sticky bit on a directory, it limits users' ability to delete things in that directory. That is, to delete a given file in the directory you must either own that file or own the directory, even if you belong to the group which owns the directory and group-write permissions are set on it. To set the sticky bit, use the chmod command with +t. In our example, this would be chmod +t extreme_casseroles If we set the sticky bit on extreme_casseroles and then do a long listing of the directory itself, using "ls -ld extreme_casseroles", we'll see: drwxrwx--T 8 biff drummers Mar 25 01:38 extreme_casseroles Note the "T" at the end of the permissions-string. We'd normally expect to see either "x" or "-" there, depending on whether the directory is "other-writable". "T" denotes that the directory is not "other-executable" but has the sticky bit set. A lower-case "t" would denote that the directory is other-executable and has the sticky bit set. The sticky bit only applies to the directory's first level downwards; it is not inherited by child directories.

40 SetUID and SetGID setuid bit means program "runs as" owner
no matter who executes it setgid bit means run as a member of the group which owns it again regardless of who executes it "run as" = "run with same privileges as" are very dangerous if set on file owned by root or other privileged account or group only used on executable files, not shell scripts setuid has no effect on directories. setgid does and causes any file created in a directory to inherit the directory's group useful if users belong to other groups and routinely create files to be shared with other members of those groups instead of manually changing its group Now we come to two of the most dangerous permissions-bits in the Unix world: setuid and setgid. If set on an executable binary file, the setuid bit causes that program to "run as" its owner, no matter who executes it. Similarly, the setgid bit, when set on an executable, causes that program to run as a member of the group which owns it, again regardless of who executes it. "Run as" means "to run with the same privileges as." IMPORTANT WARNING: setuid and setgid are very dangerous if set on any file owned by root or any other privileged account or group. The command "sudo" is a much better tool for delegating root's authority. Note that if you want a program to run setuid, that program must be group-executable or other-executable, for obvious reasons. Note also that the Linux kernel ignores the setuid and setgid bits on shell scripts; these bits only work on binary (compiled) executables. setgid works the same way, but with group-permissions: if you set the setgid bit on an executable file via the command "chmod g+s filename", and if the file is also "other-executable" (-r-xr-sr-x), then when that program is executed it will run with the group-ID of the file rather than of the user who executed it.
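A short sketch of how these bits are set and inspected (the file and directory names are illustrative):
chmod u+s someprogram                        # set the setuid bit (dangerous if the file is root-owned)
chmod g+s someprogram                        # set the setgid bit on an executable
chmod g+s /home/biff/extreme_casseroles      # on a directory, setgid makes new files inherit the directory's group
ls -l someprogram                            # an "s" in the user or group execute position shows the bit is set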

41 Numeric File Permissions
Internally, Linux uses numbers to represent permissions; only user programs display permissions as letters. The chmod command recognizes both mnemonic permission-modifiers ("u+rwx,go-w") and numeric modes. A numeric mode consists of four digits: as you read left-to-right, these represent special-permissions, user-permissions, group-permissions, and other-permissions. For example, 0700 translates to "no special permissions set, all user-permissions set, no group permissions set, no other-permissions set." Each permission has a numeric value, and the permissions in each digit-place are additive: the digit represents the sum of all permission-bits you wish to set. If, for example, user-permissions are set to "7", this represents 4 (the value for "read") plus 2 (the value for "write") plus 1 (the value for "execute"). As I just mentioned, the basic numeric values are 4 for read, 2 for write, and 1 for execute. Why no "3"? Because (a) these values represent bits in a binary stream and are therefore all powers of 2; and (b) this way, no two combinations of permissions have the same sum. Special permissions are as follows: 4 stands for setuid, 2 stands for setgid, and 1 stands for sticky-bit. For example, the numeric mode 3000 translates to "setgid set, sticky-bit set, no other permissions set" (which is a useless set of permissions). Figure 23.2 shows the permissions of "mycoolfile" set using the command: chmod 0644 mycoolfile
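A few further numeric examples, assuming files and directories with these names exist (the permission mappings themselves are standard):
chmod 0644 mycoolfile       # -rw-r--r--  owner read/write; group and other read-only
chmod 0750 myscript.sh      # -rwxr-x---  owner full access; group read/execute; other nothing
chmod 2775 shared_dir       # rwxrwsr-x   2 in the special digit sets setgid on the directory
chmod 1777 dropbox_dir      # rwxrwxrwt   1 in the special digit sets the sticky bit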

42 Kernel vs User Space Kernel space User space
refers to memory used by the Linux kernel and its loadable modules (e.g., device drivers) User space refers to memory used by all other processes since kernel enforces Linux DAC and security critical to isolate kernel from user so kernel space never swapped to disk only root may load and unload kernel modules It’s a little bit of an oversimplification to say that users, groups, files, and directories are all that matter in the Linux DAC: memory is important too. Therefore we should at least briefly discuss kernel space and user space. Kernel space refers to memory used by the Linux kernel and its loadable modules (e.g., device drivers). User space refers to memory used by all other processes. Since the kernel enforces the Linux DAC and, in real terms, dictates system reality, it’s extremely important to isolate kernel space from user space. For this reason, kernel space is never swapped to hard disk. It’s also the reason that only root may load and unload kernel modules. As we’re about to see, one of the worst things that can happen on a compromised Linux system is for an attacker to gain the ability to load kernel modules.

43 setuid root Vulnerabilities
a setuid root program runs as root no matter who executes it used to provide unprivileged users with access to privileged resources must be very carefully programmed if it can be exploited due to a software bug it may allow otherwise-unprivileged users to use it to wield unauthorized root privileges distributions now minimise setuid-root programs system attackers still scan for them! In this section we’ll discuss the most common weaknesses in Linux systems. As we discussed in the previous section, any program whose “setuid” permission-bit is set will run with the privileges of the user that owns it, rather than those of the process or user executing it. A setuid root program is a root-owned program with its setuid bit set, that is, a program that runs as root no matter who executes it. Running setuid-root is necessary for programs that need to be run by unprivileged users, yet must provide such users with access to privileged functions (for example, changing their password, which requires changes to protected system files). But such a program needs to have been very carefully programmed, with impeccable user-input validation, strict memory management, etc. That is, it needs to have been designed to be run setuid (or setgid) root. Even then, a root-owned program should only have its setuid bit set if absolutely necessary. The risk here is that if a setuid root program can be exploited or abused in some way (for example, via a buffer overflow vulnerability or race condition as we discuss in chapters 11 and 12), then otherwise-unprivileged users may be able to use that program to wield unauthorized root privileges, possibly including opening a root shell (a command-line session running with root privileges). Due to a history of abuse against setuid root programs, major Linux distributions no longer ship with unnecessary setuid-root programs. But system attackers still scan for them!
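One common audit step, sketched here with illustrative paths, is to search for setuid-root and setgid files so that unneeded ones can have the bit removed:
find / -xdev -user root -perm -4000 -type f -ls    # list setuid-root files on the local filesystem
find / -xdev -perm -2000 -type f -ls               # list setgid files
chmod u-s /path/to/unneeded_program                # strip the setuid bit from a program that does not need it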

44 Web Vulnerabilities a very broad category of vulnerabilities
because of ubiquity of world wide web have big and visible attack surfaces when written in scripting languages not as prone to classic buffer overflows can suffer from poor input-handling few “enabled-by-default” web applications but users install vulnerable web applications or write custom web applications having easily-identified and easily-exploited flaws This is a very broad category of vulnerabilities, many of which also fall into other categories in this list. It warrants its own category because of the ubiquity of the world wide web: there are few attack surfaces as big and visible as an Internet-facing web site. While web applications written in scripting languages such as PHP, Perl, and Java may not be as prone to classic buffer overflows (thanks to the additional layers of abstraction presented by those languages’ interpreters), they’re nonetheless prone to similar abuses of poor input-handling, including cross-site scripting, SQL code injection, and a plethora of other vulnerabilities, as we discuss in chapters 11 and 12. Nowadays, few Linux distributions ship with “enabled-by-default” web applications (such as the default cgi-scripts included with older versions of the Apache webserver). However, many users install web applications with known vulnerabilities, or write custom web applications having easily-identified and easily-exploited flaws.

45 Rootkits allow attacker to cover their tracks
if successfully installed before detection, all is very nearly lost originally collections of hacked commands hiding attacker’s files, directories, processes now use loadable kernel modules intercepting system calls in kernel-space hiding attacker from standard commands may be able to detect with chkrootkit generally have to wipe and rebuild system Rootkits allow an attacker to cover their tracks, and are typically installed after a root compromise: if a rootkit is successfully installed before detection, all is very nearly lost. Rootkits began as collections of “hacked replacements” for common Unix commands (ls, ps, etc.) that behaved like the legitimate commands they replaced, except for hiding an attacker’s files, directories and processes. In the Linux world, since the advent of loadable kernel modules (LKMs), rootkits have more frequently taken the form of LKMs. An LKM rootkit does its business (covering the tracks of attackers) in kernel-space, intercepting system calls pertaining to any user’s attempts to view the intruder’s resources. In this way, files, directories, and processes owned by an attacker are hidden even from a compromised system’s standard, un-tampered-with commands, including customized software. Besides operating at a lower, more global level, another advantage of the LKM rootkit over traditional rootkits is that system integrity-checking tools such as Tripwire won’t generate alerts from system commands being replaced. Luckily, even LKM rootkits do not always ensure complete invisibility for attackers. Many traditional and LKM rootkits can be detected with the freely available chkrootkit script. In general, however, if an attacker gets far enough to install an LKM rootkit, your system can be considered to be completely compromised; when and if you detect the breach (e.g., via a defaced website, missing data, suspicious network traffic, etc.), the only way to restore your system with any confidence of completely shutting out the intruder will be to erase its hard disk (or replace it, if you have the means and inclination to analyze the old one), re-install Linux, and apply all the latest software patches.

46 Linux Hardening - OS Installation
security begins with O/S installation especially what software is run since unused applications liable to be left in default, un-hardened and un-patched state generally should not run: X Window system, RPC services, R-services, inetd, SMTP daemons, telnet etc also have some initial system s/w configuration: setting root password creating a non-root user account setting an overall system security level enabling a simple host-based firewall policy enabling SELinux Linux system security begins at operating system installation time: one of the most critical, system-impacting decisions a system administrator makes is what software will run on the system. Since it’s hard enough to find the time to secure a system’s critical applications, an unused application is liable to be left in a default, un-hardened and un-patched state. Therefore, it’s very important that from the start, careful consideration be given to what applications should be installed, and which should not. What software should you not install? Common sense should be your guide: for example, an SMTP (e-mail) relay shouldn’t need the Apache webserver; a database server shouldn’t need an office productivity suite such as OpenOffice; etc. Given the plethora of roles Linux systems play (desktops, servers, laptops, firewalls, embedded systems, etc.), it is difficult to generalize about exactly what software should not be installed. Here is a list of software packages that should seldom, if ever, be installed on hardened (especially Internet-facing) servers: X Window System, RPC Services, R-Services, inetd, SMTP Daemons, Telnet and other cleartext-logon services (see the text for discussion of why). In addition to initial software selection and installation, Linux installation utilities also perform varying amounts of initial system and software configuration, including: setting the root password creating a non-root user account setting an overall system security level (usually initial file-permission settings) enabling a simple host-based firewall policy enabling SELinux or Novell AppArmor (see Section 23.7)
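As a sketch of how unneeded services might be reviewed and disabled after installation (the service names and tools are illustrative and vary by distribution and vintage):
chkconfig --list                             # SysV-init systems: show services and their runlevel settings
chkconfig telnet off                         # disable an unneeded cleartext-logon service
systemctl list-unit-files --type=service     # systemd-based systems: show installed services
systemctl disable --now telnet.socket        # disable and stop a service immediately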

47 Patch Management installed server applications must be:
configured securely kept up to date with security patches patching can never win “patch rat-race” have tools to automatically download and install security updates e.g. up2date, YaST, apt-get note should not run automatic updates on change-controlled systems without testing Carefully selecting what gets installed (and what doesn’t get installed) on a Linux system is an important first step in securing it. All the server applications you do install, however, must be configured securely (the subject of Section 13.6), and they must also be kept up to date with security patches. The bad news with patching is that you can never win the “patch rat-race:” there will always be software vulnerabilities that attackers are able to exploit for some period of time before vendors issue patches for them. (As-yet-unpatchable vulnerabilities are known as zero-day, or 0-day, vulnerabilities.). The good news is, modern Linux distributions usually include tools for automatically downloading and installing security updates, which can minimize the time your system is vulnerable to things against which patches are available. For example, Red Hat, Fedora, and CentOS include up2date (YUM can be used instead); SuSE includes YaST Online Update; and Debian uses apt-get, though you must run it as a cron job for automatic updates. Note that on change-controlled systems, you should not run automatic updates, since security patches can, on rare but significant occasions, introduce instability. For systems on which availability and up-time are of paramount importance, therefore, you should stage all patches on test systems before deploying them in production.
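A minimal sketch of the update commands these tools wrap (on change-controlled systems, run them against a test system first, as noted above):
apt-get update && apt-get upgrade            # Debian/Ubuntu: refresh package lists, then apply pending updates
yum update                                   # Red Hat / CentOS / Fedora equivalent
30 2 * * * root apt-get update && apt-get -y upgrade   # illustrative /etc/crontab entry for a nightly automatic run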

48 Network Access Controls
network a key attack vector to secure TCP wrappers a key tool to check access originally tcpd inetd wrapper daemon before allowing connection to service checks if requesting host explicitly in hosts.allow is ok if requesting host explicitly in hosts.deny is blocked if not in either is ok checks on service, source IP, username now often part of app using libwrappers One of the most important attack-vectors in Linux threats is the network. Network-level access controls, that restrict access to local resources based on the IP addresses of the systems attempting access, are therefore an important tool in Linux security. One of the most mature network access control mechanisms in Linux is libwrappers. In its original form, the software package TCP Wrappers, the daemon tcpd is used as a “wrapper” process for each service initiated by inetd. Before allowing a connection to any given service, tcpd first evaluates access controls defined in the files /etc/hosts.allow and /etc/hosts.deny: if the transaction matches any rule in hosts.allow (which tcpd parses first), it’s allowed. If no rule in hosts.allow matches, tcpd then evaluates the transaction against the rules in hosts.deny; if any rule in hosts.deny matches, the transaction is logged and denied, but otherwise it is permitted. These access controls are based on the name of the local service being connected to, on the source IP address or hostname of the client attempting the connection, and on the username of the client attempting the connection (that is, the owner of the client-process). Note that client usernames are validated via the ident service, which unfortunately is trivially easy to forge on the client side, which makes this criterion’s value questionable. The best way to configure TCP Wrappers access controls is therefore to set a “deny all” policy in hosts.deny, such that the only transactions permitted are those explicitly specified in hosts.allow. Since inetd is essentially obsolete, TCP Wrappers is no longer used as commonly as libwrappers, a system library which allows applications to defend themselves by leveraging /etc/hosts.allow and /etc/hosts.deny without requiring tcpd to act as an intermediary.
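A sketch of the recommended "deny all, then allow explicitly" policy (the service name and address range are illustrative):
/etc/hosts.deny:
ALL: ALL                                     # default-deny everything not explicitly allowed
/etc/hosts.allow:
sshd: 192.168.1.                             # allow SSH only from the local subnet
ALL: 127.0.0.1                               # always allow the loopback address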

49 Network Access Controls
also have the very powerful netfilter Linux kernel native firewall mechanism and iptables user-space front end as useful on firewalls, servers, desktops direct config tricky, steep learning curve do have automated rule generators typically for “personal” firewall use will: allow incoming requests to specified services block all other inbound service requests allow all outbound (locally-originating) requests if need greater security, manually config While TCP Wrappers is ubiquitous and easy to use, the Linux kernel’s native firewall mechanism, netfilter (and its user-space front end iptables), is more powerful. iptables is useful both on multi-interface firewall systems and on ordinary servers and desktop systems. The iptables command does, however, have a steep learning curve. Nearly all Linux distributions now include utilities for automatically generating “personal” (local) firewall rules, especially at installation time. Typically, they prompt the administrator/user for local services that external hosts should be allowed to reach, if any (e.g., HTTP on TCP port 80, HTTPS on TCP port 443, and SSH on TCP port 22), and then generate rules that: allow incoming requests to those services; block all other inbound (externally-originating) transactions; and allow all outbound (locally-originating) transactions; with the assumption that all outbound network transactions are legitimate. This assumption does not hold if the system is compromised by a human attacker or by malware. In cases in which a greater level of caution is justified, it may be necessary to create more complex iptables policies than your Linux installer’s firewall wizard can provide. Many people manually create their own startup-script for this purpose (an iptables “policy” is actually just a list of iptables commands), but a tool such as Shorewall or Firewall Builder may instead be used.
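A minimal sketch of a manually written "personal" firewall policy of the kind described above (the allowed ports are illustrative):
iptables -F                                  # flush any existing rules
iptables -P INPUT DROP                       # default-deny inbound traffic
iptables -P FORWARD DROP
iptables -P OUTPUT ACCEPT                    # trust locally originating traffic, as the installer wizards do
iptables -A INPUT -i lo -j ACCEPT            # allow loopback
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT   # allow return traffic for existing connections
iptables -A INPUT -p tcp --dport 22 -j ACCEPT    # allow inbound SSH
iptables -A INPUT -p tcp --dport 80 -j ACCEPT    # allow inbound HTTP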

50 Antivirus Software historically Linux not as vulnerable to viruses
more due to lesser popularity than security prompt patching was effective for worms but viruses abuse users’ privileges non-root users have less scope to exploit but can still consume resources growing Linux popularity means more exploits hence antivirus software will become more important various commercial and free Linux A/V Historically, Linux hasn’t been nearly so vulnerable to viruses as other operating systems (e.g., Windows); this is more due to its lesser popularity as a desktop platform than necessarily to inherent security. Hence Linux users have tended to worry less about viruses, and have tended to rely on keeping up to date with security patches for protection against malware. This is arguably a more proactive technique than relying on signature-based antivirus tools. And indeed, prompt patching of security holes is an effective protection against worms, which have historically been a much bigger threat against Linux systems than viruses. Viruses, however, typically abuse the privileges of whatever user unwittingly executes them, rather than actually exploiting a software vulnerability: the virus simply “runs as” the user. This may not have system-wide ramifications so long as that user isn’t root, but even relatively unprivileged users can execute network client applications, create large files that could fill a disk volume, and perform any number of other problematic actions. As Linux’s popularity continues to grow, especially as a general-purpose desktop platform, we can expect Linux viruses to become much more common. Sooner or later, therefore, antivirus software will become much more important on Linux systems than it is presently. There are a variety of commercial and free antivirus software packages that run on (and protect) Linux, including products from McAfee, Symantec, and Sophos, and the free, open-source tool ClamAV.
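As a simple sketch using the open-source ClamAV tool mentioned above:
freshclam                                    # update the virus-signature database
clamscan -r --infected /home                 # recursively scan home directories, reporting only infected files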

51 User Management guiding principles in user-account security:
need care setting file / directory permissions use groups to differentiate between roles use extreme care in granting / using root privs commands: chmod, useradd/mod/del, groupadd/mod/del, passwd, chage info in files /etc/passwd & /etc/group manage user’s group memberships set appropriate password ages Recall the guiding principles in Linux user-account security: be very careful when setting file and directory permissions; use group memberships to differentiate between different roles on your system; and be extremely careful in granting and using root privileges. We now discuss some details of user- and group-account management, and delegation of root privileges. First, some commands: use the chmod command to set and change permissions for objects belonging to existing users and groups. To create, modify, and delete user accounts, use the useradd, usermod, and userdel commands. To create, modify, and delete group accounts, use the groupadd, groupmod, and groupdel commands. Alternatively, you can simply edit the file /etc/passwd directly to create, modify, or delete users, or edit /etc/group to create, modify, or delete groups. Note that initial (primary) group memberships are set in a user’s entry in /etc/passwd; supplementary (secondary) group memberships are set in /etc/group. Use the usermod command to change either primary or supplementary group memberships for any user. You can use passwd to change your own (or, as root, anyone’s) password. Password aging (maximum and minimum lifetime for user passwords) is set globally in the files /etc/login.defs and /etc/default/useradd, but these settings are only applied when new user accounts are created. To modify the password lifetime for an existing account, use the chage command. Passwords should have a minimum age to prevent users from rapidly “cycling through” password-changes; 7 days is a reasonable minimum. Maximum lifetime is trickier, balancing exposure risk against user annoyance; 60 days is a reasonable balance for many organizations.
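A brief sketch of applying the aging values suggested above to an existing account (the username is from the earlier example):
chage -m 7 -M 60 -W 7 maestro                # minimum age 7 days, maximum 60 days, warn 7 days before expiry
chage -l maestro                             # review the current aging settings for the account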

52 Root Delegation have "root can do anything, users can do little” issue
“su” command allows users to run as root either root shell or single command must supply root password means likely too many people know this SELinux RBAC can limit root authority, complex “sudo” allows users to run as root but only need their password, not root password /etc/sudoers file specifies what commands allowed or configure user/group perms to allow, tricky A key problem with Linux/Unix security is that "root can do anything, while ordinary users can do little." The “su” command is used to promote a user to root (provided you know the root password). However, it's much easier to do a quick "su" to become root for a while than it is to create a granular system of group-memberships and permissions that allows administrators and sub-administrators to have exactly the permissions they need. You can use the su command with the "-c" flag, which allows you to specify a single command to run as root rather than an entire shell-session (for example, "su -c rm somefile.txt"), but since this requires you to enter the root password, everyone who needs to run a particular root command needs the root password. And it's never good for more than a small number of people to know root's password. Another approach to solving the "root takes all" problem is to use SELinux’s Role-Based Access Controls (RBAC), which enforce access-controls that reduce root's effective authority. However, this is much more complicated than setting up effective groups and group-permissions. A reasonable compromise is the sudo command, which is standard on most Linux distributions. "sudo" allows users to execute specified commands as root, without their actually needing to know the root password (unlike su). sudo is configured via the file /etc/sudoers, but you shouldn't edit this file directly; rather, you should use the command visudo, which opens a special vi (text editor) session. sudo is a very powerful tool, so use it wisely: root privileges are never to be trifled with! It really is better to use user- and group-permissions judiciously than to hand out root privileges even via sudo, and it's better still to use an RBAC-based system like SELinux if feasible.
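A short sketch of /etc/sudoers entries of the kind visudo would be used to add (the group, user, and command shown are illustrative):
%admins  ALL=(ALL)  ALL                      # members of the "admins" group may run any command as root, after entering their own password
biff     ALL=(root) /usr/sbin/service vsftpd restart   # biff may restart one specific service and nothing else
The delegated user would then run, for example, sudo /usr/sbin/service vsftpd restart and authenticate with his own password.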

53 Logging effective logging a key resource
Linux logs using syslogd or Syslog-NG receive log data from a variety of sources sorts by facility (category) and severity writes log messages to local/remote log files Syslog-NG preferable because it has: variety of log-data sources / destinations much more flexible “rules engine” to configure can log via TCP which can be encrypted log management must balance number of log files used: the simplicity of a few large files vs the ease of finding info in many must manage size of log files: rotate log files and delete old copies typically use logrotate utility run by cron to manage both system and application logs must also configure application logging Effective logging helps ensure that in the event of a system breach or failure, system administrators can more quickly and accurately identify what happened, and thus most effectively focus their remediation and recovery efforts. On Linux systems, system logs are handled either by the ubiquitous Berkeley Syslog daemon (syslogd) in conjunction with the kernel log daemon (klogd), or by the much-more-feature-rich Syslog-NG. System log daemons receive log data from a variety of sources, sort it by facility (category) and severity, and then write the log messages to log files, as we discussed in section 15.3 in the text. Syslog-NG is preferable both because it can use a much wider variety of log-data sources and destinations, and because its “rules engine” is much more flexible than syslogd’s simple configuration file (/etc/syslogd.conf), allowing you to create a much more sophisticated set of rules for evaluating and processing log data. Syslog-NG also supports logging via TCP, which can be encrypted via a TLS “wrapper” such as Stunnel or Secure Shell. Both syslogd and Syslog-NG install with default settings for what gets logged, and where. While these default settings are adequate in many cases, this should be checked. At the very least, you should decide what combination of local and remote logging to perform. If logs remain local to the system that generates them, they may be tampered with by an attacker. If some or all log data is transmitted over the network to some central log-server, audit-trails can be more effectively preserved, but log data may also be exposed to network eavesdroppers.
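A minimal sketch of a logrotate fragment for an application log (the path and retention values are illustrative), of the kind typically dropped into /etc/logrotate.d/:
/var/log/myapp.log {
    weekly                                   # rotate once a week
    rotate 8                                 # keep eight old copies
    compress                                 # gzip rotated logs
    missingok                                # don't complain if the log is absent
    notifempty                               # skip rotation when the log is empty
}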

54 Running As Unprivileged User/Group
every process “runs as” some user extremely important this user is not root since any bug can compromise entire system may need root privileges, e.g. bind port have root parent perform privileged function but main service from unprivileged child user/group used should be dedicated easier to identify source of log messages Recall that in Linux and other Unix-like operating systems every process “runs as” some user. For network daemons in particular, it’s extremely important that this user not be root; any process running as root is never more than a single buffer-overflow or race-condition away from being a means for attackers to achieve remote root compromise. Therefore, one of the most important security features a daemon can have is the ability to run as a non-privileged user or group. Running network processes as root isn’t entirely avoidable: for example, only root can bind processes to “privileged ports” (TCP and UDP ports lower than 1024). However, it’s still possible for a service’s parent process to run as root in order to bind to a privileged port, but then to spawn a new child process that runs as an unprivileged user each time an incoming connection is made. Ideally, the unprivileged users and groups used by a given network daemon should be dedicated for that purpose, if for no other reason than for auditability. (That is, if entries start appearing in /var/log/messages indicating failed attempts by the user ftpuser to run the command /sbin/halt, it will be much easier to determine precisely what’s going on if the ftpuser account isn’t shared by five different network applications.)
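A sketch of creating a dedicated, non-login account for a single daemon (the names and the nologin path are illustrative and distribution-dependent):
groupadd --system ftpgroup                   # dedicated group for the daemon
useradd --system -g ftpgroup -d /srv/ftp -s /sbin/nologin ftpuser   # system account with no interactive shell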

55 Running in chroot Jail chroot confines a process to a subset of /
maps a virtual “/” to some other directory useful if have a daemon that should only access a portion of the file system, e.g. FTP directories outside the chroot jail aren’t visible or reachable at all contains effects of compromised daemon complex to configure and troubleshoot must mirror portions of system in chroot jail The chroot system call confines a process to some subset of /, that is, it maps a virtual “/” to some other directory (e.g., /srv/ftp/public). This is useful since, for example, an FTP daemon that serves files from a particular directory, say, /srv/ftp/public, shouldn’t have any reason to have access to the rest of the filesystem. We call this directory to which we restrict the daemon a chroot jail. To the “chrooted” daemon, everything in the chroot jail appears to actually be in /, e.g., the “real” directory /srv/ftp/public/etc/myconfigfile appears as /etc/myconfigfile in the chroot jail. Things in directories outside the chroot jail, e.g., /srv/www or /etc, aren’t visible or reachable at all. Chrooting therefore helps contain the effects of a given daemon’s being compromised or hijacked. The main disadvantage of this method is added complexity: certain files, directories, and special files typically must be copied into the “chroot jail,” and determining just what needs to go into the jail for the daemon to work properly can be tricky, though detailed procedures for chrooting many different Linux applications are easy to find on the World Wide Web. Troubleshooting a chrooted application can also be difficult: even if an application explicitly supports this feature, it may behave in unexpected ways when run chrooted. Note also that if the chrooted process runs as root, it can “break out” of the chroot jail with little difficulty. Still, the advantages usually far outweigh the disadvantages of chrooting network services.
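A very rough sketch of building a minimal jail by hand (real daemons need more files than this, and the paths shown are illustrative):
mkdir -p /srv/jail/bin /srv/jail/lib64       # skeleton directory tree for the jail
cp /bin/sh /srv/jail/bin/                    # copy in the binary to be confined
ldd /bin/sh                                  # list the shared libraries that must also be copied under /srv/jail
chroot /srv/jail /bin/sh                     # as root, start the program with /srv/jail as its virtual /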

56 Modularity applications running as a single, large, multipurpose process can be: more difficult to run as an unprivileged user harder to locate / fix security bugs in source harder to disable unnecessary functionality hence modularity a highly prized feature providing a much smaller attack surface cf. postfix vs sendmail, Apache modules If an application runs in the form of a single, large, multipurpose process, it may be more difficult to run it as an unprivileged user; it may be harder to locate and fix security bugs in its source code (depending on how well-documented and structured the code is); and it may be harder to disable unnecessary areas of functionality. In modern network service applications, therefore, modularity is a highly prized feature. Postfix, for example, consists of a suite of daemons and commands, each dedicated to a different mail-transfer-related task. Only a couple of these processes ever run as root, and they practically never run all at the same time. Postfix therefore has a much smaller attack surface than the monolithic Sendmail. The popular web server Apache used to be monolithic, but it now supports code modules that can be loaded at startup time as needed; this both reduces Apache’s memory footprint, and reduces the threat posed by vulnerabilities in unused functionality areas.

57 Encryption sending logins & passwords or application data over networks in clear text exposes them to network eavesdropping attacks hence many network applications now support encryption to protect such data often using OpenSSL library may need own X.509 certificates to use can generate/sign using openssl command may use commercial/own/free CA Sending logon credentials or application data over networks in clear text (i.e., unencrypted) exposes them to network eavesdropping attacks. Most Linux network applications therefore support encryption nowadays, most commonly via the OpenSSL library. Using application-level encryption is, in fact, the most effective way to ensure end-to-end encryption of network transactions. The SSL and TLS protocols provided by OpenSSL require the use of X.509 digital certificates. These can be generated and signed by the user-space openssl command. For optimal security, either a local or commercial (third-party) Certificate Authority (CA) should be used to sign all server certificates, but self-signed (that is, non-verifiable) certificates may also be used. [BAUE05] provides detailed instructions on how to create and use your own Certificate Authority with OpenSSL.
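A brief sketch of generating a self-signed certificate with the openssl command (the file names and lifetime are illustrative; a CA-signed certificate is preferable for production use):
openssl req -x509 -newkey rsa:2048 -keyout server.key -out server.crt -days 365 -nodes   # new key plus self-signed certificate
openssl x509 -in server.crt -noout -text     # inspect the resulting certificate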

58 Logging applications can usually be configured to log to any level of detail (debug to none) need appropriate setting must decide if use dedicated file or system logging facility (e.g. syslog) central facility useful for consistent use must ensure any log files are rotated The last common application security feature we’ll discuss here is logging. Most applications can be configured to log to whatever level of detail you want, ranging from “debugging” (maximum detail) to “none.” Some middle setting is usually the best choice, but you should not assume that the default setting is adequate. In addition, many applications allow you to specify either a dedicated file to write application event data to, or a syslog facility to use when writing log data to /dev/log. If you wish to handle system logs in a consistent, centralized manner, it’s usually preferable for applications to send their log data to /dev/log. Note, however, that logrotate can be configured to rotate any logs on the system, whether written by syslogd, Syslog-NG, or individual applications.

59 Mandatory Access Controls
Linux uses a DAC security model but Mandatory Access Controls (MAC) impose a global security policy on all users users may not set controls weaker than policy normal admin done with accounts without authority to change the global security policy but MAC systems have been hard to manage Novell’s SuSE Linux has AppArmor RedHat Enterprise Linux has SELinux pure SELinux for high-sensitivity, high-security Linux uses a DAC security model, in which the owner of a given system object can set whatever access permissions on that resource they like. Stringent security controls, in general, are optional. In contrast, a computer with Mandatory Access Controls (MAC) has a global security policy that all users of the system are subject to. A user who creates a file on a MAC system may not set access controls on that file weaker than the controls dictated by the system security policy. See chapters 4 and 9. In general, compromising a system using a DAC-based security model is a matter of hijacking some root process. On a MAC-based system, the superuser account is only used for maintaining the global security policy. Day-to-day system administration is performed using accounts that lack the authority to change the global security policy. Hence, it's impossible to compromise the entire system by attacking any one process. Unfortunately, while MAC schemes have been available on various platforms over the years, they have traditionally been much more complicated to configure and maintain. To create an effective global security policy requires detailed knowledge of the precise behavior of every application on the system. Also the more restrictive the security controls are, the less convenient that system becomes for its users to use. Novell’s SuSE Linux includes AppArmor, a partial MAC implementation that restricts specific processes but leaves everything else subject to the conventional Linux DAC. In Fedora and Red Hat Enterprise Linux, SELinux has been implemented with a policy that restricts key network daemons, but relies on the Linux DAC to secure everything else. For high-sensitivity, high-security, multi-user scenarios, a “pure” SELinux implementation may be deployed, in which all processes, system resources, and data are regulated by comprehensive, granular access controls.
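A few commands that show whether SELinux is active and how it labels objects, sketched for a Fedora/RHEL-style system:
getenforce                                   # prints Enforcing, Permissive, or Disabled
sestatus                                     # more detailed SELinux status, including the loaded policy
ls -Z /var/www                               # show the security contexts (labels) on files
ps -eZ | grep httpd                          # show the domain a confined daemon runs in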

60 Windows Security Windows is the world’s most popular O/S
advantage is that security enhancements can protect millions of nontechnical users challenge is that vulnerabilities in Windows can also affect millions of users will review overall security architecture of Windows then security defenses built into Windows Windows is the world's most popular operating system and as such has a number of interesting security-related advantages and challenges. The major advantage is that any security advancement made to Windows can protect hundreds of millions of nontechnical users, and advances in security technologies can be used by thousands of corporations to secure their assets. The challenges for Microsoft are many, including the fact that security vulnerabilities in Windows can affect millions of users. Of course, there is nothing unique about Windows having security vulnerabilities; all software products have security bugs. However, Windows is used by so many nontechnical users that Microsoft has some interesting engineering challenges. This chapter begins with a description of the overall security architecture of Windows 2000 and later. It is important to point out that versions of Windows based on the Windows 95 code base, including Windows 98, Windows 98 SE, and Windows Me, had no real security model, in contrast to Windows NT and later versions. The Windows 9x codebase is no longer supported. The remainder of the chapter covers the security defenses built into Windows, most notably the new defenses in Windows Vista.

61 Windows Security Architecture
Security Reference Monitor (SRM) a kernel-mode component that performs access checks, generates audit log entries, and manipulates user rights (privileges) Local Security Authority (LSA) responsible for enforcing local security policy Security Account Manager (SAM) a database that stores user accounts and security information about local users and groups local logins perform lookup against SAM DB passwords are stored using MD4 The basic fundamental security blocks in the Windows operating system include: • The Security Reference Monitor (SRM) - a kernel-mode component that performs access checks, generates audit log entries, and manipulates user rights, also called privileges. Ultimately, every permission check is performed by the SRM. • The Local Security Authority (LSA) - resides in a user-mode process named lsass.exe and is responsible for enforcing local security policy in Windows. It also issues security tokens to accounts as they log on to the system. Security policy includes password policy, auditing policy, and privilege settings. • The Security Account Manager (SAM) - a database that stores user accounts and relevant security information about local users and local groups. When a user logs on to a computer using a local account, the SAM process (SamSrv) takes the logon information and performs a lookup against the SAM database, which resides in the Windows System32\config directory. If the credentials match, then the user can log on to the system, assuming there are no other factors preventing logon, such as logon time restrictions or privilege issues. Note that the SAM does not perform the logon; that is the job of the LSA. The SAM file is binary rather than text, and passwords are stored using the MD4 hash algorithm. On Windows Vista, the SAM stores password information using a password-based key derivation function.

62 Windows Security Architecture
Active Directory (AD) Microsoft's LDAP directory all Windows clients can use AD to perform security operations including account logon authenticate using AD when the user logs on using a domain rather than local account user's credential information is sent securely across the network to be verified by AD WinLogon (local) and NetLogon (net) handle login requests • Active Directory (AD) - Microsoft's LDAP directory, included with Windows 2000 Server and later. All client versions of Windows, including Windows XP and Windows Vista, can communicate with AD to perform security operations including account logon. A Windows client will authenticate using AD when the user logs on to the computer using a domain account rather than a local account. As in the SAM scenario, the user's credential information is sent securely across the network, verified by AD, and then, if the information is correct, the user can log on. • WinLogon and NetLogon - WinLogon handles local logons at the keyboard and NetLogon handles logons across the network.

63 Local vs Domain Accounts
a networked Windows computer can be: domain joined can login with either domain or local accounts if local may not access domain resources centrally managed and much more secure in a workgroup a collection of computers connected together only local accounts in SAM can be used no infrastructure to support AD domain A networked Windows computer can be in one of two configurations, either domain joined or in a workgroup. When a computer is domain joined, users can gain access to that computer using domain accounts, which are centrally managed in Active Directory. They can, if they wish, also log on using local accounts, but local accounts may not have access to domain resources such as networked printers, Web servers, and so on. When a computer is in a workgroup, only local accounts, held in the SAM, can be used. There are pros and cons to each scenario. A domain has the major advantage of being centrally managed and as such is much more secure. If an environment has 1000 Windows computers and an employee leaves, the user's account can be disabled centrally rather than on 1000 individual computers. The only advantage of using local accounts is that a computer does not need the infrastructure required to support a domain using AD. A workgroup is simply a collection of computers connected to one another using a network; rather than using a central database of accounts in AD, the machines use only local accounts. The difference between a workgroup and a domain is simply where accounts are authenticated: a workgroup has no domain controllers and authentication is performed on each computer, whereas a domain authenticates accounts at domain controllers running AD.

64 Domain Login Example domain admin adds user's account info (name, account, password, groups, privileges) account is represented by a Security ID (SID) unique to each account within a domain of form: S-1-5-21-AAA-BBB-CCC-RRR Breakdown: S means SID; 1 is the version number; 5 is the identifier authority (here SECURITY_NT_AUTHORITY); 21 means "not unique", although always unique within a domain; AAA-BBB-CCC is a unique number representing the domain; and RRR is a relative ID (increments by 1 for each new account) Consider an example of what happens when a user logs on to a Windows system. First, a domain admin must add the user's account information to the system; this will include the user's name, account name, password, and optionally group membership and privileges. Then Windows creates an account for the user in the domain controller running Active Directory. Each user account is uniquely represented by a Security ID (SID) within a domain, and every account gets a different SID. A user account's SID is of the following form (see text for details): S-1-5-21-AAA-BBB-CCC-RRR. In Windows, a username can be in one of two formats. The first, the SAM format, is supported by all versions of Windows and is of the form DOMAIN\Username. The second is called User Principal Name (UPN) and looks more like an RFC 822 e-mail address. The SAM name should be considered a legacy format. When a user logs on to Windows, he or she does so using either a username and password, or a username and a smart card. It is possible to use other authentication or identification mechanisms, such as an RSA SecurID token or biometric device, but these require third-party support. Assuming the user logs on correctly, a token is generated by the operating system and assigned to the user. A token contains the user's SID, group membership information, and privileges. Groups are also represented using SIDs. We explain privileges subsequently. The user's token is assigned to every process run by the user, and is used to perform access checks, as discussed subsequently.

65 Domain Login Example (cont.)
username in one of two forms: SAM format: DOMAIN\Username User Principal Name (UPN) login using username & password or smartcard assuming login is correct, token is generated and assigned to the user contains user's SID, group membership info, and privileges assigned to every process run by user, and used for access checks
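To make the token and SID concepts concrete, the following minimal sketch opens the current process token and prints the user SID it carries, which has the S-1-5-21-...-RRR form described above; treat it as an illustrative sketch rather than production code.

```c
/* Minimal sketch: read the current process token and print the user SID.
 * Compile with a Windows SDK; link with Advapi32.lib.
 */
#include <windows.h>
#include <sddl.h>
#include <stdio.h>

int main(void)
{
    HANDLE tok;
    DWORD len = 0;
    LPSTR sidStr = NULL;
    struct { TOKEN_USER tu; BYTE pad[SECURITY_MAX_SID_SIZE]; } info;

    if (!OpenProcessToken(GetCurrentProcess(), TOKEN_QUERY, &tok))
        return 1;

    /* TOKEN_USER holds the SID of the account that owns this token. */
    if (GetTokenInformation(tok, TokenUser, &info, sizeof(info), &len) &&
        ConvertSidToStringSidA(info.tu.User.Sid, &sidStr)) {
        printf("user SID: %s\n", sidStr);
        LocalFree(sidStr);
    }
    CloseHandle(tok);
    return 0;
}
```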

66 Windows Privileges are systemwide permissions assigned to user accounts – over 45 total e.g. backup computer, or change system time some are deemed "dangerous" such as: act as part of operating system privilege debug programs privilege backup files and directories privilege others are deemed "benign" such as bypass traverse checking privilege Privileges are systemwide permissions assigned to user accounts, such as the ability to back up the computer, or to change the system time. There are over 45 privileges in Windows Vista. Some privileges are deemed "dangerous," which means a malicious account that is granted such a privilege can cause damage. Examples include: • Act as part of operating system privilege. This is often referred to as the Trusted Computing Base (TCB) privilege, because it allows code run by an account granted this privilege to act as part of the most trusted code in the operating system: the security code. This is the most dangerous privilege in Windows and is granted only to the Local System account; even administrators are not granted this privilege. • Debug programs privilege. This privilege allows an account to debug any process running in Windows. A user account does not need this privilege to debug an application running under the user's account. Because of the nature of debuggers, this privilege means a user can run any code he or she wants in any running process. • Backup files and directories privilege. Any process with this privilege will bypass all access control list (ACL) checks, because the process must be able to read all files to build a complete backup. Its sister privilege, Restore files and directories, is just as dangerous because it bypasses ACL checks when writing files back from the backup media. Some privileges are generally deemed benign. An example is the "bypass traverse checking" privilege that is used to traverse directory trees even though the user may not have permissions on the traversed directory. This privilege is assigned to all user accounts by default and is used as an NTFS file system optimization.

67 Access Control Lists two forms of access control list (ACL):
Discretionary ACL (DACL) grants or denies access to protected resources such as files, shared memory, named pipes etc System ACL (SACL) used for auditing and in Windows Vista to enforce mandatory integrity policy objects needing protection are assigned a DACL (and possibly a SACL) that includes the SID of the object owner and a list of access control entries (ACEs) each ACE includes a SID & access mask access mask could include ability to: read, write, create, delete, modify, etc access masks are object-type specific e.g. service abilities are create, enumerate Windows has two forms of access control list (ACL). The first is the Discretionary ACL (DACL), which grants or denies access to protected resources in Windows such as files, shared memory, named pipes, and so on. The other kind of ACL is the System ACL (SACL), which is used for auditing and, in Windows Vista, to enforce mandatory integrity policy.

68 Security Descriptor (SD)
data structure with object owner, DACL, & SACL e.g. Owner: CORP\Blake ACE[0]: Allow CORP\Paige Full Control ACE[1]: Allow Administrators Full Control ACE[2]: Allow CORP\Cheryl Read, Write and Delete have no implied access, if there is no ACE for requesting user, then access is denied applications must request correct type of access if they just request "all access" when they need less (e.g. read) some users who should have access will be denied The data structure that includes the object owner, DACL, and SACL is referred to as the object's security descriptor (SD). A sample SD with no SACL is shown above. The DACL in this SD allows the user Paige (from the CORP domain) full access to the object; she can do anything to this object. Members of the Administrators group can do likewise. Cheryl can read, write, and delete the object. Note the object owner is Blake; as the owner, he can do anything to the object, too. This was always the case until the release of Windows Vista. Some customers do not want owners to have such unbridled access to objects, even though they created them. In Windows Vista you can include an Owner SID in the DACL, and the access mask associated with that account applies to the object owner. There are two important things to keep in mind about access control in Windows. First, if a user accesses an object with the SD example above, and the user is not Blake, Paige, or Cheryl, and is not in the Administrators group, then that user is denied access to the object. There is no implied access. Second, if Cheryl requests read access to the object, she is granted read access. If she requests read and write access, she is also granted access. If she requests create access, she is denied access unless Cheryl is also a member of the Administrators group, because the "Cheryl ACE" does not include the "create" access mask. When a Windows application accesses an object, it must request the type of access the application requires. Many developers would simply request "all access" when in fact the application may only want to read the object. If Cheryl uses an application that attempts to access the object described above and the application requests full access to the object, she is denied access unless she is an administrator. This is the prime reason so many applications failed to execute correctly on Windows XP unless the user was a member of the Administrators group.
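As an illustration of how a DACL resembling this example might be built programmatically, the sketch below grants two accounts access to a file using standard Win32 ACL helper functions; the account names and the file path are illustrative assumptions, and a real application would add full error handling.

```c
/* Minimal sketch: build a DACL similar to the example SD above and attach it
 * to a file. Link with Advapi32.lib.
 */
#include <windows.h>
#include <aclapi.h>

int main(void)
{
    EXPLICIT_ACCESS_A ea[2];
    PACL dacl = NULL;
    char paige[]  = "CORP\\Paige";
    char cheryl[] = "CORP\\Cheryl";
    char path[]   = "C:\\data\\report.txt";

    ZeroMemory(ea, sizeof(ea));

    /* ACE[0]: Allow CORP\Paige full control. */
    BuildExplicitAccessWithNameA(&ea[0], paige,
                                 GENERIC_ALL, GRANT_ACCESS, NO_INHERITANCE);

    /* ACE[1]: Allow CORP\Cheryl read, write, and delete. */
    BuildExplicitAccessWithNameA(&ea[1], cheryl,
                                 GENERIC_READ | GENERIC_WRITE | DELETE,
                                 GRANT_ACCESS, NO_INHERITANCE);

    if (SetEntriesInAclA(2, ea, NULL, &dacl) != ERROR_SUCCESS)
        return 1;

    /* Attach the new DACL to the object's security descriptor. */
    SetNamedSecurityInfoA(path, SE_FILE_OBJECT, DACL_SECURITY_INFORMATION,
                          NULL, NULL, dacl, NULL);

    LocalFree(dacl);
    return 0;
}
```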

69 More SD’s & Access Checks
each ACE in the DACL determines access an ACE can be an allow or a deny ACE Windows evaluates each ACE in the ACL until access is granted or explicitly denied so deny ACEs come before allow ACEs default if set using GUI must explicitly order if created programmatically when a user attempts to access a protected object, the O/S performs an access check comparing user/group info with the ACEs in the ACL We mentioned earlier that a DACL grants or denies access; technically this is not 100% accurate. Each ACE in the DACL determines access, and an ACE can be an allow ACE or a deny ACE. Consider the variant of the previous SD shown above. Note that the first ACE is set to deny members of the Guests group full control to the object. Basically, guests are out of luck if they attempt to access the object protected by this SD. Deny ACEs are not often used in Windows because they can be complicated to troubleshoot. Also note that the first ACE is the deny ACE; it is important that deny ACEs come before allow ACEs because Windows evaluates each ACE in the ACL until access is granted or explicitly denied. If an earlier ACE grants the requested access, Windows stops ACL evaluation, so if the deny ACE is at the end of the ACL it is never evaluated and the user is granted access even though the account should have been denied. When setting an ACL from the user interface, Windows will always put deny ACEs before allow ACEs, but if you create an ACL programmatically (e.g., by using the SetSecurityDescriptorDacl function), you must explicitly place the deny ACEs first. Now put all this together. When a user account attempts to access a protected object, the operating system performs an access check. It does this by comparing the user account and group information in the user's token with the ACEs in the object's ACL. If all the requested operations (read, write, delete, and so on) are granted, then access is granted; otherwise the user gets an access denied error status (error value 5).

70 Application access Note that when an application requests access, it must also request an access level. Initially (before XP), most applications just requested "all access", which is only given to owner or admin accounts. This is the reason so many applications failed on Windows XP unless they ran at admin level – essentially, poor coding.

71 Interacting with SDs PowerShell to get an object's SD:
get-acl c:\folder\file.txt | format-list use set-acl to set DACL or SACL Can also use Security Descriptor Definition Language (SDDL): Example function: ConvertStringSecurityDescriptorToSecurityDescriptor()
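A minimal C sketch of the SDDL route is shown below; the SDDL string is an illustrative example (owner Builtin Administrators, full access for Administrators, read access for Authenticated Users), not a recommended policy.

```c
/* Minimal sketch: build a security descriptor from an SDDL string using the
 * function named on the slide. Link with Advapi32.lib.
 */
#include <windows.h>
#include <sddl.h>
#include <stdio.h>

int main(void)
{
    PSECURITY_DESCRIPTOR sd = NULL;
    ULONG sdSize = 0;
    const char *sddl = "O:BAD:(A;;FA;;;BA)(A;;FR;;;AU)";

    if (!ConvertStringSecurityDescriptorToSecurityDescriptorA(
            sddl, SDDL_REVISION_1, &sd, &sdSize)) {
        printf("conversion failed: %lu\n", GetLastError());
        return 1;
    }

    printf("built a %lu-byte self-relative security descriptor\n", sdSize);

    /* The descriptor could now be passed to SetNamedSecurityInfo or used in
     * a SECURITY_ATTRIBUTES structure when creating an object. */
    LocalFree(sd);
    return 0;
}
```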

72 Impersonation process can have multiple threads
common for both clients and servers impersonation allows a server to serve a user, using their access privileges e.g. the ImpersonateNamedPipeClient function sets the user's token on the current thread then access checks for that thread are performed against this token, not the server's, with the user's access rights Windows is a multi-threaded operating system, which means a single process can have more than one thread of execution at a time. This is very common for both server and client applications. For example, a word processor might have one thread accepting user input and another performing a background spellcheck. A server application, such as a database server, might start a large number of threads to handle concurrent user requests. Let's say the database server process runs as a predefined account named DB_ACCOUNT; when it takes a user request, the application can impersonate the calling user by calling an impersonation function. For example, one networking protocol supported by Windows is called Named Pipes, and the ImpersonateNamedPipeClient function will impersonate the caller. Impersonation means setting the user's token on the current thread. Normally, access checks are performed against the process token, but when a thread is impersonating a user, the user's token is assigned to the thread, and the access check for that thread is performed against the token on the thread, not the process token. When the connection is done, the thread "reverts," which means the token is dropped from the thread. So why impersonate? Imagine the database server accesses a file named db.txt, and the DB_ACCOUNT account has read, write, delete, and update permission on the file. Without impersonation, any user could potentially read, write, delete, and update the file. With impersonation, it is possible to restrict who can do what to the db.txt file.
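The sketch below shows the shape of such a named-pipe server, reusing the db.txt example from the notes; the pipe name and file path are illustrative assumptions, most error handling is omitted, and note that the client must write to the pipe before the server is allowed to impersonate it.

```c
/* Minimal sketch: a named-pipe server impersonating its client so that the
 * access check on db.txt is made against the client's token, not the server's.
 */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    char buf[256];
    DWORD n = 0;

    /* Create the pipe and wait for a single client (name is illustrative). */
    HANDLE pipe = CreateNamedPipeA("\\\\.\\pipe\\example_db",
                                   PIPE_ACCESS_DUPLEX,
                                   PIPE_TYPE_BYTE | PIPE_WAIT,
                                   1, 512, 512, 0, NULL);
    if (pipe == INVALID_HANDLE_VALUE)
        return 1;
    ConnectNamedPipe(pipe, NULL);

    /* The client must have written something before impersonation works. */
    ReadFile(pipe, buf, sizeof(buf), &n, NULL);

    /* Put the client's token on this thread; later access checks use it. */
    if (ImpersonateNamedPipeClient(pipe)) {
        HANDLE f = CreateFileA("C:\\data\\db.txt", GENERIC_READ,
                               FILE_SHARE_READ, NULL, OPEN_EXISTING, 0, NULL);
        if (f == INVALID_HANDLE_VALUE)
            printf("client is not allowed to read db.txt\n");
        else
            CloseHandle(f);
        RevertToSelf();   /* drop the client token, back to server identity */
    }

    CloseHandle(pipe);
    return 0;
}
```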

73 Mandatory Access Control
have Integrity Control in Windows Vista (and later) that limits operations changing an object's state objects and principals are labeled with an integrity level (represented by a SID): Low integrity Medium integrity High integrity System integrity when a write operation occurs, first check that the subject's integrity level dominates the object's integrity level much of O/S marked medium or higher integrity Windows Vista includes a new authorization technology named Integrity Control, which goes one step beyond DACLs. DACLs allow fine-grained access control, but integrity controls limit operations that might change the state of an object. The general premise behind integrity controls is simple: objects (such as files and processes) and principals (users) are labeled with one of the following integrity levels: Low integrity, Medium integrity, High integrity, or System integrity. Microsoft implemented integrity levels using SIDs; each level is represented by its own SID, and a high-integrity process, for example, includes the high-integrity-level SID in its process token. If a subject or object does not include an integrity label, then the subject or object is deemed medium integrity. When a write operation occurs, Windows Vista first checks to see if the subject's integrity level dominates the object's integrity level, which means the subject's integrity level is equal to or above the object's integrity level. If it is, and the normal DACL check succeeds, then the write operation is granted. The most important component in Windows Vista that uses integrity controls is Internet Explorer 7.0, but the majority of the operating system is marked medium or higher integrity.
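A process can inspect its own integrity label, which is carried in the token as a SID as described above. The following minimal sketch (Windows Vista or later) is illustrative; on a typical standard-user desktop the printed SID is the medium-integrity label.

```c
/* Minimal sketch: read the integrity label from the current process token.
 * Link with Advapi32.lib.
 */
#include <windows.h>
#include <sddl.h>
#include <stdio.h>

int main(void)
{
    HANDLE tok;
    DWORD len = 0;
    LPSTR sidStr = NULL;
    struct { TOKEN_MANDATORY_LABEL tml; BYTE pad[SECURITY_MAX_SID_SIZE]; } info;

    if (!OpenProcessToken(GetCurrentProcess(), TOKEN_QUERY, &tok))
        return 1;

    /* The integrity label is itself a SID attached to the token. */
    if (GetTokenInformation(tok, TokenIntegrityLevel, &info, sizeof(info), &len) &&
        ConvertSidToStringSidA(info.tml.Label.Sid, &sidStr)) {
        printf("integrity label SID: %s\n", sidStr);  /* e.g. medium for a standard user */
        LocalFree(sidStr);
    }
    CloseHandle(tok);
    return 0;
}
```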

74 Vista User Account The screen shot of Figure 24.1 shows a normal user token in Windows Vista. It includes medium-integrity SID, which means this user account is medium integrity and any process run by this user can write only to objects of medium and lower integrity.

75 Windows Vulnerabilities
Windows, like all O/S's, has security bugs and bugs have been exploited by attackers to compromise customer operating systems Microsoft now uses a process improvement called the Security Development Lifecycle net effect approx 50% reduction in bugs Windows Vista used SDL start to finish IIS v6 (in Windows Server 2003) had only 3 vulnerabilities in 4 years, none critical Windows, like all operating systems, has security bugs, and a number of these bugs have been exploited by attackers to compromise customer operating systems. After 2001, Microsoft decided to change its software development process to better accommodate secure design, coding, testing, and maintenance requirements, with one goal in mind: reduce the number of vulnerabilities in all Microsoft products. This process improvement is called the Security Development Lifecycle (with core requirements as listed in the text). A full explanation of SDL is beyond the scope of this chapter, but the net effect has been an approximately 50% reduction in security bugs. Windows Vista is the first version of Windows to have undergone SDL from start to finish. Other versions of Windows had a taste of SDL, such as Windows XP SP2, but Windows XP predates the introduction of SDL at Microsoft. SDL does not equate to "bug free" and the process is certainly not perfect, but there have been some major SDL success stories. Microsoft's Web server, Internet Information Services (IIS), has a much-maligned reputation because of serious bugs found in the product that led to worms, such as CodeRed. IIS version 6, included with Windows Server 2003, has had a stellar security track record since its release; there have only been three reported vulnerabilities in the four years since its release, none of them critical. This figure is reported as an order of magnitude fewer bugs than in IIS's main competitor, Apache.

76 Security Development Lifecycle (SDL)
Requirements: Mandatory security education Security design requirements Threat modeling Attack surface analysis and reduction Secure coding Secure testing Security push Final security review Security response

77 Patch Management At first, patches were released at all times. Now, they release on the second Tuesday of each month (Patch Tuesday). More recently, they even announce the expected load the Thursday before, which has been popular with sys admins.

78 Windows System Hardening
process of shoring up defenses, reducing exposed functionality, disabling features known as attack surface reduction use 80/20 rule on features not always achievable e.g. requiring RPC authentication in XP SP2 e.g. strip mobile code support on servers servers easier to harden: are used for very specific and controlled purposes server users are administrators with (theoretically) better computer configuration skills than typical users The process of hardening is the process of shoring up defenses, reducing the amount of functionality exposed to untrusted users, and disabling less-used features. At Microsoft, this process is called Attack Surface Reduction. The concept is simple: apply the 80/20 rule to features. If a feature is not used by 80% of the population, then it should be disabled by default. While this is the goal, it is not always achievable, simply because disabling vast amounts of functionality makes the product unusable for nontechnical users, which leads to increased support calls and customer frustration. One of the simplest and most effective ways to reduce attack surface is to replace anonymous networking protocols with authenticated networking protocols. The biggest change of this nature in Windows XP SP2 was to change all anonymous remote procedure call (RPC) access to require authentication. This was a direct result of the Blaster worm, since making this simple change to RPC helps prevent worms exploiting vulnerabilities in RPC code and code that uses RPC. In practice, requiring authentication is a very good defense; the Zotob worm, which exploited a Plug 'n' Play vulnerability and which was accessible through RPC, did not affect Windows XP SP2, even with the coding bug, because an attacker must be authenticated first. Another advantage is that most users don't even know the defense is there, yet they are protected. Another example of hardening occurred in Windows Server 2003: because it is a server and not a client platform, the Web browser Internet Explorer was stripped of all mobile code support by default. In general, hardening servers is easier than hardening clients, for the reasons listed.

79 Windows Security Defenses
Have 4 broad categories of security defenses: account defenses network defenses buffer overrun defenses browser defenses All versions of Windows offer security defenses, but the list of defenses has grown rapidly in the last five years to accommodate increased Internet-based threats. The attackers today are not just kids; they are criminals who see money in compromised computers. We must stress that attacks and compromises are very real, and the attackers are highly motivated by money. Attackers are no longer just young, anarchic miscreants; they are real criminals. The defenses in Windows Vista can be grouped into the four broad categories shown. We start by discussing system hardening, which is critical to the defensive posture of a computer system and network. We then discuss each of these types of security defenses in detail.

80 Account Defenses user accounts can have privileged SIDs
least privilege dictates that users operate with just enough privilege for tasks Windows XP users in local Administrators for application compatibility reasons can use "Secondary Logon" to run applications also restricted tokens reduce per-thread privilege Windows Vista onwards reverses default with UAC users prompted to perform a privileged operation unless admin on Server As we noted, user accounts can contain highly privileged SIDs (e.g. the Administrators or Account Operators groups) and dangerous privileges (such as Act as part of operating system), and malicious software running with these SIDs or privileges can wreak havoc. The principle of least privilege dictates that users should operate with just enough privilege to get their tasks done, and no more. Historically, Windows XP users operated by default as members of the local Administrators group, for application compatibility reasons: many applications that used to run on Windows 95/98/ME would not run correctly on Windows XP unless the user was an administrator, and failed with errors when run as a standard user. Of course, a user can still choose to run as a standard user. Windows XP and Windows Server 2003 have "Secondary Logon," which allows a user to right-click an application, select "Run as...," and then enter another user account and password to run the application. They also include support for a restricted token, which can reduce privilege on a per-thread level. A restricted token is simply a thread token with privileges removed and/or SIDs marked as deny-only SIDs. Windows Vista changes the default: all user accounts are users and not administrators. This is referred to as User Account Control (UAC). When a user wants to perform a privileged operation, the user is prompted to enter an administrator's account name and password. If the user is an administrator, the user is prompted to consent to the operation. Hence if malware attempts to perform a privileged task, the user is notified. Note that in the case of Windows "Longhorn" Server, if a user enters a command in the Run dialog box from the Start menu, the command will always run elevated if the user is normally an administrator and will not prompt the user. The amount of user interaction otherwise required to perform privileged operations mitigates the threat of malware performing tasks off the Run dialog box.

81 Low Privilege Service Accounts
Windows services are long-lived processes started after booting many ran with elevated privileges but many do not need elevated requirements Windows XP added Local Service and Network Service accounts allow a service local or network access otherwise operate at much lower privilege level Windows XP SP2 split RPC service (RPCSS) in two (RPCSS and DCOM Server Process Launcher) example of least privilege in action, see also IIS6 direct result of Blaster worm Windows services are long-lived processes that usually start when the computer boots, e.g. the File and Print service and the DNS service. Many such services run with elevated privileges because they perform privileged operations, but many services do not need such elevated requirements. In Windows XP, Microsoft added two new service accounts, the Local Service account and the Network Service account, which allow a service local or network access, respectively, but processes running under these accounts operate at a much lower privilege level and are not members of the local Administrators group. In Windows XP SP2, Microsoft made an important change to the RPC service (RPCSS) as an outcome of the Blaster worm. In versions of Windows prior to Windows XP SP2, RPCSS ran as the System account, the most privileged account in Windows, to allow it to execute Distributed COM (DCOM, which is layered on top of RPC) objects on a remote computer correctly. For Windows XP SP2, RPCSS was split in two: RPCSS shed its DCOM activation code, and a new service was created called the DCOM Server Process Launcher. RPCSS runs as the lower-privilege Network Service account; the DCOM Server Process Launcher runs as SYSTEM. This is a good example of the principle of least privilege in action: a small amount of code runs with an elevated identity, and related components run with a lower identity. IIS6 follows a similar model: a process named inetinfo starts under the System identity because it must perform administrative tasks, and it starts worker processes named w3wp.exe to handle user requests (these processes run under the lower-privilege Network Service identity).

82 Stripping Privileges another defense is to strip privileges from an account soon after an application starts e.g. Index server process runs as System to access all disk volumes but then sheds any unneeded privileges as soon as possible using AdjustTokenPrivileges Windows Vista can define privileges required by a service using ChangeServiceConfig2 Another useful defense, albeit not often used in Windows, is to strip privileges from an account when the application starts. This should be performed very early in the application startup code (for example, early in the application's main function). The best way to describe this is by way of example. In Windows, the Index server process runs as the System account because it needs administrative access to all disk volumes to determine if any file has changed so it can reindex the file. Only members of the local Administrators group can get a volume handle. This is the sole reason Index server must run as the System account; yet, as you will remember, the System account is bristling with dangerous privileges, such as the TCB privilege and the backup privilege. So when the main Index server process starts (cidaemon.exe), it sheds any unneeded privileges as soon as possible. The function that performs this is AdjustTokenPrivileges. Windows Vista also adds a function to define the set of privileges required by a service to run correctly; the function that performs this is ChangeServiceConfig2.
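A minimal sketch of this privilege-stripping pattern follows; the privilege removed (SeDebugPrivilege) is an illustrative choice, and a real service would shed whichever privileges it does not need.

```c
/* Minimal sketch: permanently remove an unneeded privilege from the current
 * process token early in startup. Link with Advapi32.lib.
 */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE tok;
    TOKEN_PRIVILEGES tp;

    if (!OpenProcessToken(GetCurrentProcess(), TOKEN_ADJUST_PRIVILEGES, &tok))
        return 1;

    tp.PrivilegeCount = 1;
    tp.Privileges[0].Attributes = SE_PRIVILEGE_REMOVED;  /* strip, not just disable */
    if (!LookupPrivilegeValueA(NULL, "SeDebugPrivilege", &tp.Privileges[0].Luid))
        return 1;

    /* AdjustTokenPrivileges sets ERROR_NOT_ALL_ASSIGNED if the privilege
     * was not present in the token to begin with. */
    if (!AdjustTokenPrivileges(tok, FALSE, &tp, 0, NULL, NULL) ||
        GetLastError() == ERROR_NOT_ALL_ASSIGNED)
        printf("privilege was not present or could not be removed\n");
    else
        printf("SeDebugPrivilege removed from this process token\n");

    CloseHandle(tok);
    return 0;
}
```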

83 Network Defenses have IPSec and IPv6 with authenticated network packets enabled by default in Windows Vista IPv4 also enabled by default, expect less use e.g. the Xbox Live network uses IPSec have built-in software firewall block inbound connections on specific ports Vista can allow local net access only optionally block outbound connections default was off (XP) but now default on Many users and industry pundits focus on "users-as-non-admins" and can lose sight of attacks that do not require human interaction. Such defenses cannot protect a computer from attacks exploiting a vulnerability in a network-facing process that involves no user interaction, such as a DNS server or Web server. Windows offers many network defenses, most notably native IPSec and IPv6 support, and a bi-directional firewall. One reason attacks such as DDoS are possible is that IPv4 is an unauthenticated protocol, and there are many other TCP/IP-related issues; the underlying problem is that IPv4 provides no authentication of packets. Either IPSec or IPv6, which support authenticated network packets, can be used instead (see Chapter 21). In Windows Vista, IPv6 is enabled by default. IPv4 is also enabled by default, but over time Microsoft anticipates that more of the world's networks will migrate to the more secure protocol. All versions of Windows since Windows XP have included a built-in software firewall. The Windows XP firewall was limited in that (1) it was not enabled by default, and (2) its configuration was limited to blocking only inbound connections on specific ports. The firewall in Windows XP SP2 was substantially improved. For example, to let users with multiple computers in the home share files and print documents, the old firewall required the file and print ports (TCP 139 and 445) to be open to the Internet; Windows XP SP2 added an option to open a port only on the local subnet. The other change in Windows XP SP2, and by far the most important, is that the firewall is enabled by default. Windows Vista adds two other improvements. The first is that the firewall is a fully integrated component of the rewritten TCP/IP networking stack. Second, the firewall supports optionally blocking outbound connections, though this can easily be circumvented.

84 Buffer Overrun Defenses
many compromises exploit buffer overruns Windows Vista has "Stack-Based Buffer Overrun Detection (/GS)" enabled by default source code compiled with the special /GS option does not affect every function; only those with at least 4 bytes of contiguous stack data and that take a pointer or buffer as an argument defends against the "classic stack smash" Many computers are compromised through attacks that take advantage of buffer overrun bugs, as we discussed in Chapter 11. Windows Vista includes "Stack-Based Buffer Overrun Detection (/GS)" enabled by default. The source code for Windows XP SP2 is compiled with a special compiler option in Microsoft Visual C to add defenses to the function's stack. The compiler switch is /GS, and it is usable by anyone with access to a Visual C compiler. This compiler option does not affect every function; it affects only functions that have at least 4 bytes of contiguous stack data and only when the function takes a pointer or buffer as an argument. Note that the stack data must be of a "classic stack smash" type, such as a char array.
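For illustration only, the fragment below shows the kind of function /GS is designed to protect: it has a contiguous stack buffer and takes a pointer argument, and the unchecked strcpy is the classic stack smash. Compiled with /GS (on by default in recent Visual C++ versions), the compiler inserts a random cookie before the saved return address and halts the program if the cookie is corrupted when the function returns.

```c
/* Illustrative only: a deliberately unsafe function of the type /GS protects. */
#include <string.h>
#include <stdio.h>

static void copy_name(const char *untrusted)   /* takes a pointer argument   */
{
    char name[16];                              /* >= 4 bytes of stack buffer */

    /* Unsafe: no bounds check, so a long input overruns name[] and walks
     * toward the saved return address -- exactly what the /GS cookie detects. */
    strcpy(name, untrusted);
    printf("hello, %s\n", name);
}

int main(int argc, char **argv)
{
    copy_name(argc > 1 ? argv[1] : "world");
    return 0;
}
```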

85 Windows Stack and /GS flag
Normally in Windows, a function’s stack looks like Figure 24.2a. When the function returns, it must continue execution at the next instruction after the instruction that called this function. The CPU does this by taking the values off the stack (called popping) and populating the EBP and EIP registers. If the attacker can overflow the buffer on the stack, he can overrun the data used to populate the EBP and EIP registers with values under his control and hence change the application’s execution flow. Once the code is compiled with the /GS option, the stack is laid out as shown in Figure 24.2b. Now a cookie has been inserted between stack data and the function return address. This random value is checked when the function exits, and if the cookie is corrupted, the application is halted. You will also notice that buffers on the stack are placed in higher memory than non-buffers, such as function pointers, C objects, and scalar values. The reason for this is to make it harder for some attacks to succeed. Function pointers and C objects with virtual destructors (which are simply function pointers) are also subject to attack because they determine execution flow. If these constructs are placed in memory higher than buffers, then overflowing a buffer could corrupt a function pointer, for example. By switching the order around, the attacker must take advantage of a buffer underrun, which is rarer, to successfully corrupt the function pointer. There are other, rarer, variants of the buffer overrun that can still corrupt a function pointer, such as corrupting a stack frame in higher memory.

86 Buffer Overrun Defenses
No eXecute (NX) / Data Execution Prevention (DEP) / eXecute Disable (XD) prevent code executing in data segments as commonly used by buffer overrun exploits applications linked with the /NXCOMPAT option Stack Randomization (Vista only) randomizes thread stack base addresses Heap-based buffer overrun defenses: add and check a random value on each heap block heap integrity checking heap randomization (Vista only) No eXecute (NX) / Data Execution Prevention (DEP) / eXecute Disable (XD) use CPU support to prevent code from executing in data segments. Most modern Intel CPUs, and all current AMD CPUs, support it. DEP support was first introduced in Windows XP SP2 and is a critically important defense in Windows Vista, especially when used with address space layout randomization (ASLR). The goal of NX is to prevent data from executing. Most buffer overrun exploits enter a computer system as data, and then those data are executed. By default, most system components in Windows Vista use NX, and applications can use NX by linking with the /NXCOMPAT linker option. The Stack Randomization defense is available only in Windows Vista and later. When a thread starts in Windows Vista, the operating system will randomize the stack base address by 0–31 (4k size) pages. Once the page is chosen, a random offset is chosen within the page, and the stack starts from that spot. The purpose of randomization is to remove some of the predictability from the attacker, making attacks harder. Heap-based buffer overruns are also exploitable and can lead to code execution. The first heap defense, added to Windows XP SP2, is to add a random value (a cookie) to each heap block and detect whether this cookie has been tampered with. If the cookie has changed, then the heap has been corrupted and the application can be forced to crash (fail-stop). The second defense is heap integrity checking: when heap blocks are freed, metadata in the heap data structures are checked for validity, and if the data are compromised, either the heap block is leaked or the application crashes. Windows Vista also adds heap randomization. When a heap is created, the start of the heap is offset by 0–4 MB. Again, this makes things harder for the attacker.
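Beyond linking with /NXCOMPAT, a 32-bit application can also request DEP for itself at run time, as the minimal sketch below shows; the call is documented from Windows XP SP3 onward, and it fails harmlessly where DEP is already permanently enabled or the policy is locked by the system.

```c
/* Minimal sketch: opt the current (32-bit) process into DEP at run time. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Enable DEP permanently for this process. */
    if (SetProcessDEPPolicy(PROCESS_DEP_ENABLE))
        printf("DEP enabled for this process\n");
    else
        printf("SetProcessDEPPolicy failed: %lu\n", GetLastError());
    return 0;
}
```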

87 Other Defenses Image Randomization Service Restart Policy
O/S boots in one of 256 configurations makes O/S less predictable for attackers Service Restart Policy services can be configured to restart if they fail great for reliability but lousy for security Vista sets some critical services so they can only restart twice, then manual restart needed gives attacker only two attempts Encrypted File System (EFS), BitLocker Windows Vista also adds image randomization. When the operating system boots, it starts up in one of 256 configurations. In other words, the entire operating system is shifted up or down in memory when it is booted. The best way to think of this is to imagine that a random number is selected at boot, and every operating system component is loaded as an offset from that location, but the offset between each component is fixed. Again, this makes the operating system less predictable for attackers and makes it less likely that an exploit will succeed. In Windows, a service can be configured to restart if the service fails. This is great for reliability but lousy for security, because if an attacker attacks the service and the attack fails but the service crashes, the service might restart and the attacker will have another chance to attack the system. In Windows Vista, Microsoft set some of the critical services to restart only twice, after which the service will not restart unless the administrator manually restarts it. This gives the attacker only two attempts to get the attack to work, and in the face of stack, heap, and image randomization, the server is much more difficult to attack successfully.

88 Browser Defenses web browser is a key point of attack
via script code, graphics, helper objects Microsoft added many defenses to IE7 onwards ActiveX opt-in unloads ActiveX controls by default; when one is first run, the user is prompted to confirm protected mode IE runs at low integrity level (see earlier) so it is more difficult for malware to manipulate the O/S A web browser is a key point of attack: it not only renders web pages, but can also run code in the form of scripting languages such as JavaScript, ActiveX controls, Flash, Java applets, or .NET applications. Mixing code and data is bad for security. Web browsers can also render various multimedia objects such as sound, JPEG, BMP, GIF, animated GIF, and PNG files. Many file formats are rendered by helper objects, called MIME handlers, e.g. QuickTime, Windows Media Player, or Real Player. A malicious Web page could take advantage of many possible attack vectors; some vectors are under the direct control of the browser and some are not. With this in mind, Microsoft decided to add many defenses to Internet Explorer 7, especially the browser version in Windows Vista. Perhaps the most important single defense is ActiveX opt-in. An ActiveX control is a binary object that can potentially be invoked by the Web browser using the <OBJECT> HTML tag or by calling the object directly from script. Many common Web browser extensions are implemented as ActiveX controls; probably the most well known is Adobe Flash. It is possible for ActiveX controls to be malicious, and chances are very good that a user already has one or more ActiveX controls installed on his or her computer. Internet Explorer 7 adds a new feature called "ActiveX opt-in" which essentially unloads ActiveX controls by default, and when a control is used for the first time, the user is prompted to allow the control to run. At this point, the user knows that the control is on the computer. Another important defense on Windows Vista is protected mode. When this default configuration is used, Internet Explorer runs at low integrity level, making it more difficult for malware to manipulate the operating system. See Section 24.1 for a discussion of integrity levels in Windows Vista.

89 The End

