Presentation on theme: "Operating System Security"— Presentation transcript:
1. Operating System Security
Chapter 12: Operating System Security

Computer client and server systems are central components of the IT infrastructure for most organizations. The client systems provide access to organizational data and applications, supported by the servers housing those data and applications. However, given that most large software systems will almost certainly have a number of security weaknesses, as we discussed in Chapter 6 and in the previous two chapters, it is currently necessary to manage the installation and continuing operation of these systems to provide appropriate levels of security despite the expected presence of these vulnerabilities. In some circumstances we may be able to use systems designed and evaluated to provide security by design. We examine some of these possibilities in the next chapter.

In this chapter we discuss how to provide systems security as a hardening process that includes planning, installation, configuration, update, and maintenance of the operating system and the key applications in use, following the general approach detailed in [NIST08]. We consider this process for the operating system, then for key applications in general, and then discuss some specific aspects in relation to Linux and Windows systems in particular. We conclude with a discussion of securing virtualized systems, where multiple virtual machines may execute on the one physical system.
2. Operating System

• each layer of code needs measures in place to provide appropriate security services
• each layer is vulnerable to attack from below if the lower layers are not secured appropriately

We view a system as having a number of layers, with the physical hardware at the bottom; the base operating system above, including privileged kernel code, APIs, and services; and finally user applications and utilities in the top layer, as shown in the accompanying figure. This figure also shows the presence of BIOS and possibly other code that is external to, and largely not visible from, the operating system kernel, but which is used when booting the system or to support low-level hardware control. Each of these layers of code needs appropriate hardening measures in place to provide appropriate security services. And each layer is vulnerable to attack from below, should the lower layers not also be secured appropriately.
3. Measures

• the 2010 Australian Defense Signals Directorate (DSD) list of the "Top 35 Mitigation Strategies"
• over 70% of the targeted cyber intrusions investigated by DSD in 2009 could have been prevented
• the top four measures for prevention are:
1. patch operating systems and applications using auto-update
2. patch third-party applications
3. restrict admin privileges to users who need them
4. white-list approved applications

A number of reports note that the use of a small number of basic hardening measures can prevent a large proportion of the attacks seen in recent years. The 2010 Australian Defense Signals Directorate (DSD) list of the "Top 35 Mitigation Strategies" notes that implementing just the top four of these would have prevented over 70% of the targeted cyber intrusions investigated by DSD in 2009. These top four measures are:
1. patch operating systems and applications using auto-update
2. patch third-party applications
3. restrict admin privileges to users who need them
4. white-list approved applications
We discuss all four of these measures, and many others in the DSD list, in this chapter. Note that these measures largely align with those in the "20 Critical Controls" developed by DHS, NSA, the Department of Energy, SANS, and others in the United States.
4. Operating System Security

• possible for a system to be compromised during the installation process before it can install the latest patches
• building and deploying a system should be a planned process designed to counter this threat
• process must:
• assess risks and plan the system deployment
• secure the underlying operating system and then the key applications
• ensure any critical content is secured
• ensure appropriate network protection mechanisms are used
• ensure appropriate processes are used to maintain security

As we noted above, computer client and server systems are central components of the IT infrastructure for most organizations, may hold critical data and applications, and are a necessary tool for the function of an organization. Accordingly, we need to be aware of the expected presence of vulnerabilities in operating systems and applications as distributed, and the existence of worms scanning for such vulnerabilities at high rates, as we discussed earlier. Thus, it is quite possible for a system to be compromised during the installation process before it can install the latest patches or implement other hardening measures. Hence building and deploying a system should be a planned process designed to counter such a threat, and to maintain security during its operational lifetime.

[NIST08] states that this process must:
• assess risks and plan the system deployment
• secure the underlying operating system and then the key applications
• ensure any critical content is secured
• ensure appropriate network protection mechanisms are used
• ensure appropriate processes are used to maintain security
While we address the selection of network protection mechanisms in Chapter 9, we examine the other items in the rest of this chapter.
5. System Security Planning Process

• the purpose of the system, the type of information stored, the applications and services provided, and their security requirements
• the categories of users of the system, the privileges they have, and the types of information they can access
• how the users are authenticated
• how access to the information stored on the system is managed
• what access the system has to information stored on other hosts, such as file or database servers, and how this is managed
• who will administer the system, and how they will manage the system (via local or remote access)
• any additional security measures required on the system, including the use of host firewalls, anti-virus or other malware protection mechanisms, and logging

[NIST08] provides a list of items that should be considered during the system security planning process. While its focus is on secure server deployment, much of the list applies equally well to client system design. This list includes consideration of:
• the purpose of the system, the type of information stored, the applications and services provided, and their security requirements
• the categories of users of the system, the privileges they have, and the types of information they can access
• how the users are authenticated
• how access to the information stored on the system is managed
• what access the system has to information stored on other hosts, such as file or database servers, and how this is managed
• who will administer the system, and how they will manage the system (via local or remote access)
• any additional security measures required on the system, including the use of host firewalls, anti-virus or other malware protection mechanisms, and logging
6. Operating Systems Hardening

• first critical step in securing a system is to secure the base operating system
• basic steps:
• install and patch the operating system
• harden and configure the operating system to adequately address the identified security needs of the system
• install and configure additional security controls, such as anti-virus, host-based firewalls, and intrusion detection systems (IDS)
• test the security of the basic operating system to ensure that the steps taken adequately address its security needs

The first critical step in securing a system is to secure the base operating system upon which all other applications and services rely. A good security foundation needs a properly installed, patched, and configured operating system. Unfortunately, the default configuration for many operating systems often maximizes ease of use and functionality, rather than security. Further, since every organization has its own security needs, the appropriate security profile, and hence configuration, will also differ. What is required for a particular system should be identified during the planning phase, as we have just discussed.

While the details of how to secure each specific operating system differ, the broad approach is similar. Appropriate security configuration guides and checklists exist for most common operating systems, and these should be consulted, though always informed by the specific needs of each organization and their systems.
In some cases, automated tools may be available to further assist in securing the system configuration.

[NIST08] suggests the following basic steps should be used to secure an operating system:
• install and patch the operating system
• harden and configure the operating system to adequately address the identified security needs of the system by:
• removing unnecessary services, applications, and protocols
• configuring users, groups, and permissions
• configuring resource controls
• install and configure additional security controls, such as anti-virus, host-based firewalls, and intrusion detection systems (IDS), if needed
• test the security of the basic operating system to ensure that the steps taken adequately address its security needs
7. Initial Setup and Patching

• system security begins with the installation of the operating system
• ideally new systems should be constructed on a protected network
• full installation and hardening process should occur before the system is deployed to its intended location
• initial installation should install the minimum necessary for the desired system
• overall boot process must also be secured
• the integrity and source of any additional device driver code must be carefully validated
• critical that the system be kept up to date, with all critical security related patches installed
• should stage and validate all patches on test systems before deploying them in production

System security begins with the installation of the operating system. As we have already noted, a network connected, unpatched system is vulnerable to exploit during its installation or continued use. Hence it is important that the system not be exposed while in this vulnerable state. Ideally new systems should be constructed on a protected network. This may be a completely isolated network, with the operating system image and all available patches transferred to it using removable media such as DVDs or USB drives. Given the existence of malware that can propagate using removable media, as we discuss in Chapter 6, care is needed to ensure the media used here is not itself infected. Alternatively, a network with severely restricted access to the wider Internet may be used. Ideally it should have no inbound access, and have outbound access only to the key sites needed for the system installation and patching process. In either case, the full installation and hardening process should occur before the system is deployed to its intended, more accessible, and hence vulnerable, location.

The initial installation should install the minimum necessary for the desired system, with additional software packages included only if they are required for the function of the system.
We explore the rationale for minimizing the number of packages on the system shortly.

The overall boot process must also be secured. This may require adjusting options on, or specifying a password required for changes to, the BIOS code used when the system initially boots. It may also require limiting which media the system is normally permitted to boot from. This is necessary to prevent an attacker from changing the boot process to install a covert hypervisor, such as we discussed in Section 6.8, or to just boot a system of their choice from external media in order to bypass the normal system access controls on locally stored data. The use of a cryptographic file system may also be used to address this threat, as we note later.

Care is also required with the selection and installation of any additional device driver code, since this executes with full kernel level privileges, but is often supplied by a third party. The integrity and source of such driver code must be carefully validated given the high level of trust it has. A malicious driver can potentially bypass many security controls to install malware. This was done in both the Blue Pill demonstration rootkit, which we discussed in Section 6.8, and the Stuxnet worm, which we described in Section 6.3.

Given the continuing discovery of software and other vulnerabilities for commonly used operating systems and applications, it is critical that the system be kept as up to date as possible, with all critical security related patches installed. Indeed, doing this addresses the top two of the four key DSD mitigation strategies we listed previously. Nearly all commonly used systems now provide utilities that can automatically download and install security updates.
These tools should be configured and used to minimize the time any system is vulnerable to weaknesses for which patches are available.

Note that on change-controlled systems, you should not run automatic updates, because security patches can, on rare but significant occasions, introduce instability. For systems on which availability and uptime are of paramount importance, therefore, you should stage and validate all patches on test systems before deploying them in production.
8. Remove Unnecessary Services, Applications, Protocols

• when performing the initial installation the supplied defaults should not be used
• default configuration is set to maximize ease of use and functionality rather than security
• if additional packages are needed later they can be installed when they are required
• if fewer software packages are available to run, the risk is reduced
• system planning process should identify what is actually required for a given system

Because any of the software packages running on a system may contain software vulnerabilities, clearly if fewer software packages are available to run, then the risk is reduced. There is clearly a balance between usability, providing all software that may be required at some time, and security, with a desire to limit the amount of software installed. The range of services, applications, and protocols required will vary widely between organizations, and indeed between systems within an organization. The system planning process should identify what is actually required for a given system, so that a suitable level of functionality is provided, while eliminating software that is not required in order to improve security.

The default configuration for most distributed systems is set to maximize ease of use and functionality, rather than security. When performing the initial installation, the supplied defaults should not be used, but rather the installation should be customized so that only the required packages are installed. If additional packages are needed later, they can be installed when they are required. [NIST08] and many of the security hardening guides provide lists of services, applications, and protocols that should not be installed if not required.

[NIST08] also states a strong preference for not installing unwanted software, rather than installing and then later removing or disabling it. It argues this preference because many uninstall scripts fail to completely remove all components of a package.
They also note that disabling a service means that while it is not available as an initial point of attack, should an attacker succeed in gaining some access to a system, then disabled software could be re-enabled and used to further compromise a system. It is better for security if unwanted software is not installed, and thus not available for use at all.
9. Configure Users, Groups, and Authentication

• system planning process should consider:
• categories of users on the system
• privileges they have
• types of information they can access
• how and where they are defined and authenticated
• default accounts included as part of the system installation should be secured
• those that are not required should be either removed or disabled
• policies that apply to authentication credentials configured
• not all users with access to a system will have the same access to all data and resources on that system
• elevated privileges should be restricted to only those users that require them, and then only when they are needed to perform a task

Not all users with access to a system will have the same access to all data and resources on that system. All modern operating systems implement access controls to data and resources, as we discuss in Chapter 4. Nearly all provide some form of discretionary access controls. Some systems may provide role-based or mandatory access control mechanisms as well.

The system planning process should consider the categories of users on the system, the privileges they have, the types of information they can access, and how and where they are defined and authenticated. Some users will have elevated privileges to administer the system; others will be normal users, sharing appropriate access to files and other data as required; and there may even be guest accounts with very limited access. The third of the four key DSD mitigation strategies is to restrict elevated privileges to only those users that require them. Further, it is highly desirable that such users only access elevated privileges when needed to perform some task that requires them, and otherwise access the system as a normal user.
This improves security by providing a smaller window of opportunity for an attacker to exploit the actions of such privileged users. Some operating systems provide special tools or access mechanisms to assist administrative users to elevate their privileges only when necessary, and to appropriately log these actions.

One key decision is whether the users, the groups they belong to, and their authentication methods are specified locally on the system or will use a centralized authentication server. Whichever is chosen, the appropriate details are now configured on the system.

Also at this stage, any default accounts included as part of the system installation should be secured. Those which are not required should be either removed or at least disabled. System accounts that manage services on the system should be set so they cannot be used for interactive logins. And any passwords installed by default should be changed to new values with appropriate security.

Any policy that applies to authentication credentials, and especially to password security, is also configured. This includes details of which authentication methods are accepted for different methods of account access. And it includes details of the required length, complexity, and age allowed for passwords. We discuss some of these issues in Chapter 3.
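The length and complexity requirements mentioned above are normally enforced by the platform's authentication stack (PAM modules on Unix-like systems, for example). As a rough illustration of the kind of checks involved, here is a minimal sketch; the function name, thresholds, and rules are our own illustrative assumptions, not values prescribed by [NIST08]:

```python
import re

def check_password_policy(password, min_length=12):
    """Return a list of policy violations for a candidate password.

    The rules below are illustrative assumptions only; a real deployment
    would take its policy from the organization's planning process.
    """
    problems = []
    if len(password) < min_length:
        problems.append("too short")
    if not re.search(r"[a-z]", password):
        problems.append("no lowercase letter")
    if not re.search(r"[A-Z]", password):
        problems.append("no uppercase letter")
    if not re.search(r"[0-9]", password):
        problems.append("no digit")
    return problems  # empty list means the password passes
```

A real policy would also cover password age and reuse, which require stored history rather than a pure function of the candidate password.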
10. Install and Configure Additional Security Controls

• once the users and groups are defined, appropriate permissions can be set on data and resources
• many of the security hardening guides provide lists of recommended changes to the default access configuration
• further security is possible by installing and configuring additional security tools:
• anti-virus software
• host-based firewalls
• IDS or IPS software
• application white-listing

Once the users and their associated groups are defined, appropriate permissions can be set on data and resources to match the specified policy. This may be to limit which users can execute some programs, especially those that modify the system state. Or it may be to limit which users can read or write data in certain directory trees. Many of the security hardening guides provide lists of recommended changes to the default access configuration to improve security.

Further security improvement may be possible by installing and configuring additional security tools such as anti-virus software, host-based firewalls, IDS or IPS software, or application white-listing. Some of these may be supplied as part of the operating system installation, but not configured and enabled by default. Others are third-party products that are acquired and used.

Given the widespread prevalence of malware, as we discuss in Chapter 6, appropriate anti-virus (which as noted addresses a wide range of malware types) is a critical security component on many systems. Anti-virus products have traditionally been used on Windows systems, since their high use made them a preferred target for attackers. However, the growth in other platforms, particularly smartphones, has led to more malware being developed for them. Hence appropriate anti-virus products should be considered for any system as part of its security profile.

Host-based firewalls, IDS, and IPS software also may improve security by limiting remote network access to services on the system.
If remote access to a service is not required, though some local access is, then such restrictions help secure such services from remote exploit by an attacker. Firewalls are traditionally configured to limit access by port or protocol, from some or all external systems. Some may also be configured to allow access from or to specific programs on the systems, to further restrict the points of attack, and to prevent an attacker installing and accessing their own malware. IDS and IPS software may include additional mechanisms such as traffic monitoring, or file integrity checking, to identify and even respond to some types of attack.

Another additional control is to white-list applications. This limits the programs that can execute on the system to just those in an explicit list. Such a tool can prevent an attacker installing and running their own malware, and was the last of the four key DSD mitigation strategies. While this will improve security, it functions best in an environment with a predictable set of applications that users require. Any change in software usage would require a change in the configuration, which may result in increased IT support demands. Not all organizations or all systems will be sufficiently predictable to suit this type of control.
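White-listing tools typically identify approved programs by path, publisher signature, or cryptographic hash. The hash-based idea can be sketched in a few lines; the function names here are our own, not the API of any real product:

```python
import hashlib

def sha256_file(path):
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def may_execute(path, allowed_hashes):
    """Permit execution only if the file's hash is on the white-list."""
    return sha256_file(path) in allowed_hashes
```

This also illustrates why white-listing suits predictable environments: every legitimate software update changes the executable's hash and so requires a configuration change.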
11. Test the System Security

• suitable checklists are included in security hardening guides
• there are programs specifically designed to:
• review a system to ensure that it meets the basic security requirements
• scan for known vulnerabilities and poor configuration practices
• should be done following the initial hardening of the system
• repeated periodically as part of the security maintenance process
• final step in the process of initially securing the base operating system is security testing
• goal:
• ensure the previous security configuration steps are correctly implemented
• identify any possible vulnerabilities

The final step in the process of initially securing the base operating system is security testing. The goal is to ensure that the previous security configuration steps are correctly implemented, and to identify any possible vulnerabilities that must be corrected or managed.

Suitable checklists are included in many security hardening guides. There are also programs specifically designed to review a system to ensure that it meets the basic security requirements, and to scan for known vulnerabilities and poor configuration practices. This should be done following the initial hardening of the system, and then repeated periodically as part of the security maintenance process.
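At their simplest, such scanning programs probe for listening network services that the security profile says should not be present. A bare-bones sketch of that idea, as a TCP connect scan of the local host (the function name and timeout are our own choices; real scanners do far more, including version and vulnerability checks):

```python
import socket

def scan_open_ports(ports, host="127.0.0.1", timeout=0.2):
    """Return the subset of `ports` accepting TCP connections on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports
```

Comparing the result against the list of services approved during the planning stage flags anything unexpected for investigation.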
12. Application Configuration

• may include:
• creating and specifying appropriate data storage areas for the application
• making appropriate changes to the application or service default configuration details
• some applications or services may include:
• default data
• scripts
• user accounts
• of particular concern with remotely accessed services such as Web and file transfer services
• risk from this form of attack is reduced by ensuring that most of the files can only be read, but not written, by the server

Any application specific configuration is then performed. This may include creating and specifying appropriate data storage areas for the application, and making appropriate changes to the application or service default configuration details.

Some applications or services may include default data, scripts, or user accounts. These should be reviewed, and only retained if required, and suitably secured. A well-known example of this is found with Web servers, which often include a number of example scripts, quite a few of which are known to be insecure. These should not be used as supplied.

As part of the configuration process, careful consideration should be given to the access rights granted to the application. Again, this is of particular concern with remotely accessed services, such as Web and file transfer services. The server application should not be granted the right to modify files, unless that function is specifically required. A very common configuration fault seen with Web and file transfer servers is for all the files supplied by the service to be owned by the same "user" account that the server executes as. The consequence is that any attacker able to exploit some vulnerability in either the server software or a script executed by the server may be able to modify any of these files. The large number of "Web defacement" attacks is clear evidence of this type of insecure configuration.
Much of the risk from this form of attack is reduced by ensuring that most of the files can only be read, but not written, by the server. Only those files that need to be modified, to store uploaded form data for example, or logging details, should be writeable by the server. Instead the files should mostly be owned and modified by the users on the system who are responsible for maintaining the information.
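A simple audit script can catch this mis-configuration by walking the served tree and flagging writable content. The sketch below uses only the standard library and, as a simplification of a full ownership check, flags files writable by group or other (assumes a Unix-style permission model; names are our own):

```python
import os
import stat

def writable_files(root):
    """List files under `root` that are writable by group or other."""
    risky = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            mode = os.stat(path).st_mode
            if mode & (stat.S_IWGRP | stat.S_IWOTH):
                risky.append(path)
    return sorted(risky)
```

Anything this reports, other than the few upload or log locations that genuinely must be writable, is a candidate for tightening.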
13. Encryption Technology

• is a key enabling technology that may be used to secure data both in transit and when stored
• must be configured and appropriate cryptographic keys created, signed, and secured
• if secure network services are provided using TLS or IPsec, suitable public and private keys must be generated for each of them
• if secure network services are provided using SSH, appropriate server and client keys must be created
• cryptographic file systems are another use of encryption

Encryption is a key enabling technology that may be used to secure data both in transit and when stored, as we discuss in Chapter 2 and in Parts Four and Five. If such technologies are required for the system, then they must be configured, and appropriate cryptographic keys created, signed, and secured.

If secure network services are provided, most likely using either TLS or IPsec, then suitable public and private keys must be generated for each of them. Then X.509 certificates are created and signed by a suitable certificate authority, linking each service identity with the public key in use, as we discuss elsewhere. If secure remote access is provided using Secure Shell (SSH), then appropriate server, and possibly client, keys must be created.

Cryptographic file systems are another use of encryption. If desired, then these must be created and secured with suitable keys.
14. Security Maintenance

• process of maintaining security is continuous
• security maintenance includes:
• monitoring and analyzing logging information
• performing regular backups
• recovering from security compromises
• regularly testing system security
• using appropriate software maintenance processes to patch and update all critical software, and to monitor and revise configuration as needed

Once the system is appropriately built, secured, and deployed, the process of maintaining security is continuous. This results from the constantly changing environment, the discovery of new vulnerabilities, and hence exposure to new threats. [NIST08] suggests that this process of security maintenance includes the following additional steps:
• monitoring and analyzing logging information
• performing regular backups
• recovering from security compromises
• regularly testing system security
• using appropriate software maintenance processes to patch and update all critical software, and to monitor and revise configuration as needed
We have already noted the need to configure automatic patching and update where possible, or to have a process to manually test and install patches on configuration controlled systems, and that the system should be regularly tested using checklists or automated tools where possible. We discuss the process of incident response elsewhere. We now consider the critical logging and backup procedures.
15. Logging

• can only inform you about bad things that have already happened
• in the event of a system breach or failure, system administrators can more quickly identify what happened
• key is to ensure you capture the correct data and then appropriately monitor and analyze this data
• information can be generated by the system, network, and applications
• range of data acquired should be determined during the system planning stage
• generates significant volumes of information and it is important that sufficient space is allocated for it
• automated analysis is preferred

[NIST08] notes that "logging is a cornerstone of a sound security posture." Logging is a reactive control that can only inform you about bad things that have already happened. But effective logging helps ensure that in the event of a system breach or failure, system administrators can more quickly and accurately identify what happened and thus most effectively focus their remediation and recovery efforts. The key is to ensure you capture the correct data in the logs, and are then able to appropriately monitor and analyze this data. Logging information can be generated by the system, network, and applications. The range of logging data acquired should be determined during the system planning stage, as it depends on the security requirements and information sensitivity of the server.

Logging can generate significant volumes of information. It is important that sufficient space is allocated for it. A suitable automatic log rotation and archive system should also be configured to assist in managing the overall size of the logging information.

Manual analysis of logs is tedious and is not a reliable means of detecting adverse events. Rather, some form of automated analysis is preferred, as it is more likely to identify abnormal activity.

We discuss the process of logging further in Chapter 18.
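As a small example of the automated analysis recommended above, the sketch below counts repeated authentication failures per source address in OpenSSH-style log lines; the threshold and function name are our own choices, and a production tool would handle many more record formats:

```python
import re
from collections import Counter

# Matches OpenSSH "Failed password" records, capturing the source address.
FAILED_LOGIN = re.compile(r"Failed password for (?:invalid user )?\S+ from (\S+)")

def suspicious_sources(log_lines, threshold=3):
    """Map source IP -> failure count, for IPs with at least `threshold` failures."""
    counts = Counter()
    for line in log_lines:
        match = FAILED_LOGIN.search(line)
        if match:
            counts[match.group(1)] += 1
    return {ip: n for ip, n in counts.items() if n >= threshold}
```

Tools such as fail2ban apply this general idea continuously, feeding the results back into firewall rules.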
16. Data Backup and Archive

• performing regular backups of data is a critical control that assists with maintaining the integrity of the system and user data
• may be legal or operational requirements for the retention of data
• backup: the process of making copies of data at regular intervals
• archive: the process of retaining copies of data over extended periods of time in order to meet legal and operational requirements to access past data
• needs and policy relating to backup and archive should be determined during the system planning stage
• kept online or offline
• stored locally or transported to a remote site
• trade-offs include ease of implementation and cost versus greater security and robustness against different threats

Performing regular backups of data on a system is another critical control that assists with maintaining the integrity of the system and user data. There are many reasons why data can be lost from a system, including hardware or software failures, or accidental or deliberate corruption. There may also be legal or operational requirements for the retention of data. Backup is the process of making copies of data at regular intervals, allowing the recovery of lost or corrupted data over relatively short time periods of a few hours to some weeks. Archive is the process of retaining copies of data over extended periods of time, being months or years, in order to meet legal and operational requirements to access past data. These processes are often linked and managed together, although they do address distinct needs.

The needs and policy relating to backup and archive should be determined during the system planning stage. Key decisions include whether the backup copies are kept online or offline, and whether copies are stored locally or transported to a remote site.
The trade-offs include ease of implementation and cost versus greater security and robustness against different threats. A good example of the consequences of poor choices here was seen in the attack on an Australian hosting provider. The attackers destroyed not only the live copies of thousands of customers’ sites, but also all of the online backup copies. As a result, many customers who had not kept their own backup copies lost all of their site content and data, with serious consequences for many of them, and for the hosting provider as well. In other examples, many organizations that only retained onsite backups have lost all their data as a result of fire or flooding in their IT center. These risks must be appropriately evaluated.
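As a concrete illustration of the backup process described above, the sketch below archives a directory tree with tar and then verifies that the archive is readable. The paths are created under mktemp purely so the example is self-contained; in practice the source would be real data and the destination a dedicated, ideally remote, volume.

```shell
# Throwaway demo tree; a real backup would target real data directories.
WORK=$(mktemp -d)
SRC="$WORK/data"
DEST="$WORK/backup"
mkdir -p "$SRC" "$DEST"
echo "example content" > "$SRC/report.txt"

# Take a dated snapshot of the tree.
STAMP=$(date +%Y%m%d)
ARCHIVE="$DEST/data-$STAMP.tar.gz"
tar -czf "$ARCHIVE" -C "$WORK" data

# Never trust a backup you have not verified: list the archive contents.
tar -tzf "$ARCHIVE" > /dev/null && echo "backup verified"
```

A real deployment would run such a script on a schedule, rotate old archives, and copy them offsite so the data survives local fire or flood, per the trade-offs discussed above.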
17 Linux/Unix Security
- patch management: keeping security patches up to date is a widely recognized and critical control for maintaining security
- application and service configuration: most commonly implemented using separate text files for each application and service
- generally located either in the /etc directory or in the installation tree for a specific application
- individual user configurations that can override the system defaults are located in hidden “dot” files in each user’s home directory
- most important changes needed to improve system security are to disable services and applications that are not required

Ensuring that system and application code is kept up to date with security patches is a widely recognized and critical control for maintaining security. Modern Unix and Linux distributions typically include tools for automatically downloading and installing software updates, including security updates, which can minimize the time a system is vulnerable to known vulnerabilities for which patches exist. For example, Red Hat, Fedora, and CentOS include up2date or yum; SuSE includes yast; and Debian uses apt-get, though you must run it as a cron job for automatic updates. It is important to configure whichever update tool is provided on the distribution in use to install at least critical security patches in a timely manner.

As noted earlier, change-controlled systems should not run automatic updates, because these may possibly introduce instability. Such systems should validate all patches on test systems before deploying them to production systems.

Configuration of applications and services on Unix and Linux systems is most commonly implemented using separate text files for each application and service. System-wide configuration details are generally located either in the /etc directory or in the installation tree for a specific application.
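The text notes that on Debian-style systems apt-get must be scheduled via cron for automatic updates. One common arrangement is sketched below; the script name is a hypothetical example, the script requires root, and change-controlled systems should not enable it, validating patches on test systems instead.

```shell
#!/bin/sh
# Hypothetical /etc/cron.daily/apt-security-update (untested sketch; run as root).
# Refresh the package lists, then install available upgrades unattended.
apt-get update -qq
apt-get upgrade -y -qq
```

Equivalent scheduling facilities exist for yum- and yast-based distributions, and whichever is used should at minimum apply critical security patches in a timely manner.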
Where appropriate, individual user configurations that can override the system defaults are located in hidden “dot” files in each user’s home directory. The name, format, and usage of these files are very much dependent on the particular system version and applications in use. Hence the system administrators responsible for the secure configuration of such a system must be suitably trained and familiar with them.

Traditionally, these files were individually edited using a text editor, with any changes made taking effect either when the system was next rebooted or when the relevant process was sent a signal indicating that it should reload its configuration settings. Current systems often provide a GUI interface to these configuration files to ease management for novice administrators. Using such a manager may be appropriate for small sites with a limited number of systems. Organizations with larger numbers of systems may instead employ some form of centralized management, with a central repository of critical configuration files that can be automatically customized and distributed to the systems they manage.

The most important changes needed to improve system security are to disable services, especially remotely accessible services, and applications that are not required, and to then ensure that the applications and services that are needed are appropriately configured, following the relevant security guidance for each. We provide further details on this later in the chapter.
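On systemd-based Linux distributions, disabling unneeded services as described above is typically done along the following lines; the service names are illustrative only, and which services are actually unnecessary depends on the role determined for the system during planning.

```shell
# Stop and disable services not required by this system's role (run as root).
systemctl disable --now telnet.socket
systemctl disable --now rpcbind.service

# Review what remains enabled, and which sockets are still listening.
systemctl list-unit-files --state=enabled
ss -tlnp
```

Reviewing the listening sockets after each change confirms that remotely accessible services really have been removed from the attack surface.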
18 Linux/Unix Security
- users, groups, and permissions: access is specified as granting read, write, and execute permissions to each of owner, group, and others for each resource
- guides recommend changing the access permissions for critical directories and files
- local exploit: software vulnerability that can be exploited by an attacker to gain elevated privileges
- remote exploit: software vulnerability in a network server that could be triggered by a remote attacker

As we describe in Sections 4.5 and 25.3, Unix and Linux systems implement discretionary access control to all file system resources. These include not only files and directories but devices, processes, memory, and indeed most system resources. Access is specified as granting read, write, and execute permissions to each of owner, group, and others, for each resource, as shown in the figure. These are set using the chmod command. Some systems also support extended file attributes with access control lists that provide more flexibility, by specifying these permissions for each entry in a list of users and groups. These extended access rights are typically set and displayed using the getfacl and setfacl commands. These commands can also be used to specify set user or set group permissions on the resource.

Information on user accounts and group membership is traditionally stored in the /etc/passwd and /etc/group files, though modern systems also have the ability to import these details from external repositories queried using LDAP or NIS, for example. These sources of information, and indeed any associated authentication credentials, are specified in the PAM (pluggable authentication module) configuration for the system, often using text files in the /etc/pam.d directory.

In order to partition access to information and resources on the system, users need to be assigned to appropriate groups granting them any required access.
The number of groups, and the assignments to them, should be decided during the system security planning process, and then configured in the appropriate information repository, whether locally using the configuration files in /etc, or in some centralized database. At this time, any default or generic users supplied with the system should be checked, and removed if not required. Other accounts that are required, but are not associated with a user that needs to log in, should have login capability disabled, and any associated password or authentication credential removed.

Guides to hardening Unix and Linux systems also often recommend changing the access permissions for critical directories and files, in order to further limit access to them. Programs that set user (setuid) to root or set group (setgid) to a privileged group are key targets for attackers. As we detail in Sections 4.5 and 25.3, such programs execute with superuser rights, or with access to resources belonging to the privileged group, no matter which user executes them. A software vulnerability in such a program can potentially be exploited by an attacker to gain these elevated privileges; this is known as a local exploit. A software vulnerability in a network server that could be triggered by a remote attacker is known as a remote exploit.

It is widely accepted that the number and size of setuid root programs in particular should be minimized. They cannot be eliminated, as superuser privileges are required to access some resources on the system. The programs that manage user login, and that allow network services to bind to privileged ports, are examples. However, other programs that were once setuid root for programmer convenience can function as well if made setgid to a suitable privileged group that has the necessary access to some resource. Programs to display system state, or to deliver mail, have been modified in this way.
System hardening guides may recommend further changes, and indeed the removal of some such programs that are not required on a particular system.
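The permission and setuid concepts above can be demonstrated with chmod and find. The directory below is a throwaway stand-in so the example is self-contained; on a real system, the setuid audit in the final comment would be run against the whole filesystem.

```shell
# Work in a throwaway directory so the demo is self-contained.
DEMO=$(mktemp -d)
touch "$DEMO/secret.conf" "$DEMO/helper"

chmod 600 "$DEMO/secret.conf"   # owner may read/write; group and others get nothing
chmod 4755 "$DEMO/helper"       # leading 4 = setuid bit: runs with the owner's privileges

# Audit for setuid executables. On a real system this would be:
#   find / -xdev -perm -4000 -type f
find "$DEMO" -perm -4000 -type f   # prints only the path of "helper"
```

Hardening guides typically recommend reviewing every file such an audit reports, and removing the setuid bit (or the program) wherever it is not strictly required.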
19 Linux/Unix Security
- remote access controls: several host firewall programs may be used
- most systems provide an administrative utility to select which services will be permitted to access the system
- logging and log rotation: should not assume that the default setting is necessarily appropriate

Given that remote exploits are of concern, it is important to limit access to only those services required. This function may be provided by a perimeter firewall, as we discussed in Chapter 9. However, host-based firewall or network access control mechanisms may provide additional defenses, and Unix and Linux systems support several alternatives for this.

The TCP Wrappers library and tcpd daemon provide one mechanism that network servers may use. Lightly loaded services may be “wrapped” using tcpd, which listens for connection requests on their behalf. It checks that any request is permitted by the configured policy before accepting it and invoking the server program to handle it. Requests that are rejected are logged. More complex and heavily loaded servers incorporate this functionality into their own connection management code, using the TCP Wrappers library and the same policy configuration files. These files are /etc/hosts.allow and /etc/hosts.deny, which should be set as policy requires.

There are several host firewall programs that may be used. Linux systems now primarily use the iptables program to configure the netfilter kernel module. This provides comprehensive, though complex, stateful packet filtering, monitoring, and modification capabilities. BSD-based systems (including MacOSX) typically use the ipfw program with similar, though less comprehensive, capabilities. Most systems provide an administrative utility to generate common configurations and to select which services will be permitted to access the system.
These should be used unless there are non-standard requirements, given the skill and knowledge needed to edit these configuration files directly.

Most applications can be configured to log with levels of detail ranging from “debugging” (maximum detail) to “none.” Some middle setting is usually the best choice, but you should not assume that the default setting is necessarily appropriate. In addition, many applications allow you to specify either a dedicated file to write application event data to, or a syslog facility to use when writing log data to /dev/log (see Section 25.5). If you wish to handle system logs in a consistent, centralized manner, it is usually preferable for applications to send their log data to /dev/log. Note, however, that logrotate (also discussed in Section 25.5) can be configured to rotate any logs on the system, whether written by syslogd, Syslog-NG, or individual applications.
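A default-deny TCP Wrappers policy of the kind described above might be expressed as follows; the service name and address range are illustrative assumptions, not values from the text.

```
# /etc/hosts.deny -- reject anything not explicitly allowed
ALL: ALL

# /etc/hosts.allow -- permit only what policy requires
sshd: 192.168.1.0/255.255.255.0   # example: a management subnet only
```

With this pair of files in place, tcpd (or a server linked against the TCP Wrappers library) logs and rejects every connection that does not match an allow rule.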
20 Linux/Unix Security
- chroot jail: restricts the server’s view of the file system to just a specified portion
- uses the chroot system call to confine a process by mapping the root of the filesystem to some other directory
- file directories outside the chroot jail aren’t visible or reachable
- main disadvantage is added complexity

Some network-accessible services do not require access to the full file system, but rather only need a limited set of data files and directories for their operation. FTP is a common example of such a service: it provides the ability to download files from, and upload files to, a specified directory tree. If such a server were compromised and had access to the entire system, an attacker could potentially access and compromise data elsewhere. Unix and Linux systems provide a mechanism to run such services in a chroot jail, which restricts the server’s view of the file system to just a specified portion. This is done using the chroot system call, which confines a process to some subset of the file system by mapping the root of the filesystem “/” to some other directory (e.g., /srv/ftp/public). To the “chrooted” server, everything in this chroot jail appears to actually be in / (e.g., the “real” directory /srv/ftp/public/etc/myconfigfile appears as /etc/myconfigfile in the chroot jail). Files in directories outside the chroot jail (e.g., /srv/www or /etc) aren’t visible or reachable at all.

Chrooting therefore helps contain the effects of a given server being compromised or hijacked. The main disadvantage of this method is added complexity: a number of files (including all executable libraries used by the server), directories, and devices needed must be copied into the chroot jail. Determining just what needs to go into the jail for the server to work properly can be tricky, though detailed procedures for chrooting many different applications are available. Troubleshooting a chrooted application can also be difficult.
Even if an application explicitly supports this feature, it may behave in unexpected ways when run chrooted. Note also that if the chrooted process runs as root, it can “break out” of the chroot jail with little difficulty. Still, the advantages usually far outweigh the disadvantages of chrooting network services.

System hardening guides such as those provided by the “NSA—Security Configuration Guides” include security checklists for a number of Unix and Linux distributions that may be followed.

There are also a number of commercial and open-source tools available to perform system security scanning and vulnerability testing. One of the best known is “Nessus.” This was originally an open-source tool, which was commercialized in 2005, though some limited free-use versions are available. “Tripwire” is a well-known file integrity checking tool that maintains a database of cryptographic hashes of monitored files, and scans to detect any changes, whether as a result of malicious attack or simply of accidental or incorrectly managed update. This again was originally an open-source tool, which now has both commercial and free variants available. The “Nmap” network scanner is another well-known and widely deployed assessment tool that focuses on identifying and profiling hosts on the target network, and the network services they offer.
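Populating a chroot jail as described earlier might be sketched as follows. This is an untested illustration only: it must be run as root, the ftpd binary name is a hypothetical stand-in, and a real procedure would follow the documented chrooting steps for the specific service in use.

```shell
# Illustrative sketch only (run as root); paths and server binary are assumptions.
JAIL=/srv/ftp/public
mkdir -p "$JAIL"/etc "$JAIL"/lib "$JAIL"/usr/sbin

# Copy in the server binary plus every file it expects at runtime.
cp /usr/sbin/ftpd "$JAIL/usr/sbin/"
cp /etc/myconfigfile "$JAIL/etc/"   # seen as /etc/myconfigfile inside the jail

# ldd lists the shared libraries the binary links against; each must
# also be copied into the corresponding directory under $JAIL.
ldd /usr/sbin/ftpd

# Start the server with its root remapped to the jail.
chroot "$JAIL" /usr/sbin/ftpd
```

Tracking down every library and device node the server needs is exactly the added complexity the text warns about, which is why published per-application chrooting procedures are worth following.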
21 Windows Security
- patch management: “Windows Update” and “Windows Server Update Services” assist with regular maintenance and should be used
- third-party applications also provide automatic update support
- users administration and access controls: systems implement discretionary access controls to resources
- Vista and later systems include mandatory integrity controls
- objects are labeled as being of low, medium, high, or system integrity level
- system ensures the subject’s integrity is equal or higher than the object’s level
- implements a form of the Biba Integrity model

We now consider some specific issues with the secure installation, configuration, and management of Microsoft Windows systems. These systems have for many years formed a significant portion of all “general purpose” system installations. Hence, they have been specifically targeted by attackers, and consequently security countermeasures are needed to deal with these challenges. The process of providing appropriate levels of security still follows the general outline we describe in this chapter. Beyond the general guidance in this section, we provide more detailed discussion of Windows security mechanisms later, in Chapter 26.

Again, there is a large range of resources available to assist administrators of these systems, including reports such as [SYMA07], online resources such as the “Microsoft Security Tools & Checklists,” and specific system hardening guides such as those provided by the “NSA—Security Configuration Guides.”

The “Windows Update” service and the “Windows Server Update Services” assist with the regular maintenance of Microsoft software, and should be configured and used. Many other third-party applications also provide automatic update support, and these should be enabled for selected applications.

Users and groups in Windows systems are defined with a Security ID (SID). This information may be stored and used locally, on a single system, in the Security Account Manager (SAM).
It may also be centrally managed for a group of systems belonging to a domain, with the information supplied by a central Active Directory (AD) system using the LDAP protocol. Most organizations with multiple systems will manage them using domains. These systems can also enforce common policy on users on any system in the domain. We further explore the Windows security architecture in Chapter 26.

Windows systems implement discretionary access controls to system resources such as files, shared memory, and named pipes. The access control list has a number of entries that may grant or deny access rights to a specific SID, which may be for an individual user or for some group of users. Windows Vista and later systems also include mandatory integrity controls. These label all objects, such as processes and files, and all users, as being of low, medium, high, or system integrity level. Then whenever data is written to an object, the system first ensures that the subject’s integrity is equal or higher than the object’s level. This implements a form of the Biba Integrity model we discuss in Section 13.2, one that specifically targets the issue of untrusted remote code, executing in, for example, Windows Internet Explorer, trying to modify local resources.
22 Windows Security — Users Administration and Access Controls
- Windows systems also define privileges: system wide and granted to user accounts
- combination of share and NTFS permissions may be used to provide additional security and granularity when accessing files on a shared resource
- User Account Control (UAC): provided in Vista and later systems; assists with ensuring users with administrative rights only use them when required, otherwise access the system as a normal user
- Low Privilege Service Accounts: used for long-lived service processes such as file, print, and DNS services

Windows systems also define privileges, which are system wide and granted to user accounts. Examples of privileges include the ability to back up the computer (which requires overriding the normal access controls to obtain a complete backup), or the ability to change the system time. Some privileges are considered dangerous, as an attacker may use them to damage the system; hence they must be granted with care. Others are regarded as benign, and may be granted to many or all user accounts.

As with any system, hardening the system configuration can include further limiting the rights and privileges granted to users and groups on the system. As the access control list gives deny entries greater precedence, you can set an explicit deny permission to prevent unauthorized access to some resource, even if the user is a member of a group that otherwise grants access.

When accessing files on a shared resource, a combination of share and NTFS permissions may be used to provide additional security and granularity. For example, you can grant full control to a share, but read-only access to the files within it. If access-based enumeration is enabled on shared resources, it can automatically hide any objects that a user is not permitted to read.
This is useful with shared folders containing many users’ home directories, for example.

You should also ensure users with administrative rights only use them when required, and otherwise access the system as a normal user. The User Account Control (UAC) provided in Vista and later systems assists with this requirement. These systems also provide Low Privilege Service Accounts that may be used for long-lived service processes, such as file, print, and DNS services, that do not require elevated privileges.
23 Windows Security
- application and service configuration: much of the configuration information is centralized in the Registry
- forms a database of keys and values that may be queried and interpreted by applications
- registry keys can be directly modified using the “Registry Editor”; this is more useful for making bulk changes

Unlike Unix and Linux systems, much of the configuration information in Windows systems is centralized in the Registry, which forms a database of keys and values that may be queried and interpreted by applications on these systems. Changes to these values can be made within specific applications, setting preferences in the application that are then saved in the registry using the appropriate keys and values. This approach hides the detailed representation from the administrator. Alternatively, the registry keys can be directly modified using the “Registry Editor.” This approach is more useful for making bulk changes, such as those recommended in hardening guides. These changes may also be recorded in a central repository, and pushed out whenever a user logs in to a system within a network domain.

Given the predominance of malware that targets Windows systems, it is essential that suitable anti-virus, anti-spyware, personal firewall, and other malware and attack detection and handling software packages are installed and configured on such systems. This is clearly needed for network-connected systems, as shown by the high incidence numbers in reports such as [SYMA11]. However, as the Stuxnet attacks in 2010 show, even isolated systems updated using removable media are vulnerable, and thus must also be protected.
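Bulk registry changes of the kind hardening guides recommend are often distributed as .reg files that can be imported through the Registry Editor. The sketch below illustrates only the file format; the key and value shown are an illustrative example, not a specific recommendation from the text.

```
Windows Registry Editor Version 5.00

; Illustrative hardening entry (example only)
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa]
"LimitBlankPasswordUse"=dword:00000001
```

In a domain, equivalent settings would normally be recorded centrally and pushed out via Group Policy rather than imported by hand on each machine.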
24 Windows Security
- other security controls: essential that anti-virus, anti-spyware, personal firewall, and other malware and attack detection and handling software packages are installed and configured
- current generation Windows systems include basic firewall and malware countermeasure capabilities
- important to ensure the set of products in use are compatible
- Windows systems also support a range of cryptographic functions: encrypting files and directories using the Encrypting File System (EFS); full-disk encryption with AES using BitLocker
- “Microsoft Baseline Security Analyzer”: free, easy-to-use tool that checks for compliance with Microsoft’s security recommendations

Current generation Windows systems include some basic firewall and malware countermeasure capabilities, which should certainly be used at a minimum. However, many organizations find that these should be augmented with one or more of the many commercial products available. One issue of concern is undesirable interactions between anti-virus and other products from multiple vendors. Care is needed when planning and installing such products to identify possible adverse interactions, and to ensure the set of products in use are compatible with each other.

Windows systems also support a range of cryptographic functions that may be used where desirable. These include support for encrypting files and directories using the Encrypting File System (EFS), and for full-disk encryption with AES using BitLocker.

System hardening guides such as those provided by the “NSA—Security Configuration Guides” also include security checklists for various versions of Windows. There are also a number of commercial and open-source tools available to perform system security scanning and vulnerability testing of Windows systems.
The “Microsoft Baseline Security Analyzer” is a simple, free, easy-to-use tool that aims to help small- to medium-sized businesses improve the security of their systems by checking for compliance with Microsoft’s security recommendations. Larger organizations are likely better served using one of the larger, centralized, commercial security analysis suites available.
25 Virtualization
- a technology that provides an abstraction of the resources used by some software, which runs in a simulated environment called a virtual machine (VM)
- benefits include better efficiency in the use of the physical system resources
- provides support for multiple distinct operating systems and associated applications on one physical system
- raises additional security concerns

Virtualization refers to a technology that provides an abstraction of the computing resources used by some software, which thus runs in a simulated environment called a virtual machine (VM). There are many types of virtualization; however, in this section we are most interested in full virtualization. This allows multiple full operating system instances to execute on virtual hardware, supported by a hypervisor that manages access to the actual physical hardware resources. Benefits arising from using virtualization include better efficiency in the use of the physical system resources than is typically seen using a single operating system instance. This is particularly evident in the provision of virtualized server systems. Virtualization can also provide support for multiple distinct operating systems and associated applications on the one physical system. This is more commonly seen on client systems.

There are a number of additional security concerns raised in virtualized systems, as a consequence both of the multiple operating systems executing side by side, and of the presence of the virtualized environment and hypervisor as a layer below the operating system kernels and the security services they provide. [CLEE09] presents a survey of some of the security issues arising from such a use of virtualization, a number of which we will discuss further.
26 Virtualization Alternatives
- application virtualization: allows applications written for one environment to execute on some other operating system
- full virtualization: multiple full operating system instances execute in parallel
- virtual machine monitor (VMM), or hypervisor: coordinates access between each of the guests and the actual physical hardware resources

There are many forms of creating a simulated, virtualized environment. These include application virtualization, as provided by the Java Virtual Machine environment. This allows applications written for one environment to execute on some other operating system. They also include full virtualization, in which multiple full operating system instances execute in parallel. Each of these guest operating systems, along with its own set of applications, executes in its own VM on virtual hardware. These guest OSs are managed by a hypervisor, or virtual machine monitor (VMM), that coordinates access between each of the guests and the actual physical hardware resources, such as CPU, memory, disk, network, and other attached devices. The hypervisor provides a similar hardware interface to that seen by operating systems directly executing on the actual hardware. As a consequence, little if any modification is needed to the guest OSs and their applications. Recent generations of CPUs provide special instructions that improve the efficiency of hypervisor operation.
27 Native Virtualization Security Layers

Full virtualization systems may be further divided into native virtualization systems, in which the hypervisor executes directly on the underlying hardware, as we show in Figure 12.2, and hosted virtualization systems, in which the hypervisor executes as just another application on a host OS that is running on the underlying hardware, as we show in the next figure. Native virtualization systems are typically seen in servers, with the goal of improving the execution efficiency of the hardware. They are arguably also more secure, as they have fewer additional layers than the alternative hosted approach.
28 Hosted Virtualization Security Layers

Hosted virtualization systems are more common in clients, where they run alongside other applications on the host OS, and are used to support applications for alternate operating system versions or types. As this approach adds additional layers, with the host OS under, and other host applications beside, the hypervisor, it may result in increased security concerns.

In virtualized systems, the available hardware resources must be appropriately shared between the various guest OSs. These include CPU, memory, disk, network, and other attached devices. CPU and memory are generally partitioned between the guests, and scheduled as required. Disk storage may be partitioned, with each guest having exclusive use of some disk resources. Alternatively, a “virtual disk” may be created for each guest, which appears to it as a physical disk with a full file system, but is viewed externally as a single “disk image” file on the underlying file system. Attached devices such as optical disks or USB devices are generally allocated to a single guest OS at a time. Several alternatives exist for providing network access. The guest OS may have direct access to distinct network interface cards on the system; the hypervisor may mediate access to shared interfaces; or the hypervisor may implement virtual network interface cards for each guest, routing traffic between guests as required. This last approach is quite common, and arguably the most efficient, since traffic between guests does not need to be relayed via external network links. It does have security consequences, however, in that this traffic is not subject to monitoring by probes attached to networks, such as we discussed in Chapter 9. Hence alternative, host-based probes would be needed in such a system if such monitoring is required.
29 Virtualization Security Issues
- security concerns include:
- guest OS isolation: ensuring that programs executing within a guest OS may only access and use the resources allocated to it
- guest OS monitoring by the hypervisor, which has privileged access to the programs and data in each guest OS
- virtualized environment security, particularly image and snapshot management, which attackers may attempt to view or modify

[CLEE09] and [NIST11] both detail a number of security concerns that result from the use of virtualized systems, including:
- guest OS isolation, ensuring that programs executing within a guest OS may only access and use the resources allocated to it, and not covertly interact with programs or data either in other guest OSs or in the hypervisor
- guest OS monitoring by the hypervisor, which has privileged access to the programs and data in each guest OS, and must be trusted as secure from subversion and compromised use of this access
- virtualized environment security, particularly as regards image and snapshot management, which attackers may attempt to view or modify

These security concerns may be regarded as an extension of the concerns we have already discussed with securing operating systems and applications. If a particular operating system and application configuration is vulnerable when running directly on hardware in some context, it will most likely also be vulnerable when running in a virtualized environment. And should that system actually be compromised, it would be at least as capable of attacking other nearby systems, whether they are also executing directly on hardware or running as other guests in a virtualized environment. The use of a virtualized environment may improve security by isolating network traffic between guests further than would be the case when such systems run natively, and through the ability of the hypervisor to transparently monitor activity on all guest OSs.
However, the presence of the virtualized environment and the hypervisor may reduce security if vulnerabilities exist within it which attackers may exploit. Such vulnerabilities could allow programs executing in a guest to covertly access the hypervisor, and hence other guest OS resources. This is known as VM escape, and is of concern, as we discussed earlier. Virtualized systems also often provide support for suspending an executing guest OS in a snapshot, saving that image, and then restarting execution at a later time, possibly even on another system. If an attacker can view or modify this image, they can compromise the security of the data and programs contained within it.

Thus the use of virtualization adds additional layers of concern, as we have previously noted. Securing virtualized systems means extending the security process to secure and harden these additional layers. In addition to securing each guest operating system and its applications, the virtualized environment and the hypervisor must also be secured.
30 Hypervisor Security
- should be secured using a process similar to securing an operating system
- installed in an isolated environment
- configured so that it is updated automatically
- monitored for any signs of compromise
- accessed only by authorized administrators
- may support both local and remote administration, so must be configured appropriately
- remote administration access should be considered and secured in the design of any network firewall and IDS capability in use
- access to VM images and snapshots must be carefully controlled

The hypervisor should be secured using a process similar to that used for securing an operating system. That is, it should be installed in an isolated environment, from known clean media, and updated to the latest patch level in order to minimize the number of vulnerabilities that may be present. It should then be configured so that it is updated automatically, any unused services are disabled or removed, unused hardware is disconnected, appropriate introspection capabilities are used with the guest OSs, and the hypervisor is monitored for any signs of compromise.

Access to the hypervisor should be limited to authorized administrators only, since these users would be capable of accessing and monitoring activity in any of the guest OSs. The hypervisor may support both local and remote administration. This must be configured appropriately, with suitable authentication and encryption mechanisms used, particularly when using remote administration. Remote administration access should also be considered and secured in the design of any network firewall and IDS capability in use. Ideally, such administration traffic should use a separate network, with very limited, if any, access provided from outside the organization.
31Linux SecurityLecture slides prepared by Dr Lawrie Brown for “Computer Security: Principles and Practice”, 1/e, by William Stallings and Lawrie Brown, Chapter 23 “Linux Security”.
32Linux Security Linux has evolved into one of the most popular and versatile operating systems; its many features mean a broad attack surface, yet one can create highly secure Linux systems. We will review: Discretionary Access Controls; typical vulnerabilities and exploits in Linux; best practices for mitigating those threats; and new improvements to the Linux security model. Linux's traditional security model is: people or processes with "root" privileges can do anything; other accounts can do much less; hence attackers want to get root privileges. You can run robust, secure Linux systems; the crux of the problem is the use of Discretionary Access Controls (DAC). Since Linus Torvalds created Linux in 1991, more or less on a whim, Linux has evolved into one of the world's most popular and versatile operating systems. Linux is free and open-source, and is available in a wide variety of "distributions" targeted at almost every usage-scenario imaginable. Like other general-purpose operating systems, Linux's wide range of features presents a broad attack surface, but by leveraging native Linux security controls, carefully configuring Linux applications, and deploying certain add-on security packages, you can create highly secure Linux systems. The study and practice of Linux security therefore has wide-ranging uses and ramifications. New exploits against popular Linux applications affect many thousands of users around the world. New Linux security tools and techniques have just as profound an impact, albeit a much more constructive one! In this chapter we'll examine the Discretionary Access Control-based security model and architecture common to all Linux distributions and to most other Unix-derived and Unix-like operating systems (and also, to a surprising degree, to Microsoft Windows).
We’ll discuss the strengths and weaknesses of this ubiquitous model; typical vulnerabilities and exploits in Linux; best practices for mitigating those threats; and improvements to the Linux security model that are only slowly gaining popularity, but that hold the promise to correct decades-old shortcomings in this platform.
33Linux Security Transactions In the Linux DAC system, there are users, each of which belongs to one or more groups; and there are also objects: files and directories. Users read, write, and execute these objects, based on the objects' permissions, of which each object has three sets: one each defining the permissions for the object's user-owner, group-owner, and "other" (everyone else). These permissions are enforced by the Linux kernel, the "brain" of the operating system, as is illustrated in Figure 23.1 from the text. When running, a process normally "runs as" (with the identity of) the user and group of the person or process that executed it. Since processes "act as" users, if a running process attempts to read, write, or execute some other object, the kernel will first evaluate that object's permissions against the process' user and group identity, just as though the process were an actual human user. This basic transaction, wherein a subject (user or process) attempts some action (read, write, execute) against some object (file, directory, special file), is the foundation of the Linux DAC model. Whoever owns an object can set or change its permissions. Herein lies the Linux DAC model's real weakness: the system superuser account, called "root," has the ability to both take ownership and change the permissions of all objects in the system. And as it happens, it's not uncommon for both processes and administrator-users to routinely run with root privileges, in ways that provide attackers with opportunities to hijack those privileges.
34File System Security in Linux everything is a file, e.g. memory, device-drivers, named pipes, and other system resources; hence why filesystem security is so important. I/O to devices is via a "special" file, e.g. /dev/cdrom; there are other special files like named pipes, a conduit between processes / programs. Linux treats everything as a file, including memory, device-drivers, named pipes, and other system resources. A device like a CDROM is a file to the Linux kernel: the "special" device-file /dev/cdrom (which is usually a symbolic link to /dev/hdb or some other special file). To read data from or write it to the CDROM drive, the Linux kernel actually reads from and writes to this special file. Other special files, such as named pipes, act as input/output (I/O) "conduits," allowing one process or program to pass data to another. (A related example is /dev/urandom, which is, strictly speaking, a character device rather than a named pipe: when a program reads this file, the kernel's random-number generator returns random characters.) So in Linux/Unix, nearly everything is represented by a file. Once you understand this, it's much easier to understand why filesystem security is important, and how it works.
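The "everything is a file" idea can be demonstrated directly from the shell. A minimal sketch (everything is created under a temporary directory, so it runs unprivileged):

```shell
# Create a named pipe (FIFO) and confirm the kernel treats it as a file.
tmpdir=$(mktemp -d)
mkfifo "$tmpdir/mypipe"

# 'p' in the first column of a long listing marks a named pipe.
ls -l "$tmpdir/mypipe"

# One process writes into the pipe while another reads from it.
echo "hello via pipe" > "$tmpdir/mypipe" &
cat "$tmpdir/mypipe"        # prints: hello via pipe

rm -r "$tmpdir"
```

The writer blocks until a reader opens the other end of the pipe, which is exactly the "conduit between processes" behavior described above.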
35Users and Groups a user-account (user) represents someone capable of using files; associated both with humans and processes. a group-account (group) is a list of user-accounts; users have a main group and may also belong to other groups. users & groups are not files. Actually, there are two things on a Unix system that aren't represented by files: user-accounts and group-accounts (in short, users and groups). Various files contain information about a system's users and groups, but none actually represents them. A user-account represents someone or something capable of using files. As we saw in the previous section, a user-account can be associated both with actual human beings and with processes. The standard Linux user-account "lp," for example, is used by the Line Printer Daemon (lpd): the lpd program runs as the user lp. A group-account is simply a list of user-accounts. Each user-account is defined with a main group membership, but may in fact belong to as many groups as you want or need it to. For example, the user "maestro" may have a main group membership in "conductors," and also belong to the group "pianists."
36Users and Groups user's details are kept in /etc/passwd: maestro:x:200:100:Maestro Edward Hizzersands:/home/maestro:/bin/bash. additional group details are in /etc/group: conductors:x:100: and pianists:x:102:maestro,volodya. use useradd, usermod, userdel to alter. A user's main group membership is specified in the user's entry in /etc/passwd; additional groups are specified in /etc/group by adding the username to the end of the entry for each group the user needs to belong to. Listing 23.1 shows user "maestro"'s entry in the file /etc/passwd. We can see that the first field contains the name of the user-account, "maestro;" the second field ("x") is a placeholder for maestro's password (which is actually stored in /etc/shadow); the third field shows maestro's numeric userid (or "uid," in this case "200"); and the fourth field shows the numeric groupid (or "gid," in this case "100") of maestro's main group membership. The remaining fields specify a comment, maestro's home directory, and maestro's default login shell. Listing 23.2 shows part of the corresponding /etc/group file. Each line simply contains a group-name, a group-password (usually unused -- "x" is a placeholder), a numeric group-id (gid), and a comma-delimited list of users with "secondary" memberships in the group. Thus we see that the group "conductors" has a gid of "100", which corresponds to the gid specified as maestro's main group in Listing 23.1; and also that the group "pianists" includes the user "maestro" (plus another named "volodya") as a secondary member. The simplest way to modify /etc/passwd and /etc/group in order to create, modify, and delete user-accounts is via the commands useradd, usermod, and userdel, respectively. All three of these commands can be used to set and modify group-memberships.
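The colon-separated field layout of /etc/passwd can be pulled apart with awk. A sketch using the slide's hypothetical "maestro" entry rather than a real account:

```shell
# Parse a passwd-style entry; fields are separated by colons.
entry='maestro:x:200:100:Maestro Edward Hizzersands:/home/maestro:/bin/bash'

echo "$entry" | awk -F: '{
    print "user: "  $1      # account name
    print "uid: "   $3      # numeric user id
    print "gid: "   $4      # primary group id
    print "home: "  $6      # home directory
    print "shell: " $7      # login shell
}'
```

The same one-liner pointed at the real file (`awk -F: '{print $1, $3}' /etc/passwd`) is a quick way to audit the accounts on a system.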
37File Permissions files have two owners: a user & a group, each with its own set of permissions, with a third set of permissions for other. permissions are to read/write/execute in order user/group/other, cf. -rw-rw-r-- maestro conductors Mar 25 01:38 baton.txt. set using the chmod command. Each file on a Unix system (which, as we've seen, means "practically every single thing on a Unix system") has two owners: a user and a group, each with its own set of permissions that specify what the user or group may do with the file (read it, write to it or delete it, and execute it). A third set of permissions pertains to other, that is, user-accounts that don't own the file or belong to the group that owns it. Listing 23.3 shows a "long file-listing" for the file /home/maestro/baton.txt. Permissions are listed in the order "user-permissions, group-permissions, other-permissions." Thus we see that for the file shown in Listing 23.3 in the text, its user-owner ("maestro") may read and write/delete the file ("rw-"); its group-owner ("conductors") may also read and write/delete the file ("rw-"); but other users (who are neither "maestro" nor members of "conductors") may only read the file. There's a third permission besides "read" and "write": "execute," denoted by "x" (when set). If maestro writes a shell script named "punish_bassoonists.sh", and if he sets its permissions to "-rwxrw-r--", then maestro will be able to execute his script by entering the name of the script at the command-line. If, however, he forgets to set the execute bit, he won't be able to run the script, even though he owns it. Permissions are usually set via the "chmod" command (short for "change mode").
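The three permission sets can be exercised on a scratch file; a sketch (GNU stat is assumed for reading the mode back):

```shell
# Create a scratch file and set its user/group/other permission sets.
f=$(mktemp)

chmod u=rw,g=rw,o=r "$f"    # symbolic form, giving -rw-rw-r--
stat -c '%A' "$f"           # prints: -rw-rw-r--

chmod 640 "$f"              # numeric form: user rw, group r, other none
stat -c '%A %a' "$f"        # prints: -rw-r----- 640

rm "$f"
```

`ls -l "$f"` shows the same mode string as the first column of a long listing, exactly as in Listing 23.3.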
38Directory Permissions read = list contents; write = create or delete files in directory; execute = use anything in or change working directory to this directory. e.g.$ chmod g+rx extreme_casseroles$ ls -l extreme_casserolesdrwxr-x--- 8 biff drummers 288 Mar 25 01:38 extreme_casserolesDirectory-permissions work slightly differently than permissions on regular files. "Read" and "write" are similar; for directories these permissions translate to "list the directory's contents" and "create or delete files within the directory," respectively. "Execute" is a little less intuitive: for directories, "execute" translates to "use anything within or change working directory to this directory". That is, if a user or group has execute-permissions on a given directory, they can list that directory's contents, read that directory's files (assuming those individual files' own permissions include this), and change their working directory to that directory, as with the command "cd". If a user or group does not have execute-permissions on a given directory, they will be unable to list or read anything in it, regardless of the permissions set on the things inside. (Note that if you lack execute permissions on a directory but do have read permissions on it, and you try to list its contents with ls, you will receive an error message that, in fact, lists the directory's contents. But this doesn't work if you have neither read nor execute permissions on the directory.) Suppose our example system has a user named "biff" who belongs to the group "drummers." And suppose further that his home-directory contains a directory called "extreme_casseroles" that he wishes to share with his fellow percussionists. Listing 23.4 shows how biff might set that directory's permissions. In Listing 23.4, only biff has the ability to create, change, or delete files inside extreme_casseroles. Other members of the group "drummers" may list its contents and cd to it. 
Everyone else on the system, however (except root, who is always all-powerful), is blocked from listing, reading, cd-ing, or doing anything else with the directory.
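biff's scenario can be reproduced with a scratch directory (GNU stat assumed; in a real session the group-owner would be "drummers"):

```shell
# Start closed, then grant list (r) and traverse (x) to the group-owner.
base=$(mktemp -d)
mkdir "$base/extreme_casseroles"

chmod u=rwx,g=,o= "$base/extreme_casseroles"   # owner only
chmod g+rx "$base/extreme_casseroles"          # group may list and cd into it
stat -c '%A' "$base/extreme_casseroles"        # prints: drwxr-x---

rm -r "$base"
```

Note the group triad is r-x, not rwx: group members can browse but not create or delete files, matching the slide's description.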
39Sticky Bit originally used to lock a file in memory; now used on directories to limit deletion: if set, a user must own the file or the dir to delete it; other users cannot delete even if they have write permission. set using the chmod command with the +t flag, e.g. chmod +t extreme_casseroles; the directory listing then includes a t or T flag: drwxrwx--T 8 biff drummers Mar 25 01:38 extreme_casseroles. only applies to the specific directory, not child dirs. In older Unix operating systems, the sticky bit was used to lock a file (program) in memory so it would load more quickly when invoked. On Linux, however, it serves a different function: when you set the sticky bit on a directory, it limits users' ability to delete things in that directory. That is, to delete a given file in the directory you must either own that file or own the directory, even if you belong to the group which owns the directory and group-write permissions are set on it. To set the sticky bit, use the chmod command with +t. In our example, this would be chmod +t extreme_casseroles. If we set the sticky bit on extreme_casseroles and then do a long listing of the directory itself, using "ls -ld extreme_casseroles", we'll see: drwxrwx--T 8 biff drummers Mar 25 01:38 extreme_casseroles. Note the "T" at the end of the permissions-string. We'd normally expect to see either "x" or "-" there, depending on whether the directory is "other-executable". "T" denotes that the directory is not "other-executable" but has the sticky bit set. A lower-case "t" would denote that the directory is other-executable and has the sticky bit set. The sticky bit only applies to the directory's first level downwards; it is not inherited by child directories.
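The t/T distinction is easy to demonstrate on a scratch directory (GNU stat assumed):

```shell
d=$(mktemp -d)
chmod 770 "$d"        # rwxrwx---: no other-execute
chmod +t "$d"         # set the sticky bit
stat -c '%A' "$d"     # prints: drwxrwx--T  (capital T: sticky set, o-x clear)

chmod o+x "$d"
stat -c '%A' "$d"     # prints: drwxrwx--t  (lower-case t: sticky set, o-x set)

rmdir "$d"
```

This is the same convention used by the world-writable /tmp directory, whose mode is conventionally drwxrwxrwt.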
40SetUID and SetGID setuid bit means the program "runs as" its owner, no matter who executes it; setgid bit means it runs as a member of the group which owns it, again regardless of who executes it. "run as" = "run with same privileges as". both are very dangerous if set on a file owned by root or another privileged account or group; only used on executable files, not shell scripts. setuid has no effect on directories; setgid does, and causes any file created in a directory to inherit the directory's group; useful if users belong to other groups and routinely create files to be shared with other members of those groups, instead of manually changing each file's group. We now come to two of the most dangerous permissions-bits in the Unix world: setuid and setgid. If set on an executable binary file, the setuid bit causes that program to "run as" its owner, no matter who executes it. Similarly, the setgid bit, when set on an executable, causes that program to run as a member of the group which owns it, again regardless of who executes it. "Run as" means "to run with the same privileges as." IMPORTANT WARNING: setuid and setgid are very dangerous if set on any file owned by root or any other privileged account or group. The command "sudo" is a much better tool for delegating root's authority. Note that if you want a program to run setuid, that program must be group-executable or other-executable, for obvious reasons. Note also that the Linux kernel ignores the setuid and setgid bits on shell scripts; these bits only work on binary (compiled) executables. setgid works the same way, but with group-permissions: if you set the setgid bit on an executable file via the command "chmod g+s filename", and if the file is also "other-executable" (-r-xr-sr-x), then when that program is executed it will run with the group-ID of the file rather than of the user who executed it.
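The directory behavior of setgid (group inheritance for new files) can be shown unprivileged, since chmod g+s works on a directory you own; a sketch with GNU stat assumed:

```shell
d=$(mktemp -d)
chmod g+s "$d"                   # setgid: new files inherit the dir's group
stat -c '%A' "$d"                # an 's' (or 'S') appears in the group triad

touch "$d/newfile"
stat -c '%G' "$d" "$d/newfile"   # both lines print the same group name

rm -r "$d"
```

On a real shared project directory, the directory's group would be set to the project group first (chgrp), so every collaborator's new files land in that group automatically.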
41Numeric File Permissions Internally, Linux uses numbers to represent permissions; only user programs display permissions as letters. The chmod command recognizes both mnemonic permission-modifiers ("u+rwx,go-w") and numeric modes. A numeric mode consists of four digits: as you read left-to-right, these represent special-permissions, user-permissions, group-permissions, and other-permissions. For example, 0700 translates to "no special permissions set, all user-permissions set, no group-permissions set, no other-permissions set." Each permission has a numeric value, and the permissions in each digit-place are additive: the digit represents the sum of all permission-bits you wish to set. If, for example, user-permissions are set to "7", this represents 4 (the value for "read") plus 2 (the value for "write") plus 1 (the value for "execute"). As I just mentioned, the basic numeric values are 4 for read, 2 for write, and 1 for execute. Why no "3"? Because (a) these values represent bits in a binary stream and are therefore all powers of 2; and (b) this way, no two combinations of permissions have the same sum. Special permissions are as follows: 4 stands for setuid, 2 stands for setgid, and 1 stands for sticky-bit. For example, the numeric mode 3000 translates to "setgid set, sticky-bit set, no other permissions set" (which is a useless set of permissions). Figure 23.2 shows the permissions of "mycoolfile" set using the command: chmod 0644 mycoolfile
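The octal arithmetic can be checked directly on a scratch file (GNU stat assumed):

```shell
# Each digit sums r=4, w=2, x=1; the leading digit covers
# setuid=4, setgid=2, sticky=1.
f=$(mktemp)

chmod 0644 "$f"       # user 6 = 4+2 (rw-), group 4 (r--), other 4 (r--)
stat -c '%a %A' "$f"  # prints: 644 -rw-r--r--

chmod u+x "$f"        # add execute (1) to the user digit: 6+1 = 7
stat -c '%a' "$f"     # prints: 744

rm "$f"
```

Because the values are powers of two, the kernel can recover the exact permission set from the sum, which is why "3" is never needed as a base value.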
42Kernel vs User Space Kernel space refers to memory used by the Linux kernel and its loadable modules (e.g., device drivers); user space refers to memory used by all other processes. since the kernel enforces the Linux DAC, its security is critical, so kernel space must be isolated from user space: kernel space is never swapped to disk, and only root may load and unload kernel modules. It's a little bit of an oversimplification to say that users, groups, files, and directories are all that matter in the Linux DAC: memory is important too. Therefore we should at least briefly discuss kernel space and user space. Kernel space refers to memory used by the Linux kernel and its loadable modules (e.g., device drivers). User space refers to memory used by all other processes. Since the kernel enforces the Linux DAC and, in real terms, dictates system reality, it's extremely important to isolate kernel space from user space. For this reason, kernel space is never swapped to hard disk. It's also the reason that only root may load and unload kernel modules. As we're about to see, one of the worst things that can happen on a compromised Linux system is for an attacker to gain the ability to load kernel modules.
43setuid root Vulnerabilities a setuid root program runs as root no matter who executes it; used to provide unprivileged users with access to privileged resources; must be very carefully programmed: if it can be exploited due to a software bug, it may allow otherwise-unprivileged users to use it to wield unauthorized root privileges. distributions now minimise setuid-root programs, but system attackers still scan for them! In this section we'll discuss the most common weaknesses in Linux systems. As we discussed in the previous section, any program whose "setuid" permission-bit is set will run with the privileges of the user that owns it, rather than those of the process or user executing it. A setuid root program is a root-owned program with its setuid bit set, that is, a program that runs as root no matter who executes it. Running setuid-root is necessary for programs that need to be run by unprivileged users, yet must provide such users with access to privileged functions (for example, changing their password, which requires changes to protected system files). But such a program needs to have been very carefully programmed, with impeccable user-input validation, strict memory management, etc. That is, it needs to have been designed to be run setuid (or setgid) root. Even then, a root-owned program should only have its setuid bit set if absolutely necessary. The risk here is that if a setuid root program can be exploited or abused in some way (for example, via a buffer overflow vulnerability or race condition as we discuss in chapters 11 and 12), then otherwise-unprivileged users may be able to use that program to wield unauthorized root privileges, possibly including opening a root shell (a command-line session running with root privileges). Due to a history of abuse against setuid root programs, major Linux distributions no longer ship with unnecessary setuid-root programs. But system attackers still scan for them!
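The scan attackers (and auditors) run is a one-line find invocation. The sketch below plants a harmless setuid file in a scratch directory so it runs unprivileged; a real audit would scan the whole filesystem as root, e.g. find / -xdev -perm -4000 -user root -type f 2>/dev/null:

```shell
d=$(mktemp -d)
touch "$d/fakesuid"
chmod 4755 "$d/fakesuid"       # leading 4: setuid bit set

# -perm -4000 matches any file with the setuid bit on.
find "$d" -perm -4000 -type f  # prints the path of fakesuid

rm -r "$d"
```

Comparing successive runs of such a scan against a known-good baseline is a cheap way to notice a newly planted setuid-root binary.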
44Web Vulnerabilities a very broad category of vulnerabilities because of the ubiquity of the world wide web; web sites have big and visible attack surfaces. when written in scripting languages they are not as prone to classic buffer overflows, but can suffer from poor input-handling. few "enabled-by-default" web applications remain, but users install vulnerable web applications or write custom web applications having easily-identified and easily-exploited flaws. This is a very broad category of vulnerabilities, many of which also fall into other categories in this list. It warrants its own category because of the ubiquity of the world wide web: there are few attack surfaces as big and visible as an Internet-facing web site. While web applications written in scripting languages such as PHP, Perl, and Java may not be as prone to classic buffer overflows (thanks to the additional layers of abstraction presented by those languages' interpreters), they're nonetheless prone to similar abuses of poor input-handling, including cross-site scripting, SQL code injection, and a plethora of other vulnerabilities, as we discuss in chapters 11 and 12. Nowadays, few Linux distributions ship with "enabled-by-default" web applications (such as the default cgi-scripts included with older versions of the Apache webserver). However, many users install web applications with known vulnerabilities, or write custom web applications having easily-identified and easily-exploited flaws.
45Rootkits allow an attacker to cover their tracks; if successfully installed before detection, all is very nearly lost. originally collections of hacked commands hiding the attacker's files, directories, and processes; now use loadable kernel modules, intercepting system calls in kernel-space and hiding the attacker from standard commands. may be able to detect with chkrootkit; generally have to wipe and rebuild the system. A rootkit allows an attacker to cover their tracks, and is typically installed after a root compromise: if an attacker successfully installs a rootkit before detection, all is very nearly lost. Rootkits began as collections of "hacked replacements" for common Unix commands (ls, ps, etc.) that behaved like the legitimate commands they replaced, except for hiding an attacker's files, directories and processes. In the Linux world, since the advent of loadable kernel modules (LKMs), rootkits have more frequently taken the form of LKMs. An LKM rootkit does its business (covering the tracks of attackers) in kernel-space, intercepting system calls pertaining to any user's attempts to view the intruder's resources. In this way, files, directories, and processes owned by an attacker are hidden even from a compromised system's standard, un-tampered-with commands, and from customized software as well. Besides operating at a lower, more global level, another advantage of the LKM rootkit over traditional rootkits is that system integrity-checking tools such as Tripwire won't generate alerts, since no system commands are replaced. Luckily, even LKM rootkits do not always ensure complete invisibility for attackers. 
Many traditional and LKM rootkits can be detected with the script chkrootkit, which is freely available online. In general, however, if an attacker gets far enough to install an LKM rootkit, your system can be considered to be completely compromised; when and if you detect the breach (e.g., via a defaced website, missing data, suspicious network traffic, etc.), the only way to restore your system with any confidence of completely shutting out the intruder will be to erase its hard disk (or replace it, if you have the means and inclination to analyze the old one), re-install Linux, and apply all the latest software patches.
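One simple cross-check of the kind rootkit detectors perform can be sketched in shell: every PID that ps reports should also appear as a numeric directory under /proc, and a rootkit that filters one view but not the other leaves a visible discrepancy (short-lived processes can cause benign mismatches between the two snapshots):

```shell
# Compare the process list from ps with the kernel's view in /proc.
for pid in $(ps -e -o pid=); do
    if [ ! -d "/proc/$pid" ]; then
        echo "ps reports PID $pid but /proc does not: investigate"
    fi
done
echo "scan complete"
```

Real detectors such as chkrootkit run many checks of this flavor, including the reverse comparison (/proc entries missing from ps output), which is the tell-tale of a process-hiding rootkit.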
46Linux Hardening - OS Installation security begins with O/S installation, especially what software is run, since unused applications are liable to be left in a default, un-hardened and un-patched state. generally should not run: X Window system, RPC services, R-services, inetd, SMTP daemons, telnet etc. also have some initial system s/w configuration: setting the root password; creating a non-root user account; setting an overall system security level; enabling a simple host-based firewall policy; enabling SELinux. Linux system security begins at operating system installation time: one of the most critical, system-impacting decisions a system administrator makes is what software will run on the system. Since it's hard enough to find the time to secure a system's critical applications, an unused application is liable to be left in a default, un-hardened and un-patched state. Therefore, it's very important that from the start, careful consideration be given to what applications should be installed, and which should not. What software should you not install? Common sense should be your guide: for example, an SMTP (Simple Mail Transfer Protocol) relay shouldn't need the Apache webserver; a database server shouldn't need an office productivity suite such as OpenOffice; etc. Given the plethora of roles Linux systems play (desktops, servers, laptops, firewalls, embedded systems, etc.), we will generalize rather than enumerate what software not to install. 
Here is a list of software packages that should seldom, if ever, be installed on hardened (especially Internet-facing) servers: X Window System, RPC Services, R-Services, inetd, SMTP Daemons, Telnet and other cleartext-logon services (see text for discussion why).In addition to initial software selection and installation, Linux installation utilities also perform varying amounts of initial system and software configuration, including:setting the root passwordcreating a non-root user accountsetting an overall system security level (usually initial file-permission settings)enabling a simple host-based firewall policyenabling SELinux or Novell AppArmor (see Section 23.7)
47Patch Management installed server applications must be: configured securely; kept up to date with security patches. patching can never win the "patch rat-race", but there are tools to automatically download and install security updates, e.g. up2date, YaST, apt-get. note: should not run automatic updates on change-controlled systems without testing. Carefully selecting what gets installed (and what doesn't get installed) on a Linux system is an important first step in securing it. All the server applications you do install, however, must be configured securely (the subject of Section 23.6), and they must also be kept up to date with security patches. The bad news with patching is that you can never win the "patch rat-race:" there will always be software vulnerabilities that attackers are able to exploit for some period of time before vendors issue patches for them. (As-yet-unpatchable vulnerabilities are known as zero-day, or 0-day, vulnerabilities.) The good news is, modern Linux distributions usually include tools for automatically downloading and installing security updates, which can minimize the time your system is vulnerable to things against which patches are available. For example, Red Hat, Fedora, and CentOS include up2date (YUM can be used instead); SuSE includes YaST Online Update; and Debian uses apt-get, though you must run it as a cron job for automatic updates. Note that on change-controlled systems, you should not run automatic updates, since security patches can, on rare but significant occasions, introduce instability. For systems on which availability and up-time are of paramount importance, therefore, you should stage all patches on test systems before deploying them in production.
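The update tools mentioned above are invoked differently per distribution; a reference sketch (all commands need root, and the names reflect the tool generation these slides describe):

```shell
# Debian / Ubuntu (apt):
#   apt-get update && apt-get upgrade
#
# Red Hat / Fedora / CentOS:
#   up2date -u            # or: yum update
#
# SuSE:
#   yast2 online_update
#
# For unattended updates, schedule the relevant command as a cron job;
# on change-controlled hosts, stage every patch on a test system first.
```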
48Network Access Controls the network is a key attack vector to secure. TCP wrappers is a key tool to check access: originally the tcpd inetd wrapper daemon; before allowing a connection to a service it checks: if the requesting host is explicitly in hosts.allow, it is ok; if the requesting host is explicitly in hosts.deny, it is blocked; if in neither, it is ok. checks are on service, source IP, and username; now often part of the application itself using libwrap. One of the most important attack vectors against Linux systems is the network. Network-level access controls, which restrict access to local resources based on the IP addresses of the systems attempting access, are therefore an important tool in Linux security. One of the most mature network access control mechanisms in Linux is libwrap. In its original form, the software package TCP Wrappers, the daemon tcpd is used as a "wrapper" process for each service initiated by inetd. Before allowing a connection to any given service, tcpd first evaluates access controls defined in the files /etc/hosts.allow and /etc/hosts.deny: if the transaction matches any rule in hosts.allow (which tcpd parses first), it's allowed. If no rule in hosts.allow matches, tcpd then evaluates the transaction against the rules in hosts.deny; if any rule in hosts.deny matches, the transaction is logged and denied; otherwise it is permitted. These access controls are based on the name of the local service being connected to, on the source IP address or hostname of the client attempting the connection, and on the username of the client attempting the connection (that is, the owner of the client-process). Note that client usernames are validated via the ident service, which unfortunately is trivially easy to forge on the client side, which makes this criterion's value questionable. 
The best way to configure TCP Wrappers access controls is therefore to set a "deny all" policy in hosts.deny, such that the only transactions permitted are those explicitly specified in hosts.allow. Since inetd is essentially obsolete, the tcpd daemon is no longer used as commonly as libwrap, a system library which allows applications to defend themselves by leveraging /etc/hosts.allow and /etc/hosts.deny without requiring tcpd to act as an intermediary.
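A default-deny TCP Wrappers policy then looks like this (the service name and subnet are illustrative assumptions):

```
# /etc/hosts.deny -- parsed second: deny everything not explicitly allowed
ALL: ALL

# /etc/hosts.allow -- parsed first: the only permitted transactions
sshd: 192.168.1.0/255.255.255.0
```

With these two files in place, only SSH connections from the listed subnet are accepted by wrapped services; everything else is logged and refused.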
49Network Access Controls also have the very powerful netfilter, the Linux kernel's native firewall mechanism, and iptables, its user-space front end; as useful on firewalls as on servers and desktops. direct configuration is tricky, with a steep learning curve, but there are automated rule generators. typically for "personal" firewall use these will: allow incoming requests to specified services; block all other inbound service requests; allow all outbound (locally-originating) requests. if greater security is needed, configure manually. While TCP Wrappers is ubiquitous and easy to use, the Linux kernel's native firewall mechanism, netfilter (with its user-space front end iptables), is more powerful. iptables is useful both on multi-interface firewall systems and on ordinary servers and desktop systems. The iptables command does, however, have a steep learning curve. Nearly all Linux distributions now include utilities for automatically generating "personal" (local) firewall rules, especially at installation time. Typically, they prompt the administrator/user for local services that external hosts should be allowed to reach, if any (e.g., HTTP on TCP port 80, HTTPS on TCP port 443, and SSH on TCP port 22), and then generate rules that: allow incoming requests to those services; block all other inbound (externally-originating) transactions; and allow all outbound (locally-originating) transactions; with the assumption that all outbound network transactions are legitimate. This assumption does not hold if the system is compromised by a human attacker or by malware. In cases in which a greater level of caution is justified, it may be necessary to create more complex iptables policies than your Linux installer's firewall wizard can provide. Many people manually create their own startup-script for this purpose (an iptables "policy" is actually just a list of iptables commands), but a tool such as Shorewall or Firewall Builder may instead be used.
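Since an iptables "policy" is just a list of iptables commands, the installer-generated "personal" policy described above can be sketched in a few lines (must run as root; the SSH-only service set is an illustrative assumption):

```shell
iptables -P INPUT DROP                                            # default-deny inbound
iptables -A INPUT -i lo -j ACCEPT                                 # allow loopback
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT  # replies to our traffic
iptables -A INPUT -p tcp --dport 22 -j ACCEPT                     # permitted service: SSH
iptables -P OUTPUT ACCEPT                                         # trust outbound traffic
```

The last line embodies the "all outbound transactions are legitimate" assumption the notes warn about; a more cautious policy would restrict OUTPUT as well.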
50Antivirus Software historically Linux has not been as vulnerable to viruses, due more to lesser popularity than to security; prompt patching was effective against worms. but viruses abuse users' privileges; non-root users give less scope to exploit, but can still consume resources. growing Linux popularity means more exploits, hence antivirus software will become more important; there are various commercial and free Linux A/V packages. Historically, Linux hasn't been nearly so vulnerable to viruses as other operating systems (e.g., Windows), due more to its lesser popularity as a desktop platform than to necessarily inherent security. Hence Linux users have tended to worry less about viruses, and have tended to rely on keeping up to date with security patches for protection against malware. This is arguably a more proactive technique than relying on signature-based antivirus tools. And indeed, prompt patching of security holes is an effective protection against worms, which have historically been a much bigger threat against Linux systems than viruses. Viruses, however, typically abuse the privileges of whatever user unwittingly executes them, rather than actually exploiting a software vulnerability: the virus simply "runs as" the user. This may not have system-wide ramifications so long as that user isn't root, but even relatively unprivileged users can execute network client applications, create large files that could fill a disk volume, and perform any number of other problematic actions. As Linux's popularity continues to grow, especially as a general-purpose desktop platform, we can expect Linux viruses to become much more common. Sooner or later, therefore, antivirus software will become much more important on Linux systems than it is presently. There are a variety of commercial and free antivirus software packages that run on (and protect) Linux, including products from McAfee, Symantec, and Sophos; and the free, open-source tool ClamAV.
51User Management guiding principles in user-account security: need care setting file / directory permissions; use groups to differentiate between roles; use extreme care in granting / using root privs; commands: chmod, useradd/mod/del, groupadd/mod/del, passwd, chage; info in files /etc/passwd & /etc/group; manage user's group memberships; set appropriate password ages. Recall the guiding principles in Linux user-account security: be very careful when setting file and directory permissions; use group memberships to differentiate between different roles on your system; and be extremely careful in granting and using root privileges. We now discuss some details of user- and group-account management, and delegation of root privileges. First, some commands: use the chmod command to set and change permissions for objects belonging to existing users and groups. To create, modify, and delete user accounts, use the useradd, usermod, and userdel commands. To create, modify, and delete group accounts, use the groupadd, groupmod, and groupdel commands. Alternatively, you can simply edit the file /etc/passwd directly to create, modify, or delete users, or edit /etc/group to create, modify, or delete groups. Note that initial (primary) group memberships are set in a user's entry in /etc/passwd; supplementary (secondary) group memberships are set in /etc/group. Use the usermod command to change either primary or supplementary group memberships for any user. You can use passwd to change your own (or, as root, anyone's) password. Password aging (maximum and minimum lifetime for user passwords) is set globally in the files /etc/login.defs and /etc/default/useradd, but these settings are only applied when new user accounts are created. To modify the password lifetime for an existing account, use the chage command. Passwords should have a minimum age to prevent users from rapidly "cycling through" password changes; 7 days is a reasonable minimum.
Maximum lifetime is trickier, balancing exposure risk vs user annoyance; 60 days is a reasonable balance for many organizations.
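The aging arithmetic that chage manages can be illustrated with a short sketch. The function names are our own and the 7-day/60-day constants simply follow the discussion above; the real values live in /etc/login.defs and each account's shadow entry.

```python
# Sketch of the password-aging arithmetic behind chage. The constants
# mirror the 7-day minimum / 60-day maximum suggested above; the real
# values live in /etc/login.defs and each account's shadow entry.
from datetime import date, timedelta

MIN_AGE_DAYS = 7    # block rapid password "cycling"
MAX_AGE_DAYS = 60   # force a periodic change

def may_change(last_change: date, today: date) -> bool:
    """Has the minimum age elapsed, so a change is permitted?"""
    return today - last_change >= timedelta(days=MIN_AGE_DAYS)

def must_change(last_change: date, today: date) -> bool:
    """Has the maximum age been exceeded, so a change is required?"""
    return today - last_change > timedelta(days=MAX_AGE_DAYS)
```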
52Root Delegation have "root can do anything, users can do little" issue; "su" command allows users to run as root, either a root shell or a single command; must supply root password, which likely means too many people know it; SELinux RBAC can limit root authority, but complex; "sudo" allows users to run commands as root, but only needs their password, not the root password; /etc/sudoers file specifies what commands are allowed; or configure user/group perms to allow, which is tricky. A key problem with Linux/Unix security is that "root can do anything, users can do little." The "su" command is used to promote a user to root (provided you know the root password). However, it's much easier to do a quick "su" to become root for a while than it is to create a granular system of group memberships and permissions that allows administrators and sub-administrators to have exactly the permissions they need. You can use the su command with the "-c" flag, which allows you to specify a single command to run as root rather than an entire shell session (for example, su -c "rm somefile.txt"), but since this requires you to enter the root password, everyone who needs to run a particular root command needs the root password. And it's never good for more than a small number of people to know root's password. Another approach to solving the "root takes all" problem is to use SELinux's Role-Based Access Controls (RBAC), which enforce access controls that reduce root's effective authority. However, this is much more complicated than setting up effective groups and group permissions. A reasonable compromise is the sudo command, which is standard on most Linux distributions. sudo allows users to execute specified commands as root, without their actually needing to know the root password (unlike su). sudo is configured via the file /etc/sudoers, but you shouldn't edit this file directly; rather, you should use the command visudo, which opens a special vi (text editor) session.
sudo is a very powerful tool, so use it wisely: root privileges are never to be trifled with! It really is better to use user- and group-permissions judiciously than to hand out root privileges even via sudo, and it's better still to use an RBAC-based system like SELinux if feasible.
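The delegation idea behind sudo can be modeled in a few lines. This is a toy model only: the user names and command paths below are invented, and real /etc/sudoers syntax (host and runas specifications, aliases, NOPASSWD tags, and so on) is far richer and must be edited with visudo.

```python
# Toy model of sudo's delegation idea: named users may run specific
# commands as root. The table below is purely illustrative; real
# /etc/sudoers syntax is richer and must be edited with visudo.
SUDOERS = {
    "alice": {"/usr/sbin/service", "/usr/bin/systemctl"},
    "bob":   {"/sbin/shutdown"},
}

def sudo_permitted(user: str, command: str) -> bool:
    """Would this user be allowed to run this command as root?"""
    return command in SUDOERS.get(user, set())
```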
53Logging effective logging a key resource; Linux logs using syslogd or Syslog-NG; receive log data from a variety of sources; sort by facility (category) and severity; write log messages to local/remote log files; Syslog-NG preferable because it has: a wider variety of log-data sources / destinations, and a much more flexible "rules engine" to configure. Log management: can log via TCP, which can be encrypted; balance the number of log files used (the size of a few vs. finding info in many); manage the size of log files; must rotate log files and delete old copies, typically using the logrotate utility run by cron to manage both system and application logs; must also configure application logging. Effective logging helps ensure that in the event of a system breach or failure, system administrators can more quickly and accurately identify what happened, and thus most effectively focus their remediation and recovery efforts. On Linux systems, system logs are handled either by the ubiquitous Berkeley Syslog daemon (syslogd) in conjunction with the kernel log daemon (klogd), or by the much-more-feature-rich Syslog-NG. System log daemons receive log data from a variety of sources, sort it by facility (category) and severity, and then write the log messages to log files, as we discussed in section 15.3 in the text. Syslog-NG is preferable both because it can use a much wider variety of log-data sources and destinations, and because its "rules engine" is much more flexible than syslogd's simple configuration file (/etc/syslogd.conf), allowing you to create a much more sophisticated set of rules for evaluating and processing log data. Syslog-NG also supports logging via TCP, which can be encrypted via a TLS "wrapper" such as Stunnel or Secure Shell. Both syslogd and Syslog-NG install with default settings for what gets logged, and where. While these default settings are adequate in many cases, this should be checked. At the very least, you should decide what combination of local and remote logging to perform.
If logs remain local to the system that generates them, they may be tampered with by an attacker. If some or all log data is transmitted over the network to some central log-server, audit-trails can be more effectively preserved, but log data may also be exposed to network eavesdroppers.
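The sort-by-facility-and-severity behavior described above can be modeled with a small sketch. The function name, file paths, and threshold handling here are our own illustrative assumptions; real routing rules belong in syslogd's or Syslog-NG's own configuration languages.

```python
# Sketch of the sort-by-facility-and-severity model used by syslog
# daemons. Names and file paths are illustrative; real routing rules
# belong in syslogd's or Syslog-NG's own configuration.
SEVERITIES = ["emerg", "alert", "crit", "err",
              "warning", "notice", "info", "debug"]

def route(facility: str, severity: str, threshold: str = "info"):
    """Return the destination log file, or None if below the threshold."""
    if SEVERITIES.index(severity) > SEVERITIES.index(threshold):
        return None                    # less severe than threshold: drop
    return f"/var/log/{facility}.log"
```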
54Running As Unprivileged User/Group every process "runs as" some user; extremely important this user is not root, since any bug can compromise the entire system; may need root privileges, e.g. to bind a port; have root parent perform the privileged function, but run the main service as an unprivileged child; the user/group used should be dedicated, making it easier to identify the source of log messages. Recall in Linux and other Unix-like operating systems that every process "runs as" some user. For network daemons in particular, it's extremely important that this user not be root; any process running as root is never more than a single buffer overflow or race condition away from being a means for attackers to achieve remote root compromise. Therefore, one of the most important security features a daemon can have is the ability to run as a non-privileged user or group. Running network processes as root isn't entirely avoidable: for example, only root can bind processes to "privileged ports" (TCP and UDP ports lower than 1024). However, it's still possible for a service's parent process to run as root in order to bind to a privileged port, but to then spawn a new child process that runs as an unprivileged user, each time an incoming connection is made. Ideally, the unprivileged users and groups used by a given network daemon should be dedicated for that purpose, if for no other reason than for auditability. (I.e., if entries start appearing in /var/log/messages indicating failed attempts by the user ftpuser to run the command /sbin/halt, it will be much easier to determine precisely what's going on if the ftpuser account isn't shared by five different network applications.)
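The bind-then-drop pattern described above can be sketched as follows. This is a sketch under stated assumptions, not a complete daemon: the function names are our own, and the os.setgid/os.setuid calls only take effect when the process actually starts as root.

```python
# Sketch of the bind-then-drop pattern: only root may bind ports below
# 1024, so the parent binds while privileged and then sheds root before
# serving. drop_privileges only takes effect when actually run as root.
import os

PRIVILEGED_PORT_MAX = 1023

def needs_root(port: int) -> bool:
    """Is this a 'privileged port' that only root may bind?"""
    return port <= PRIVILEGED_PORT_MAX

def drop_privileges(uid: int, gid: int) -> None:
    os.setgid(gid)   # group first: after setuid we could no longer change it
    os.setuid(uid)   # irreversible for a non-root process
```

Note the ordering: the group ID must be dropped before the user ID, since once setuid succeeds the process no longer has the privilege to change its group.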
55Running in chroot Jail chroot confines a process to a subset of /; maps a virtual "/" to some other directory; useful if you have a daemon that should only access a portion of the file system, e.g. FTP; directories outside the chroot jail aren't visible or reachable at all; contains the effects of a compromised daemon; complex to configure and troubleshoot; must mirror portions of the system in the chroot jail. The chroot system call confines a process to some subset of /; that is, it maps a virtual "/" to some other directory (e.g., /srv/ftp/public). This is useful since, for example, an FTP daemon that serves files from a particular directory, say, /srv/ftp/public, shouldn't have any reason to have access to the rest of the filesystem. We call this directory to which we restrict the daemon a chroot jail. To the "chrooted" daemon, everything in the chroot jail appears to actually be in /; e.g., the "real" directory /srv/ftp/public/etc/myconfigfile appears as /etc/myconfigfile in the chroot jail. Things in directories outside the chroot jail, e.g., /srv/www or /etc, aren't visible or reachable at all. Chrooting therefore helps contain the effects of a given daemon's being compromised or hijacked. The main disadvantage of this method is added complexity: certain files, directories, and special files typically must be copied into the chroot jail, and determining just what needs to go into the jail for the daemon to work properly can be tricky, though detailed procedures for chrooting many different Linux applications are easy to find on the World Wide Web. Troubleshooting a chrooted application can also be difficult: even if an application explicitly supports this feature, it may behave in unexpected ways when run chrooted. Note also that if the chrooted process runs as root, it can "break out" of the chroot jail with little difficulty. Still, the advantages usually far outweigh the disadvantages of chrooting network services.
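The path mapping described above (/srv/ftp/public/etc/myconfigfile appearing as /etc/myconfigfile) can be modeled in user space. This is only a model of the view a jailed process has; the real chroot() syscall enforces the mapping in the kernel. The function name is our own.

```python
# Model of the chroot jail's path mapping: the jail directory becomes
# the virtual "/", and ".." components cannot climb above it. The real
# chroot() syscall enforces this in the kernel; this only models it.
import posixpath

def jail_path(jail: str, virtual: str) -> str:
    # Anchor the virtual path at "/" and normalize away any ".." runs,
    # then re-root the result under the jail directory.
    norm = posixpath.normpath("/" + virtual.lstrip("/"))
    return jail.rstrip("/") + norm
```

Note how normalization makes "../.." sequences collapse against the virtual root, so a jailed path can never resolve to a location outside the jail.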
56Modularity applications running as a single, large, multipurpose process can be: more difficult to run as an unprivileged user; harder to locate / fix security bugs in source; harder to disable unnecessary functionality; hence modularity a highly prized feature, providing a much smaller attack surface; cf. postfix vs sendmail, Apache modules. If an application runs in the form of a single, large, multipurpose process, it may be more difficult to run it as an unprivileged user; it may be harder to locate and fix security bugs in its source code (depending on how well-documented and structured the code is); and it may be harder to disable unnecessary areas of functionality. Modularity is therefore a highly prized feature in modern network service applications. Postfix, for example, consists of a suite of daemons and commands, each dedicated to a different mail-transfer-related task. Only a couple of these processes ever run as root, and they practically never run all at the same time. Postfix therefore has a much smaller attack surface than the monolithic Sendmail. The popular web server Apache used to be monolithic, but it now supports code modules that can be loaded at startup time as needed; this both reduces Apache's memory footprint, and reduces the threat posed by vulnerabilities in unused functionality areas.
57Encryption sending logins & passwords or application data over networks in clear text exposes them to network eavesdropping attacks; hence many network applications now support encryption to protect such data, often using the OpenSSL library; may need own X.509 certificates to use; can generate/sign using the openssl command; may use a commercial/own/free CA. Sending logon credentials or application data over networks in clear text (i.e., unencrypted) exposes them to network eavesdropping attacks. Most Linux network applications therefore support encryption nowadays, most commonly via the OpenSSL library. Using application-level encryption is, in fact, the most effective way to ensure end-to-end encryption of network transactions. The SSL and TLS protocols provided by OpenSSL require the use of X.509 digital certificates. These can be generated and signed by the user-space openssl command. For optimal security, either a local or commercial (third-party) Certificate Authority (CA) should be used to sign all server certificates, but self-signed (that is, non-verifiable) certificates may also be used. [BAUE05] provides detailed instructions on how to create and use your own Certificate Authority with OpenSSL.
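What "application-level encryption with certificate verification" means in practice can be shown with Python's standard ssl module (shown here instead of the C-level OpenSSL API, as a minimal sketch). A default client context requires and verifies the server's X.509 certificate chain and hostname; a private CA's certificate would additionally be registered with load_verify_locations (path omitted here, since it depends on your CA).

```python
# Sketch of application-level TLS from the client side using Python's
# ssl module: a default context requires and verifies the server's
# X.509 certificate chain and hostname. A private CA's certificate
# would be added with load_verify_locations (path omitted here).
import ssl

ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocol versions
```

A socket wrapped with this context (ctx.wrap_socket(sock, server_hostname=...)) will refuse to complete the handshake against a self-signed certificate it cannot verify, which is exactly the property a properly signed server certificate provides.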
58Loggingapplications can usually be configured to log to any level of detail (debug to none)need appropriate settingmust decide if use dedicated file or system logging facility (e.g. syslog)central facility useful for consistent usemust ensure any log files are rotatedThe last common application security feature we’ll discuss here is logging. Most applications can be configured to log to whatever level of detail you want, ranging from “debugging” (maximum detail) to “none.” Some middle setting is usually the best choice, but you should not assume that the default setting is adequate.In addition, many applications allow you to specify either a dedicated file to write application event data to, or a syslog facility to use when writing log data to /dev/log. If you wish to handle system logs in a consistent, centralized manner, it’s usually preferable for applications to send their log data to /dev/log. Note, however, that logrotate can be configured to rotate any logs on the system, whether written by syslogd, Syslog-NG, or individual applications.
59Mandatory Access Controls Linux uses a DAC security model, but Mandatory Access Controls (MAC) impose a global security policy on all users; users may not set controls weaker than policy; normal admin done with accounts without authority to change the global security policy; but MAC systems have been hard to manage; Novell's SuSE Linux has AppArmor; Red Hat Enterprise Linux has SELinux; pure SELinux for high-sensitivity, high-security. Linux uses a DAC security model, in which the owner of a given system object can set whatever access permissions on that resource they like. Stringent security controls, in general, are optional. In contrast, a computer with Mandatory Access Controls (MAC) has a global security policy that all users of the system are subject to. A user who creates a file on a MAC system may not set access controls on that file weaker than the controls dictated by the system security policy. See chapters 4 and 9. In general, compromising a system using a DAC-based security model is a matter of hijacking some root process. On a MAC-based system, the superuser account is only used for maintaining the global security policy. Day-to-day system administration is performed using accounts that lack the authority to change the global security policy. Hence, it is not possible to compromise the entire system by attacking any one process. Unfortunately, while MAC schemes have been available on various platforms over the years, they have traditionally been much more complicated to configure and maintain. Creating an effective global security policy requires detailed knowledge of the precise behavior of every application on the system. Also, the more restrictive the security controls are, the less convenient the system becomes for its users. Novell's SuSE Linux includes AppArmor, a partial MAC implementation that restricts specific processes but leaves everything else subject to the conventional Linux DAC.
In Fedora and Red Hat Enterprise Linux, SELinux has been implemented with a policy that restricts key network daemons, but relies on the Linux DAC to secure everything else. For high-sensitivity, high-security, multi-user scenarios, a “pure” SELinux implementation may be deployed, in which all processes, system resources, and data are regulated by comprehensive, granular access controls.
60Windows Security Windows is the world's most popular O/S; the advantage is that security enhancements can protect millions of nontechnical users; the challenge is that vulnerabilities in Windows can also affect millions of users; will review the overall security architecture of Windows, then the security defenses built into Windows. Windows is the world's most popular operating system and as such has a number of interesting security-related advantages and challenges. The major advantage is that any security advancement made to Windows can protect hundreds of millions of nontechnical users, and advances in security technologies can be used by thousands of corporations to secure their assets. The challenges for Microsoft are many, including the fact that security vulnerabilities in Windows can affect millions of users. Of course, there is nothing unique about Windows having security vulnerabilities; all software products have security bugs. However, Windows is used by so many nontechnical users that Microsoft has some interesting engineering challenges. This chapter begins with a description of the overall security architecture of Windows 2000 and later. It is important to point out that versions of Windows based on the Windows 95 code base, including Windows 98, Windows 98 SE, and Windows Me, had no real security model, in contrast to Windows NT and later versions. The Windows 9x code base is no longer supported. The remainder of the chapter covers the security defenses built into Windows, most notably the new defenses in Windows Vista.
61Windows Security Architecture Security Reference Monitor (SRM): a kernel-mode component that performs access checks, generates audit log entries, and manipulates user rights (privileges); Local Security Authority (LSA): responsible for enforcing local security policy; Security Account Manager (SAM): a database that stores user accounts and local user and group security information; local logins perform a lookup against the SAM DB; passwords are stored using MD4. The basic fundamental security blocks in the Windows operating system include: • The Security Reference Monitor (SRM) - a kernel-mode component that performs access checks, generates audit log entries, and manipulates user rights, also called privileges. Ultimately, every permission check is performed by the SRM. • The Local Security Authority (LSA) - resides in a user-mode process named lsass.exe and is responsible for enforcing local security policy in Windows. It also issues security tokens to accounts as they log on to the system. Security policy includes password policy, auditing policy, and privilege settings. • The Security Account Manager (SAM) - a database that stores user accounts and relevant security information about local users and local groups. When a user logs on to a computer using a local account, the SAM process (SamSrv) takes the logon information and performs a lookup against the SAM database, which resides in the Windows System32\config directory. If the credentials match, then the user can log on to the system, assuming there are no other factors preventing logon, such as logon time restrictions or privilege issues. Note that the SAM does not perform the logon; that is the job of the LSA. The SAM file is binary rather than text, and passwords are stored using the MD4 hash algorithm. On Windows Vista, the SAM stores password information using a password-based key derivation function (PBKDF).
62Windows Security Architecture Active Directory (AD)Microsoft’s LDAP directoryall Windows clients can use AD to perform security operations including account logonauthenticate using AD when the user logs on using a domain rather than local accountuser’s credential information is sent securely across the network to be verified by ADWinLogon (local) and NetLogon (net) handle login requests• Active Directory (AD) - is Microsoft’s LDAP directory included with Windows Server 2000 and later. All client versions of Windows, including Windows XP and Windows Vista, can communicate with AD to perform security operations including account logon. A Windows client will authenticate using AD when the user logs on to the computer using a domain account rather than a local account. Like the SAM scenario, the user’s credential information is sent securely across the network, verified by AD, and then, if the information is correct, the user can logon.• WinLogon and NetLogon - WinLogon handles local logons at the keyboard and NetLogon handles logons across the network.
63Local vs Domain Accounts a networked Windows computer can be:domain joinedcan login with either domain or local accountsif local may not access domain resourcescentrally managed and much more securein a workgroupa collection of computers connected togetheronly local accounts in SAM can be usedno infrastructure to support AD domainA networked Windows computer can be in one of two configurations, either domain joined or in a workgroup. When a computer is domain joined, users can gain access to that computer using domain accounts, which are centrally managed in Active Directory. They can, if they wish, also log on using local accounts, but local accounts may not have access to domain resources such as networked printers, Web servers, servers, and so on. When a computer is in a workgroup, only local accounts can be used, held in the SAM. There are pros and cons to each scenario. A domain has the major advantage of being centrally managed and as such is much more secure. If an environment has 1000 Windows computers and an employee leaves, the user’s account can be disabled centrally rather than on 1000 individual computers. The only advantage of using local accounts is that a computer does not need the infrastructure required to support a domain using AD. Windows also has the notion of a workgroup, which is simply a collection of computers connected to one another using a network; but rather than using a central database of accounts in AD, the machines use only local accounts. The difference between a workgroup and a domain is simply where accounts are authenticated. A workgroup has no domain controllers; authentication is performed on each computer, and a domain authenticates accounts at domain controllers running AD.
64Domain Login Example domain admin adds user's account info (name, account, password, groups, privileges); account is represented by a Security ID (SID), unique to each account within a domain, of form: S-1–5–21-AAA-BBB-CCC-RRR. Breakdown: S means SID; 1 is the version number; 5 is the identifier authority (here SECURITY_NT_AUTHORITY); 21 means "not unique", although always unique within a domain; AAA-BBB-CCC is a unique number representing the domain; and RRR is a relative ID (increments by 1 for each new account). Now consider an example of what happens when a user logs on to a Windows system. First, a domain admin must add the user's account information to the system; this will include the user's name, account name, password, and optionally group membership and privileges. Then Windows creates an account for the user in the domain controller running Active Directory. Each user account is uniquely represented by a Security ID (SID) within a domain, and every account gets a different SID. A user account's SID is of the following form (see text for details): S-1–5–21-AAA-BBB-CCC-RRR. In Windows, a username can be in one of two formats. The first, the SAM format, is supported by all versions of Windows and is of the form DOMAIN\Username. The second is called User Principal Name (UPN) and looks more like an RFC 822 address. The SAM name should be considered a legacy format. When a user logs on to Windows, he or she does so using either a username and password, or a username and a smart card. It is possible to use other authentication or identification mechanisms, such as an RSA SecurID token or biometric device, but these require third-party support. Assuming the user logs on correctly, a token is generated by the operating system and assigned to the user. A token contains the user's SID, group membership information, and privileges. Groups are also represented using SIDs. We explain privileges subsequently.
The user’s token is assigned to every process run by the user, and is used to perform access checks, as discussed subsequently.
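The SID layout described above can be demonstrated with a short parser. The example SID string and the dictionary field names below are our own illustrative choices (RID 500 is shown because, per Windows convention, it is the built-in Administrator's relative ID); this is not the real Windows SID API.

```python
# Sketch: split a SID string of the form S-1-5-21-AAA-BBB-CCC-RRR into
# its parts. The example SID and field names are illustrative only.
def parse_sid(sid: str) -> dict:
    parts = sid.split("-")
    assert parts[0] == "S"
    return {
        "revision": int(parts[1]),                        # the "1"
        "authority": int(parts[2]),                       # 5 = SECURITY_NT_AUTHORITY
        "subauthorities": [int(p) for p in parts[3:-1]],  # 21 + domain identifier
        "rid": int(parts[-1]),                            # relative ID of the account
    }

info = parse_sid("S-1-5-21-1004336348-1177238915-682003330-500")
```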
65Domain Login Example (cont.) username in one of two forms:SAM format: DOMAIN\UsernameUser Principal Name (UPN):login using username & password or smartcardassuming login is correct, token is generated and assigned to the usercontains user’s SID, group membership info, and privilegesassigned to every process run by user, and used for access checks
66Windows Privileges are systemwide permissions assigned to user accounts – over 45 total; e.g. backup computer, or change system time; some are deemed "dangerous", such as: act as part of operating system privilege; debug programs privilege; backup files and directories privilege; others are deemed "benign", such as the bypass traverse checking privilege. Privileges are systemwide permissions assigned to user accounts, such as the ability to back up the computer, or to change the system time. There are over 45 privileges in Windows Vista. Some privileges are deemed "dangerous," which means a malicious account that is granted such a privilege can cause damage. Examples include: • Act as part of operating system privilege. This is often referred to as the Trusted Computing Base (TCB) privilege, because it allows code run by an account granted this privilege to act as part of the most trusted code in the operating system: the security code. This is the most dangerous privilege in Windows and is granted only to the Local System account; even administrators are not granted this privilege. • Debug programs privilege. This privilege allows an account to debug any process running in Windows. A user account does not need this privilege to debug an application running under the user's own account. Because of the nature of debuggers, this privilege means a user can run any code he or she wants in any running process. • Backup files and directories privilege. Any process with this privilege will bypass all access control list (ACL) checks, because the process must be able to read all files to build a complete backup. Its sister privilege, Restore files and directories, is just as dangerous because it will ignore ACL checks when restoring files from backup media. Some privileges are generally deemed benign. An example is the "bypass traverse checking" privilege, which is used to traverse directory trees even though the user may not have permissions on the traversed directory.
This privilege is assigned to all user accounts by default and is used as an NTFS file system optimization.
67Access Control Lists two forms of access control list (ACL): Discretionary ACL (DACL) grants or denies access to protected resources such as files, shared memory, named pipes, etc.; System ACL (SACL) is used for auditing and, in Windows Vista, to enforce mandatory integrity policy; objects needing protection are assigned a DACL (and possibly a SACL) that includes: the SID of the object owner; a list of access control entries (ACEs); each ACE includes a SID & access mask; an access mask could include the ability to: read, write, create, delete, modify, etc.; access masks are object-type specific, e.g. service abilities are create, enumerate. Windows has two forms of access control list (ACL). The first is the Discretionary ACL (DACL), which grants or denies access to protected resources in Windows such as files, shared memory, named pipes, and so on. The other kind of ACL is the System ACL (SACL), which is used for auditing and, in Windows Vista, to enforce mandatory integrity policy.
68Security Descriptor (SD) data structure with object owner, DACL, & SACL; e.g. Owner: CORP\Blake; ACE: Allow CORP\Paige Full Control; ACE: Allow Administrators Full Control; ACE: Allow CORP\Cheryl Read, Write and Delete; there is no implied access: if there is no ACE for the requesting user, then access is denied; applications must request the correct type of access: if they just request "all access" when they need less (e.g. read), some users who should have access will be denied. The data structure that includes the object owner, DACL, and SACL is referred to as the object's security descriptor (SD). A sample SD with no SACL is shown above. The DACL in this SD allows the user Paige (from the CORP domain) full access to the object; she can do anything to this object. Members of the Administrators group can do likewise. Cheryl can read, write, and delete the object. Note the object owner is Blake; as the owner, he can do anything to the object, too. This was always the case until the release of Windows Vista. Some customers do not want owners to have such unbridled access to objects, even though they created them. In Windows Vista you can include an Owner SID in the DACL, and the access mask associated with that account applies to the object owner. There are two important things to keep in mind about access control in Windows. First, if a user accesses an object with the SD example above, and the user is not Blake, Paige, or Cheryl, and not in the Administrators group, then that user is denied access to the object. There is no implied access. Second, if Cheryl requests read access to the object, she is granted read access. If she requests read and write access, she is also granted access. If she requests create access, she is denied access unless Cheryl is also a member of the Administrators group, because the "Cheryl ACE" does not include the "create" access mask. When a Windows application accesses an object, it must request the type of access the application requires.
Many developers would simply request “all access” when in fact the application may only want to read the object. If Cheryl uses an application that attempts to access the object described above and the application requests full access to the object, she is denied access to the object unless she is an administrator. This is the prime reason so many applications failed to execute correctly on Windows XP unless the user is a member of the Administrator’s group.
69More SDs & Access Checks each ACE in the DACL determines access; an ACE can be an allow or a deny ACE; Windows evaluates each ACE in the ACL until access is granted or explicitly denied, so deny ACEs come before allow ACEs; this is the default if set using the GUI; must explicitly order if created programmatically; when a user attempts to access a protected object, the O/S performs an access check, comparing user/group info with the ACEs in the ACL. We mentioned earlier that a DACL grants or denies access; technically this is not 100% accurate. Each ACE in the DACL determines access, and an ACE can be an allow ACE or a deny ACE. Consider the variant of the previous SD shown above. Note that the first ACE is set to deny members of the Guests group full control to the object. Basically, guests are out of luck if they attempt to access the object protected by this SD. Deny ACEs are not often used in Windows because they can be complicated to troubleshoot. Also note that the first ACE is the deny ACE; it is important that deny ACEs come before allow ACEs because Windows evaluates each ACE in the ACL until access is granted or explicitly denied. If the ACL grants access, then Windows will stop ACL evaluation, and if the deny ACE is at the end of the ACL, then it is not evaluated, so the user is granted access even if the account should be denied access. When setting an ACL from the user interface, Windows will always put deny ACEs before allow ACEs, but if you create an ACL programmatically (e.g., by using the SetSecurityDescriptorDacl function), you must explicitly place the deny ACEs first. Now put all this together. When a user account attempts to access a protected object, the operating system performs an access check. It does this by comparing the user account and group information in the user's token with the ACEs in the object's ACL. If all the requested operations (read, write, delete, and so on) are granted, then access is granted; otherwise the user gets an access denied error status (error value 5).
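The evaluation order described above (deny ACEs first, evaluation stopping as soon as the request is fully granted or any part is explicitly denied) can be captured in a toy model. The SID strings and right names below are invented for illustration; this is not the real Windows AccessCheck API.

```python
# Toy model of the Windows access check over an ordered ACE list.
# ACEs are evaluated in order: a matching deny ACE refuses at once,
# and allow ACEs accumulate rights until the request is satisfied.
# SIDs and right names here are invented, not real Windows values.
def access_check(aces, user_sids, requested):
    granted = set()
    for kind, sid, rights in aces:            # kind is "allow" or "deny"
        if sid not in user_sids:
            continue                          # ACE doesn't apply to this token
        if kind == "deny" and rights & requested:
            return False                      # explicit deny wins immediately
        if kind == "allow":
            granted |= rights & requested
            if granted == requested:
                return True                   # every requested right granted
    return False                              # no implied access

DACL = [
    ("deny",  "S-Guests", {"read", "write", "delete"}),   # deny ACE first
    ("allow", "S-Cheryl", {"read", "write", "delete"}),
]
```

Note how placing the deny ACE last instead would let a guest who also matched an earlier allow ACE slip through, which is exactly why Windows orders deny ACEs first.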
70Application access: note that when an application requests access, it must also request an access level. Initially (before XP), most applications just requested “all access”, which is only given to owner or admin accounts. This is the reason so many applications failed on Windows XP unless they ran at admin level – essentially, poor coding.
71Interacting with SDs: PowerShell to get an object’s SD: get-acl c:\folder\file.txt | format-list; use set-acl to set the DACL or SACL. Can also use the Security Descriptor Definition Language (SDDL); example function: ConvertStringSecurityDescriptorToSecurityDescriptor().
72Impersonation: a process can have multiple threads, common for both clients and servers; impersonation allows a server to serve a user using their access privileges, e.g. the ImpersonateNamedPipeClient function sets the user’s token on the current thread; access checks for that thread are then performed against this token, not the server’s, i.e. with the user’s access rights. Windows is a multi-threaded operating system, which means a single process can have more than one thread of execution at a time. This is very common for both server and client applications. For example, a word processor might have one thread accepting user input and another performing a background spellcheck. A server application, such as a database server, might start a large number of threads to handle concurrent user requests. Let’s say the database server process runs as a predefined account named DB_ACCOUNT; when it takes a user request, the application can impersonate the calling user by calling an impersonation function. For example, one networking protocol supported by Windows is called Named Pipes, and the ImpersonateNamedPipeClient function will impersonate the caller. Impersonation means setting the user’s token on the current thread. Normally, access checks are performed against the process token, but when a thread is impersonating a user, the user’s token is assigned to the thread, and the access check for that thread is performed against the token on the thread, not the process token. When the connection is done, the thread “reverts,” which means the token is dropped from the thread. So why impersonate? Imagine if the database server accesses a file named db.txt, and the DB_ACCOUNT account has read, write, delete, and update permission on the file. Without impersonation, any user could potentially read, write, delete, and update the file. With impersonation, it is possible to restrict who can do what to the db.txt file.
73Mandatory Access Control: Integrity Control in Windows Vista (and later) limits operations that change an object’s state; objects and principals are labeled (using SIDs): Low integrity (S-1-16-4096), Medium integrity (S-1-16-8192), High integrity (S-1-16-12288), System integrity (S-1-16-16384); when a write operation occurs, first check that the subject’s integrity level dominates the object’s integrity level; much of the O/S is marked medium or higher integrity. Windows Vista includes a new authorization technology named Integrity Control, which goes one step beyond DACLs. DACLs allow fine-grained access control, but integrity controls limit operations that might change the state of an object. The general premise behind integrity controls is simple: objects (such as files and processes) and principals (users) are labeled with one of the following integrity levels: Low integrity (S-1-16-4096), Medium integrity (S-1-16-8192), High integrity (S-1-16-12288), or System integrity (S-1-16-16384). Note the SIDs after the integrity levels. Microsoft implemented integrity levels using SIDs. For example, a high-integrity process will include the S-1-16-12288 SID in the process token. If a subject or object does not include an integrity label, then the subject or object is deemed medium integrity. When a write operation occurs, Windows Vista will first check to see if the subject’s integrity level dominates the object’s integrity level, which means the subject’s integrity level is equal to or above the object’s integrity level. If it is, and the normal DACL check succeeds, then the write operation is granted. The most important component in Windows Vista that uses integrity controls is Internet Explorer 7.0, but the majority of the operating system is marked medium or higher integrity.
74Vista User Account: The screen shot of Figure 24.1 shows a normal user token in Windows Vista. It includes a medium-integrity SID, which means this user account is medium integrity and any process run by this user can write only to objects of medium and lower integrity.
75Windows Vulnerabilities: Windows, like all O/S’s, has security bugs, and bugs have been exploited by attackers to compromise customer operating systems; Microsoft now uses a process improvement called the Security Development Lifecycle; the net effect is an approximately 50% reduction in bugs; Windows Vista used SDL start to finish; IIS v6 (in Windows Server 2003) had only 3 vulnerabilities in 4 years, none critical. Windows, like all operating systems, has security bugs, and a number of these bugs have been exploited by attackers to compromise customer operating systems. After 2001, Microsoft decided to change its software development process to better accommodate secure design, coding, testing, and maintenance requirements, with one goal in mind: reduce the number of vulnerabilities in all Microsoft products. This process improvement is called the Security Development Lifecycle (with core requirements as listed in the text). A full explanation of SDL is beyond the scope of this chapter, but the net effect has been an approximately 50% reduction in security bugs. Windows Vista is the first version of Windows to have undergone SDL from start to finish. Other versions of Windows had a taste of SDL, such as Windows XP SP2, but Windows XP predates the introduction of SDL at Microsoft. SDL does not equate to “bug free” and the process is certainly not perfect, but there have been some major SDL success stories. Microsoft’s Web server, Internet Information Services (IIS), has a much-maligned reputation because of serious bugs found in the product that led to worms, such as CodeRed. IIS version 6, included with Windows Server 2003, has had a stellar security track record since its release; there have been only three reported vulnerabilities in the four years since its release, none of them critical. This figure is reported as an order of magnitude fewer bugs than IIS’s main competitor, Apache.
76Security Development Lifecycle (SDL) Requirements: Mandatory security education; Security design requirements; Threat modeling; Attack surface analysis and reduction; Secure coding; Secure testing; Security push; Final security review; Security response.
77Patch Management: At first, patches were released at any time. Now, Microsoft releases them on the second Tuesday of each month (“Patch Tuesday”). More recently, it even announces the expected patch load on the Thursday before, which has been popular with sys admins.
78Windows System Hardening: the process of shoring up defenses, reducing exposed functionality, and disabling features, known as attack surface reduction; use the 80/20 rule on features, though this is not always achievable; e.g. requiring RPC authentication in XP SP2; e.g. stripping mobile code support on servers; servers are easier to harden: they are used for very specific and controlled purposes, and server users are administrators with (theoretically) better computer configuration skills than typical users. The process of hardening is the process of shoring up defenses, reducing the amount of functionality exposed to untrusted users, and disabling less-used features. At Microsoft, this process is called Attack Surface Reduction. The concept is simple: apply the 80/20 rule to features. If a feature is not used by 80% of the population, then the feature should be disabled by default. While this is the goal, it is not always achievable, simply because disabling vast amounts of functionality makes the product unusable for nontechnical users, which leads to increased support calls and customer frustration. One of the simplest and most effective ways to reduce attack surface is to replace anonymous networking protocols with authenticated networking protocols. The biggest change of this nature in Windows XP SP2 was to change all anonymous remote procedure call (RPC) access to require authentication. This was a direct result of the Blaster worm, since making this simple change to RPC helps prevent worms exploiting vulnerabilities in RPC code and code that uses RPC. In practice, requiring authentication is a very good defense; the Zotob worm, which exploited a Plug ’n’ Play vulnerability that was accessible through RPC, did not affect Windows XP SP2, even with the coding bug present, because an attacker must be authenticated first. Another advantage is that most users don’t even know the defense is there, yet they are protected.
Another example of hardening Windows occurred in Windows Server 2003. Because Windows Server 2003 is a server and not a client platform, the Web browser Internet Explorer was stripped of all mobile code support by default. In general, hardening servers is easier than hardening clients, for the reasons listed.
79Windows Security Defenses: there are 4 broad categories of security defenses: account defenses, network defenses, buffer overrun defenses, and browser defenses. All versions of Windows offer security defenses, but the list of defenses has grown rapidly in the last five years to accommodate increased Internet-based threats. The attackers today are not just kids; they are criminals who see money in compromised computers. We must stress that attacks and compromises are very real, and the attackers are highly motivated by money; attackers are no longer just young, anarchic miscreants. The defenses in Windows Vista can be grouped into the four broad categories shown. We start by discussing system hardening, which is critical to the defensive posture of a computer system and network. We then discuss each of these types of security defenses in detail.
80Account Defenses: user accounts can have privileged SIDs; least privilege dictates that users operate with just enough privilege for their tasks; Windows XP put users in the local Administrators group for application compatibility reasons; users can use “Secondary Logon” to run applications, and restricted tokens reduce per-thread privilege; Windows Vista onwards reverses the default with UAC: users are prompted before a privileged operation is performed. As we noted, user accounts can contain highly privileged SIDs (e.g. the Administrators or Account Operators groups) and dangerous privileges (such as Act as part of operating system), and malicious software running with these SIDs or privileges can wreak havoc. The principle of least privilege dictates that users should operate with just enough privilege to get their tasks done, and no more. Historically, Windows XP users operated by default as members of the local Administrators group, for application compatibility reasons: many applications that used to run on Windows 95/98/ME would not run correctly on Windows XP unless the user was an administrator, and ran into errors when run as a “Standard User.” Of course, a user can run as a “Standard User.” Windows XP and Windows Server 2003 have “Secondary Logon,” which allows a user to right-click an application, select “Run as...,” and then enter another user account and password to run the application. They also include support for a restricted token, which can reduce privilege on a per-thread level. A restricted token is simply a thread token with privileges removed and/or SIDs marked as deny-only SIDs. Windows Vista changes the default: all user accounts are users and not administrators. This is referred to as User Account Control (UAC). When a user wants to perform a privileged operation, the user is prompted to enter an administrator’s account name and password. If the user is an administrator, the user is prompted to consent to the operation.
Hence if malware attempts to perform a privileged task, the user is notified. Note that in the case of Windows “Longhorn” Server, if a user enters a command in the Run dialog box from the Start menu, the command will always run elevated if the user is normally an administrator and will not prompt the user. The great amount of user interaction required to perform these privileged operations mitigates the threat of malware performing tasks off the Run dialog box.
81Low Privilege Service Accounts: Windows services are long-lived processes started after booting; many ran with elevated privileges, but many do not need them; Windows XP added the Local Service and Network Service accounts, which allow a service local or network access but otherwise operate at a much lower privilege level; Windows XP SP2 split the RPC service (RPCSS) in two (RPCSS and the DCOM Server Process Launcher), a direct result of the Blaster worm and an example of least privilege in action (see also IIS6). Windows services are long-lived processes that usually start when the computer boots, e.g. the File and Print service and the DNS service. Many such services run with elevated privileges because they perform privileged operations, but many services do not need such elevated requirements. In Windows XP, Microsoft added two new service accounts, the Local Service account and the Network Service account, which allow a service local or network access, respectively; processes running with these accounts operate at a much lower privilege level and are not members of the local Administrators group. In Windows XP SP2, Microsoft made an important change to the RPC service (RPCSS) as an outcome of the Blaster worm. In versions of Windows prior to Windows XP SP2, RPCSS ran as the System account, the most privileged account in Windows, to allow it to execute Distributed COM (DCOM, which is layered on top of RPC) objects on a remote computer correctly. For Windows XP SP2, RPCSS was split in two: RPCSS shed its DCOM activation code, and a new service was created called the DCOM Server Process Launcher. RPCSS runs as the lower-privilege Network Service account; DCOM runs as SYSTEM. This is a good example of the principle of least privilege in action: a small amount of code runs with an elevated identity, and related components run with a lower identity.
IIS6 follows a similar model: a process named inetinfo starts under the System identity because it must perform administrative tasks, and it starts worker processes named w3wp.exe to handle user requests (these processes run under the lower-privilege Network Service identity).
82Stripping Privileges: another defense is to strip privileges from an account soon after an application starts; e.g. the Index Server process runs as System to access all disk volumes, but then sheds any unneeded privileges as soon as possible using AdjustTokenPrivileges; Windows Vista can define the privileges required by a service using ChangeServiceConfig2. Another useful defense, albeit not often used in Windows, is to strip privileges from an account when the application starts. This should be performed very early in the application startup code (for example, early in the application’s main function). The best way to describe this is by way of example. In Windows, the Index Server process runs as the System account because it needs administrative access to all disk volumes to determine if any file has changed so it can reindex the file; only members of the local Administrators group can get a volume handle. This is the sole reason Index Server must run as the System account; yet, as you will remember, the System account is bristling with dangerous privileges, such as the TCB privilege and the backup privilege. So when the main Index Server process (cidaemon.exe) starts, it sheds any unneeded privileges as soon as possible. The function that performs this is AdjustTokenPrivileges. Windows Vista also adds a function to define the set of privileges required by a service to run correctly; the function that performs this is ChangeServiceConfig2.
83Network Defenses: IPSec and IPv6, with authenticated network packets, are enabled by default in Windows Vista; IPv4 is also enabled by default, but less use is expected over time (e.g. Microsoft’s Xbox Live network uses IPSec); there is a built-in software firewall that blocks inbound connections on specific ports; Vista can allow local-net access only and optionally block outbound connections; the firewall default was off (XP) but is now on. Many users and industry pundits focus on “users-as-non-admins” and can lose sight of attacks that do not require human interaction. Account defenses cannot protect a computer from attacks exploiting a vulnerability in a network-facing process with no user interaction, such as a DNS server or Web server. Windows offers many network defenses, most notably native IPSec and IPv6 support and a bi-directional firewall. One reason DDoS attacks occur is that IPv4 is an unauthenticated protocol, and there are many other kinds of TCP/IP-related issues; the problem is that IPv4 is fundamentally flawed. One can use either IPSec or IPv6, which support authenticated network packets (see Chapter 21). In Windows Vista, IPv6 is enabled by default. IPv4 is also enabled by default, but over time Microsoft anticipates that more of the world’s networks will migrate to the much more secure protocol. All versions of Windows since Windows XP have included a built-in software firewall. The Windows XP one was limited in that (1) it was not enabled by default, and (2) its configuration was limited to blocking only inbound connections on specific ports. The firewall in Windows XP SP2 was substantially improved to allow users with multiple computers in the home to share files and print documents: the old firewall would only allow this if the file and print ports (TCP 139 and 445) were open to the Internet, whereas Windows XP SP2 has an option to open a port only on the local subnet. The other change in Windows XP SP2, and by far the most important, is that the firewall is enabled by default.
Windows Vista adds two other improvements. First, the firewall is a fully integrated component of the rewritten TCP/IP networking stack. Second, the firewall supports optionally blocking outbound connections, though this can easily be circumvented.
84Buffer Overrun Defenses: many compromises exploit buffer overruns; Windows Vista has “Stack-Based Buffer Overrun Detection (/GS)” enabled by default; source code is compiled with the special /GS option; it does not affect every function, only those with at least 4 bytes of contiguous stack data that take a pointer or buffer as an argument; it defends against the “classic stack smash.” Many computers are compromised through attacks that take advantage of buffer overrun bugs, as we discussed in Chapter 11. Windows Vista includes “Stack-Based Buffer Overrun Detection (/GS)” enabled by default. The source code for Windows XP SP2 is compiled with a special compiler option in Microsoft Visual C++ to add defenses to the function’s stack. The compiler switch is /GS, and it is usable by anyone with access to a Visual C++ compiler. This compiler option does not affect every function; it affects only functions that have at least 4 bytes of contiguous stack data and only when the function takes a pointer or buffer as an argument. Note that the stack data must be of a “classic stack smash” type, such as a char array.
85Windows Stack and /GS flag Normally in Windows, a function’s stack looks like Figure 24.2a. When the function returns, it must continue execution at the next instruction after the instruction that called this function. The CPU does this by taking the values off the stack (called popping) and populating the EBP and EIP registers. If the attacker can overflow the buffer on the stack, he can overrun the data used to populate the EBP and EIP registers with values under his control and hence change the application’s execution flow. Once the code is compiled with the /GS option, the stack is laid out as shown in Figure 24.2b. Now a cookie has been inserted between stack data and the function return address. This random value is checked when the function exits, and if the cookie is corrupted, the application is halted. You will also notice that buffers on the stack are placed in higher memory than non-buffers, such as function pointers, C objects, and scalar values. The reason for this is to make it harder for some attacks to succeed. Function pointers and C objects with virtual destructors (which are simply function pointers) are also subject to attack because they determine execution flow. If these constructs are placed in memory higher than buffers, then overflowing a buffer could corrupt a function pointer, for example. By switching the order around, the attacker must take advantage of a buffer underrun, which is rarer, to successfully corrupt the function pointer. There are other, rarer, variants of the buffer overrun that can still corrupt a function pointer, such as corrupting a stack frame in higher memory.
86Buffer Overrun Defenses: No eXecute (NX) / Data Execution Prevention (DEP) / eXecute Disable (XD) prevent code executing in data segments, as commonly used by buffer overrun exploits; applications opt in by linking with the /NXCOMPAT option. Stack randomization (Vista only) randomizes thread stack base addresses. Heap-based buffer overrun defenses: add and check a random value on each heap block; heap integrity checking; heap randomization (Vista only). No eXecute (NX) / Data Execution Prevention (DEP) / eXecute Disable (XD) use CPU support to prevent code from executing in data segments. Most modern Intel CPUs, and all current AMD CPUs, support it. DEP support was first introduced in Windows XP SP2 and is a critically important defense in Windows Vista, especially when used with address space layout randomization (ASLR). The goal of NX is to prevent data from executing: most buffer overrun exploits enter a computer system as data, and then those data are executed. By default, most system components in Windows Vista and applications can use NX by linking with the /NXCOMPAT linker option. The stack randomization defense is available only in Windows Vista and later. When a thread starts in Windows Vista, the operating system will randomize the stack base address by 0–31 pages (4K page size). Once the page is chosen, a random offset is chosen within the page, and the stack starts from that spot. The purpose of randomization is to remove some of the predictability from the attacker, making attacks harder. Heap-based buffer overruns are also exploitable and can lead to code execution. The first heap defense, added in Windows XP SP2, is to add a random value to each heap block and detect that this cookie has not been tampered with. If the cookie has changed, then the heap has been corrupted and the application can be forced to crash (fail-stop).
The second defense is heap integrity checking: when heap blocks are freed, metadata in the heap data structures are checked for validity, and if the data are compromised, either the heap block is leaked or the application crashes. Windows Vista also adds heap randomization. When a heap is created, the start of the heap is offset by 0–4 MB. Again, this makes things harder for the attacker.
87Other Defenses: Image randomization: the O/S boots in one of 256 configurations, making it less predictable for attackers. Service restart policy: services can be configured to restart if they fail, which is great for reliability but lousy for security; Vista sets some critical services so they can only restart twice, after which a manual restart is needed, giving an attacker only two attempts. Disk and file encryption: the Encrypting File System (EFS) and BitLocker. Windows Vista also adds image randomization. When the operating system boots, it starts up in one of 256 configurations; in other words, the entire operating system is shifted up or down in memory when it is booted. The best way to think of this is to imagine that a random number is selected at boot, and every operating system component is loaded at an offset from that location, but the offset between the components is fixed. Again, this makes the operating system less predictable for attackers and makes it less likely that an exploit will succeed. In Windows, a service can be configured to restart if the service fails. This is great for reliability but lousy for security, because if an attacker attacks the service and the attack fails but the service crashes, the service might restart and the attacker will have another chance to attack the system. In Windows Vista, Microsoft set some of the critical services to restart only twice, after which the service will not restart unless the administrator manually restarts it. This gives the attacker only two attempts to get the attack to work, and in the face of stack, heap, and image randomization, the server is much more difficult to attack successfully.