Intrusion Detection
An intrusion detection system (IDS) analyzes system and network activity for unauthorized entry and/or malicious activity.
- A knowledge-based IDS preemptively alerts security administrators before an intrusion occurs, using a database of common attack signatures.
- A behavioral IDS tracks all resource usage and watches for anomalies.
For a useful article listing various intrusion detection tools, see 10 Top Intrusion Detection Tools for 2018.
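As a simple illustration of host-based intrusion detection, the RPM database can serve as a basic file-integrity checker, since it records the size, permissions, and MD5 checksum of every installed file. A minimal sketch, assuming an RPM-based system such as Red Hat Linux:

```shell
# Verify every installed package against the RPM database.
# Output lines flag files whose size (S), mode (M), or
# MD5 checksum (5) differ from the recorded values.
rpm -Va

# Verify a single security-sensitive package, for example
# the shadow password utilities.
rpm -V shadow-utils
```

A changed checksum on a binary such as /bin/login can indicate a rootkit; dedicated tools such as Tripwire maintain their own signed databases for the same purpose.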
Common Exploits
Null or Default Passwords
Leaving administrative passwords blank or using a default password provided by the application package. Most common in hardware such as routers and BIOSes.

Default Shared Keys
Secure services sometimes package default security keys for development or evaluation testing purposes. If these keys are left unchanged and placed in a production environment on the Internet, any user with the same default keys has access to that shared-key resource and any sensitive information it contains. Most common in wireless access points and preconfigured secure server appliances. CIPE, for example, ships with a sample static key that must be changed before moving to a production environment.
IP Spoofing
A remote machine acts as a node on the local network, finds vulnerabilities with your servers, and installs a backdoor program or trojan to gain control over network resources. Spoofing is quite difficult, as it involves the attacker predicting TCP sequence numbers to coordinate a connection to target systems, but several tools are available to assist crackers in performing such an attack. It depends on the target system running services (such as rsh, telnet, FTP, and others) that use source-based authentication techniques, which are not recommended compared to PKI or the encrypted authentication used in ssh or SSL/TLS.
Eavesdropping
Collecting data that passes between two active nodes on a network by listening in on the connection between them. This type of attack works mostly with plain text transmission protocols such as telnet, FTP, and HTTP. A remote attacker must have access to a compromised system on the LAN in order to perform such an attack; usually the cracker has used an active attack (such as IP spoofing or man-in-the-middle) to compromise a system on the LAN. Preventative measures include services with cryptographic key exchange, one-time passwords, or encrypted authentication to prevent password snooping; strong encryption during transmission is also advised.
Service Vulnerabilities
An attacker finds a flaw or loophole in a service run over the Internet; through this vulnerability, the attacker compromises the entire system, any data it may hold, and possibly other systems on the network. HTTP-based services such as CGI are vulnerable to remote command execution and even shell access. Even if the HTTP service runs as a non-privileged user such as "nobody", information such as configuration files and network maps can be read, or the attacker can start a denial of service attack which drains system resources or renders the service unavailable to other users. Services sometimes have vulnerabilities that go unnoticed during development and testing, such as buffer overflows, where attackers crash a service by filling its addressable memory with more data than it is designed to accept, then use the crashed state to gain an interactive command prompt from which they may execute arbitrary commands. Administrators should make sure that services do not run as the root user, and should stay vigilant about patches and errata updates for their applications from vendors or security organizations such as CERT and CVE.
Application Vulnerabilities
Attackers find faults in desktop and workstation applications, such as e-mail clients, and execute arbitrary code, implant trojans for future compromise, or crash systems. Further exploitation can occur if the compromised workstation has administrative privileges on the rest of the network. Workstations and desktops are more prone to exploitation because workers do not have the expertise or experience of a server administrator to prevent or detect a compromise; it is imperative to inform individuals of the risks they take when they install unauthorized software or open unsolicited mail. Safeguards can be implemented such that email client software does not automatically open or execute attachments. Additionally, the automatic updating of workstation software via Red Hat Network or another system management service can alleviate the burdens of multi-seat security deployments.
Denial of Service (DoS) Attacks
An attacker or group of attackers coordinates an attack on network or server resources by sending unauthorized packets to the target machine (whether server, router, or workstation), forcing the resource to become unavailable to legitimate users. Coordinated ping flood attacks have been used in the past. Source packets are usually forged (as well as rebroadcast), making investigation into the true source of the attack difficult. Advances in ingress filtering (IETF RFC 2267) and network IDS technology assist administrators in tracking down and preventing distributed DoS attacks.
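The ingress filtering described in RFC 2267 can be approximated with iptables by dropping inbound packets whose claimed source addresses could not legitimately arrive on the external interface. A sketch, assuming eth0 faces the Internet and the internal LAN uses 192.168.1.0/24 (both example values):

```shell
# Drop packets arriving from the Internet that claim a
# source address inside the private LAN range (spoofed).
iptables -A INPUT -i eth0 -s 192.168.1.0/24 -j DROP

# Drop packets claiming other private or reserved source ranges,
# which should never arrive from the public Internet.
iptables -A INPUT -i eth0 -s 10.0.0.0/8 -j DROP
iptables -A INPUT -i eth0 -s 172.16.0.0/12 -j DROP
iptables -A INPUT -i eth0 -s 127.0.0.0/8 -j DROP
```

Filtering of this kind does not stop a flood outright, but it prevents the local network from originating or relaying obviously forged traffic.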
Firewalls
Information security is commonly thought of as a process and not a product. However, standard security implementations usually employ some form of dedicated mechanism to control access privileges and restrict network resources to users who are authorized, identifiable, and traceable.
Aside from VPN solutions such as CIPE or IPsec, firewalls are one of the core components of a network security implementation. Several vendors market firewall solutions catering to all levels of the marketplace: from home users protecting one PC to data center solutions safeguarding vital enterprise information. Firewalls can be standalone hardware solutions, such as firewall appliances by Cisco, Nokia, and Sonicwall. There are also proprietary software firewall solutions developed for home and business markets by vendors such as Checkpoint, McAfee, and Symantec.
Apart from the differences between hardware and software firewalls, there are also differences in the way firewalls function that separate one solution from another.
- NAT
Network Address Translation (NAT) places internal network IP subnetworks behind one or a small pool of external IP addresses, masquerading all requests as coming from one source rather than several.
Can be configured transparently to machines on a LAN
Protection of many machines and services behind one or more external IP address(es), simplifying administration duties
Restriction of user access to and from the LAN can be configured by opening and closing ports on the NAT firewall/gateway
Cannot prevent malicious activity once users connect to a service outside of the firewall
- Packet Filter
Packet filtering firewalls read each data packet that passes into and out of a LAN, processing packets by header information and filtering them based on sets of programmable rules implemented by the firewall administrator. The Linux kernel has built-in packet filtering functionality through the netfilter kernel subsystem.
Customizable through the iptables front-end utility
Does not require any customization on the client side, as all network activity is filtered at the router level rather than at the application level
Since packets are not transmitted through a proxy, network performance is faster due to direct connection from client to remote host
Cannot filter packets for content like proxy firewalls
Processes packets at the protocol layer, but cannot filter packets at an application layer
Complex network architectures can make establishing packet filtering rules difficult, especially if coupled with IP masquerading or local subnets and DMZ networks
- Proxy
Proxy firewalls filter all requests of a certain protocol or type from LAN clients to a proxy machine, which then makes those requests to the Internet on behalf of the local client. A proxy machine acts as a buffer between malicious remote users and the internal network client machines.
Gives administrators control over what applications and protocols function outside of the LAN. Some proxy servers can cache data so that clients can access frequently requested data from the local cache rather than having to use the Internet connection to request it, which is convenient for cutting down on unnecessary bandwidth consumption. Proxy services can be logged and monitored closely, allowing tighter control over resource utilization on the network.
Proxies are often application specific (HTTP, telnet, etc.) or protocol restricted (most proxies work with TCP-connected services only). Proxies can become a network bottleneck, as all requests and transmissions are passed through one source rather than direct client-to-remote-service connections.
netfilter and iptables
The Linux kernel features a networking subsystem called netfilter, which provides stateful or stateless packet filtering, NAT and IP masquerading services, and the ability to mangle IP header information for advanced routing and connection state management.
netfilter is controlled through the iptables utility, which uses the netfilter subsystem to enhance network connection, inspection, and processing; whereas ipchains used intricate rule sets for filtering source and destination paths, as well as connection ports for both. iptables features advanced logging, pre- and post-routing actions, network address translation, and port forwarding all in one command-line interface.
The first step in using iptables is to start the iptables service. This can be done with the command:
service iptables start
The ipchains and ip6tables services must be turned off to use the iptables service, using the following commands:
service ipchains stop
chkconfig ipchains off
service ip6tables stop
chkconfig ip6tables off
To make iptables start by default whenever the system is booted, change the runlevel status on the service using chkconfig:
chkconfig --level 345 iptables on
The syntax of iptables is separated into tiers. The main tier is the chain. A chain specifies the state at which a packet will be manipulated. The usage is as follows:
iptables -A chain -j target
The -A option appends a rule at the end of an existing ruleset. The chain is the name of the chain for the rule. The three built-in chains of iptables (that is, the chains that affect every packet which traverses the network) are INPUT, OUTPUT, and FORWARD. These chains are permanent and cannot be deleted.
When creating an iptables ruleset, it is critical to remember that order is important. For example, if one rule specifies that any packets from the local 192.168.100.0/24 subnet be dropped, and a rule is then appended (-A) which allows packets from 192.168.100.13 (which is within the restricted subnet), the appended rule is ignored. You must set a rule to allow 192.168.100.13 first, and then set a drop rule on the subnet.
The only exception to rule ordering and iptables is with setting default policies ( -P ), as iptables will honor any rules that follow default policies.
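The ordering caveat can be made concrete. To block the 192.168.100.0/24 subnet while still admitting the single host 192.168.100.13, the ACCEPT rule must be appended first:

```shell
# Correct order: the specific ACCEPT precedes the broad DROP,
# so packets from .13 match the first rule and are accepted
# before the subnet-wide rule is ever consulted.
iptables -A INPUT -s 192.168.100.13 -j ACCEPT
iptables -A INPUT -s 192.168.100.0/24 -j DROP
```

Reversing these two lines would drop traffic from 192.168.100.13 along with the rest of the subnet, since iptables stops at the first matching rule.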
Basic Firewall Policies
Some basic policies established from the beginning can aid as a foundation for building more detailed, user-defined rules. iptables uses policies (-P) to create default rules. Security-minded administrators usually elect to drop all packets as a policy and only allow specific packets on a case-by-case basis. Note that the default policy of a built-in chain can only be ACCEPT or DROP; DENY was an ipchains target, and REJECT cannot be used as a chain policy. The following rules drop all incoming and outgoing packets on a network gateway:
iptables -P INPUT DROP
iptables -P OUTPUT DROP
Additionally, it is recommended that any forwarded packets (network traffic that is to be routed from the firewall to its destination node) be dropped as well, to restrict internal clients from inadvertent exposure to the Internet. To do this, use the following rule:
iptables -P FORWARD DROP
After setting the policy chains, you can create new rules for your particular network and security requirements. The following sections outline some common rules you may implement in the course of building your iptables firewall.
Saving and Restoring iptables Rules
Firewall rules are only valid while the computer is on. If the system is rebooted, the rules are automatically flushed and reset. To save the rules so that they will load later, use the following command:
/sbin/service iptables save
The rules are stored in the file /etc/sysconfig/iptables and are applied whenever the service is started or restarted, or the machine is rebooted.
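On systems without the iptables initscript, the same effect can be achieved directly with the iptables-save and iptables-restore utilities; the file path below matches the initscript's convention but is otherwise only an example:

```shell
# Dump the running ruleset to a file.
/sbin/iptables-save > /etc/sysconfig/iptables

# Reload the saved ruleset, for example from a firewall
# initialization script run at boot.
/sbin/iptables-restore < /etc/sysconfig/iptables
```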
INPUT Filtering
Keeping remote attackers out of a LAN is an important aspect of network security, if not the most important. The integrity of a LAN should be protected from malicious remote users through the use of stringent firewall rules. In the following example, the LAN (which uses a private class C 192.168.1.0/24 IP range) rejects telnet access to the firewall from the outside:
iptables -A INPUT -p tcp --dport telnet -j REJECT
iptables -A INPUT -p udp --dport telnet -j REJECT
The rules reject all outside TCP and UDP connections to the telnet service (typically port 23) with a connection-refused error message. Rules using the --dport or --sport options can use either port numbers or common service names, so --dport telnet and --dport 23 are both acceptable. However, if the port number is changed in /etc/services, then using the service name instead of explicitly stating the port number will not work.
There is a distinction between the REJECT and DROP target actions. The REJECT target denies access and returns a connection refused error to users who attempt to connect to a service. The DROP target, as the name implies, drops the packet without any warning to the user. Administrators can use their own discretion when using these targets; however, to avoid user confusion and repeated connection attempts, the REJECT target is recommended.
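The two targets can be seen side by side below. REJECT can also return a specific error type via its --reject-with option; the telnet port is used here purely as an example:

```shell
# REJECT: attempted connections receive an ICMP
# "port unreachable" error, so clients fail immediately.
iptables -A INPUT -p tcp --dport 23 -j REJECT

# For TCP rules, a reset can be returned instead, which
# mimics a closed port more closely.
iptables -A INPUT -p tcp --dport 23 -j REJECT --reject-with tcp-reset

# DROP: packets are silently discarded; clients wait
# until their connection attempts time out.
iptables -A INPUT -p tcp --dport 23 -j DROP
```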
There may be times when certain users require remote access to the LAN from outside the LAN. Secure services, such as SSH and CIPE, can be used for encrypted remote connection to LAN services. For administrators with PPP-based resources (such as modem banks or bulk ISP accounts), dialup access can be used to circumvent firewall barriers securely, as modem connections are typically behind a firewall/gateway because they are direct connections. However, for remote users with broadband connections, special cases can be made. You can configure iptables to accept connections from remote SSH and CIPE clients. For example, to allow remote SSH access to the LAN, the following may be used:
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
(SSH uses TCP only, so no UDP rule is needed.) CIPE connection requests from the outside can be accepted with the following command (replacing x with your device number):
iptables -A INPUT -p udp -i cipcbx -j ACCEPT
Since CIPE uses its own virtual device, which transmits datagram (UDP) packets, the rule allows the cipcb interface for incoming connections, instead of using source or destination ports (though they can be used in place of device options).
There are other services for which you may need to define INPUT rules.
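For instance, if the gateway itself serves web pages, INPUT rules admitting web traffic might look like the following; this is a sketch, and the ports should be adjusted to whatever services are actually run:

```shell
# Accept HTTP and HTTPS traffic destined for the
# firewall/gateway itself.
iptables -A INPUT -p tcp --dport 80 -j ACCEPT
iptables -A INPUT -p tcp --dport 443 -j ACCEPT
```

With a default INPUT policy of DROP, every service the gateway exposes needs an explicit ACCEPT rule of this form.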
OUTPUT Filtering
There may be instances when an administrator must allow certain users on the internal network to make outbound connections. Perhaps the administrator wants an accountant to connect to a special port; specialized rules can be established using the OUTPUT chain in iptables. The OUTPUT chain places restrictions on outbound data.
For example, it may be prudent for an administrator to install a VPN client on the gateway to allow the entire internal network to access a remote LAN (such as a satellite office). To use CIPE as the VPN client installed on the gateway, use a rule similar to the following:
iptables -A OUTPUT -p udp -o cipcbx -j ACCEPT
More elaborate rules can be created that control access to specific subnets, or even specific nodes, within a LAN. You can also restrict certain dubious services such as trojans, worms, and other client/server viruses from contacting their servers. For example, some trojans scan networks for services on ports from 31337 to 31340 (called the elite ports in cracking lingo). Since there are no legitimate services that communicate via these non-standard ports, blocking them can effectively diminish the chances that potentially infected nodes on your network independently communicate with their remote master servers. Note that the following rule is only useful if your default OUTPUT policy is set to ACCEPT; if you set the OUTPUT policy to DROP, this rule is not needed.
iptables -A OUTPUT -o eth0 -p tcp --dport 31337 --sport 31337 -j DROP
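Since the text describes a range of ports (31337 through 31340), the rule can be widened to cover the whole range using the port-range syntax of iptables:

```shell
# Block outbound TCP and UDP traffic to the entire
# "elite" port range, not just 31337.
iptables -A OUTPUT -o eth0 -p tcp --dport 31337:31340 -j DROP
iptables -A OUTPUT -o eth0 -p udp --dport 31337:31340 -j DROP
```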
FORWARD and NAT Rules
Most organizations are allotted a limited number of publicly routable IP addresses from their ISP. Due to this limited allowance, administrators must find creative ways to share access to Internet services without giving scarce IP addresses to every node on the LAN. Using private class C IP addresses is the common way to allow all nodes on a LAN to properly access network services internally and externally. Edge routers (such as firewalls) can receive incoming transmissions from the Internet and route the packets to the intended LAN node; at the same time, they can also route outgoing requests from a LAN node to the remote Internet service. This forwarding of network traffic can become dangerous at times, especially with the availability of modern cracking tools that can spoof internal IP addresses and make the remote attacker's machine act as a node on your LAN. To prevent this, iptables provides routing and forwarding policies that can be implemented to prevent aberrant usage of network resources.
The FORWARD policy allows an administrator to control where packets can be routed. For example, to allow forwarding for an entire internal IP address range (assuming the gateway has an internal IP address on eth1), the following rule can be set:
iptables -A FORWARD -i eth1 -j ACCEPT
In this example, the -i option matches packets arriving on the specified interface; the rule accepts for forwarding only packets that enter the firewall on the internal device (in this case, eth1).
By default, IPv4 policy in Red Hat Linux kernels disables support for IP forwarding, which prevents boxes running Red Hat Linux from functioning as dedicated edge routers. To enable IP forwarding, run the following command or place it in your firewall initialization script:
echo "1" > /proc/sys/net/ipv4/ip_forward
If this command is run via shell prompt, then the setting is not remembered after a reboot. Thus it is recommended that it be added to the firewall initialization script.
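Alternatively, the setting can be made persistent through /etc/sysctl.conf, which is read at boot on Red Hat Linux:

```shell
# Entry to add to /etc/sysctl.conf so forwarding
# survives reboots:
#   net.ipv4.ip_forward = 1

# Apply the same setting immediately without a reboot:
sysctl -w net.ipv4.ip_forward=1
```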
FORWARD rules can be implemented to restrict certain types of traffic to the LAN only, such as local network file shares through NFS or Samba. The following rules reject outside connections to Samba shares:
iptables -A FORWARD -p tcp --dport 137:139 -j DROP
iptables -A FORWARD -p udp --dport 137:139 -j DROP
To take the restrictions a step further, block all outside connections that attempt to spoof private IP address ranges to infiltrate your LAN. If a LAN uses the 192.168.1.0/24 range, a rule can tell the Internet-facing network device (for example, eth0) to drop any packets arriving on that device with a source address in your LAN IP range. Because it is recommended to drop forwarded packets as a default policy, any other spoofed IP address arriving at the external-facing device (eth0) is rejected automatically.
iptables -A FORWARD -p tcp -s 192.168.1.0/24 -i eth0 -j DROP
iptables -A FORWARD -p udp -s 192.168.1.0/24 -i eth0 -j DROP
Rules can also be set to route traffic to certain machines, such as a dedicated HTTP or FTP server, preferably one that is isolated from the internal network on a demilitarized zone (DMZ). To set a rule for routing all incoming HTTP requests to a dedicated HTTP server at IP address 10.0.4.2 and port 80 (outside of the 192.168.1.0/24 range of the LAN), network address translation (NAT) calls a PREROUTING table to forward the packets to the proper destination:
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to 10.0.4.2:80
With this command, all HTTP connections to port 80 from outside of the LAN are routed to an HTTP server on a network separate from the rest of the internal network. This form of network segmentation can prove safer than allowing HTTP connections to a machine on the internal network. If the HTTP server is configured to accept secure connections, then port 443 must be forwarded as well.
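The corresponding rule for secure connections mirrors the HTTP rule, again assuming the dedicated server at 10.0.4.2:

```shell
# Forward inbound HTTPS requests to the DMZ web server.
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to 10.0.4.2:443
```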
Hardware and Network Protection
The best practice before deploying a machine into a production environment or connecting your network to the Internet is to determine your organizational needs and how security can fit into the requirements as transparently as possible. Since the main goal of the Red Hat Linux Security Guide is to explain how to secure the Red Hat Linux operating system, a more detailed examination of hardware and physical network security is beyond the scope of this document. However, this chapter is a brief overview of establishing security policies with regard to hardware and physical networks. Important factors to consider are how computing needs and connectivity requirements fit into the overall security strategy. The following explains some of these factors in detail.
- Connectivity is the method by which an administrator intends to connect disparate resources on a network. An administrator may use Ethernet (hubbed or switched CAT-5/RJ-45 cabling), token ring, 10BASE2 coaxial cable, or even wireless (802.11x) technologies. Depending on which medium an administrator chooses, certain media and network topologies require complementary technologies such as hubs, routers, switches, base stations, and access points. Determining a functional network architecture allows an easier administrative process if security issues arise.
- Computing involves more than just workstations running desktop software. Modern organizations require massive computational power and highly-available services, which can include mainframes, compute/server clusters, powerful workstations, and specialized appliances. With these organizational requirements, however, come increased susceptibility to hardware failure, natural disasters, and tampering or theft of equipment.
From these general considerations, administrators can get a better view of implementation. The design of a computing environment will then be based on both organizational need and security considerations — a true, "ground-up" implementation that places priority on both factors.
Secure Network Topologies
The foundation of a LAN is the topology, or network architecture. A topology is the physical and logical layout of a LAN in terms of the resources provided, the distances between nodes, and the transmission medium. Depending upon the needs of the organization that the network services, there are several choices available for network implementation. Each topology has its advantages and its security issues that network architects should weigh when designing their network layout.
Physical Topologies
As defined by the Institute of Electrical and Electronics Engineers (IEEE), there are three common topologies for physical connection of a LAN.
Ring Topology
The ring topology connects each node with exactly two connections, creating a ring structure in which each node is reachable either directly, through its two physically closest neighboring nodes, or indirectly, through the physical ring. Token Ring, FDDI, and SONET networks are connected in this fashion (with FDDI utilizing a dual-ring technique); however, there are no common Ethernet connections using this physical topology, so rings are not commonly deployed except in legacy or institutional settings with a large installed base of nodes (for example, a university).
Linear Bus Topology
The linear bus topology consists of nodes which connect to a terminated main linear cable (the backbone). The linear bus topology requires the least amount of cabling and networking equipment, making it the most cost-effective topology. However, the linear bus depends on the backbone being constantly available, making it a single point-of-failure if it has to be taken off-line or is severed. Linear bus topologies are commonly used in peer-to-peer LANs using co-axial (coax) cabling and 50-93 ohm terminators at both ends of the bus.
Star Topology
The star topology incorporates a central point where nodes connect and through which communication is passed. This center point, called a hub, can be either broadcast or switched. The topology does introduce a single point of failure in the centralized networking hardware that connects the nodes. However, because of this centralization, networking issues that affect segments or the entire LAN are easily traceable to this one source.
Transmission Considerations
In a broadcast network, a node sends a packet that traverses every other node until the recipient accepts it. Every node in the network conceivably receives the packet of data until the recipient processes it. In a broadcast network, all packets are sent in this manner.
In a switched network, packets are not broadcast; instead, they are processed by the switched hub, which in turn creates a direct connection between the sending and recipient nodes using unicast transmission principles. This eliminates the need to broadcast packets to each node, thus lowering traffic overhead.
The switched network also prevents packets from being intercepted by malicious nodes or users. In a broadcast network, since each node receives the packet en route to its destination, malicious users can set their Ethernet device to promiscuous mode and accept all packets regardless of whether or not the data is intended for them. Once in promiscuous mode, a sniffer application can be used to filter, analyze, and reconstruct packets for passwords, personal data, and more. Sophisticated sniffer applications will store such information in a text file and, perhaps, even send the information to an arbitrary source (for example, the malicious user's email address).
A switched network requires a network switch, a specialized piece of hardware which replaces the role of the traditional hub in which all nodes on a LAN are connected. Switches store the MAC addresses of all connected nodes in an internal database, which they use to perform their direct routing. Several manufacturers, including Cisco Systems, Linksys, and Netgear, offer various types of switches with features such as 10/100BASE-T compatibility, gigabit Ethernet support, and support for Carrier Sense Multiple Access with Collision Detection (CSMA/CD), which is ideal for high-traffic networks because it queues connections and detects when packets collide in transit.
Wireless Networks
An emerging issue for enterprises today is that of mobility. Remote workers, field technicians, and executives require portable solutions, including laptops, Personal Digital Assistants (PDAs), and wireless access to network resources. The IEEE has established a standards body for the 802.11 wireless specification, which establishes standards for wireless data communication throughout all industries. The current standard in practice today is the 802.11b specification.
The 802.11b specification is actually a group of standards governing wireless communication and access control at the 2.4 GHz communication band. This specification has already been adopted at an industry level, and several vendors market 802.11b (also called Wi-Fi) access and compatibility as a value-added feature of their core offerings. Consumers have also embraced the standard for small-office/home-office (SOHO) networks. The popularity has also extended from LANs to MANs (Metropolitan Area Networks), especially in populated areas where a concentration of wireless access points (WAPs) are available. There are also wireless Internet service providers (WISPs) that cater to frequent travelers who require broadband Internet access to conduct business remotely.
The 802.11b specification allows for direct, peer-to-peer connections between nodes with wireless NICs. This loose grouping of nodes, called an ad hoc network, is ideal for quick connection sharing between two or more nodes, but introduces scalability issues that are not suitable for long-term wireless connectivity.
A more suitable solution for wireless access in fixed structures is to install one or more WAPs that connect to the traditional network and allow wireless nodes to connect through the WAP as if they were on the Ethernet-mediated network. The WAP effectively acts as a bridge router between the nodes connected to it and the rest of the network.
802.11b Security
Although wireless networking is comparable in speed to, and certainly more convenient than, traditional wired networking media, there are some limitations to the specification that warrant thorough consideration. The most important of these limitations is in its security implementation.
In the excitement of successfully deploying an 802.11x network, many administrators fail to exercise even the most basic security precautions. Since all 802.11b networking is done using high-band radio-frequency (RF) signals, the data transmitted is easily accessible to any user with an 802.11b NIC, a wireless network scanning tool such as NetStumbler or Wellenreiter, and common sniffing tools such as dsniff and snort. To prevent such aberrant usage of private wireless networks, the 802.11b standard uses the Wired Equivalent Privacy (WEP) protocol, an RC4-based 64- to 128-bit encryption key shared between each node or between the AP and the node. This key encrypts transmissions and decrypts incoming packets dynamically and transparently. Administrators often fail to employ this shared-key encryption scheme, however; either they forget to do so or they choose not to because of performance degradation (especially over long distances). Enabling WEP on a wireless network can greatly reduce the possibility of data interception.
Relying on WEP, however, is still not a sound enough means of protection against determined malicious users. There are specialized utilities, such as AirSnort and WEP Crack, whose purpose is to crack the RC4 WEP encryption algorithm and expose the shared key. To protect against this, administrators should adhere to strict policies regarding usage of wireless methods to access sensitive information. Administrators may choose to augment the security of wireless connectivity by restricting it to SSH or VPN connections, which introduces an additional encryption layer above the WEP encryption. Using this policy, a malicious user outside of the network who cracks the WEP encryption has to additionally crack the VPN or SSH encryption which, depending on the encryption method, can employ up to triple-strength 168- or 192-bit DES algorithm encryption (3DES) or proprietary algorithms of even greater strength. Administrators who apply these policies should certainly restrict plain text protocols such as Telnet or FTP, as passwords and data can be exposed using any of the aforementioned attacks.
Network Segmentation and DMZs
For administrators who wish to run externally accessible services such as HTTP, email, FTP, and DNS, it is recommended that these publicly available services be physically and/or logically segmented from the internal network. Firewalls and hardening of hosts and applications are effective ways to deter casual intruders. However, determined crackers will find ways into the internal network if the services they have cracked reside on the same logical route as the rest of the network. The externally accessible services become what the security industry regards as a demilitarized zone (DMZ), a logical network segment where inbound traffic from the Internet is only able to access those services in the DMZ. This is effective in that, even if a malicious user exploits a machine on the DMZ, the rest of the internal network lies behind a firewall on a separated segment.
Most enterprises have a limited pool of publicly routable IP addresses from which they can host external services, so administrators utilize elaborate firewall rules to accept, forward, reject, and deny packet transmissions. Firewall policies implemented with iptables or dedicated hardware firewalls allow for complex routing and forwarding rules, which administrators can use to segment inbound traffic to specific services at specified addresses and ports, as well as allow only the LAN to access internal services, which can prevent IP spoofing exploits.
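As an illustrative sketch of such segmentation (the interface names and addresses here are hypothetical, not a recommended policy), a gateway with the Internet on eth0, a DMZ on eth1, and the internal LAN on eth2 might forward inbound HTTP only to a DMZ web server while blocking the DMZ from initiating connections into the LAN:

```shell
# Forward inbound HTTP from the Internet (eth0) to a web server in the
# DMZ (10.0.4.2 on eth1); addresses and interfaces are examples only.
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
         -j DNAT --to-destination 10.0.4.2

# Permit the forwarded traffic through the FORWARD chain.
iptables -A FORWARD -i eth0 -o eth1 -p tcp -d 10.0.4.2 --dport 80 -j ACCEPT

# Never let DMZ hosts initiate new connections into the internal LAN (eth2),
# so a compromised DMZ machine cannot pivot inward.
iptables -A FORWARD -i eth1 -o eth2 -m state --state NEW -j DROP
```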
Incident Response
In the event that the security of a system has been compromised, an incident response is necessary. It is the responsibility of the security team to respond to the problem quickly and effectively.
Defining Incident Response
Incident response is an expedited reaction to an issue or occurrence. Pertaining to information security, an example would be a security team's actions against a hacker who has penetrated a firewall and is currently sniffing internal network traffic. The incident is the breach of security. The response depends upon how the security team reacts, what they do to minimize damages, and when they restore resources, all while attempting to guarantee data integrity.
Think of your organization and how almost every aspect of it relies upon technology and computer systems. If there is a compromise, imagine the potentially devastating results. Besides the obvious system downtime and theft of data, there could be data corruption, identity theft (from online personnel records), embarrassing publicity, or even financially devastating results as customers and business partners learn of and react negatively to news of a compromise.
Research on past security breaches (both internal and external) shows that companies can sometimes be run out of business as a result of a breach. A breach can result in resources rendered unavailable and stolen or corrupted data. But one cannot overlook issues that are difficult to calculate financially, such as bad publicity. An organization must calculate the cost of a breach and how it will detrimentally affect an organization, both in the short and long term.
Attackers and Vulnerabilities
In order to plan and implement a good security strategy, first be aware of some of the issues which determined, motivated attackers exploit to compromise systems. But before detailing these issues, the terminology used when identifying an attacker must be defined.
A Quick History of Hackers
The modern meaning of the term hacker has origins dating back to the 1960s and the Massachusetts Institute of Technology (MIT) Tech Model Railroad Club, which designed train sets of large scale and intricate detail. Hacker was a name used for club members who discovered a clever trick or workaround for a problem.
The term hacker has since come to describe everything from computer buffs to gifted programmers. A common trait among most hackers is a willingness to explore in detail how computer systems and networks function with little or no outside motivation. Open source software developers often consider themselves and their colleagues to be hackers and use the word as a term of respect.
Typically, hackers follow a form of the hacker ethic which dictates that the quest for information and expertise is essential and that sharing this knowledge is the hackers duty to the community. During this quest for knowledge, some hackers enjoy the academic challenges of circumventing security controls on computer systems. For this reason, the press often uses the term hacker to describe those who illicitly access systems and networks with unscrupulous, malicious, or criminal intent. The more accurate term for this type of computer hacker is cracker — a term created by hackers in the mid-1980s to differentiate the two communities.
Vulnerability Assessment
Given the time, resources, and motivation, a cracker can break into nearly any system. At the end of the day, all the security procedures and technologies currently available cannot guarantee that the systems are safe from intrusion. Routers can help to secure your gateways to the Internet. Firewalls help secure the edge of the network. Virtual Private Networks can safely pass your data in an encrypted stream. Intrusion detection systems have the potential to warn you of malicious activity. However, the success of each of these technologies is dependent upon a number of variables, including:
- The expertise of the staff responsible for configuring, monitoring, and maintaining the technologies
- The ability to patch and update services and kernels quickly and efficiently
- The ability of those responsible to keep constant vigilance over the network
Given the dynamic state of data systems and technologies, securing your corporate resources can be quite complex. Because of this complexity, it may be difficult to find expert resources for all of your systems. While it is possible to have personnel knowledgeable in many areas of information security at a high level, it is difficult to retain staff who are experts in more than a few subject areas. This is mainly because each subject area of Information Security requires constant attention and focus. Information security does not stand still.
Thinking Like the Enemy
Suppose you administer an enterprise network. Such networks are commonly comprised of operating systems, applications, servers, network monitors, firewalls, intrusion detection systems, and more. Now imagine trying to keep current with every one of these. Given the complexity of today's software and networking environments, exploits and bugs are a certainty. Keeping current with patches and updates for an entire network can prove to be a daunting task in a large organization with heterogeneous systems.
Combine the expertise requirements with the task of keeping current, and it is inevitable that adverse incidents occur, systems are breached, data is corrupted, and service is interrupted.
To augment security technologies and aid in protecting systems, networks, and data, think like a cracker and gauge the security of systems by checking for weaknesses. Preventative vulnerability assessments against your own systems and network resources can reveal potential issues that can be addressed before a cracker finds them.
A vulnerability assessment is an internal audit of your network and system security, the results of which indicate the confidentiality, integrity, and availability of your network (as explained in Section 1.1.4 Standardizing Security). A vulnerability assessment typically starts with a reconnaissance phase, during which important data regarding the target systems and resources is gathered. This phase leads to the system readiness phase, whereby the target is essentially checked for all known vulnerabilities. The readiness phase culminates in the reporting phase, where the findings are classified into categories of high, medium, and low risk, and methods for improving the security (or mitigating the risk of vulnerability) of the target are discussed.
If you were to perform a vulnerability assessment of your home, you would likely check each door to your home to see if it is shut and locked. You would also check every window, making sure that they shut completely and latch correctly. This same concept applies to systems, networks, and electronic data. Malicious users are the thieves and vandals of your data. Focus on their tools, mentality, and motivations, and you can then react swiftly to their actions.
Security Updates
As security exploits in software are discovered, the software must be fixed to close the possible security risk. If the package is part of a Red Hat Linux distribution that is currently supported, Red Hat, Inc. is committed to releasing updated packages that fix security holes as soon as possible. If the announcement of the security exploit is accompanied by a patch (or source code that fixes the problem), the patch is applied to the Red Hat Linux package, tested by the quality assurance team, and released as an errata update. If the announcement does not include a patch, a Red Hat Linux developer works with the maintainer of the package to fix the problem. After the problem is fixed, it is tested and released as an errata update.
To minimize the time a system is exploitable, update to new security errata packages as soon as they are released.
Not only do you want to update to the latest packages that fix any security exploits, but you also want to make sure the latest packages do not contain further exploits such as a Trojan horse. A cracker can rebuild a version of a package (with the same version number as the one that is supposed to fix the problem) but with a different security exploit in the package and release it on the Internet. If this happens, using security measures such as verifying files against the original RPM will not detect the exploit. Thus, it is very important that you only download RPMs from trusted sources, such as Red Hat, Inc., and check the signature of the package to make sure it was built by that source.
Red Hat offers two ways to retrieve security updates:
- Download from Red Hat Network
- Download from the Red Hat Linux Errata website
Using Red Hat Network
Red Hat Network allows you to automate most of the update process. It determines which RPM packages are necessary for the system, downloads them from a secure repository, verifies the RPM signature to make sure they have not been tampered with, and updates them. The package install can occur immediately or can be scheduled during a certain time period.
Red Hat Network requires you to provide a System Profile for each machine that you want updated. The System Profile contains hardware and software information about the system. This information is kept confidential and is not given to anyone else. It is only used to determine which errata updates are applicable to each system. Without it, Red Hat Network cannot determine whether the system needs updates. When a security errata (or any type of errata) is released, Red Hat Network sends you an email with a description of the errata as well as a list of the affected systems. To apply the update, you can use the Red Hat Update Agent or schedule the package to be updated through the website http://rhn.redhat.com.
To learn more about the benefits of Red Hat Network, refer to the Red Hat Network Reference Guide available at http://www.redhat.com/docs/manuals/RHNetwork/ or visit http://rhn.redhat.com.
Server Security
When a system is used as a server on a public network, it becomes a target for attacks. For this reason, hardening the system and locking down services is of paramount importance for the system administrator.
Before delving into specific issues, you should review the following general tips for enhancing server security:
- Keep all services up to date to protect against the latest threats.
- Use secure protocols whenever possible.
- Serve only one type of network service per machine whenever possible.
- Monitor all servers carefully for suspicious activity.
Secure Services With TCP Wrappers and xinetd
TCP wrappers provide access control to a variety of services. Most modern network services, such as SSH, Telnet, and FTP, make use of TCP wrappers, which stand guard between an incoming request and the requested service.
The benefits offered by TCP wrappers are enhanced when used in conjunction with xinetd, a super service that provides additional access, logging, binding, redirection, and resource utilization control.
The following subsections will assume a basic knowledge of each topic and focus on specific security options.
Enhancing Security With TCP Wrappers
TCP wrappers are capable of much more than denying access to services. This section illustrates how they can be used to send connection banners, warn of attacks from particular hosts, and enhance logging functionality. For a thorough list of TCP wrapper functionality and control language, refer to the hosts_options man page.
TCP Wrappers and Connection Banners
Sending an intimidating banner to clients connecting to a service is a good way to disguise what system the server is running while letting a potential attacker know that the system administrator is vigilant. To implement a TCP wrappers banner for a service, use the banner option.
This example implements a banner for vsftpd. To begin, create a banner file. It can be anywhere on the system, but it must bear the same name as the daemon. For this example, the file is named /etc/banners/vsftpd.
The contents of the file will look like this:
220-Hello, %c
220-All activity on ftp.example.com is logged.
220-Act up and you will be banned.

The %c token supplies a variety of client information, such as the username and hostname, or the username and IP address, to make the connection even more intimidating. The Red Hat Linux Reference Guide has a list of other tokens available for TCP wrappers.
For this banner to be presented to incoming connections, add the following line to the /etc/hosts.allow file:
vsftpd : ALL : banners /etc/banners/
TCP Wrappers and Attack Warnings
If a particular host or network has been caught attacking the server, TCP wrappers can be used to warn of subsequent attacks from that host or network via the spawn directive.
In this example, assume that a cracker from the 206.182.68.0/24 network has been caught attempting to attack the server. By placing the following line in the /etc/hosts.deny file, the connection attempt is denied and logged into a special file:
ALL : 206.182.68.0 : spawn /bin/echo `date` %c %d >> /var/log/intruder_alert

The %d token supplies the name of the service that the attacker was trying to access.
To allow the connection and log it, place the spawn directive in the /etc/hosts.allow file.
Since the spawn directive executes any shell command, you can create a special script to notify the administrator or execute a chain of commands in the event that a particular client attempts to connect to the server.
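As a concrete sketch (the script path, log location, and INTRUDER_LOG override are hypothetical, chosen for illustration), such a helper could timestamp each attempt; TCP wrappers would invoke it with a line such as spawn /usr/local/bin/intruder_note %c %d:

```shell
#!/bin/sh
# Hypothetical notification helper for a TCP wrappers spawn directive.
# TCP wrappers expands %c (client info) and %d (daemon name) and passes
# them in as $1 and $2. INTRUDER_LOG allows the log path to be overridden.
intruder_note() {
    log=${INTRUDER_LOG:-/var/log/intruder_alert}
    echo "$(date) client=$1 service=$2" >> "$log"
}

# When executed directly, pass the wrapper-expanded tokens straight through.
if [ $# -gt 0 ]; then
    intruder_note "$@"
fi
```

From here the script could just as easily mail the administrator or add a firewall rule; anything the shell can run, spawn can trigger.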
TCP Wrappers and Enhanced Logging
If certain types of connections are of more concern than others, the log level can be elevated for that service via the severity option.
In this example, assume anyone attempting to connect to port 23 (the Telnet port) on an FTP server is a cracker. To denote this, place an emerg flag in the log files instead of the default flag, info, and deny the connection.
To do this, place the following line in /etc/hosts.deny:
in.telnetd : ALL : severity emerg

This uses the default authpriv logging facility, but elevates the priority from the default value of info to emerg.
Enhancing Security With xinetd
The xinetd super server is another useful tool for controlling access to its subordinate services. This section will focus on how xinetd can be used to set a trap service and control the amount of resources any given xinetd service can use in order to thwart denial of service attacks. For a more thorough list of the options available, refer to the man pages for xinetd and xinetd.conf.
Setting a Trap
One important feature of xinetd is its ability to add hosts to a global no_access list. Hosts on this list are denied subsequent connections to services managed by xinetd for a specified length of time or until xinetd is restarted. This is accomplished using the SENSOR attribute. This technique is an easy way to block hosts attempting to port scan the server.
The first step in setting up a SENSOR is to choose a service you do not plan on using. For this example, Telnet is used.
Edit the file /etc/xinetd.d/telnet and change the flags line to read:
flags = SENSOR

Add the following line within the braces:
deny_time = 30

This denies the host that attempted to connect to the port for 30 minutes. Other acceptable values for the deny_time attribute are FOREVER, which keeps the ban in effect until xinetd is restarted, and NEVER, which allows the connection and logs it.
Finally, the last line should read:
disable = no

While using SENSOR is a good way to detect and stop connections from nefarious hosts, it has two drawbacks:
- It will not work against stealth scans.
- An attacker who knows you are running SENSOR can mount a denial of service attack against particular hosts by forging their IP addresses and connecting to the forbidden port.
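Putting the steps above together, a trap-enabled /etc/xinetd.d/telnet might look like the following sketch; fields other than flags, deny_time, and disable mirror a typical stock entry and may differ on your system:

```
service telnet
{
        flags           = SENSOR
        socket_type     = stream
        wait            = no
        user            = root
        server          = /usr/sbin/in.telnetd
        log_on_failure  += USERID
        deny_time       = 30
        disable         = no
}
```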
Controlling Server Resources
Another important feature of xinetd is its ability to control the amount of resources services under its control can utilize.
It does this by way of the following directives:
- cps = <number_of_connections> <wait_period> — Dictates the number of connections allowed to the service per second and the number of seconds the service is temporarily disabled if that rate is exceeded. This directive accepts only integer values.
- instances = <number_of_connections> — Dictates the total number of connections allowed to a service. This directive accepts either an integer value or UNLIMITED.
- per_source = <number_of_connections> — Dictates the connections allowed to a service by each host. This directive accepts either an integer value or UNLIMITED.
- rlimit_as = <number[K|M]> — Dictates the amount of memory address space the service can occupy in kilobytes or megabytes. This directive accepts either an integer value or UNLIMITED.
- rlimit_cpu = <number_of_seconds> — Dictates the amount of time in seconds that a service may occupy the CPU. This directive accepts either an integer value or UNLIMITED.
Using these directives can help prevent any one xinetd service from overwhelming the system, resulting in a denial of service.
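As an illustrative fragment (the service and the limit values are placeholders, not recommendations), these directives sit inside the braces of a service entry in /etc/xinetd.d/:

```
cps         = 25 30    # at most 25 connections/sec; pause 30 s if exceeded
instances   = 80       # no more than 80 simultaneous connections in total
per_source  = 10       # no more than 10 connections from any single host
rlimit_as   = 8M       # cap the service's address space at 8 megabytes
rlimit_cpu  = 20       # cap CPU time at 20 seconds
```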
Virtual Private Networks
Following the same functional principles as dedicated circuits, VPNs allow for secured digital communication between two parties (or networks), creating a Wide Area Network (WAN) from existing LANs. Where they differ from frame relay or ATM is in their transport medium. VPNs transmit over IP or datagram (UDP) layers, making them a secure conduit through the Internet to an intended destination. Most free software VPN implementations incorporate open standard, open source encryption to further mask data in transit.
Some organizations employ hardware VPN solutions to augment security, while others use the software or protocol-based implementations. There are several vendors with hardware VPN solutions such as Cisco, Nortel, IBM, and Checkpoint. There is a free software-based VPN solution for Linux called FreeS/Wan that utilizes a standardized IPSec (or Internet Protocol Security) implementation. These VPN solutions act as specialized routers that sit between the IP connection from one office to another. When a packet is transmitted from a client, it sends it through the router or gateway, which then adds header information for routing and authentication called the Authentication Header (AH) and trailer information for CRC file integrity and security called the Encapsulation Security Payload (ESP).
With such a heightened level of security, a cracker must not only intercept a packet, but decrypt the packet as well (most VPNs employ the triple Data Encryption Standard [3DES] 168-bit cipher). Intruders who employ a man-in-the-middle attack between a server and client must also have access to the keys exchanged for authenticating sessions. VPNs are a secure and effective means to connect multiple remote nodes to act as a unified Intranet.
The security, reliability, and functional compatibility with similar IPSec implementations make FreeS/Wan a strong candidate for WAN deployment. However, because of its strict focus on security, FreeS/Wan and other IPSec implementations have been observed to be more difficult to configure, deploy, and maintain than hardware VPN or proprietary software solutions. Red Hat Linux system administrators and security specialists must also take into account that there is currently no supported IPSec implementation included in their distribution of choice.
VPNs and Red Hat Linux
Red Hat Linux users and administrators have various options in terms of implementing a software solution to connect and secure their WAN. There are, however, two methods of implementing VPN and VPN-equivalent connections that are currently supported in Red Hat Linux. One equivalent solution involves using OpenSSH as a tunnel between two remote nodes. This solution is a sound alternative to telnet, rsh, and other remote host communication protocols, but it does not completely address the usability needs of all corporate telecommuters and branch offices. Another solution that is more adherent to the de facto definition of a VPN is Crypto IP Encapsulation (CIPE), a method of connecting remote LANs to function as a unified network.
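As a sketch of the OpenSSH tunneling approach (the hostnames, ports, and user here are hypothetical), a local port can be forwarded over an encrypted channel to a service on the remote network:

```shell
# Forward local port 8143 through gateway.example.com to the IMAP port
# on an internal mail host; all names and ports are illustrative.
ssh -N -L 8143:mail.internal.example.com:143 user@gateway.example.com
```

A mail client on the local machine can then connect to localhost:8143 and reach the internal IMAP server through the encrypted tunnel, without the IMAP traffic ever crossing the Internet in the clear.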
IP6Tables
The introduction of the next-generation Internet Protocol, called IPv6, expands beyond the 32-bit address limit of IPv4 (or IP). IPv6 supports 128-bit addresses and, as such, carrier networks that are IPv6 aware are able to address a larger number of routable addresses than IPv4.
Red Hat Linux supports IPv6 firewall rules using the Netfilter 6 subsystem and the IP6Tables command. The first step in using IP6Tables is to start the IP6Tables service. This can be done with the command:
service ip6tables start
The ipchains and iptables services must be turned off to use the IP6Tables service exclusively:
service ipchains stop
chkconfig ipchains off
service iptables stop
chkconfig iptables off

To make IP6Tables start by default whenever the system is booted, change the runlevel status on the service using chkconfig.
chkconfig --level 345 ip6tables on

The syntax is identical to iptables in every aspect except that IP6Tables supports 128-bit addresses. For example, SSH connections on an IPv6-aware network server can be enabled with the following rule:
ip6tables -A INPUT -i eth0 -p tcp -s 3ffe:ffff:100::1/128 --dport 22 -j ACCEPT
Host-based IDS
A host-based IDS analyzes several areas to determine misuse (malicious or abusive activity inside the network) or intrusion (breaches from the outside). Host-based IDS consult several types of log files (kernel, system, server, network, firewall, and more), and compare the logs against an internal database of common signatures for known attacks. UNIX and Linux host-based IDS make heavy use of syslog and its ability to separate logged events by their severity (for example, minor printer messages versus major kernel warnings). The host-based IDS filters logs (which, in the case of some network and kernel event logs, can be quite verbose), analyzes them, re-tags the anomalous messages with its own system of severity rating, and collects them in its own specialized log for administrator analysis.
Host-based IDS can also verify the data integrity of important files and executables. It checks a database of sensitive files (and any files that you may want to add) and creates a checksum of each file with a message-digest utility such as md5sum (128-bit algorithm) or sha1sum (160-bit algorithm). The host-based IDS then stores the sums in a plain text file, and periodically compares the file checksums against the values in the text file. If any of the file checksums do not match, the IDS alerts the administrator by email or cellular pager. This is the process used by Tripwire.
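A minimal sketch of this baseline-and-compare technique using md5sum follows; the watched file and storage paths are illustrative, not those used by any particular IDS:

```shell
# Build a baseline of checksums for watched files, then verify against it.
mkdir -p /tmp/ids-demo
echo "original contents" > /tmp/ids-demo/critical.conf

# Record the baseline sums (a real IDS keeps these somewhere tamper-proof,
# such as read-only media).
md5sum /tmp/ids-demo/critical.conf > /tmp/ids-demo/baseline.md5

# Periodically re-check; a non-zero exit status from md5sum -c means at
# least one file has changed and the administrator should be alerted.
if md5sum --status -c /tmp/ids-demo/baseline.md5; then
    echo "OK: files unchanged"
else
    echo "ALERT: checksum mismatch"
fi
```

The weakness of the plain-checksum approach is the baseline file itself: if an intruder can rewrite it, the comparison proves nothing, which is why Tripwire cryptographically signs its database.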
Tripwire
Tripwire is the most popular host-based IDS for Linux. Tripwire, Inc., the developers of Tripwire, recently opened the software source code for the Linux version and licensed it under the terms of the GNU General Public License. Red Hat Linux includes Tripwire, which is available in RPM package format for easy installation and upgrade.
Detailed information on the installation and configuration of Tripwire can be found in the Red Hat Linux Reference Guide.
RPM as an IDS
The RPM Package Manager (RPM) is another program that can be used as a host-based IDS. RPM contains various options for querying packages and their contents. These verification options can be invaluable to an administrator who suspects that critical system files and executables have been modified.
The following list details some options for RPM that you can use to verify file integrity on your Red Hat Linux system.
Some of the commands in the list that follows require that you import the Red Hat GPG public key into your RPM keyring. This key verifies that packages installed on the system contain a Red Hat package signature, which ensures that your packages originated from Red Hat. The key can be imported with the following command (substituting <version> with the version of RPM installed on the system):
rpm --import /usr/share/doc/rpm-<version>/RPM-GPG-KEY
- rpm -V package_name
The -V option verifies the files in the installed package called package_name. If it shows no output and exits, this means that none of the files have been modified in any way since the last time the RPM database was updated. If there is an error, such as
S.5....T c /bin/ps

then the file has been modified in some way, and you need to assess whether to keep the file (as is the case with modified configuration files in /etc) or delete the file and reinstall the package that contains it. The following list defines the elements of the 8-character string (S.5....T in the above example) that notifies of a verification failure.
. — The test has passed this phase of verification

? — The test has found a file that could not be read, which is most likely a file permission issue

S — The test has encountered a file that is smaller or larger than it was when originally installed on the system

5 — The test has found a file whose md5 checksum does not match the original checksum of the file when first installed

M — The test has detected a file permission or file type error on the file

D — The test has encountered a device file mismatch in major/minor number

L — The test has found a symbolic link that has been changed to another file path

U — The test has found a file that had its user ownership changed

G — The test has found a file that had its group ownership changed

T — The test has encountered mtime verification errors on the file

- rpm -Va
The -Va option verifies all installed packages and finds any failure in its verification tests (much like the -V option, but more verbose in its output since it is verifying every installed package).
- rpm -Vf /bin/ls
The -Vf option verifies individual files in an installed package. This can be useful if you wish to perform a quick verification of a suspect file.
- rpm -K application-1.0.i386.rpm
The -K option is useful for checking the md5 checksum and the GPG signature of an RPM package file. This is useful for checking whether a package you want to install is signed by Red Hat or any organization for which you have imported the GPG public key into your GPG keyring. A package that has not been properly signed emits an error message similar to the following:
application-1.0.i386.rpm (SHA1) DSA sha1 md5 (GPG) NOT OK (MISSING KEYS: GPG#897da07a)

Exercise caution when installing unsigned packages, as they are not approved by Red Hat, Inc. and could contain malicious code.
RPM can be a powerful tool, as evidenced by its many verification tools for installed packages and RPM package files. It is strongly recommended that you back up the contents of your RPM database directory (/var/lib/rpm/) to read-only media, such as CD-ROM, after you install Red Hat Linux. Doing so allows you to safely verify files and packages against the read-only database, rather than against the database on the system, as malicious users may corrupt the database and skew your results.
Network-based IDS
Network-based intrusion detection systems operate differently from host-based IDS. The design philosophy of a network-based IDS is to scan network packets at the router or host level, audit packet information, and log any suspicious packets into a special log file with extended information. Based on these suspicious packets, a network-based IDS can scan its own database of known network attack signatures and assign a severity level for each packet. If severity levels are high enough, a warning email or pager call is placed to security team members so they can further investigate the nature of the anomaly.
Network-based IDS have become popular as the Internet grows in size and traffic. IDS that can scan the voluminous amounts of network activity and successfully tag suspect transmissions are well-received within the security industry. Due to the inherent insecurity of the TCP/IP protocols, it has become imperative to develop scanners, sniffers, and other network auditing and detection tools to prevent security breaches due to such malicious network activity as:
- IP Spoofing
- Denial-of-service attacks
- ARP cache poisoning
- DNS name corruption
- Man-in-the-middle attacks
Most network-based IDS require that the host system network device be set to promiscuous mode, which allows the device to capture every packet passed on the network. Promiscuous mode can be set through the ifconfig command, such as the following:
ifconfig eth0 promisc

Running ifconfig with no options reveals that eth0 is now in promiscuous (PROMISC) mode.
eth0      Link encap:Ethernet  HWaddr 00:00:D0:0D:00:01
          inet addr:192.168.1.50  Bcast:192.168.1.255  Mask:255.255.252.0
          UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
          RX packets:6222015 errors:0 dropped:0 overruns:138 frame:0
          TX packets:5370458 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:100
          RX bytes:2505498554 (2389.4 Mb)  TX bytes:1521375170 (1450.8 Mb)
          Interrupt:9 Base address:0xec80

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:21621 errors:0 dropped:0 overruns:0 frame:0
          TX packets:21621 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1070918 (1.0 Mb)  TX bytes:1070918 (1.0 Mb)

Using a tool such as tcpdump (included with Red Hat Linux), you can see the large amounts of traffic flowing throughout a network:
# tcpdump
tcpdump: listening on eth0
02:05:53.702142 pinky.example.com.ha-cluster > \
    heavenly.example.com.860: udp 92 (DF)
02:05:53.702294 heavenly.example.com.860 > \
    pinky.example.com.ha-cluster: udp 32 (DF)
02:05:53.702360 pinky.example.com.55828 > dns1.example.com.domain: \
    PTR? 192.35.168.192.in-addr.arpa. (45) (DF)
02:05:53.702706 ns1.example.com.domain > pinky.example.com.55828: \
    6077 NXDomain* 0/1/0 (103) (DF)
02:05:53.886395 shadowman.example.com.netbios-ns > \
    172.16.59.255.netbios-ns: NBT UDP PACKET(137): QUERY; BROADCAST
02:05:54.103355 802.1d config c000.00:05:74:8c:a1:2b.8043 root \
    0001.00:d0:01:23:a5:2b pathcost 3004 age 1 max 20 hello 2 fdelay 15
02:05:54.636436 konsole.example.com.netbios-ns > 172.16.59.255.netbios-ns: \
    NBT UDP PACKET(137): QUERY; REQUEST; BROADCAST
02:05:56.323715 pinky.example.com.1013 > heavenly.example.com.860: \
    udp 56 (DF)
02:05:56.323882 heavenly.example.com.860 > pinky.example.com.1013: \
    udp 28 (DF)

Notice that packets not intended for our machine (pinky.example.com) are still being scanned and logged by tcpdump.
snort
While tcpdump is a useful auditing tool, it is not considered a true IDS because it does not analyze and flag packets for anomalies. tcpdump prints all packet information to the output screen or to a log file without any analysis or elaboration. A proper IDS analyzes the packets, tags potentially malicious packet transmissions, and stores them in a formatted log.
Snort is an IDS designed to be comprehensive and accurate in successfully logging malicious network activity and notifying administrators when potential breaches occur. Snort uses the standard libpcap library and tcpdump as a packet logging backend.
The most prized feature of Snort, in addition to its functionality, is its flexible attack signature subsystem. Snort has a constantly updated database of attacks that can be added to and updated via the Internet. Users can create signatures based on new network attacks and submit them to the Snort signature mailing lists so that all Snort users can benefit. This community ethic of sharing has developed Snort into one of the most up-to-date and robust network-based IDSes available.
Snort is not included with Red Hat Linux and is not supported. It has been included in this document as a reference to users who may be interested in evaluating it.
Hardware Security
According to a study released in 2000 by the FBI and the Computer Security Institute (CSI), over seventy percent of all attacks on sensitive data and resources reported by organizations occurred from within the organization itself. Implementing an internal security policy appears to be just as important as an external strategy. The following sections explain some of the common steps administrators and users can take to safeguard their systems from internal malpractice.
Employee workstations, for the most part, are not as likely to be targets for remote attack, especially those behind a properly configured firewall. However, there are some safeguards that can be implemented to avert an internal or physical attack on individual workstation resources.
Modern workstation and home PCs have BIOSes that control system resources on the hardware level. Workstation users can also set administrative passwords within the BIOS to prevent malicious users from accessing the system. BIOS passwords prevent malicious users from booting the system at all, deterring the user from quickly accessing or stealing information stored on the hard drive.
However, if a malicious user steals the PC (the most common case of theft involves frequent travelers who carry laptops and other mobile devices) and takes it to a location where the PC can be disassembled, the BIOS password does not prevent the attacker from removing the hard drive, installing it in another PC without BIOS restriction, and mounting the hard drive to read its contents. In these cases, it is recommended that workstations have locks to restrict access to internal hardware. Hardware such as lockable steel cables can be attached to PC and laptop chassis to prevent theft, as well as key locks on the chassis itself to prevent internal access. Such hardware is widely available from manufacturers such as Kensington and Targus.
Server hardware, especially production servers, is typically mounted on racks in server rooms. Server cabinets usually have lockable doors, and individual server chassis are also available with lockable front bezels for increased security from errant (or intentional) shutdown.
Enterprises can also use co-location providers to house their servers, as co-location providers offer higher bandwidth, 24x7 technical support, and expertise in system and server security. Co-location facilities are known for being heavily guarded by trained security staff and tightly monitored at all times, which can make them an effective means of outsourcing security and connectivity needs for HTTP transactions or streaming media services. However, co-location can be cost-prohibitive, especially for small to medium-sized businesses.
Implementing the Incident Response Plan
Once a plan of action is created, it must be agreed upon and actively implemented. Any aspect of the plan that is questioned during active implementation will most likely result in poor response time and downtime in the event of a breach. This is where practice exercises become invaluable. Concerns should be raised and resolved before the plan is set in production; from then on, the implementation should be agreed upon by all directly connected parties and executed with confidence.
If a breach is detected while the CERT is present for quick reaction, potential responses can vary. The team can decide to pull the network connections, disconnect the affected systems, patch the exploit, and then reconnect quickly without further potential complication. The team can also watch the perpetrators and track their actions. The team could even redirect the perpetrator to a honeypot — a system or segment of a network containing intentionally false data — in order to track incursion safely and without disruption to production resources.
Responding to an incident should also be accompanied by information gathering whenever possible. Running processes, network connections, files, directories, and more should be actively audited in real-time. Having a snapshot of production resources for comparison can be helpful in tracking rogue services or processes. CERT members and in-house experts will be great resources in tracking such anomalies in a system. System administrators know what processes should and should not appear when running top or ps. Network administrators are aware of what normal network traffic should look like when running Snort or even tcpdump. These team members should know their systems and should be able to spot an anomaly quicker than someone unfamiliar with the infrastructure.
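The auditing described above can be sketched as a small snapshot script. This is a minimal sketch, not part of the original plan: the paths and the particular commands captured are illustrative, and the fallbacks to ss and /proc/net/tcp are for systems without netstat.

```shell
# A minimal point-in-time snapshot for later comparison (illustrative paths).
SNAPDIR=/tmp/ir-snapshot
mkdir -p "$SNAPDIR"
# Running processes, for comparison against a known-good baseline
ps auxww > "$SNAPDIR/processes.txt" 2>/dev/null || true
# Open network connections; fall back to the raw kernel socket table
(netstat -an 2>/dev/null || ss -an 2>/dev/null || cat /proc/net/tcp) \
    > "$SNAPDIR/connections.txt"
# A recursive listing preserves names, permissions, and timestamps
ls -lR /etc > "$SNAPDIR/etc-listing.txt" 2>/dev/null
```

Run against a healthy system first, the same script produces the baseline that makes rogue processes or connections stand out later.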
Investigating the Incident
Investigating a computer breach is like investigating a crime scene. Detectives collect evidence, note any strange clues, and take inventory on loss and damage. An analysis of computer compromise can either be done as the attack is happening or post-mortem (after the attack).
Although it is unwise to trust any system log files on an exploited system, there are other forensic utilities to aid in your analysis. The purpose and features of these tools vary, but they commonly create bit-image copies of media, correlate events and processes, show low level file system information, and recover deleted files whenever possible.
Collecting an Evidential Image
Creating a bit-image copy of media is a feasible first step; when performing data forensic work, it is a requirement. It is recommended to make two copies: one for analysis and investigation, and a second to be stored along with the original for evidence in any legal proceedings.
You can use the dd command that is part of the fileutils package in Red Hat Linux to create a monolithic image of an exploited system as evidence in an investigation or for comparison with trusted images. Suppose there is a single hard drive from a system we want to image. Attach that drive as a slave to your system, and then use dd to create the image file, such as the following:
dd if=/dev/hdd bs=1k conv=noerror,sync of=/home/evidence/image1

This command creates a single file named image1 using a 1k block size for speed. The conv=noerror,sync options force dd to continue reading and dumping data even if bad sectors are encountered on the suspect drive. It is now possible to study the resulting image file, or even attempt to recover deleted files.
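Since two copies are recommended, checksums can demonstrate that the copies are faithful duplicates. The following sketch uses a scratch file to stand in for the suspect drive; in practice, substitute the device (e.g. /dev/hdd) and real evidence paths.

```shell
# Demonstrate image verification: identical md5sums across copies show the
# duplicates are faithful. A scratch file stands in for the suspect drive.
mkdir -p /tmp/evidence
printf 'contents of the suspect drive' > /tmp/evidence/source.bin
dd if=/tmp/evidence/source.bin of=/tmp/evidence/image1 \
    bs=1k conv=noerror,sync 2>/dev/null
dd if=/tmp/evidence/source.bin of=/tmp/evidence/image2 \
    bs=1k conv=noerror,sync 2>/dev/null
# The two image sums should match; record them alongside the evidence
md5sum /tmp/evidence/image1 /tmp/evidence/image2
```

Storing the recorded sums with the original media helps establish later that the analysis copy was not altered.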
Gathering Post-Breach Information
The topic of digital forensics and analysis itself is quite broad, yet the tools are mostly architecture specific and cannot be applied generically. However, incident response, analysis, and recovery are important topics. With proper knowledge and experience, Red Hat Linux can be an excellent platform for performing these types of analysis, as it includes several utilities for performing post-breach response and restoration.
Table 10-1 details some commands for file auditing and management. It also lists some examples that can be used to properly identify files and file attributes (such as permissions and access dates) so that we can collect further evidence or items for analysis. These tools, when combined with intrusion detection systems, firewalls, hardened services, and other security measures, can help in reducing the potential damage when an attack occurs.
For detailed information about each tool, refer to their respective manual pages.
- dd — Creates a bit-image copy (or disk dump) of files and partitions. Combined with a check of the md5sums of each image, administrators can compare a pre-breach image of a partition or file with a breached system to see if the sums match. Example: dd if=/bin/ls of=ls.dd |md5sum ls.dd >ls-sum.txt
- grep — Finds useful string (text) information on and inside files and directories, such as permissions, script changes, file attributes, and more. Used mostly as a piped command with another command such as ls, ps, or ifconfig. Example: ps auxw |grep /bin
- strings — Prints the strings of printable characters in a file. It is most useful for auditing executables for anomalies such as mail commands to unknown addresses or logging to a non-standard log file. Example: strings /bin/ps |grep 'mail'
- file — Determines the characteristics of files based on format, encoding, libraries that they link (if any), and file type (binary, text, and more). It is useful for determining whether an executable such as /bin/ls has been modified using static libraries, which is a sure sign that the executable has been replaced with one installed by a malicious user. Example: file /bin/ls
- find — Searches directories for particular files. find is a useful tool for searching the directory structure by keyword, date and time of access, permissions, and more. This can be useful for administrators that perform general system audits of particular directories or files. Example: find -atime +12 -name *log* -perm u+rw
- stat — Displays various information about a file, including time last accessed, permissions, UID and GID bit settings, and more. Useful for checking when a breached system executable was last used and/or when it was modified. Example: stat /bin/netstat
- md5sum — Calculates the 128-bit checksum using the md5 hash algorithm. You can use the command to create a text file that lists all crucial executables that could be modified or replaced in a security compromise. Redirect the sums to a file to create a simple database of checksums, and then copy the file onto a read-only medium such as a CD-ROM. Example: md5sum /usr/bin/gdm >>md5sum.txt
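The md5sum checksum-database idea can be sketched in two commands; the file list and database path here are illustrative.

```shell
# Build a small checksum database of critical binaries (list is illustrative),
# then verify it later -- ideally from a copy kept on read-only media.
md5sum /bin/ls /bin/sh > /tmp/md5sum-db.txt
# Check mode recomputes each sum and prints "OK" for unmodified files;
# a mismatch is a strong hint the binary has been replaced.
md5sum -c /tmp/md5sum-db.txt
```

Because a cracker can trivially regenerate a database stored on the breached system itself, the copy used for verification must come from media written before the breach.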
Create an Incident Response Plan
It is important that an incident response plan is formulated, supported throughout the organization, put into action, and regularly tested. A good incident response plan may minimize the effects of a breach. Furthermore, it may even reduce the negative publicity and focus attention on quick reaction time.
From a security team perspective, it does not matter whether a breach occurs (as such occurrences are an eventual part of doing business using an untrusted carrier network such as the Internet), but rather, when a breach will occur. Do not think of a system as weak and vulnerable; it is important to realize that given enough time and resources someone will breach even the most security-hardened system or network. You do not need to look any further than the Security Focus website at http://www.securityfocus.com for updated and detailed information concerning recent security breaches and vulnerabilities, from the frequent defacement of corporate webpages to the attacks on the 13 root DNS nameservers in 2002 that attempted to cripple Internet access around the world.
The positive aspect of realizing the inevitability of a system breach is that it allows the security team to develop a course of action that minimizes any potential damage. Combining a course of action with expertise allows the team to respond to adverse conditions in a formal and responsive manner.
The incident response plan itself can be separated into four sections:
- Immediate action
- Investigation
- Restoration of resources
- Reporting the incident to proper channels
An incident response must be decisive and executed quickly. There is little room for error in most cases. By staging practice emergencies and measuring response times, it is possible to develop a methodology that fosters speed and accuracy. Reacting quickly may minimize the impact of resource unavailability and the potential damage caused by system compromise.
An incident response plan has a number of requirements, including:
- Appropriate personnel (in-house experts)
- Financial support
- Executive support
- A feasible plan of action
- Physical resources (redundant storage, standby systems, and backup services)
The Computer Emergency Response Team (CERT)
The term appropriate personnel refers to people who will comprise a Computer Emergency Response Team (CERT). Finding the core competencies for a CERT can be a challenge. The concept of appropriate personnel goes beyond technical expertise and includes logistics such as location, availability, and desire to put the organization ahead of one's personal life when an emergency occurs. An emergency is never a planned event; it can happen at any moment, and all CERT members must be willing to accept the responsibility that is required of them to respond to an emergency at any hour.
Assembling the CERT
Typical CERT members include system and network administrators as well as members from the information security department. System administrators will provide the knowledge and expertise of system resources, including data backups, backup hardware available for use, and more. Network administrators provide their knowledge of network protocols and the ability to re-route network traffic dynamically. Information security personnel are useful for thoroughly tracking and tracing security issues as well as performing post-mortem analysis of compromised systems.
It may not always be feasible, but there should be personnel redundancy within a CERT. If depth in core areas is not applicable to an organization, then cross-training should be implemented wherever possible. Note that if only one person owns the key to data safety and integrity, then the entire enterprise becomes helpless in that person's absence.
Legal Issues
Some important aspects of incident response to consider are legal issues. Security plans should be developed with members of legal staff or some form of general counsel. Just as every company should have their own corporate security policy, every company has its own way of handling incidents from a legal perspective. Local, state, and federal regulatory issues are beyond the scope of this document, but are mentioned because the methodology for performing a post-mortem analysis, at least in part, will be dictated by (or in conjunction with) legal counsel. General counsel can alert technical staff of the legal ramifications of breaches; the hazards of leaking a client's personal, medical, or financial records; and the importance of restoring service in mission-critical environments such as hospitals and banks.
Reporting the Incident
The last part of the incident response plan is reporting the incident. The security team should take notes as the response is happening to properly report the issue to organizations such as local and federal authorities, or multi-vendor software vulnerability portals, such as the Common Vulnerabilities and Exposures site (CVE) at http://cve.mitre.org. Depending on the type of legal counsel your enterprise employs, a post-mortem analysis may be required. Even if it is not a functional requirement to a compromise analysis, a post-mortem can prove invaluable in helping to learn how a cracker thinks and how the systems are structured so that future compromises can be prevented.
Restoring and Recovering Resources
During active incident response, consideration should also be given to recovery. The actual breach will dictate the course of recovery. This is when having backups or offline, redundant systems will prove invaluable. For recovery, the response team should be planning to bring back online any downed systems or applications, such as authentication servers, database servers, and any other production resources.
Having production backup hardware ready for use is highly recommended, such as extra hard drives, hot-spare servers, and the like. Ready-made systems should have all production software loaded and ready for immediate use. Perhaps only the most recent and pertinent data would need to be imported. This ready-made system should be kept isolated from the rest of the potentially affected network. If a compromise occurs and the backup system is a part of the network, then the purpose of having a backup system is defeated.
System recovery can be a tedious process. In many instances there are two courses of action from which to choose. Administrators can perform a clean reinstallation of the operating system followed by restoration of all applications and data. Alternatively, administrators can patch the system of the offending vulnerability and bring the affected system(s) back into production.
Reinstalling the System
Performing a clean reinstallation ensures that the affected system will be cleansed of any trojans, backdoors, or malicious processes. Reinstallation also ensures that any data (if restored from a trusted backup source) is cleared of any malicious modification. The drawback to total system recovery is the time involved in rebuilding systems from scratch. However, if there is a hot backup system available for use where the only action to take is to dump the most recent data, then system downtime is greatly reduced.
Patching the System
The alternate course to recovery is to patch the affected system(s). This method of recovery is more dangerous to perform and should be undertaken with great caution. The danger with patching a system instead of reinstalling is determining whether or not you have sufficiently cleansed the system of trojans, holes, and corrupted data. If using a modular kernel, then patching a breached system can be even more difficult. Most rootkits (programs or packages that a cracker leaves to gain root access to the system), trojan system commands, and shell environments are designed to hide malicious activities from cursory audits. If the patch approach is taken, only trusted binaries should be used (for example, from a mounted, read-only CD-ROM).
Threats to Network Security
Bad practices when configuring the following aspects of a network can increase the risk of attack.
Insecure Architectures
A misconfigured network is a primary entry point for unauthorized users. Leaving a trust-based, open local network vulnerable to the highly-insecure Internet is much like leaving a door ajar in a crime-ridden neighborhood — nothing may happen for an arbitrary amount of time, but eventually someone will exploit the opportunity.
Broadcast Networks
System administrators often fail to realize the importance of networking hardware in their security schemes. Simple hardware such as hubs and routers rely on the broadcast or non-switched principle; that is, whenever a node transmits data across the network to a recipient node, the hub or router sends a broadcast of the data packets until the recipient node receives and processes the data. This method is the most vulnerable to address resolution protocol (arp) or media access control (MAC) address spoofing by both outside intruders and unauthorized users on local nodes.
Centralized Servers
Another potential networking pitfall is the use of centralized computing. A common cost-cutting measure for many businesses is to consolidate all services to a single powerful machine. This can be convenient because it is easier to manage and costs considerably less than multiple-server configurations. However, a centralized server introduces a single point of failure on the network. If the central server is compromised, it may render the network completely useless or worse, prone to data manipulation or theft. In these situations a central server becomes an open door, allowing access to the entire network.
Threats to Server Security
Server security is as important as network security because servers often hold a good deal of an organization's vital information. If a server is compromised, all of its contents may become available for the cracker to steal or manipulate at will. The following sections detail some of the main issues.
Unused Services and Open Ports
A full installation of Red Hat Linux contains up to 1200 application and library packages. However, most server administrators do not opt to install every single package in the distribution, preferring instead to install a base installation of packages, including several server applications.
A common occurrence among system administrators is to install the operating system without paying attention to what programs are actually being installed. This can be problematic because unneeded services might be installed, configured with the default settings, and possibly turned on by default. This can cause unwanted services, such as Telnet, DHCP, or DNS, to be running on a server or workstation without the administrator realizing it, which in turn can cause unwanted traffic to the server, or even a potential pathway into the system for crackers.
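A quick audit of listening sockets reveals such unnoticed services. This sketch is not from the original text: netstat is period-correct for Red Hat Linux 9, ss is its modern replacement, and the raw kernel socket table serves as a last resort where neither tool is installed.

```shell
# List listening TCP/UDP sockets; any unexpected entry points to a service
# that should be disabled (e.g. via chkconfig or the files in /etc/xinetd.d/).
(netstat -tuln 2>/dev/null || ss -tuln 2>/dev/null || cat /proc/net/tcp) \
    | head -25
```

Comparing this output against the list of services the machine is actually meant to provide is a fast first pass at hardening a fresh installation.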
Unpatched Services
Most server applications that are included in a default Red Hat Linux installation are solid, thoroughly tested pieces of software. Having been in use in production environments for many years, their code has been thoroughly refined and many of the bugs have been found and fixed.
However, there is no such thing as perfect software, and there is always room for further refinement. Moreover, newer software is often not as rigorously tested as one might expect, because of its recent arrival to production environments or because it may not be as popular as other server software.
Developers and system administrators often find exploitable bugs in server applications and publish the information on bug tracking and security-related websites such as the Bugtraq mailing list ( http://www.securityfocus.com) or the Computer Emergency Response Team (CERT) website ( http://www.cert.org). Although these mechanisms are an effective way of alerting the community to security vulnerabilities, it is up to system administrators to patch their systems promptly. This is particularly true because crackers have access to these same vulnerability tracking services and will use the information to crack unpatched systems whenever they can. Good system administration requires vigilance, constant bug tracking, and proper system maintenance to ensure a more secure computing environment.
Inattentive Administration
Administrators who fail to patch their systems are one of the greatest threats to server security. According to the System Administration Network and Security Institute (SANS), the primary cause of computer security vulnerability is to "assign untrained people to maintain security and provide neither the training nor the time to make it possible to do the job." [1] This applies as much to inexperienced administrators as it does to overconfident or unmotivated administrators.
Some administrators fail to patch their servers and workstations, while others fail to watch log messages from the system kernel or network traffic. Another common error is to leave unchanged default passwords or keys to services. For example, some databases have default administration passwords because the database developers assume that the system administrator will change these passwords immediately after installation. If a database administrator fails to change this password, even an inexperienced cracker can use a widely-known default password to gain administrative privileges to the database. These are only a few examples of how inattentive administration can lead to compromised servers.
Inherently Insecure Services
Even the most vigilant organization can fall victim to vulnerabilities if the network services they choose are inherently insecure. For instance, there are many services developed under the assumption that they are used over trusted networks; however, this assumption fails as soon as the service becomes available over the Internet — which is itself inherently untrusted.
One category of insecure network services is those that require usernames and passwords for authentication but fail to encrypt this information as it is sent over the network. Telnet and FTP are two such services. Packet sniffing software monitoring traffic between a remote user and such a server can easily steal the usernames and passwords.
The services noted above can also more easily fall prey to what the security industry terms the man-in-the-middle attack. In this type of attack, a cracker redirects network traffic by tricking a cracked name server on the network to point to his machine instead of the intended server. Once someone opens a remote session to that server, the attacker's machine acts as an invisible conduit, sitting quietly between the remote service and the unsuspecting user capturing information. In this way a cracker can gather administrative passwords and raw data without the server or the user realizing it.
Other examples of insecure services are network file systems and information services such as NFS or NIS, which are developed explicitly for LAN usage but are, unfortunately, extended to include WANs (for remote users). NFS does not, by default, have any authentication or security mechanisms configured to prevent a cracker from mounting the NFS share and accessing anything contained therein. NIS, as well, has vital information that must be known by every computer on a network, including passwords and file permissions, within a plain text ASCII or DBM (ASCII-derived) database. A cracker who gains access to this database can then access every user account on a network, including the administrator's account.
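The trusted-network assumption shows up concretely in NFS export configuration. The entries below are hypothetical /etc/exports lines, not from the original document; the first allows any reachable host to mount the share, while the second narrows the exposure to a named subnet:

```
# Overly permissive: any host that can reach the server may mount read-write
/export/data    *(rw)
# Better: restrict to a specific subnet, read-only
/export/data    192.168.1.0/255.255.255.0(ro)
```

Even the restricted form still relies on IP addresses for trust, which is why later sections recommend combining such restrictions with iptables rules and, where possible, Kerberos authentication.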
Source: http://www.sans.org/newlook/resources/errors.html
Threats to Workstation and Home PC Security
Workstations and home PCs may not be as prone to attack as networks or servers, but since they often contain sensitive information, such as credit card information, they are targeted by system crackers. Workstations can also be co-opted without the user's knowledge and used by attackers as "slave" machines in coordinated attacks. For these reasons, knowing the vulnerabilities of a workstation can save users the headache of reinstalling the operating system.
Bad Passwords
Bad passwords are one of the easiest ways for an attacker to gain access to a system.
Vulnerable Client Applications
Although an administrator may have a fully secure and patched server, that does not mean remote users are secure when accessing it. For instance, if the server offers Telnet or FTP services over a public network, an attacker can capture the plain text usernames and passwords as they pass over the network, and then use the account information to access the remote user's workstation.
Even when using secure protocols, such as SSH, a remote user may be vulnerable to certain attacks if they do not keep their client applications updated. For instance, v.1 SSH clients are vulnerable to an X-forwarding attack from malicious SSH servers. Once connected to the server, the attacker can quietly capture any keystrokes and mouse clicks made by the client over the network. This problem was fixed in the v.2 SSH protocol, but it is up to the user to keep track of what applications have such vulnerabilities and update them as necessary.
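On OpenSSH releases that still support both protocol versions, users can refuse the vulnerable v.1 protocol from the client side. This is a minimal sketch using the OpenSSH client's Protocol option in ~/.ssh/config:

```
# Refuse SSH protocol 1 for all hosts
Host *
    Protocol 2
```

This guards against a server negotiating the session down to the weaker protocol, but it does not substitute for keeping the client software itself updated.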
Using the Errata Website
When security errata reports are released, they are published on the Red Hat Linux Errata website available at http://www.redhat.com/apps/support/errata/. From this page, select the product and version for the system, and then select security at the top of the page to display only Red Hat Linux Security Advisories. If the synopsis of one of the advisories describes a package used on the system, click on the synopsis for more details.
The details page describes the security exploit and any special instructions that must be performed in addition to updating the package to fix the security hole.
To download the updated package(s), click on the package name(s) and save to the hard drive. It is highly recommended that you create a new directory such as /tmp/updates and save all the downloaded packages to it.
All Red Hat Linux packages are signed with the Red Hat, Inc. GPG key. The RPM utility in Red Hat Linux 9 automatically tries to verify the GPG signature of an RPM before installing it. If you do not have the Red Hat, Inc. GPG key installed, install it from a secure, static location, such as a Red Hat Linux distribution CD-ROM.
Assuming the CD-ROM is mounted in /mnt/cdrom, use the following command to import it into the keyring:
rpm --import /mnt/cdrom/RPM-GPG-KEY

To display a list of all keys installed for RPM verification, execute the command:
rpm -qa gpg-pubkey*

For the Red Hat, Inc. key, the output will include:
gpg-pubkey-db42a60e-37ea5438

To display details about a specific key, use rpm -qi followed by the output from the previous command:
rpm -qi gpg-pubkey-db42a60e-37ea5438

It is extremely important that you verify the signature of the RPM files before installing them. This step ensures that they have not been altered (such as a trojan horse being inserted into the packages) since the official Red Hat, Inc. release of the packages. To verify all the downloaded packages at once:
rpm -K /tmp/updates/*.rpm

For each package, if the GPG key verifies successfully, it should return gpg OK in the output.
After verifying the GPG key and downloading all the packages associated with the errata report, install them as root at a shell prompt. For example:
rpm -Uvh /tmp/updates/*.rpm

If the errata reports contained any special instructions, remember to execute them accordingly. If the security errata packages contained a kernel package, be sure to reboot the machine to enable the new kernel.
Enable/Disable FTP
The File Transfer Protocol (FTP) is an older TCP protocol designed to transfer files over a network. Because all transactions with the server, including user authentication, are unencrypted, it is considered an insecure protocol and should be carefully configured.
To turn on the xinetd-based wu-ftpd FTP service, edit its file in the /etc/xinetd.d/ directory. The contents below are representative of a default wu-ftpd configuration:

# default: off
# description: The wu-ftpd FTP server serves FTP connections. It uses \
#       unencrypted username/password pairs for authentication.
service ftp
{
        socket_type     = stream
        wait            = no
        user            = root
        server          = /usr/sbin/in.ftpd
        server_args     = -l -a
        log_on_success  += DURATION
        log_on_failure  += USERID
        disable         = yes
}

Change "disable = yes" to "disable = no", then restart xinetd:
/etc/rc.d/init.d/xinetd restart
Red Hat Linux 9 does not ship with the xinetd-based wu-ftpd service. However, instructions for securing it remain in this section for legacy systems.
Red Hat Linux provides three FTP servers.
- gssftpd — A kerberized xinetd-based FTP daemon which does not pass authentication information over the network.
- Red Hat Content Accelerator ( tux) — A kernel-space Web server with FTP capabilities.
- vsftpd — A standalone, security oriented implementation of the FTP service.
The following security guidelines are for setting up the wu-ftpd and vsftpd services.
If you activate both the wu-ftpd and vsftpd services, the xinetd-based wu-ftpd service will handle FTP connections.
FTP Greeting Banner
Before submitting a user name and password, all users are presented with a greeting banner. By default, this banner includes version information useful to crackers trying to identify weaknesses in a system.
To change the greeting banner for vsftpd, add the following directive to /etc/vsftpd/vsftpd.conf:
ftpd_banner=<insert_greeting_here>
Replace <insert_greeting_here> in the above directive with the text of your greeting message.
To change the greeting banner for wu-ftpd, add the following directive to /etc/ftpaccess:
greeting text <insert_greeting_here>
Replace <insert_greeting_here> in the above directive with the text of your greeting message.
For multi-line banners, it is best to use a banner file. To simplify management of multiple banners, place all banners in a new directory called /etc/banners/. The banner file for FTP connections in this example is /etc/banners/ftp.msg. Below is an example of what such a file may look like:
####################################################
# Hello, all activity on ftp.example.com is logged.#
####################################################
It is not necessary to begin each line of the file with 220 as specified in Section 5.1.1.1 TCP Wrappers and Connection Banners.
To reference this greeting banner file for vsftpd, add the following directive to /etc/vsftpd/vsftpd.conf:
banner_file=/etc/banners/ftp.msg
To reference this greeting banner file for wu-ftpd, add the following directives to /etc/ftpaccess:
greeting terse
banner /etc/banners/ftp.msg
It is also possible to send additional banners to incoming connections using TCP wrappers as described in Section 5.1.1.1 TCP Wrappers and Connection Banners.
Anonymous Access
For both wu-ftpd and vsftpd, the presence of the /var/ftp/ directory activates the anonymous account.
The easiest way to create this directory is to install the vsftpd package. This package sets a directory tree up for anonymous users and configures the permissions on directories to read-only for anonymous users.
For releases before Red Hat Linux 9, install the anonftp package to create the /var/ftp/ directory.
By default the anonymous user cannot write to any directories.
If enabling anonymous access to an FTP server, be careful where you store sensitive data.
Anonymous Upload
To allow anonymous users to upload files, it is recommended that you create a write-only directory within /var/ftp/pub/.
To do this type:
mkdir /var/ftp/pub/upload
Next, change the permissions so that anonymous users cannot see the contents of the directory by typing:
chmod 730 /var/ftp/pub/upload
A long format listing of the directory should look like this:
drwx-wx--- 2 root ftp 4096 Feb 13 20:05 upload
Administrators who allow anonymous users to read and write in directories often find that their servers become repositories of stolen software.
Additionally, under vsftpd, add the following line to /etc/vsftpd/vsftpd.conf:
anon_upload_enable=YES
User Accounts
Because FTP passes unencrypted usernames and passwords over insecure networks for authentication, it is a good idea to deny system users access to the server from their user accounts.
To disable all user accounts in wu-ftpd, add the following directive to /etc/ftpaccess:
deny-uid *
To disable user accounts in vsftpd, add the following directive to /etc/vsftpd/vsftpd.conf:
local_enable=NO
Restricting User Accounts
The easiest way to disable a specific group of accounts, such as the root user and those with sudo privileges, from accessing an FTP server is to use a PAM list file as described in Section 4.4.2.4 Disabling Root Using PAM. The PAM configuration file for wu-ftpd is /etc/pam.d/ftp. The PAM configuration file for vsftpd is /etc/pam.d/vsftpd.
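With the pam_listfile module, such a restriction for vsftpd typically takes the form of a line like the following in /etc/pam.d/vsftpd (a sketch; sense=deny rejects every user named in the given list file, and onerr=succeed allows authentication to proceed if the file is missing):

```
auth    required    pam_listfile.so item=user sense=deny file=/etc/vsftpd.ftpusers onerr=succeed
```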
It is also possible to enforce this restriction within each service directly.
To disable specific user accounts in wu-ftpd, add the username to /etc/ftpusers.
To disable specific user accounts in vsftpd, add the username to /etc/vsftpd.ftpusers.
Use TCP Wrappers To Control Access
You can use TCP wrappers to control access to either FTP daemon as outlined in Section 5.1.1 Enhancing Security With TCP Wrappers.
Use xinetd To Control the Load
If using wu-ftpd, you can use xinetd to control the amount of resources the FTP server consumes and to limit the effects of denial of service attacks. See Section 5.1.2 Enhancing Security With xinetd for more on how to do this.
Secure Apache HTTP Server
The Apache HTTP Server is one of the most stable and secure services that ships with Red Hat Linux. There are an overwhelming number of options and techniques available to secure the Apache HTTP Server — too numerous to delve into deeply here.
It is important, if you are configuring the Apache HTTP Server, to read the documentation available for the application. This includes the chapter titled Apache HTTP Server in the Red Hat Linux Reference Guide, the chapter titled Apache HTTP Secure Server Configuration in the Red Hat Linux Customization Guide, and the Stronghold manuals, available at http://www.redhat.com/docs/manuals/stronghold/.
Below is a list of configuration options administrators should be careful using.
FollowSymLinks
This directive is enabled by default, so be careful when creating symbolic links within the document root of the Web server. For instance, it is a bad idea to provide a symbolic link to /.
The Indexes Directive
This directive is enabled by default, but may not be desirable. If you do not want users to browse files on the server, it is best to remove this directive.
The UserDir Directive
The UserDir directive is disabled by default because it can confirm the presence of a user account on the system. If you wish to enable user directory browsing on the server, use the following directives:
UserDir enabled
UserDir disabled root
These directives activate user directory browsing for all user directories other than /root. If you wish to add users to the list of disabled accounts, add a space-delimited list of users on the UserDir disabled line.
Do Not Remove the IncludesNoExec Directive
By default, the server-side includes module cannot execute commands. It is ill advised to change this setting unless you absolutely have to, as it could potentially enable an attacker to execute commands on the system.
Restrict Permissions for Executable Directories
Be certain to assign write permissions only to the root user for any directory containing scripts or CGIs. This can be accomplished by typing the following commands:
chown root <directory_name>
chmod 755 <directory_name>
Also, always verify that any scripts you are running work as intended before putting them into production.
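To audit an existing script directory for permission problems, a short GNU find invocation can list the offending subdirectories. The function below is a sketch (the name is ours); it flags any directory writable by group or other:

```shell
# find_writable_dirs -- list directories under the given path that are
# writable by group or other; executable directories should be writable
# by root only. Uses GNU find's -perm /mode syntax.
find_writable_dirs() {
    find "$1" -type d -perm /022 -print
}
```

Any path printed by the function should have its ownership and mode corrected with chown and chmod as shown above.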
Secure Sendmail
Sendmail is a Mail Transport Agent (MTA) that uses the Simple Mail Transfer Protocol (SMTP) to deliver electronic messages to other MTAs and to email clients or delivery agents. Although many MTAs are capable of encrypting traffic between one another, most do not, so sending email over any public network is considered an inherently insecure form of communication.
It is recommended that anyone planning to implement a Sendmail server address the following issues.
Limiting Denial of Service Attack
Because of the nature of email, a determined attacker can flood the server with mail fairly easily and cause a denial of service. By setting limits for the following directives in /etc/mail/sendmail.mc, the effectiveness of such attacks is limited.
- confCONNECTION_RATE_THROTTLE — The number of connections the server can receive per second. By default, Sendmail does not limit the number of connections. If a limit is set and reached, further connections are delayed.
- confMAX_DAEMON_CHILDREN — The maximum number of child processes that can be spawned by the server. By default, Sendmail does not assign a limit to the number of child processes. If a limit is set and reached, further connections are delayed.
- confMIN_FREE_BLOCKS — The minimum number of free blocks which must be available for the server to accept mail. The default is 100 blocks.
- confMAX_HEADERS_LENGTH — The maximum acceptable size (in bytes) for a message header.
- confMAX_MESSAGE_SIZE — The maximum acceptable size (in bytes) for any one message.
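These directives are set in /etc/mail/sendmail.mc using define() statements, for example (the values below are purely illustrative, not recommendations):

```
dnl Example resource limits; tune the values for your site.
define(`confCONNECTION_RATE_THROTTLE', `3')dnl
define(`confMAX_DAEMON_CHILDREN', `40')dnl
define(`confMIN_FREE_BLOCKS', `100')dnl
define(`confMAX_HEADERS_LENGTH', `32768')dnl
define(`confMAX_MESSAGE_SIZE', `10485760')dnl
```

After editing sendmail.mc, regenerate the sendmail.cf file with the m4 macro processor and restart Sendmail for the changes to take effect.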
NFS and Sendmail
Never put the mail spool directory, /var/spool/mail/, on an NFS shared volume.
Because NFS does not maintain control over user and group IDs, two or more users can have the same UID and therefore receive and read each other's mail.
Mail-only Users
To help prevent local user exploits on the Sendmail server, it is best for mail users to only access the Sendmail server using an email program. Shell accounts on the mail server should not be allowed and all user shells in the /etc/passwd file should be set to /sbin/nologin (with the possible exception of the root user).
Secure NFS
The Network File System or NFS is an RPC service used in conjunction with portmap and other related services to provide network accessible file systems for client machines.
It is recommended that anyone planning to implement an NFS server first secure the portmap service as outlined in Section 5.2 Secure Portmap, before addressing the following issues.
Carefully Plan the Network
Because NFS passes all information unencrypted over the network, it is important the service be run behind a firewall and on a segmented and secure network. Any time information is passed over NFS on an insecure network, it risks being intercepted. Careful network design in these regards can help prevent security breaches.
Beware of Syntax Errors
The NFS server determines which file systems to export and which hosts to export these directories to via the /etc/exports file. Be careful not to add extraneous spaces when editing this file.
For instance, the following line in the /etc/exports file shares the directory /tmp/nfs/ to the host bob.example.com with read and write permissions.
/tmp/nfs/     bob.example.com(rw)
This line in the /etc/exports file, on the other hand, shares the same directory to the host bob.example.com with read-only permissions and shares it to the world with read and write permissions due to a single space character after the hostname.
/tmp/nfs/     bob.example.com (rw)
It is good practice to check any configured NFS shares by using the showmount command to verify what is being shared:
showmount -e <hostname>
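Beyond showmount, a simple check of the file itself can catch the whitespace mistake before the share goes live. The following shell function is a sketch (the function name is ours); it flags any line in which a space precedes a parenthesized option list:

```shell
# check_exports -- flag lines in an exports file where a space appears
# before an option list; such lines grant those options to the world.
# Pass the exports file to check as the first argument.
check_exports() {
    if grep -n ' (' "$1"; then
        echo "WARNING: the lines above export their options to everyone"
    else
        echo "OK: no suspicious whitespace in $1"
    fi
}
```

For example, `check_exports /etc/exports` prints each offending line with its line number before the warning.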
Do Not Use the no_root_squash Option
By default, NFS shares change the root user to the nfsnobody user, an unprivileged account. In this way, all root-created files are owned by nfsnobody, which prevents the uploading of programs with the setuid bit set. If the no_root_squash option is used, remote root users are able to change any file on the shared file system and leave applications infected by trojans for other users to inadvertently execute.
Secure NIS
NIS stands for Network Information Service. It is an RPC service called ypserv which is used in conjunction with portmap and other related services to distribute maps of usernames, passwords, and other sensitive information to any computer claiming to be within its domain.
An NIS server is comprised of several applications. They include the following:
- /usr/sbin/rpc.yppasswdd — Also called the yppasswdd service, this daemon allows users to change their NIS passwords.
- /usr/sbin/rpc.ypxfrd — Also called the ypxfrd service, this daemon is responsible for NIS map transfers over the network.
- /usr/sbin/yppush — This application propagates changed NIS databases to multiple NIS servers.
- /usr/sbin/ypserv — This is the NIS server daemon.
NIS is rather insecure by today's standards. It has no host authentication mechanisms and passes all of its information over the network unencrypted, including password hashes. As a result, extreme care must be taken to set up a network that uses NIS. Further complicating the situation, the default configuration of NIS is inherently insecure.
It is recommended that anyone planning to implement an NIS server first secure the portmap service as outlined in Section 5.2 Secure Portmap, then address the following issues.
Carefully Plan the Network
Because NIS passes sensitive information unencrypted over the network, it is important the service be run behind a firewall and on a segmented and secure network. Any time NIS information is passed over an insecure network, it risks being intercepted. Careful network design in these regards can help prevent severe security breaches.
Use a Password-Like NIS Domain Name and Hostname
Any machine within an NIS domain can use commands to extract information from the server without authentication, as long as the user knows the NIS server's DNS hostname and NIS domain name.
For instance, if someone either connects a laptop computer to the network or breaks into the network from outside (and manages to spoof an internal IP address), the following command reveals the /etc/passwd map:
ypcat -d <NIS_domain> -h <DNS_hostname> passwd
If this attacker is a root user, they can obtain the /etc/shadow file by typing the following command:
ypcat -d <NIS_domain> -h <DNS_hostname> shadow
If Kerberos is used, the /etc/shadow file is not stored within an NIS map.
To make access to NIS maps harder for an attacker, create a random string for the DNS hostname, such as o7hfawtgmhwg.domain.com. Similarly, create a different randomized NIS domain name. This will make it much more difficult for an attacker to access the NIS server.
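A randomized label of this sort can be produced from kernel randomness. The following shell function is a sketch (the name and the 12-character default length are arbitrary choices):

```shell
# random_label -- print a random lowercase alphanumeric string suitable
# as a password-like NIS domain name or hostname label.
# Optional first argument sets the length (default 12).
random_label() {
    tr -dc 'a-z0-9' < /dev/urandom | head -c "${1:-12}"
    echo
}
```

For example, `random_label 12` might print a string such as o7hfawtgmhwg, which could then serve as the leftmost label of the server's DNS hostname.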
Edit the /var/yp/securenets File
NIS listens on all networks if the /var/yp/securenets file is blank or does not exist (as is the case after a default installation). One of the first things to do is place netmask/network pairs in the file so that ypserv only responds to requests from the proper networks.
Below is a sample entry from a /var/yp/securenets file:
255.255.255.0 192.168.0.0
Never start an NIS server for the first time without creating the /var/yp/securenets file.
This technique does not provide protection from an IP spoofing attack, but it does at least place limits on what networks the NIS server will service.
Assign Static Ports and Use iptables Rules
All of the servers related to NIS can be assigned specific ports except for rpc.yppasswdd — the daemon that allows users to change their login passwords. Assigning ports to the other two NIS server daemons, rpc.ypxfrd and ypserv, allows you to create firewall rules to further protect the NIS server daemons from intruders.
To do this, add the following lines to /etc/sysconfig/network:
YPSERV_ARGS="-p 834"
YPXFRD_ARGS="-p 835"
The following iptables rules can be issued to enforce which network the server listens to for these ports:
iptables -A INPUT -p tcp -s ! 192.168.0.0/24 --dport 834 -j DROP
iptables -A INPUT -p udp -s ! 192.168.0.0/24 --dport 834 -j DROP
iptables -A INPUT -p tcp -s ! 192.168.0.0/24 --dport 835 -j DROP
iptables -A INPUT -p udp -s ! 192.168.0.0/24 --dport 835 -j DROP
Use Kerberos Authentication
One of the most glaring flaws inherent when NIS is used for authentication is that whenever a user logs into a machine, a password hash from the /etc/shadow map is sent over the network. If an intruder gains access to an NIS domain and sniffs network traffic, usernames and password hashes can be quietly collected. With enough time, a password cracking program can guess weak passwords, and an attacker can gain access to a valid account on the network.
Since Kerberos uses secret-key cryptography, no password hashes are ever sent over the network, making the system far more secure. For more about Kerberos, refer to the chapter titled Kerberos in the Red Hat Linux Reference Guide.
Secure Portmap
The portmap service is a dynamic port assignment daemon for RPC services such as NIS and NFS. It has weak authentication mechanisms and has the ability to assign a wide range of ports for the services it controls. For these reasons, it is difficult to secure.
If you are running RPC services, you should follow some basic rules.
Protect portmap With TCP Wrappers
It is important to use TCP wrappers to limit which networks or hosts have access to the portmap service since it has no built-in form of authentication.
Further, use only IP addresses when limiting access to the service. Avoid using hostnames, as they can be forged via DNS poisoning and other methods.
Protect portmap With iptables
To further restrict access to the portmap service, it is a good idea to add iptables rules to the server restricting access to specific networks.
Below are two example iptables commands that allow TCP connections to the portmap service (listening on port 111) from the 192.168.0.0/24 network and from the localhost (which is necessary for the sgi_fam service used by Nautilus). All other packets are dropped.
iptables -A INPUT -p tcp -s 127.0.0.1 --dport 111 -j ACCEPT
iptables -A INPUT -p tcp -s ! 192.168.0.0/24 --dport 111 -j DROP
To similarly limit UDP traffic, use the following command.
iptables -A INPUT -p udp -s ! 192.168.0.0/24 --dport 111 -j DROP
Verifying Which Ports Are Listening
Once you have configured services on the network, it is important to keep tabs on which ports are actually listening on the system's network interfaces. Any unexpected open port can be evidence of an intrusion.
There are two basic approaches for listing the ports that are listening on the network. The less reliable approach is to query the network stack by typing commands such as netstat -an or lsof -i. This method is less reliable since these programs do not connect to the machine from the network, but rather check to see what is running on the system. For this reason, these applications are frequent targets for replacement by attackers. In this way, crackers attempt to cover their tracks if they open unauthorized network ports.
A more reliable way to check which ports are listening on the network is to use a port scanner such as nmap.
The following command issued from the console determines which ports are listening for TCP connections from the network:
nmap -sT -O localhost
The output of this command looks like the following:
Starting nmap V. 3.00 ( www.insecure.org/nmap/ )
Interesting ports on localhost.localdomain (127.0.0.1):
(The 1596 ports scanned but not shown below are in state: closed)
Port       State       Service
22/tcp     open        ssh
111/tcp    open        sunrpc
515/tcp    open        printer
834/tcp    open        unknown
6000/tcp   open        X11

Remote OS guesses: Linux Kernel 2.4.0 or Gentoo 1.2 Linux 2.4.19 rc1-rc7
nmap run completed -- 1 IP address (1 host up) scanned in 5 seconds

This output shows the system is running portmap due to the presence of the sunrpc service. However, there is also a mystery service on port 834. To check if the port is associated with the official list of known services, type:
cat /etc/services | grep 834
This command returns no output. This indicates that while the port is in the reserved range (meaning 0 through 1023) and requires root access to open, it is not associated with a known service.
Next, we can check for information about the port using netstat or lsof. To check for port 834 using netstat, use the following command:
netstat -anp | grep 834
The command returns the following output:
tcp        0      0 0.0.0.0:834    0.0.0.0:*    LISTEN    653/ypbind
The presence of the open port in netstat is reassuring because a cracker opening a port surreptitiously on a hacked system would likely not allow it to be revealed through this command. Also, the -p option reveals the process ID (PID) of the service that opened the port. In this case, the open port belongs to ypbind (NIS), which is an RPC service handled in conjunction with the portmap service.
The lsof command reveals similar information since it is also capable of linking open ports to services:
lsof -i | grep 834
Below is the relevant portion of the output for this command:
ypbind     653       0    7u  IPv4    1319        TCP *:834 (LISTEN)
ypbind     655       0    7u  IPv4    1319        TCP *:834 (LISTEN)
ypbind     656       0    7u  IPv4    1319        TCP *:834 (LISTEN)
ypbind     657       0    7u  IPv4    1319        TCP *:834 (LISTEN)
Security Controls
Computer security is often divided into three distinct master categories, commonly referred to as controls:
- Physical
- Technical
- Administrative
These three broad categories define the main objectives of proper security implementation. Within these controls are sub-categories that further detail the controls and how to implement them.
Physical Controls
The physical control is the implementation of security measures in a defined structure used to deter or prevent unauthorized access to sensitive material. Examples of physical controls are:
- Closed-circuit surveillance cameras
- Motion or thermal alarm systems
- Security guards
- Picture IDs
- Locked and dead-bolted steel doors
Technical Controls
The technical control uses technology as a basis for controlling the access and usage of sensitive data throughout a physical structure and over a network. Technical controls are far-reaching in scope and encompass such technologies as:
- Encryption
- Smart cards
- Network authentication
- Access control lists (ACLs)
- File integrity auditing software
Administrative Controls
Administrative controls define the human factors of security. They involve all levels of personnel within an organization and determine which users have access to what resources and information by such means as:
- Training and awareness
- Disaster preparedness and recovery plans
- Personnel recruitment and separation strategies
- Personnel registration and accounting
Configuring Clients for CIPE
After successfully configuring the CIPE server and testing for functionality, we can now deploy the connection on the client machine.
The CIPE client should be able to connect and disconnect the CIPE connection in an automated way. Therefore, CIPE contains built-in mechanisms to customize settings for individual uses. For example, a remote employee can connect to the CIPE device on the LAN by typing the following:
/sbin/ifup cipcb0
The device should automatically come up; firewall rules and routing information should also be configured along with the connection. The remote employee should be able to terminate the connection with the following:
/sbin/ifdown cipcb0
Configuring clients requires the creation of localized scripts that are run after the device has loaded. The device configuration itself can be configured locally via a user-created file called /etc/sysconfig/network-scripts/ifcfg-cipcb0. This file contains parameters that determine whether the CIPE connection occurs at boot time, what the name of the CIPE device is, among other things. The following is the ifcfg-cipcb0 file for a remote client connecting to the CIPE server:
DEVICE=cipcb0
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
# This is the device for which we add a host route to our CIPE peer through.
# You may hard code this, but if left blank, we will try to guess from
# the routing table in the /etc/cipe/ip-up.local file.
PEERROUTEDEV=
# We need to use internal DNS when connected via cipe.
DNS=192.168.1.254
The CIPE device is named cipcb0. It is loaded at boot time (configured via the ONBOOT field) and does not use a boot protocol (for example, DHCP) to receive an IP address. The PEERROUTEDEV field determines the device through which packets are routed to the CIPE peer. If no device is specified in this field, one is determined after the device has been loaded.
If your internal networks are behind a firewall (always a good policy), you need to set rules to allow the CIPE interface on the client machine to send and receive UDP packets.
Clients should be configured such that all localized parameters are placed in a user-created file called /etc/cipe/ip-up.local. The local parameters should be reverted when the CIPE session is shut down using /etc/cipe/ip-down.local.
Firewalls should be configured on client machines to accept the CIPE UDP encapsulated packets. Rules may vary widely, but the basic acceptance of UDP packets is required for CIPE connectivity. The following iptables rules allow UDP CIPE transmissions on the remote client machine connecting to the LAN; the final rule adds IP Masquerading to allow the remote client to communicate to the LAN and the Internet:
/sbin/modprobe ip_tables
/sbin/service iptables stop
/sbin/iptables -P INPUT DROP
/sbin/iptables -F INPUT
/sbin/iptables -A INPUT -j ACCEPT -p udp -s 10.0.1.1
/sbin/iptables -A INPUT -j ACCEPT -i cipcb0
/sbin/iptables -A INPUT -j ACCEPT -i lo
/sbin/iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -o eth0 -j MASQUERADE
You must also add routing rules to the client machine to access the nodes behind the CIPE connection as if they were on the local network. This can be done by running the route command. For our example, the client workstation would need to add the following network route:
route add -net 192.168.1.0 netmask 255.255.255.0 gw 10.0.1.2
The following shows the final /etc/cipe/ip-up.local script for the client workstation:
#!/bin/bash -v
if [ -f /etc/sysconfig/network-scripts/ifcfg-$1 ] ; then
    . /etc/sysconfig/network-scripts/ifcfg-$1
else
    cat <<EOT | logger
Cannot find config file ifcfg-$1. Exiting.
EOT
    exit 1
fi

if [ -z "${PEERROUTEDEV}" ]; then
    cat <<EOT | logger
Cannot find a default route to send cipe packets through!
Punting and hoping for the best.
EOT
    # Use routing table to determine peer gateway
    export PEERROUTEDEV=`/sbin/route -n | grep ^0.0.0.0 | head -n 1 \
        | awk '{ print $NF }'`
fi

####################################################
# Add the routes for the remote local area network #
####################################################

route add -host 10.0.1.2 dev $PEERROUTEDEV
route add -net 192.168.1.0 netmask 255.255.255.0 dev $1

####################################################
# iptables rules to restrict traffic               #
####################################################

/sbin/modprobe ip_tables
/sbin/service iptables stop
/sbin/iptables -P INPUT DROP
/sbin/iptables -F INPUT
/sbin/iptables -A INPUT -j ACCEPT -p udp -s 10.0.1.2
/sbin/iptables -A INPUT -j ACCEPT -i $1
/sbin/iptables -A INPUT -j ACCEPT -i lo
/sbin/iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -o eth0 -j MASQUERADE
Customizing CIPE
CIPE can be configured in numerous ways, from passing parameters as command line arguments when starting ciped to generating new shared static keys. This allows a security administrator the flexibility to customize CIPE sessions to ensure security as well as increase productivity. The following chart details some of the command-line parameters when running the ciped daemon.
The most common parameters should be placed in the /etc/cipe/options.cipcbx file for automatic loading at runtime. Be aware that any parameters passed at the command line as options will override respective parameters set in the /etc/cipe/options.cipcbx configuration file.
- arg — Passes arguments to the /etc/cipe/ip-up initialization script
- cttl — Sets the Carrier Time To Live (TTL) value; the recommended value is 64
- debug — Boolean value that enables debugging
- device — Names the CIPE device
- ipaddr — Publicly-routable IP address of the CIPE machine
- ipdown — Chooses an alternate ip-down script than the default /etc/cipe/ip-down
- ipup — Chooses an alternate ip-up script than the default /etc/cipe/ip-up
- key — Specifies a shared static key for the CIPE connection
- maxerr — Number of errors allowable before the CIPE daemon quits
- me — UDP address of the CIPE machine
- mtu — Sets the device maximum transfer unit
- nokey — Do not use encryption
- peer — The peer's CIPE UDP address
- ping — Sets the CIPE-specific (non-ICMP) keepalive ping interval
- socks — IP address and port number of the SOCKS server
CIPE Installation
The installation of CIPE is equivalent to installing a network interface under Linux. The CIPE RPM package contains configuration files found in /etc/cipe/, the CIPE daemon (/usr/sbin/ciped-cb), network scripts that load the kernel module and activate/deactivate the CIPE interface (if*-cipcb), and sample configuration files found in /usr/share/doc/cipe-<version>/samples/. There is also a detailed texinfo page explaining the CIPE protocol and various implementation details.
The following guide details a sample configuration involving a workstation client that wants to connect securely to a remote LAN with a CIPE gateway. The workstation uses a dynamic IP address from a cable modem connection, while the CIPE-enabled gateway machine employs the 192.168.1.0/24 range. This is what is known as a "typical" CIPE configuration. Figure 6-1 illustrates the typical CIPE setup.
Installing CIPE between the client and the CIPE server allows for a secured peer-to-peer connection using the Internet as a medium for transmission of WAN traffic. The client workstation then transfers a file through the Internet to the CIPE-enabled firewall, where each packet will be timestamped, encrypted, and given the peer address of the receiving CIPE-enabled firewall. The destination firewall then reads the header information, strips it, and sends it through to the remote LAN router to be then routed to its destination node. This process is seamless and completely transparent to end users. The majority of the transaction is done between the CIPE-enabled peers.
CIPE Key Management
As previously mentioned, CIPE incorporates a secure combination of static link keys and encrypted traffic to create a secure tunnel over carrier networks such as the Internet. The use of static link keys provides a common point of reference for two CIPE-enabled networks to pass information securely. Therefore, it is imperative that both CIPE-enabled network gateways share the exact same key; otherwise, CIPE communication is not possible.
Generating CIPE Keys
Generating CIPE keys requires knowledge of what kind of keys are compatible. Random alphanumeric generators do not work. Static keys must be 128-bit, 32-character hexadecimal strings. These can be created by piping an arbitrary file or the output of a process through the md5sum command. For example:
ps -auxw | md5sum
Place this key in the /etc/cipe/options.cipcb0 file for all CIPE servers and clients.
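Since md5sum always emits a 128-bit digest as 32 hexadecimal characters, the input can come from any source. The following sketch (the function name is ours) draws on kernel randomness instead of a process listing:

```shell
# gen_cipe_key -- print a 128-bit (32 hex character) static key,
# suitable for the key field of /etc/cipe/options.cipcb0, derived
# from 64 bytes of kernel randomness.
gen_cipe_key() {
    head -c 64 /dev/urandom | md5sum | awk '{ print $1 }'
}
```

Remember that whichever method is used, the resulting key must be copied to every CIPE peer, and the options file containing it must be kept secret.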
CIPE Server Configuration
To setup the CIPE server, install the RPM package from the Red Hat Linux CD-ROM or via Red Hat Network.
If you are using an older version of Red Hat Linux and/or have an older version of CIPE, you should upgrade to the latest version.
The next step is to copy the sample configuration files from /usr/share/doc/cipe-version/samples/ (where version is the version of CIPE installed on the system) to /etc/cipe/. Once they are copied, edit the /etc/cipe/options.cipcbx file (where x is incremental starting from 0, for those who want more than one CIPE connection on the CIPE server) to include your LAN subnet addresses and publicly-routable firewall IP addresses. The following is the example options file included with the Red Hat Linux CIPE RPM which, for this example, is renamed to options.cipcb0:
# Surprise, this file allows comments (but only on a line by themselves)

# This is probably the minimal set of options that has to be set
# Without a "device" line, the device is picked dynamically

# the peer's IP address
ptpaddr         6.5.4.3
# our CIPE device's IP address
ipaddr          6.7.8.9
# my UDP address. Note: if you set port 0 here, the system will pick
# one and tell it to you via the ip-up script. Same holds for IP 0.0.0.0.
me              bigred.inka.de:6789
# ...and the UDP address we connect to. Of course no wildcards here.
peer            blackforest.inka.de:6543
# The static key. Keep this file secret!
# The key is 128 bits in hexadecimal notation.
key             xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

The ptpaddr is the remote LAN's CIPE address. The ipaddr is the workstation's CIPE IP address. The me address is the client's publicly-routable IP address that sends the UDP packets over the Internet, while peer is the publicly-routable IP address of the CIPE server. Note that the client workstation's me address is 0.0.0.0 because it uses a dynamic connection. The CIPE client will handle the connection to the host CIPE server. The key field (represented by x's; your key should be secret) is the shared static key. This key must be the same for both peers or connection will not be possible. See Section 6.8 CIPE Key Management for information on how to generate a shared static key for your CIPE machines.
Here is the edited /etc/cipe/options.cipcb0 that the client workstation will use:
ptpaddr 10.0.1.2
ipaddr  10.0.1.1
me      0.0.0.0
peer    LAN.EXAMPLE.COM:6969
key     123456ourlittlesecret7890shhhh
Here is the /etc/cipe/options.cipcb0 file for the CIPE server:
ptpaddr 10.0.1.1
ipaddr  10.0.1.2
me      LAN.EXAMPLE.COM:6969
peer    0.0.0.0
key     123456ourlittlesecret7890shhhh
Why Use CIPE?
There are several reasons why CIPE would be a smart choice for security and systems administrators:
- Red Hat Linux ships with CIPE, so it is available to all Red Hat Linux edge machines (for example, firewalls or gateways) that you want to connect to your Intranet. Red Hat Linux also includes CIPE-supported encryption ciphers in its general distribution.
- CIPE supports encryption using either of the standard Blowfish or IDEA encryption algorithms. Depending on encryption export regulations in your country, you may use the default (Blowfish) to encrypt all CIPE traffic on your Intranet.
- Because CIPE is software based, any older or redundant machine that is able to run Red Hat Linux can become a CIPE gateway, saving an organization from having to purchase expensive dedicated VPN hardware to connect two LANs securely.
- CIPE is actively developed to work in conjunction with iptables, ipchains, and other rules-based firewalls. Peer acceptance of incoming CIPE UDP packets is all that is needed to coexist with existing firewall rules.
- CIPE configuration is done through text files, allowing administrators to configure their CIPE servers and clients remotely without the need for bulky graphical tools that can function poorly over a network.
Crypto IP Encapsulation (CIPE)
CIPE is a VPN implementation developed primarily for Linux. CIPE uses encrypted IP packets that are encapsulated, or "wrapped", in datagram (UDP) packets. CIPE packets are given destination header information and are encrypted using the default CIPE encryption mechanism. The packets are then transferred over IP as UDP packets via the CIPE virtual network device (cipcb0, cipcb1, and so on) over a carrier network to an intended remote node. The following figure shows a typical CIPE setup connecting two Linux-based networks:
A Network and Remote Client Connected by CIPE
This diagram shows a network running CIPE on the firewall, and a remote client machine acting as a CIPE-enabled node. The CIPE connection acts as a tunnel through which all Intranet-bound data is routed between remote nodes. All data is encrypted using dynamically-generated 128-bit keys, and can be further compressed for large file transfers or to tunnel X applications to a remote host. CIPE can be configured for communication between two or more CIPE-enabled Linux machines and has network drivers for Win32-based operating systems.
Defining Assessment and Testing
Outside Looking In provides the cracker's viewpoint: we see what a cracker sees — publicly-routable IP addresses, systems on the DMZ, external interfaces of the firewall, and more. Inside Looking Around assessments provide a logged-in viewpoint: we see print servers, file servers, databases, and other resources.
There are striking distinctions between these two types of vulnerability assessments. Being internal to your company gives you more privileges than any outsider has. In most organizations today, security is still configured to keep intruders out, while very little is done to secure the internals of the organization (such as departmental firewalls, user-level access controls, and authentication procedures for internal resources). Typically, there are many more resources to examine when inside looking around, as most systems are internal to a company. Once you place yourself outside the company, you are immediately given untrusted status. The systems and resources available to you externally are usually far more limited.
Consider the difference between vulnerability assessments and penetration tests. Think of a vulnerability assessment as the first step of a penetration test; the information gleaned from the assessment is used in the testing. Whereas the assessment checks for holes and potential vulnerabilities, the penetration test actually attempts to exploit the findings.
Assessing network infrastructure is a dynamic process. Security, both information and physical, is dynamic. Performing an assessment shows an overview, which can turn up false positives and false negatives.
Security administrators are only as good as the tools they use and the knowledge they retain. Take any of the assessment tools currently available, run them against the system, and it is almost a guarantee that there will be at least some false positives. Whether by program fault or user error, the result is the same. The tool may find vulnerabilities which in reality do not exist (false positive); or, even worse, the tool may not find vulnerabilities that actually do exist (false negative).
Now that the difference between a vulnerability assessment and a penetration test is defined, it is good practice to take the findings of the assessment and review them carefully before conducting a penetration test.
Attempting to exploit vulnerabilities on production resources can have adverse effects on the productivity and efficiency of the systems and network.
The following list examines some of the benefits to performing vulnerability assessments.
- Proactive focus on information security
- Finding potential exploits before crackers find them
- Typically results in systems being kept up to date and patched
- Promotes growth and aids in developing staff expertise
- Financial loss and negative publicity abated
Establishing a Methodology
To aid in the selection of tools for vulnerability assessment, it is helpful to establish a vulnerability assessment methodology. Unfortunately, there is no predefined or industry approved methodology at this time; however, common sense and best practices can act as a sufficient guide.
What is the target? Are we looking at one server, or are we looking at our entire network and everything within the network? Are we external or internal to the company? The answers to these questions are important as they will help you determine not only which tools to select but also the manner in which the they will be used.
To learn more about establishing methodologies, refer to the following websites:
- http://www.isecom.org/projects/osstmm.htm — The Open Source Security Testing Methodology Manual (OSSTMM)
- http://www.owasp.org — The Open Web Application Security Project
Evaluating the Tools
A typical assessment can start by using some form of information gathering tool. When assessing the entire network, map the layout first to find the hosts that are running. Once located, examine each host individually. Focusing on these hosts will require another set of tools. Knowing which tools to use may be the most crucial step in finding vulnerabilities.
Just as in any aspect of everyday life, there are many different tools that perform the same job. This concept applies to performing vulnerability assessments as well. There are tools specific to operating systems, applications, and even networks (based on protocols used). Some tools are free (in terms of cost) while others are not. Some tools are intuitive and easy to use, while others are cryptic and poorly documented but have features that other tools do not.
Finding the right tools may be a daunting task. In the end, experience counts. If possible, set up a test lab and try out as many tools as you can, noting the strengths and weaknesses of each. Review the README file or man page for each tool. In addition, look to the Internet for more information, such as articles, step-by-step guides, or even mailing lists specific to a tool.
The tools discussed below are just a small sampling of the available tools.
Scanning Hosts with nmap
nmap is a popular tool included in Red Hat Linux that can be used to determine the layout of a network. nmap has been available for many years and is probably the most often used tool when gathering information. An excellent man page is included that provides a detailed description of its options and usage. Administrators can use nmap on a network to find host systems and open ports on those systems.
nmap is a competent first step in vulnerability assessment. You can map out all the hosts within your network, and even pass an option that will allow it to attempt to identify the operating system running on a particular host. nmap is a good foundation for establishing a policy of using secure services and stopping unused services.
Using nmap
nmap can be run from a shell prompt or using a graphical frontend. At a shell prompt, type the nmap command followed by the hostname or IP address of the machine you want to scan.
nmap foo.example.com

The results of the scan (which could take up to a few minutes, depending on where the host is located) should look similar to the following:
Starting nmap V. 3.00 ( www.insecure.org/nmap/ )
Interesting ports on localhost.localdomain (127.0.0.1):
(The 1591 ports scanned but not shown below are in state: closed)
Port       State       Service
22/tcp     open        ssh
25/tcp     open        smtp
111/tcp    open        sunrpc
515/tcp    open        printer
950/tcp    open        oftep-rpc
6000/tcp   open        X11

nmap run completed -- 1 IP address (1 host up) scanned in 0 seconds

If you were to use the graphical frontend (which can be run by typing /usr/bin/nmapfe at a shell prompt), the results will look similar to the following:
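The text output above also lends itself to post-processing with standard tools. A hedged sketch, filtering just the open port numbers with awk (a small sample of the port table is inlined so the example runs without nmap itself):

```shell
# Save a sample of nmap's port table, then print just the open port numbers.
cat > /tmp/nmap-sample.txt <<'EOF'
Port       State       Service
22/tcp     open        ssh
25/tcp     open        smtp
111/tcp    open        sunrpc
EOF
# Match rows whose State column is "open" and strip the "/tcp" suffix.
awk '$2 == "open" { split($1, p, "/"); print p[1] }' /tmp/nmap-sample.txt
```

Such filtering is useful when comparing scan results against the list of services you intend to run.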
Nessus
Nessus is a full-service security scanner. The plug-in architecture of Nessus allows users to customize it for their systems and networks. As with any scanner, Nessus is only as good as the signature database it relies upon. Fortunately, Nessus is frequently updated. It features full reporting, host scanning, and real-time vulnerability searches. Remember that there could be false positives and false negatives, even in a tool as powerful and as frequently updated as Nessus.
Nessus is not included with Red Hat Linux and is not supported. It has been included in this document as a reference to users who may be interested in using this popular application.
Whisker
Whisker is an excellent CGI scanner. Whisker has the capability to not only check for CGI vulnerabilities but do so in an evasive manner, so as to elude intrusion detection systems. It comes with excellent documentation which should be carefully reviewed prior to running the program. When you have found your Web servers serving up CGI scripts, Whisker can be an excellent resource for checking the security of these servers.
Whisker is not included with Red Hat Linux and is not supported. It has been included in this document as a reference to users who may be interested in using this popular application.
More information about Whisker can be found at the following URL:
http://www.wiretrip.net/rfp/p/doc.asp/i2/d21.htm
VLAD the Scanner
VLAD is a scanner developed by the RAZOR team at Bindview, Inc. that may be used to check for vulnerabilities. It checks for the SANS Top Ten list of common security issues (SNMP issues, file sharing issues, etc.). While not as full-featured as Nessus, VLAD is worth investigating.
VLAD is not included with Red Hat Linux and is not supported. It has been included in this document as a reference to users who may be interested in using this popular application.
More information about VLAD can be found on the RAZOR team website at the following URL:
http://razor.bindview.com/tools/vlad/index.shtml
Anticipating Your Future Needs
Depending upon your target and resources, there are many tools available. There are tools for wireless networks, Novell networks, Windows systems, Linux systems, and more. Another essential part of performing assessments may include reviewing physical security, personnel screening, or voice/PBX network assessment. Emerging concepts, such as war walking (scanning the perimeter of your enterprise's physical structures for wireless network vulnerabilities), are also worth investigating and, if needed, incorporating into your assessments. Imagination and exposure are the only limits of planning and conducting vulnerability assessments.
BIOS and Boot Loader Security
Password protection for the BIOS and the boot loader can prevent unauthorized users who have physical access to the system from booting from removable media or attaining root through single user mode. The security measures one should take to protect against such attacks depend both on the sensitivity of the information the workstation holds and the location of the machine.
For instance, if a machine is used in a trade show and contains no sensitive information, then it may not be critical to prevent such attacks. However, if an employee's laptop with private, non-password-protected SSH keys for the corporate network is left unattended at that same trade show, it can lead to a major security breach with ramifications for the entire company.
On the other hand, if the workstation is located in a place where only authorized or trusted people have access, then securing the BIOS or the boot loader may not be necessary at all.
BIOS Passwords
The following are the two primary reasons for password protecting the BIOS of a computer [1]:
- Prevent Changes to BIOS Settings — If an intruder has access to the BIOS, they can set it to boot off of a diskette or CD-ROM. This makes it possible for them to enter rescue mode or single user mode, which in turn allows them to seed nefarious programs on the system or copy sensitive data.
- Prevent System Booting — Some BIOSes allow you to password protect the boot process itself. When activated, an attacker is forced to enter a password before the BIOS launches the boot loader.
Because the methods for setting a BIOS password vary between computer manufacturers, consult the manual for your computer for instructions.
If you forget the BIOS password, it can often be reset either with jumpers on the motherboard or by disconnecting the CMOS battery. For this reason it is good practice to lock the computer case if possible. However, consult the manual for the computer or motherboard before attempting this procedure.
Boot Loader Passwords
The following are the primary reasons for password protecting a Linux boot loader:
- Prevent Access to Single User Mode — If an attacker can boot into single user mode, he becomes the root user.
- Prevent Access to the GRUB Console — If the machine uses GRUB as its boot loader, an attacker can use the GRUB editor interface to change its configuration or to gather information using the cat command.
- Prevent Access to Non-Secure Operating Systems — If it is a dual-boot system, an attacker can select at boot time an operating system, such as DOS, which ignores access controls and file permissions.
There are two boot loaders that ship with Red Hat Linux for the x86 platform, GRUB and LILO. For a detailed look at each of these boot loaders, consult the chapter titled Boot Loaders in the Red Hat Linux Reference Guide.
Password Protecting GRUB
You can configure GRUB to address the first two issues listed in Section 4.2.2 Boot Loader Passwords by adding a password directive to its configuration file. To do this, first decide on a password, then open a shell prompt, log in as root, and type:
/sbin/grub-md5-crypt

When prompted, type the GRUB password and press [Enter]. This returns an MD5 hash of the password.

Next, edit the GRUB configuration file /boot/grub/grub.conf. Open the file, and below the timeout line in the main section of the document, add the following line:
password --md5 <password-hash>

Replace <password-hash> with the value returned by /sbin/grub-md5-crypt [2].
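For orientation, the main section of a protected /boot/grub/grub.conf might then begin as follows (the surrounding directive values are illustrative; only the password line is the addition described above):

```
default=0
timeout=10
splashimage=(hd0,0)/grub/splash.xpm.gz
password --md5 <password-hash>
```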
The next time you boot the system, the GRUB menu will not let you access the editor or command interface without first pressing [p] followed by the GRUB password.

Unfortunately, this solution does not prevent an attacker from booting into a non-secure operating system in a dual-boot environment. For this you need to edit a different part of the /boot/grub/grub.conf file.
Look for the title line of the non-secure operating system and add a line that says lock directly beneath it.
For a DOS system, the stanza should begin similar to the following:
title DOS
lock
You must have a password line in the main section of the /boot/grub/grub.conf file for this to work properly. Otherwise an attacker will be able to access the GRUB editor interface and remove the lock line.
If you wish to have a different password for a particular kernel or operating system, add a lock line to the stanza followed by a password line.
Each stanza you protect with a unique password should begin with lines similar to the following example:
title DOS
lock
password --md5 <password-hash>

Finally, remember that the /boot/grub/grub.conf file is world-readable by default. It is a good idea to change this, as it has no effect on the functionality of GRUB, by typing the following command as root:
chmod 600 /boot/grub/grub.conf
Password Protecting LILO
LILO is a much simpler boot loader than GRUB and does not offer a command interface, so you need not worry about an attacker gaining interactive access to the system before the kernel is loaded. However, there is still the danger of attackers booting in single-user mode or booting into an insecure operating system.
You can configure LILO to ask for a password before booting any operating system or kernel on the system by adding a password directive to the global section of its configuration file. To do this, open a shell prompt, log in as root, and edit /etc/lilo.conf. Before the first image stanza, add a password directive similar to this:
password=<password>

In the above directive, replace <password> with your password.
Anytime you edit /etc/lilo.conf, run the /sbin/lilo -v -v command for the changes to take effect. If you have configured a password and anyone other than root can read the file, LILO will install, but will alert you that the permissions on the configuration file are wrong.
If you do not want a global password, you can apply the password directive to any stanza corresponding to a kernel or operating system to which you wish to restrict access in /etc/lilo.conf. To do this, add the password directive immediately below the image line. When finished, the beginning of the password-protected stanza will resemble the following:
image=/boot/vmlinuz-<version>
password=<password>

In the previous example, replace <version> with the kernel version and <password> with the LILO password for that kernel.
If you want to allow booting a kernel or operating system without password verification, but do not want to allow users to add arguments without a password, add the restricted directive on the line below the password line within the stanza. Such a stanza begins similar to this:
image=/boot/vmlinuz-<version>
password=<password>
restricted

Again, replace <version> with the kernel version and <password> with the LILO password for that kernel.
If you use the restricted directive, you must also have a password line in the stanza.
The /etc/lilo.conf file is world-readable. If you are password protecting LILO, it is essential that you only allow root to read and edit the file, since all passwords are stored in plain text. To do this, type the following command as root:
chmod 600 /etc/lilo.conf
Notes
[1] Since system BIOSes differ between manufacturers, some may not support password protection of either type, while others may support one type and not the other.
[2] GRUB also accepts plain text passwords, but it is recommended you use the md5 version because /boot/grub/grub.conf is world-readable by default.
Personal Firewalls
Once the necessary network services are configured, it is important to implement a firewall.
Firewalls prevent network packets from accessing the network interface of the system. If a request is made to a port that is blocked by a firewall, the request will be ignored. If a service is listening on one of these blocked ports, it will not receive the packets and is effectively disabled. For this reason, care should be taken when configuring a firewall to block access to ports not in use, while not blocking access to ports used by configured services.
For most users, the best tools for configuring a simple firewall are the two straightforward, graphical firewall configuration tools which ship with Red Hat Linux: the Security Level Configuration Tool and GNOME Lokkit.
Both of these tools perform the same task — they create broad iptables rules for a general-purpose firewall. The difference between them is in their approach to performing this task. The Security Level Configuration Tool is a firewall control panel, while GNOME Lokkit presents the user with a series of questions in a wizard-type interface.
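For reference, the broad rules such tools generate are saved to /etc/sysconfig/iptables. A fragment permitting incoming SSH while rejecting other new connections might resemble the following (illustrative only; the exact chains and rules the tools emit differ):

```
-A INPUT -p tcp -m tcp --dport 22 --syn -j ACCEPT
-A INPUT -p tcp -m tcp --syn -j REJECT
-A INPUT -p udp -m udp -j REJECT
```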
Password Security
Passwords are the primary method Red Hat Linux uses to verify a user's identity. This is why password security is enormously important for protection of the user, the workstation, and the network.
For security purposes, the installation program configures the system to use the Message-Digest Algorithm (MD5) and shadow passwords. It is highly recommended that you do not alter these settings.
If you deselect MD5 passwords during installation, the older Data Encryption Standard (DES) format is used. This format limits passwords to eight alphanumeric characters (disallowing punctuation and other special characters) and provides a modest 56-bit level of encryption.
If you deselect shadow passwords, all passwords are stored as a one-way hash in the world-readable /etc/passwd file, which makes the system vulnerable to offline password cracking attacks. If an intruder can gain access to the machine as a regular user, he can copy the /etc/passwd file to his own machine and run any number of password cracking programs against it. If there is an insecure password in the file, it is only a matter of time before the password cracker discovers it.
Shadow passwords eliminate this type of attack by storing the password hashes in the file /etc/shadow, which is readable only by the root user.
This forces a potential attacker to attempt password cracking remotely by logging into a network service on the machine, such as SSH or FTP. This sort of brute-force attack is much slower and leaves an obvious trail, as hundreds of failed login attempts are written to system files. Of course, if the cracker starts an attack in the middle of the night and you have weak passwords, the cracker may have gained access before daylight.
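The permission model described above can be sketched with stand-in files (temporary files are used here so the example is safe to run anywhere; the real files are /etc/passwd and /etc/shadow):

```shell
# Recreate the permission difference with stand-in files.
touch /tmp/passwd.demo /tmp/shadow.demo
chmod 644 /tmp/passwd.demo   # world-readable, like /etc/passwd
chmod 600 /tmp/shadow.demo   # readable only by its owner, like /etc/shadow
stat -c '%n %a' /tmp/passwd.demo /tmp/shadow.demo
```

A regular user can copy the first file but gets "Permission denied" on the second, which is exactly what defeats the offline cracking attack.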
Beyond matters of format and storage is the issue of content. The single most important thing a user can do to protect his account against a password cracking attack is create a strong password.
Create Strong Passwords
When creating a password, it is a good idea to follow these guidelines:
- Do Not Do the Following:
- Do Not Use Only Words or Numbers — You should never use only numbers or words in a password.
Some examples include the following:
- 8675309
- michael
- hackme
- Do Not Use Recognizable Words — Words such as proper names, dictionary words, or even terms from television shows or novels should be avoided, even if they are bookended with numbers.
- john1
- DS-9
- mentat123
- Do Not Use Words in Foreign Languages — Password cracking programs often check against word lists that encompass dictionaries of many languages. Relying on foreign languages for secure passwords is of little use.
Some examples include the following:
- cheguevara
- bienvenido1
- 1dumbKopf
- Do Not Use Hacker Terminology — If you think you are elite because you use hacker terminology — also called l337 (LEET) speak — in your password, think again. Many word lists include LEET speak.
Some examples include the following:
- H4X0R
- 1337
- Do Not Use Personal Information — Steer clear of personal information. If the attacker knows who you are, they will have an easier time figuring out your password if it includes information such as:
- Your name
- The names of pets
- The names of family members
- Any birth dates
- Your phone number or zip code
- Do Not Invert Recognizable Words — Good password checkers always reverse common words, so inverting a bad password does not make it any more secure.
Some examples include the following:
- R0X4H
- nauj
- 9-DS
- Do Not Write Down Your Password — Never store your password on paper. It is much safer to memorize it.
- Do Not Use the Same Password For All Machines — It is important that you make separate passwords for each machine. This way if one system is compromised, all of your machines will not be immediately at risk.
- Do the Following:
- Make the Password At Least Eight Characters Long — The longer the password is, the better. If you are using MD5 passwords, it should be 15 characters long or longer. With DES passwords, use the maximum length — eight characters.
- Mix Upper and Lower Case Letters — Red Hat Linux is case sensitive, so mix cases to enhance the strength of the password.
- Mix Letters and Numbers — Adding numbers to passwords, especially when added to the middle (not just at the beginning or the end), can enhance password strength.
- Include Non-Alphanumeric Characters — Special characters such as &, $, and > can greatly improve the strength of a password.
- Pick a Password You Can Remember — The best password in the world does you little good if you cannot remember it. So use acronyms or other mnemonic devices to aid in memorizing passwords.
With all these rules, it may seem difficult to create a password meeting all of the criteria for good passwords while avoiding the traits of a bad one. Fortunately, there are some simple steps one can take to generate a memorable, secure password.
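The "do" rules above can be sketched as a crude shell check (illustrative only; real password quality checking should go through PAM modules such as pam_cracklib, discussed later):

```shell
# Crude strength check: length >= 8, plus upper case, lower case,
# a digit, and a non-alphanumeric character.
check_password() {
  p=$1
  [ "${#p}" -ge 8 ] || return 1
  case $p in *[A-Z]*) ;; *) return 1 ;; esac
  case $p in *[a-z]*) ;; *) return 1 ;; esac
  case $p in *[0-9]*) ;; *) return 1 ;; esac
  case $p in *[!a-zA-Z0-9]*) ;; *) return 1 ;; esac
  return 0
}
check_password 'o7H@f@,7gHwg.' && echo strong
check_password 'hackme' || echo weak
```

Note that a password can pass every mechanical rule and still be weak (for example, a dictionary word with predictable substitutions), which is why the guidelines above matter as much as any checker.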
Secure Password Creation Methodology
There are many methods people use to create secure passwords. One of the more popular methods involves acronyms. For example:
- Think of a memorable phrase, such as:
"over the hills and far away, to grandmother's house we go."
- Next, turn it into an acronym (including the punctuation).
othafa,tghwg.
- Add complexity by substituting numbers and symbols for letters in the acronym. For example, substitute 7 for t and the at symbol (@) for a:
o7h@f@,7ghwg.
- Add more complexity by capitalizing at least one letter, such as H.
o7H@f@,7gHwg.
- Finally, do not use the example password above on any of your systems.
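The substitution steps above can be mimicked with standard text tools (purely illustrative; never derive a real password from a published example):

```shell
# Start from the acronym, substitute 7 for t and @ for a, then
# capitalize the h's, reproducing the worked example above.
ACRONYM='othafa,tghwg.'
PASSWORD=$(echo "$ACRONYM" | tr 'ta' '7@' | tr 'h' 'H')
echo "$PASSWORD"   # o7H@f@,7gHwg.
```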
While creating secure passwords is imperative, managing them properly is also important, especially for system administrators within larger organizations. The next section will detail good practices for creating and managing user passwords within an organization.
Create User Passwords Within an Organization
If there are a significant number of users in an organization, the system administrators have two basic options available to force the use of good passwords. They can create passwords for the user, or they can let users create their own passwords, while verifying the passwords are of acceptable quality.
Creating the passwords for the users ensures that the passwords are good, but it becomes a daunting task as the organization grows. It also increases the risk of users writing their passwords down.
For these reasons, system administrators prefer to have the users create their own passwords, but actively verify that the passwords are good and, in some cases, force users to change their passwords periodically through password aging.
Force Strong Passwords
To protect the network from intrusion, it is a good idea for system administrators to verify that the passwords used within an organization are strong ones. When users are asked to create or change passwords, they can use the command line application passwd, which is Pluggable Authentication Modules (PAM) aware and therefore checks whether the password is easy to crack or too short via the pam_cracklib.so PAM module. Since PAM is customizable, it is possible to add further password integrity checkers, such as pam_passwdqc (available from http://www.openwall.com/passwdqc/), or to write your own module. For a list of available PAM modules, see http://www.kernel.org/pub/linux/libs/pam/modules.html.
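As an illustration, the cracklib check is typically wired in through a line like the following in the relevant service file under /etc/pam.d/ (the parameter values shown are examples, not defaults to copy blindly):

```
password  required  pam_cracklib.so retry=3 minlen=8
```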
It should be noted, however, that the check performed on passwords at the time of their creation does not discover bad passwords as effectively as running a password cracking program against the passwords within the organization.
There are many password cracking programs that run under Linux, although none ship with the operating system. Below is a brief list of some of the more popular password cracking programs:
- John The Ripper — A fast and flexible password cracking program. It allows the use of multiple word lists and is capable of brute-force password cracking. It is available at http://www.openwall.com/john/.
- Crack — Perhaps the most well known password cracking software, Crack is also very fast, though not as easy to use as John The Ripper. It can be found at http://www.users.dircon.co.uk/~crypto/index.html.
- Slurpie — Slurpie is similar to John The Ripper and Crack except it is designed to run on multiple computers simultaneously, creating a distributed password cracking attack. It can be found along with a number of other distributed attack security evaluation tools at http://www.ussrback.com/distributed.htm.
Always get authorization in writing before attempting to crack passwords within an organization.
Password Aging
Password aging is another technique used by system administrators to defend against bad passwords within an organization. Password aging means that after a set amount of time (usually 90 days) the user is prompted to create a new password. The theory behind this is that if a user is forced to change his password periodically, a cracked password is only useful to an intruder for a limited amount of time. The downside to password aging, however, is that users are more likely to write their passwords down.
There are two primary programs used to specify password aging under Red Hat Linux: the chage command and the graphical User Manager (redhat-config-users) application.
The -M option of the chage command specifies the maximum number of days the password is valid. For instance, if you want a user's password to expire in 90 days, type the following command:

chage -M 90 <username>

In the above command, replace <username> with the name of the user. If you do not want the password to expire, it is traditional to use a value of 99999 after the -M option (this equates to a little over 273 years).
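The maximum age set by chage -M is stored as the fifth colon-separated field of the user's /etc/shadow entry. A sketch using an invented sample line (the hash and dates are fake):

```shell
# Invented /etc/shadow-style entry; field 5 is the maximum password
# age in days, as set by chage -M.
LINE='jdoe:$1$fakehash:12825:0:90:7:::'
echo "$LINE" | cut -d: -f5
```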
If you want to use the graphical User Manager application to create password aging policies, go to the Main Menu Button (on the Panel) => System Settings => Users & Groups or type the command redhat-config-users at a shell prompt (for example, in an XTerm or a GNOME terminal). Click on the Users tab, select the user from the user list, and click Properties from the button menu (or choose File => Properties from the pull-down menu).
Then click the Password Info tab and enter the number of days before the password expires:
Administrative Controls
When administering a home machine, the user has to perform some tasks as the root user or by acquiring effective root privileges via a setuid program, such as sudo or su. A setuid program is one that operates with the user ID (UID) of the owner of the program rather than that of the user operating the program. Such programs are denoted by a lower case s in the owner section of a long format listing.
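That setuid marking can be demonstrated on a scratch file (illustrative only; do not casually set the setuid bit on real programs):

```shell
# Create a scratch file, set the setuid bit (the leading 4 in 4755),
# and show the "s" in the owner execute position of the long listing.
touch /tmp/setuid-demo
chmod 4755 /tmp/setuid-demo
ls -l /tmp/setuid-demo | cut -c1-10
```

The mode string reads -rwsr-xr-x: the s in the fourth position is the setuid marker the text describes.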
For the system administrators of an organization, however, choices must be made as to how much administrative access users within the organization should have to their machines. Through a PAM module called pam_console.so, some activities normally reserved only for the root user, such as rebooting and mounting removable media, are allowed for the first user that logs in at the physical console. However, other important system administration tasks, such as altering network settings, configuring a new mouse, or mounting network devices, are impossible without administrative access. As a result, system administrators must decide how much administrative access the users on their network should receive.
Allow Root Access
Allowing root access by users means that minor issues like adding devices or configuring network interfaces can be handled by the individual users, leaving system administrators free to deal with network security and other important issues.
Giving root access to individual users can, however, lead to the following issues:
- Machine Misconfiguration — Users with root access can misconfigure their machines and require assistance or, worse, open up security holes without knowing it.
- Running Insecure Services — Users with root access may run insecure servers on their machines, such as FTP or Telnet, potentially putting usernames and passwords at risk as they pass over the network in the clear.
- Running Email Attachments As Root — Although rare, email viruses that affect Linux do exist. The only time they are a threat, however, is when they are run by the root user.
Disallow Root Access
The root password should be kept secret. Access to runlevel one or single user mode should be disallowed through boot loader password protection.
Method: Changing the root shell.
Description: Edit /etc/passwd and change the shell from /bin/bash to /sbin/nologin.
Effects: Prevents access to the root shell and logs the attempt. The following programs are prevented from accessing the root account:
- login
- gdm
- kdm
- xdm
- su
- ssh
- scp
- sftp
Does Not Affect: Programs that do not require a shell, such as FTP clients, mail clients, and many setuid programs. The following programs are not prevented from accessing the root account:
- sudo
- FTP clients
- Email clients

Method: Disabling root access via any console device (tty).
Description: An empty /etc/securetty file prevents root login on any devices attached to the computer.
Effects: Prevents access to the root account via the console or the network. The following programs are prevented from accessing the root account:
- login
- gdm
- kdm
- xdm
- Other network services that open a tty
Does Not Affect: Programs that do not log in as root, but perform administrative tasks through setuid or other mechanisms. The following programs are not prevented from accessing the root account:
- sudo
- ssh
- scp
- sftp

Method: Disabling root SSH logins.
Description: Edit the /etc/ssh/sshd_config file and set the PermitRootLogin parameter to no.
Effects: Prevents root access via the OpenSSH suite of tools. The following programs are prevented from accessing the root account:
- ssh
- scp
- sftp
Does Not Affect: Since this only affects the OpenSSH suite of tools, no other programs are affected by this setting.

Method: Using PAM to limit root access to services.
Description: Edit the file for the target service in the /etc/pam.d/ directory. Make sure pam_listfile.so is required for authentication. See Section 4.4.2.4 Disabling Root Using PAM for details.
Effects: Prevents root access to network services that are PAM aware. The following services are prevented from accessing the root account:
- FTP clients
- Email clients
- login
- gdm
- kdm
- xdm
- ssh
- scp
- sftp
- Any PAM aware services
Does Not Affect: Programs and services that are not PAM aware.

Table 4-1. Methods of Disabling the Root Account
Disabling the Root Shell
To prevent users from logging in directly as root, the system administrator can set the root account's shell to /sbin/nologin in the /etc/passwd file. This prevents access to the root account through commands that require a shell, such as the su and ssh commands.
Programs that do not require access to the shell, such as email clients or the sudo command, can still access the root account.
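As a sketch, this change can also be scripted. The block below operates on a throwaway copy of a passwd-style entry rather than the real /etc/passwd, which on a live system should be edited directly and with care (for example, with vipw):

```shell
# Hypothetical passwd entry for illustration; a real system's root
# entry lives in /etc/passwd.
echo 'root:x:0:0:root:/root:/bin/bash' > /tmp/passwd.example

# Swap root's login shell for /sbin/nologin.
sed -i 's|/bin/bash$|/sbin/nologin|' /tmp/passwd.example

cat /tmp/passwd.example
```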
Disabling Root Logins
To further limit access to the root account, administrators can disable root logins at the console by editing the /etc/securetty file. This file lists all devices on which the root user is allowed to log in. If the file does not exist at all, the root user can log in through any communication device on the system, whether via the console or a raw network interface. This is dangerous because a user could then Telnet into the machine as root, sending the password in plain text over the network. By default, Red Hat Linux's /etc/securetty file only allows the root user to log in at the console physically attached to the machine. To prevent root from logging in, remove the contents of this file by typing the following command:
echo > /etc/securetty
A blank /etc/securetty file does not prevent the root user from logging in remotely using the OpenSSH suite of tools because the console is not opened until after authentication.
Disabling Root SSH Logins
To prevent root logins via the SSH protocol, edit the SSH daemon's configuration file: /etc/ssh/sshd_config. Change the line that says:
# PermitRootLogin yes
To read as follows:
PermitRootLogin no
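The edit above can be sketched non-interactively as well. The block below demonstrates it on a scratch copy; on a real system, edit /etc/ssh/sshd_config itself and then restart the sshd service so the change takes effect:

```shell
# Scratch copy standing in for /etc/ssh/sshd_config.
printf '# PermitRootLogin yes\n' > /tmp/sshd_config.example

# Uncomment the directive and set it to no.
sed -i 's|^#* *PermitRootLogin .*|PermitRootLogin no|' /tmp/sshd_config.example

cat /tmp/sshd_config.example
```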
Disabling Root Using PAM
PAM, through the /lib/security/pam_listfile.so module, allows great flexibility in denying specific accounts. This allows the administrator to point the module at a list of users who are not allowed to log in. Below is an example of how the module is used for the vsftpd FTP server in the /etc/pam.d/vsftpd PAM configuration file (the \ character at the end of the first line in the following example is not necessary if the directive is on one line):
auth required /lib/security/pam_listfile.so item=user \
sense=deny file=/etc/vsftpd.ftpusers onerr=succeed
This tells PAM to consult the file /etc/vsftpd.ftpusers and deny any user listed there access to the service. The administrator is free to change the name of this file and can keep separate lists for each service or use one central list to deny access to multiple services.
If the administrator wants to deny access to multiple services, a similar line can be added to the PAM configuration files for those services, such as /etc/pam.d/pop and /etc/pam.d/imap for mail clients or /etc/pam.d/ssh for SSH clients.
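Adding the same deny rule to several service files can be scripted. The sketch below writes to a scratch directory instead of /etc/pam.d/, and the service names follow the examples in the text:

```shell
# Scratch directory standing in for /etc/pam.d/.
mkdir -p /tmp/pam.d.example

# Write the pam_listfile deny rule into each service's file.
for svc in pop imap ssh; do
    printf 'auth required /lib/security/pam_listfile.so item=user sense=deny file=/etc/vsftpd.ftpusers onerr=succeed\n' \
        > "/tmp/pam.d.example/$svc"
done

# Confirm each service file now carries the deny rule.
grep -l pam_listfile.so /tmp/pam.d.example/*
```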
Limit Root Access
Rather than completely deny access to the root user, the administrator may wish to allow access only via setuid programs, such as su or sudo.
The su Command
Upon typing the su command, the user is prompted for the root password and, after authentication, given a root shell prompt.
Once logged in via the su command, the user is the root user and has absolute administrative access to the system. In addition, once a user has attained root, it is possible in some cases for them to use the su command to change to any other user on the system without being prompted for a password.
Because this program is so powerful, administrators within an organization may wish to limit who has access to the command.
One of the simplest ways to do this is to add users to the special administrative group called wheel. To do this, type the following command as root:
usermod -G wheel <username>
In the previous command, replace <username> with the username being added to the wheel group.
To use the User Manager for this purpose, go to the Main Menu Button (on the Panel) => System Settings => Users & Groups or type the command redhat-config-users at a shell prompt. Select the Users tab, select the user from the user list, and click Properties from the button menu (or choose File => Properties from the pull-down menu).
Then select the Groups tab and click on the wheel group.
Groups Pane
Next, open the PAM configuration file for su, /etc/pam.d/su, in a text editor and remove the comment character [#] from the following line:
auth required /lib/security/pam_wheel.so use_uid
Doing this permits only members of the administrative group wheel to use the program.
The root user is part of the wheel group by default.
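Uncommenting the pam_wheel line can also be done with sed. The sketch below edits a throwaway copy of the line rather than the real /etc/pam.d/su:

```shell
# Scratch copy of the commented-out pam_wheel line from /etc/pam.d/su.
printf '#auth required /lib/security/pam_wheel.so use_uid\n' > /tmp/su.example

# Strip the leading comment character to activate the wheel restriction.
sed -i 's|^#\(auth required /lib/security/pam_wheel.so use_uid\)|\1|' /tmp/su.example

cat /tmp/su.example
```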
The sudo Command
The sudo command offers another approach for giving users administrative access. When a trusted user precedes an administrative command with sudo, he is prompted for his password. Then, once authenticated and assuming that the command is permitted, the administrative command is executed as if by the root user.
The basic format of the sudo command is as follows:
sudo <command>
In the above example, <command> would be replaced by a command normally reserved for the root user, such as mount.
Users of the sudo command should take extra care to log out before walking away from their machines since sudoers can use the command again without being asked for a password for a five minute period. This setting can be altered via the configuration file, /etc/sudoers.
The sudo command allows for a high degree of flexibility. For instance, only users listed in the /etc/sudoers configuration file are allowed to use the sudo command and the command is executed in the user's shell, not a root shell. This means the root shell can be completely disabled.
The sudo command also provides a comprehensive audit trail. Each successful authentication is logged to the file /var/log/messages and the command issued along with the issuer's user name is logged to the file /var/log/secure.
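Reviewing that audit trail amounts to filtering the log for sudo entries. The excerpt below is hypothetical, in the general format sudo writes to /var/log/secure; on a live system, grep the real log file instead:

```shell
# Hypothetical log excerpt for illustration only.
cat > /tmp/secure.example <<'EOF'
May  1 10:00:01 host sudo: michael : TTY=pts/0 ; PWD=/home/michael ; USER=root ; COMMAND=/bin/mount /mnt/cdrom
May  1 10:05:12 host su(pam_unix): session opened for user root by michael(uid=500)
EOF

# Pull out only the sudo entries to review who ran what as root.
grep 'sudo:' /tmp/secure.example
```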
Another advantage of the sudo command is that an administrator can allow different users access to specific commands based on their needs.
Administrators wanting to edit the sudo configuration file, /etc/sudoers, should use the visudo command.
To give someone full administrative privileges, type visudo and add a line similar to the following in the user privilege specification section:
michael ALL=(ALL) ALL
This example states that the user michael can use sudo from any host and execute any command.
The example below illustrates the granularity possible when configuring sudo:
%users localhost=/sbin/shutdown -h now
This example states that any user can issue the command /sbin/shutdown -h now, as long as it is issued from the console.
The man page for sudoers has a detailed listing of options for this file.
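As one more hedged illustration of that granularity, a command alias can group related commands; the group name and command list below are made up for the example, not taken from the text:

```
# Hypothetical /etc/sudoers fragment: members of the printadmins group
# may run only these printing-related commands, and only on localhost.
Cmnd_Alias PRINTING = /usr/bin/lprm, /usr/bin/lpc
%printadmins localhost = PRINTING
```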
Security Enhanced Communication Tools
As the size and popularity of the Internet has grown, so has the threat from communication interception. Over the years, tools have been developed to encrypt communications as they are transferred over the network.
Red Hat Linux ships with two basic tools that use high-level, public-key-cryptography-based encryption algorithms to protect information as it travels over the network.
- OpenSSH — A free implementation of the SSH protocol for encrypting network communication.
- Gnu Privacy Guard (GPG) — A free implementation of the PGP (Pretty Good Privacy) encryption application for encrypting data.
OpenSSH is a safer way to access a remote machine and replaces older, unencrypted services like telnet and rsh. OpenSSH includes a network service called sshd and three command line client applications:
- ssh — A secure remote console access client.
- scp — A secure remote copy command.
- sftp — A secure pseudo-ftp client that allows interactive file transfer sessions.
It is highly recommended that any remote communication with Linux systems occur using the SSH protocol.
Although the sshd service is inherently secure, the service must be kept up-to-date to prevent security threats.
GPG is a great way to keep private data private. It can be used both to email sensitive data over public networks and to protect sensitive data on hard drives.
Available Network Services
While user access to administrative controls is an important issue for system administrators within an organization, keeping tabs on which network services are active is of paramount importance to anyone who installs and operates a Linux system.
Many services under Linux behave as network servers. If a network service is running on a machine, then a server application called a daemon is listening for connections on one or more network ports. Each of these servers should be treated as a potential avenue of attack.
Risks To Services
The primary risks to network services include the following:
- Buffer Overflow Attacks — Services that listen on ports 0 through 1023 must run as an administrative user. If the application has an exploitable buffer overflow, an attacker could gain access to the system as the user running the daemon. Because exploitable buffer overflows exist, crackers use automated tools to identify systems with vulnerabilities, and once they have gained access, they use automated rootkits to maintain that access.
- Denial of Service Attacks (DoS) — By flooding a service with requests, a denial of service attack can bring a system to a screeching halt as it tries to log and answer each request.
- Script Vulnerability Attacks — If a server uses scripts to execute server-side actions, as Web servers commonly do, a cracker can mount an attack on improperly written scripts. These script vulnerability attacks could lead to a buffer overflow condition or allow the attacker to alter files on the system.
To limit exposure to attacks over the network, switch off all unused services. Most network services installed with Red Hat Linux are turned off by default. Exceptions include:
- cupsd — The default print server for Red Hat Linux.
- lpd — An alternate print server.
- portmap — A necessary component for NFS, NIS, and other RPC protocols.
- xinetd — A super server that controls connections to a host of subordinate servers, such as wu-ftpd, telnet, and sgi-fam (which is necessary for the Nautilus file manager).
- sendmail — The Sendmail mail transport agent is enabled by default, but only listens for connections from the localhost.
- sshd — The OpenSSH server, a secure replacement for Telnet.
When determining whether or not to leave these services running, it is best to use common sense and err on the side of caution. For example, if you do not own a printer, do not leave cupsd running on the assumption that one day you might buy one. The same is true for portmap. If you do not mount NFS volumes or use NIS (the ypbind service), then portmap should be disabled.
To switch services on or off...
Services Configuration Tool
If you are not sure what purpose a service has, the Services Configuration Tool has a description field, illustrated in Figure 4-3, that may be of some use.
But checking to see which network services are available to start at boot time is not enough. Good system administrators should also check which ports are open and listening. See Section 5.8 Verifying Which Ports Are Listening for more on this subject.
Insecure Services
Potentially, any network service is insecure. This is why turning unused services off is so important. Exploits for services are revealed and patched routinely. So it is important to keep packages associated with any network service updated.
Some network protocols are inherently more insecure than others. These include any services which do the following things:
- Pass Usernames and Passwords Over a Network Unencrypted — Many older protocols, such as Telnet and FTP, do not encrypt the authentication session and should be avoided whenever possible.
- Pass Sensitive Data Over a Network Unencrypted — Many protocols pass data over the network unencrypted. These protocols include Telnet, FTP, HTTP, and SMTP.
Many network file systems, such as NFS and SMB, also pass information over the network unencrypted. It is the user's responsibility when using these protocols to limit what type of data is transmitted.
Also, remote memory dump services, like netdump, pass the contents of memory over the network unencrypted. Memory dumps can contain passwords or, even worse, database entries and other sensitive information.
Other services like finger and rwhod reveal information about users of the system.
Examples of inherently insecure services include the following:
- rlogin
- rsh
- telnet
- vsftpd
- wu-ftpd
All remote login and shell programs (rlogin, rsh, and telnet) should be avoided in favor of SSH.
FTP is not as inherently dangerous to the security of the system as remote shells, but FTP servers must be carefully configured and monitored to avoid problems.
Services which should be carefully implemented and placed behind a firewall include: