Here are 23 security tips to guide you through hardening your Linux operating system.
This guide refers to a Linux system as a server, computer, or client. These terms should be read interchangeably as all tips apply to any system running Linux.
Linux Server Security Hardening Tips
It is extremely important that the operating system and the packages installed on it be kept up to date, as the operating system is the core of the environment. Without a stable and secure operating system, most of the following security hardening tips will be far less effective. To update all installed packages you can use the commands below, which will list all available updates and prompt you to proceed.
RHEL Based OS:
yum update
Debian Based OS:
apt-get update && apt-get upgrade
These commands will install all available package updates from the repository, which may include the Linux kernel. Check the list of updates to be installed for a kernel update, as applying one will require a reboot.
In Linux the kernel is the core component of the operating system; it manages resources such as memory, the CPU, and process scheduling. Because of this central role the kernel cannot be restarted on its own, so completing a kernel update requires rebooting the whole operating system. Third party options are available to avoid the reboot, such as those offered by Ksplice or KernelCare.
Other packages that run in user space can simply be restarted to make use of the updated version without system reboot.
It is advisable to install security updates as soon as possible, either manually or automatically via cron. It is also suggested that you subscribe to the security mailing list for your operating system, as these lists announce security updates to the kernel and other common packages as they become available.
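As one way of automating this, a root crontab entry can run the update non-interactively on a schedule. The timing and flags below are illustrative only; the yum-cron package on RHEL-based systems and unattended-upgrades on Debian-based systems are more robust alternatives.

```shell
# Example root crontab entries - run the update non-interactively at 3am daily.
# RHEL Based OS:
0 3 * * * yum -y update
# Debian Based OS:
0 3 * * * apt-get update && apt-get -y upgrade
```

Whichever method you choose, make sure you still review what was installed so that pending kernel updates are followed by a reboot.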
Any other custom applications that you have installed that are not maintained by a package manager must also be patched frequently so that the latest security updates can be applied. Some examples of such applications include popular web applications like WordPress, Joomla or Drupal. These types of applications are installed outside of the package manager, so a yum update or apt-get upgrade will not update them.
Some applications update themselves automatically, such as WordPress core, while others, such as WordPress plugins, require a manual update. The update process will differ on a case by case basis, so if you are unsure check the official documentation from the vendor and schedule regular updates. It is also recommended that you subscribe to any mailing lists or alerts provided by the application vendor so that you hear about disclosed vulnerabilities and can update in a timely manner.
In Linux the root user has full, unrestricted access to the system. As attackers typically attempt to compromise the root account, disabling direct root logins improves security. This can be done by editing the /etc/passwd file and changing the root shell from /bin/bash to /sbin/nologin.
Default /etc/passwd for root
root:x:0:0:root:/root:/bin/bash
After disabling root login
root:x:0:0:root:/root:/sbin/nologin
This will prevent root logins through the GUI, SSH, SCP, SFTP and with ‘su’. It will not prevent users from running individual commands as root with ‘sudo’, however.
Services can also be explicitly configured to disallow root login. Remote access through SSH, for instance, can be disabled for the root user by modifying the /etc/ssh/sshd_config file as below. After editing the file, restart the sshd service to apply the change.
PermitRootLogin no
Root privileges can be delegated to other user accounts as required. As a best practice you do not want to provide the root password to multiple users, as it makes auditing and tracking who is doing what with the account more difficult. To provide root access to other users, the user account can be added to the sudoers file, which grants them root privileges. This file should be modified with the ‘visudo’ command.
[root@centos ~]# visudo
...
root    ALL=(ALL)       ALL
bob     ALL=(ALL)       ALL
...
The root account will be there by default, and other accounts can also be specified. In this instance the ‘bob’ account has been added with full sudo privileges; bob can run any command as root by prefixing it with ‘sudo’ and correctly entering his password.
The previous step disables remote access for the root account; however, it will still be possible for root to log in through any console device. Depending on the security of your console access you may wish to leave root access in place, otherwise it can be removed by clearing the /etc/securetty file as shown below.
echo > /etc/securetty
This file lists all devices that root is allowed to log in on. The file must exist; if it is missing, root will be allowed access through any available communication device, whether console or otherwise.
With no devices listed in this file, root console access has been disabled. It is important to note that this does not prevent root from logging in remotely via SSH, for instance; that must be disabled separately as outlined in point 3 – Disable remote root access above.
Access to the console itself should also be secured, a physical console can be protected by the information covered in point 13 – Physical security.
As mentioned above, users that require root privileges can be added to the sudoers file. Rather than simply providing full access, however, we can restrict what a user can run as root by explicitly specifying the allowed commands in the sudoers file. For instance, with bob removed from the sudoers file he is not able to reboot the server.
[bob@centos ~]$ sudo reboot
[sudo] password for bob:
bob is not in the sudoers file.  This incident will be reported.
However, after running ‘visudo’ and editing the sudoers file as below, this becomes possible.
bob     ALL=(ALL)       /sbin/reboot
After this change bob is now able to perform the reboot as root but nothing else.
A firewall such as iptables or firewalld should be used to restrict inbound and outbound traffic to and from your Linux server. While it is ideal to restrict traffic in both directions, it is more common for a server to allow any outbound traffic and only restrict incoming traffic. This is generally because attacks initiate externally, especially from the Internet, making those external networks less trustworthy than the server itself. However, if a service on the server is compromised and the server can connect out to the Internet without restriction, the compromise can spread further.
The firewall should be used to specify source and destination IP addresses and ports where possible. For example, we can allow SSH access on TCP port 22 only from the trusted IP address 126.96.36.199, which will prevent anyone else from even attempting to connect to the server via SSH. Destination addresses can also be limited; for instance, we may only want our server to connect out to a particular trusted repository for package updates over TCP port 443. By allowing this outbound access and no other external connections to the Internet, we can prevent the server from downloading other files, or at least restrict what is available.
Restricting traffic based on source and destination IP addresses and ports is much better than simply changing the port a service listens on. For instance, you could change the SSH port from the default of 22 to 4567. While this may stop some automated attacks, it is trivial for a port scanner such as nmap to detect; it is simple security through obscurity, and as soon as the secrecy is lost so is the security. Once firewall rules are in place, nmap can also be used to scan the system for open ports, allowing you to confirm the rules are working as intended.
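The inbound rule described above can be sketched in firewalld syntax as below, reusing the same illustrative source address; adapt the address, zone and port list to your environment, and note the nmap target hostname here is a placeholder.

```shell
# Drop the open SSH service rule, then allow TCP/22 only from the trusted source.
firewall-cmd --permanent --remove-service=ssh
firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="126.96.36.199/32" port port="22" protocol="tcp" accept'
firewall-cmd --reload

# From another host, verify that only the expected ports respond.
nmap -p 22,80,443 your-server.example.com
```

Run the nmap scan both from the trusted address and from elsewhere to confirm SSH is only reachable from where you intend.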
While a firewall determines the allowed inbound and outbound traffic, it is also important that you encrypt all inbound and outbound data communications to keep them secure. This involves using tools that support encryption such as SSL/TLS. For example, if your Linux server runs a web server such as Apache and the website has a login page where users enter a username and password, rather than configuring it to use plain text HTTP it should be set to use HTTPS, which ensures the communication between the server and the client is encrypted. This prevents data in transit from being seen by anyone else, assuming the private key on the web server is kept secure, of course.
To achieve this essentially you need to actively make the choice to use tools and protocols that support encryption when communicating over the network, for instance using SSH rather than telnet, using SFTP rather than FTP, or using IMAPS rather than IMAP. Other tools such as VPN can be used to establish secure and encrypted tunnels between two hosts.
Two factor authentication can be implemented for SSH access or other application logins. It improves login security by adding a second factor of authentication: the password is something you know, while the second factor, such as a physical security token or mobile device, is something you have. The combination of something you know and something you have makes it far more likely that you are who you say you are.
There are custom applications available for this, such as Duo Security and Google Authenticator, as well as many others. These typically involve installing an application on a smart phone and then entering the generated code alongside your username and password when you authenticate.
Google Authenticator can be used for many other applications than just SSH, such as for WordPress login with third party plugin support.
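As a rough sketch of SSH two factor setup with the Google Authenticator PAM module: package names vary by distribution (on CentOS the package comes from EPEL), and the config file edits shown as comments need to be made by hand.

```shell
# Install the PAM module (CentOS with EPEL shown; adapt for your distribution).
yum install google-authenticator

# As each user, generate a secret and emergency scratch codes, following the prompts.
google-authenticator

# Then wire it into SSH:
#   /etc/pam.d/sshd        - add:  auth required pam_google_authenticator.so
#   /etc/ssh/sshd_config   - set:  ChallengeResponseAuthentication yes
systemctl restart sshd
```

After this, an SSH login prompts for the password and then the current verification code from the phone app.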
SELinux was originally developed by the NSA as a set of patches to the Linux kernel. SELinux reduces vulnerability to privilege escalation, provides fine grained access control and separates processes from each other. Processes run in their own domain, which prevents them from accessing files used by other processes.
For example, the Apache (httpd) web server runs with the context system_u:system_r:httpd_t:s0. If this process is compromised, the attacker's access to further resources, and the damage they can cause, is limited by SELinux policy. By default Apache can access files labelled with the httpd_sys_content_t type; files created within /var/www/html receive this label by default so that Apache can serve them. Files and directories elsewhere in the file system will not carry this label by default, so Apache will be unable to access non web files stored elsewhere due to SELinux policy restrictions. You can view the SELinux context of a file or directory with ‘ls -Z’.
SELinux comes enabled by default with RHEL based operating systems such as CentOS, and it is recommended that you use it. Over time I have seen many Linux guides simply advise that SELinux be disabled and discarded rather than configured correctly. This is not ideal: SELinux should always be enabled, preferably in enforcing mode. Alternatively, you can set it to permissive mode, which enforces nothing but logs anything that would have been blocked in enforcing mode, which is still better than disabling it entirely.
To get detailed logs in plain English that will give you suggested commands on how to resolve SELinux problems, install the ‘setroubleshoot’ and ‘setroubleshoot-server’ packages. This will provide the ‘sealert’ command, which can be run against the audit.log file and will provide advice on fixing any problems that have been logged.
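A few commands tie this together. getenforce and setenforce are standard; the sealert invocation assumes the default audit log path on a RHEL-based system.

```shell
getenforce                            # show the current mode: Enforcing, Permissive or Disabled
setenforce 0                          # switch to permissive until next boot ('setenforce 1' to re-enforce)
ls -Z /var/www/html                   # view the SELinux context of web content
sealert -a /var/log/audit/audit.log   # analyse logged denials and suggest fixes
```

To make a mode change survive reboots, set SELINUX=enforcing or SELINUX=permissive in /etc/selinux/config.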
SELinux can be quite intimidating at first, but it is definitely worth learning to take advantage of the increased security it offers. I have found the official Red Hat SELinux Users and Administrators Guide to be an excellent resource.
Every time you install another package or start an additional service on your Linux server you are effectively increasing the attack surface. There are then more things available for an attacker to target, as more code and more moving parts increase the likelihood of a vulnerability.
When installing Linux, a graphical user interface (GUI) is usually installed by default. In a server environment it is highly recommended that you do not install the GUI, to reduce both the attack surface and resource usage. If the GUI is already installed, it is possible to uninstall it.
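On a CentOS 7 system, for example, removing the GUI can be sketched as below; the group name differs between distributions and releases, so check ‘yum grouplist’ first.

```shell
# Remove the desktop environment group and boot to the console (multi-user) target.
yum groupremove "GNOME Desktop"
systemctl set-default multi-user.target
```

The second command ensures the system boots to a text console even if graphical packages remain.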
Another good option when installing Linux is to select a minimal installation, which installs only a base set of required packages without much extra bloat. This is preferable as you will have fewer packages installed; anything else you need can easily be installed from the repository afterwards, rather than having a bunch of preinstalled packages that may never be used.
Your attack surface can be reduced by disabling services that are not needed and by uninstalling or removing packages and software that are not required. The following commands can be used to view the status of installed services; they will list services that are configured to start on boot.
CentOS 6 and earlier
chkconfig --list
CentOS 7 and later
systemctl list-unit-files --type=service
Should you find a service that you know is not required, it can be disabled so that it does not start on boot. Only do this if you are sure the service is not required, as some services are needed by the operating system or by other services you use.
CentOS 6 and earlier
chkconfig service-name off
CentOS 7 and later
systemctl disable service-name
To view a full list of installed packages you can run ‘yum list installed’. Should you determine that a package is no longer required, remove it with ‘yum remove package-name’.
With the netstat command you can list the ports on which processes on the server are actively listening for connections. This can help identify something malicious waiting to accept an external connection, or reveal an established connection that should not be allowed; this is why you want to restrict connectivity in the firewall as outlined in point 6 – Enable and configure firewall. If you find something malicious you can try to stop the service or kill the listed PID, though this alone will likely not stop it from starting up again.
[root@centos ~]# netstat -antup
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1218/sshd
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      1736/master
tcp        0     64 192.168.0.100:22        188.8.131.52:29667      ESTABLISHED 2716/sshd: root@pts
tcp6       0      0 :::80                   :::*                    LISTEN      2859/httpd
tcp6       0      0 :::22                   :::*                    LISTEN      1218/sshd
tcp6       0      0 ::1:25                  :::*                    LISTEN      1736/master
tcp6       0      0 :::443                  :::*                    LISTEN      2859/httpd
tcp6       0      0 :::8443                 :::*                    LISTEN      2859/httpd
udp        0      0 0.0.0.0:123             0.0.0.0:*                           701/chronyd
udp        0      0 127.0.0.1:323           0.0.0.0:*                           701/chronyd
udp6       0      0 :::123                  :::*                                701/chronyd
udp6       0      0 ::1:323                 :::*                                701/chronyd
In this example sshd is listening on TCP port 22 for SSH connections and there is one established connection from 188.8.131.52. Many other services are listening on various ports, such as httpd on ports 80 and 443 to serve HTTP and HTTPS requests.
Another good method of reducing the attack surface is to segregate important roles between different servers. Rather than having one server doing everything it is preferable to split important roles up into different instances. For example you may have one server that acts as the web server, another that acts as the database server, another that acts as the email server and another that acts as the DNS server.
With the growth of virtualization technologies in recent years, this has become cheaper and easier to take advantage of. By splitting different roles onto different servers you reduce the attack surface: if one service is compromised, you initially only have to worry about the server running that particular vulnerable service, at least until the attacker works their way in deeper.
While not directly hardening your Linux server, reviewing the logs lets you identify problems that should be resolved, such as unauthorized user access. Security events and other messages are stored in the log files for a reason and should be reviewed. It can be difficult and time consuming to manually review the log files on each server, so consider implementing a system such as Logstash or a syslog server to collect all logs centrally.
Access logs should be monitored so that unauthorized access attempts are noticed and dealt with as required. Even successful access attempts should be logged, as that provides visibility over what user accounts are doing, whether an attacker has gained access or a legitimate user is misbehaving. Logwatch can also be installed and configured to email periodic summaries of logged events, such as packages installed or users that have logged in. Being aware of what is happening on your systems will help you detect any potential attacks.
By default, any user created on a Linux server with the default /bin/bash shell can log in remotely via SSH once a password has been set. SSH access can be restricted to a defined set of users or groups using the AllowUsers or AllowGroups directives in /etc/ssh/sshd_config respectively. Not every user on a server needs SSH access, so it can be restricted to only those who manage the server.
For example, the below configuration in /etc/ssh/sshd_config will only allow users root and bob SSH access, any other user will be denied access when they attempt to login via SSH.
AllowUsers root bob
Users that share a common attribute can instead be grouped together and allowed with AllowGroups, which is more scalable with a larger number of users. Be sure to restart the sshd service to apply any changes made here.
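A group-based setup can be sketched as below; the group name ‘sshadmins’ is purely illustrative.

```shell
# Create a group for SSH administrators and add bob to it.
groupadd sshadmins
usermod -aG sshadmins bob

# In /etc/ssh/sshd_config, replace any AllowUsers line with:
#   AllowGroups sshadmins

# Restart sshd to apply the change.
systemctl restart sshd
```

Granting or revoking SSH access is then just a matter of group membership, with no further sshd_config edits.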
Physical access to your server can easily undo a lot of the steps outlined here, so it is equally important to ensure that your server is physically secure so that it can’t be accessed by an unauthorized user.
If your server is hosted within a data center environment, the provider will likely already restrict physical access and have various security measures in place. It is recommended that you discuss the protection measures in place for your server and ensure they meet your requirements.
Alternatively, if you host your server on premises at home or in an office, it should be locked in a dedicated server room if possible, in a central location of the building, with access granted only to those who need to physically maintain the server. Keeping any server racks or cases locked is also recommended.
Password protecting the BIOS can help slow down an attacker with physical access from changing BIOS settings or booting the system from CD or USB drive. As the BIOS will differ between manufacturers you should refer to your specific documentation regarding setting this.
It is important to note that this does not protect the system very well if full physical access is possible, as the BIOS password can usually be easily reset with jumpers on the motherboard or by removing the CMOS battery. You would therefore be better off protecting system data with encryption as covered in point 16 – Encrypt data. This also further enforces the importance of point 13 – Physical security.
While there are better methods of securing a Linux system as suggested, BIOS passwords can still be useful for public computers, such as in a public library. In this instance a BIOS password will stop random people from inserting a CD or USB drive and trying to boot into a different operating system. Yes, they could get around this protection by opening the computer and resetting the BIOS password, however they are far less likely to attempt this in a public place, as it is much harder to do undetected than simply inserting a CD or USB drive.
By securing the boot loader we can prevent access to single user mode, which logs in automatically as root. With GRUB 2 this is done by setting a password, which is stored in plain text by default; it is recommended that you set an encrypted (hashed) password instead so that the GRUB password cannot be easily retrieved from the disk.
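On a CentOS 7 system this can be sketched as below; the file used for the password stanza varies between distributions, and /etc/grub.d/40_custom is one common choice.

```shell
# Generate a PBKDF2 hash of the GRUB password, entering it when prompted.
grub2-mkpasswd-pbkdf2

# Add the resulting hash to /etc/grub.d/40_custom:
#   set superusers="root"
#   password_pbkdf2 root grub.pbkdf2.sha512.10000.<generated-hash>

# Rebuild the GRUB configuration to apply the change.
grub2-mkconfig -o /boot/grub2/grub.cfg
```

After a reboot, editing boot entries or entering single user mode requires the GRUB username and password.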
Sensitive data stored on the Linux server should be encrypted. If physical access to the server is somehow obtained and the unencrypted hard drives are stolen, an attacker will be able to read all of the data. Although physical security can help protect against this scenario, you should plan for the worst and encrypt the data. Physical security may be more difficult to implement for portable devices such as laptops and tablets; strong encryption can help protect data on stolen devices.
This problem also exists with virtual machines, though possibly to a lesser extent: if the virtual hard disk file is copied, the data within can be accessed. To prevent this, the data needs to be encrypted at rest rather than stored on disk in clear text. There are many ways to handle encryption; Linux Unified Key Setup (LUKS) is one of them and works quite well. Note that once data has been encrypted, losing the password or key means you will no longer be able to access that data.
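Setting up a LUKS volume on a spare partition can be sketched as below; /dev/sdb1, the mapper name and the mount point are all illustrative, and luksFormat destroys any existing data on the partition.

```shell
# Format the partition as a LUKS container, setting the passphrase when prompted.
cryptsetup luksFormat /dev/sdb1

# Open the container, create a filesystem on the mapped device, and mount it.
cryptsetup luksOpen /dev/sdb1 secure-data
mkfs.xfs /dev/mapper/secure-data
mount /dev/mapper/secure-data /mnt/secure-data
```

The passphrase is then required each time the container is opened, such as after a reboot.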
Using a central directory to maintain user accounts is typically more secure and much more scalable as you increase the amount of clients/servers that you need to access. Examples of such a directory include Microsoft’s Active Directory and Red Hat’s Identity Management, either can be used for authentication within a Linux environment.
A central directory provides several security advantages. With all user accounts stored in the directory, should an account be locked out, it is locked out regardless of which client computer is used to log in. Without a directory server, local accounts are defined on a per server basis, so an attacker performing a brute force attack could simply lock out the account on one computer and then start the attack again on another.
This centralized management also ensures that a user's password meets a defined global password policy, such as length and complexity requirements, which would otherwise have to be defined manually on individual servers. Defining things like password policy locally on individual servers has the potential to reduce security, as configurations can drift over time and some servers may end up incorrectly configured with weaker settings.
It can also be much more difficult to compromise the password hashes of an account when they are stored in the central directory server rather than on each individual server. Although reading the /etc/shadow file requires root access, an attacker may well have compromised one of your servers and gained root on it. The attacker could then view the password hashes for all local users and perform an offline brute force attack, potentially gaining further credentials that grant access to additional systems.
By enforcing strong passwords we can improve the security of an account, as brute force attack becomes more difficult: stronger passwords require more time and computing power to discover. This is generally done through policy on the directory server where the accounts exist, but it can also be configured locally on a per server basis. In CentOS 7 strong passwords are enforced by the pam_pwquality PAM module rather than the older cracklib module, however both use the same back end.
pwquality checks the strength of a password against a set of rules: first it checks whether the password is a dictionary word, and if not, it checks the password against the custom rules defined within /etc/security/pwquality.conf.
To enable the pwquality module add the following line into the /etc/pam.d/passwd file.
password required pam_pwquality.so retry=3
The /etc/security/pwquality.conf file is then used to configure the checks, such as minimum length. The file itself documents all available variables well; below is an example configuration.
minlen = 8
minclass = 4
maxsequence = 3
maxrepeat = 3
In this case the minimum acceptable length for a new password is 8 characters; the minimum number of required character classes is 4 (digits, uppercase, lowercase, and symbols); the longest allowed monotonic character sequence is 3 (so abcd or 1234 would be rejected); and the maximum number of allowed consecutive identical characters is 3 (so aaaa or 1111 would be rejected).
It's also important to note that the root user can set any password for themselves or for any other user account regardless of these settings; root will be warned about a weak password but can override the enforcement.
Password aging defends against bad passwords being discovered and reused by an attacker: even if a password is compromised, it is only usable for a set period of time. Accounts which are no longer required but have not been locked will become inaccessible when the password expires. This needs to be configured, as by default a password change is not required for 99999 days.
Password aging can either be managed locally on a per server basis or, as mentioned above, within a central directory such as Active Directory or Identity Management. Ideally you are using a central directory, which makes these policy changes much easier: you set them in one place and they apply to all users logging in across a multitude of servers.
It is possible to manage password aging on a per server basis locally; it just requires more administration time and increases the likelihood of a configuration mistake resulting in less secure accounts. Below are some example commands for viewing and applying password aging.
Show account aging information; these values can be modified with the chage command.
[root@centos ~]# chage -l bob
Last password change                                    : Aug 19, 2015
Password expires                                        : never
Password inactive                                       : never
Account expires                                         : never
Minimum number of days between password change          : 0
Maximum number of days between password change          : 99999
Number of days of warning before password expires       : 7
To run chage interactively and set these values on a user account, run ‘chage username’.
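The same values can also be set non-interactively. For example, to force bob to change his password every 90 days, with a minimum of 7 days between changes and 14 days of warning before expiry (the numbers here are illustrative):

```shell
# -M max days, -m min days, -W warning days before the password expires.
chage -M 90 -m 7 -W 14 bob
```

Run ‘chage -l bob’ again afterwards to confirm the new aging values took effect.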
While strong passwords on user accounts help thwart brute force attacks, as mentioned previously in point 18 – Enforce strong passwords, they are only one way of slowing down this type of attack. A good indication of a brute force attack is a user account that has failed to log in multiple times within a short period; these sorts of actions should be blocked and reported. We can block these attacks by automatically locking out the account, either in the directory if one is in use, or locally.
The pam_tally2.so PAM module can be used to lock out local accounts after a set number of failures. To get this working I have added the below line to the /etc/pam.d/password-auth file.
auth required pam_tally2.so file=/var/log/tallylog deny=3 even_deny_root unlock_time=1200
This will log all failures to the /var/log/tallylog file and lock out an account after 3 consecutive failures. By default the root account is not denied; however, we can also lock out root by specifying even_deny_root (though this may not be required if you have disabled root access as per point 3 – Disable remote root access and point 4 – Disable root console access). The unlock time is the number of seconds after a failed login attempt before the account automatically unlocks and becomes available again.
Failed logins can be viewed as below; to view all failures simply remove the --user flag.
[root@centos ~]# pam_tally2 --user=bob
Login           Failures Latest failure     From
bob                 4    08/21/15 19:38:23  localhost
The failure count can be manually reset by appending --reset to this command.
pam_tally2 --user=bob --reset
If a login is successful before the limit has been reached the failure count will reset to 0. For more details see the pam_tally2 manual page by typing ‘man pam_tally2’.
It's worth noting that the manual page advises configuring this in the /etc/pam.d/login file; however, I found that under CentOS 7 this did not work and I needed to use the /etc/pam.d/password-auth file instead. I also tried /etc/pam.d/system-auth, which I found documented elsewhere, but this also failed, so the correct file may differ based on your operating system.
You can also manually lock and unlock local user accounts rather than waiting for the failure limit to be reached.
Lock the user account ‘bob’.
[root@centos ~]# passwd -l bob
Locking password for user bob.
passwd: Success
Unlock the user account ‘bob’.
[root@centos ~]# passwd -u bob
Unlocking password for user bob.
passwd: Success
Be careful when enabling account lockout, as automatic locks on accounts used by various services could possibly lead to outages.
Other tools such as Fail2Ban can also be used to block, in the firewall, the source IP addresses that failed logins originate from. This has the advantage of blocking the attack without locking the account and denying legitimate user access.
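A basic Fail2Ban setup for SSH can be sketched as below; on CentOS the package comes from EPEL, and the jail settings shown as comments are illustrative values you would place in /etc/fail2ban/jail.local.

```shell
# Install Fail2Ban and start it on boot.
yum install fail2ban
systemctl enable fail2ban --now

# Enable the SSH jail in /etc/fail2ban/jail.local, for example:
#   [sshd]
#   enabled  = yes
#   maxretry = 3
#   bantime  = 1200

# Check the jail status and any currently banned addresses.
fail2ban-client status sshd
```

Banned addresses are blocked at the firewall for the bantime period, after which they are automatically released.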
SSH keys can be used to increase the level of security for a user remotely authenticating to a Linux server through SSH. SSH keys are typically preferable to a password in terms of security, as they are far less vulnerable to brute force attack; there is simply a lot more entropy in a key than in a password.
SSH keys are based upon public-key cryptography, whereby you will generate a key pair which includes a public key and a private key. The public key is stored on the destination server that you wish to access and will allow only the corresponding private key access.
It is therefore extremely important that you protect your private key; if an attacker is able to access this key, they will be able to log in as your user. Best practice dictates that your private key be encrypted with a passphrase, which can be configured when you create the key pair. It is also important that the private key file be readable and writable only by the user that owns the key; this corresponds to permissions 0600, which is set by default on creation.
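If you ever need to verify or repair the permissions yourself, it is a one-liner. The path below is a scratch file used for illustration, not a real key.

```shell
# Simulate a key file created with loose permissions, then lock it down.
key=/tmp/example_id_rsa
install -m 644 /dev/null "$key"
chmod 600 "$key"          # owner read/write only
stat -c %a "$key"         # prints 600
```

The same chmod can be applied to a real ~/.ssh/id_rsa if its permissions have drifted; sshd will refuse keys that are world-readable.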
Create the key pair with the ssh-keygen command; the -t flag specifies the type of key to create, here RSA.
[bob@centos root]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/bob/.ssh/id_rsa):
Created directory '/home/bob/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/bob/.ssh/id_rsa.
Your public key has been saved in /home/bob/.ssh/id_rsa.pub.
The key fingerprint is:
8e:dc:08:bb:8d:0e:12:04:22:ae:5e:f5:0a:21:3e:b0 bob@centos
The key's randomart image is:
+--[ RSA 2048]----+
|+                |
|=                |
|.+ . .           |
|=.. o .          |
|Eo o. .S         |
|..o .+.=         |
|... ..+ o        |
| . .  +          |
| .+ .            |
+-----------------+
[bob@centos ~]$ ls -la /home/bob/.ssh/
-rw-------. 1 bob bob 1766 Aug 19 16:41 id_rsa
-rw-r--r--. 1 bob bob  398 Aug 19 16:41 id_rsa.pub
In the above example we created the id_rsa private key file and corresponding id_rsa.pub public key file.
Next, upload the public key to the remote server that you wish to access. This can be done manually or with the ssh-copy-id command as shown below.
[bob@centos .ssh]$ ssh-copy-id email@example.com
The authenticity of host '220.127.116.11' can't be established.
ECDSA key fingerprint is 97:b6:fc:11:49:20:3c:10:ac:16:49:46:e5:56:03:30.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
email@example.com's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'email@example.com'"
and check to make sure that only the key(s) you wanted were added.
This will place the id_rsa.pub public key on the destination server, in this case ‘220.127.116.11’, within the ~/.ssh/authorized_keys file. You can then SSH to the destination by simply running ‘ssh email@example.com’, and you should be prompted for the passphrase of your private key.
Once an account has been set up to make use of SSH keys rather than a password you can optionally disable password authentication through /etc/ssh/sshd_config to increase security as shown below.
PasswordAuthentication no PubkeyAuthentication yes
Reload sshd to apply these changes.
systemctl reload sshd
Even after implementing additional security measures it is still possible that your server may become compromised; no server should ever be considered 100% secure. Should this happen, you will want to be alerted so that you can investigate further. This can be done with a host based intrusion detection system (HIDS), typically installed on the server as an agent that monitors the internals of the system and can alert on an attempted or successful intrusion. While this will certainly not detect every possible intrusion, it is a good protection measure to have in place.
OSSEC is a cross-platform open source HIDS that is capable of performing log analysis, file integrity checking, policy monitoring, rootkit detection and real time alerting and response.
In addition to detecting intrusion it is also important to frequently scan the file system, memory and running processes for known viruses or malware threats that may have made it onto your Linux server. The scan should be able to actively quarantine known bad files that are detected and send out a notification alert for further investigation.
It is a good idea to run such scans during periods of low resource usage so that the scan does not conflict with normal service. This will depend on the work load of your server, however scanning over night or on the weekend usually works well and most tools allow you to specify a load level threshold to pause at and continue after it drops back down.
ClamAV is a popular open source antivirus for Linux that detects viruses, trojans, malware and other malicious threats, and it works quite well. Many other tools incorporate ClamAV, such as Maldet, which is another great option. Others such as ConfigServer eXploit Scanner (CXS) also make use of ClamAV and will actively scan files as they are uploaded or modified; for instance, if an attacker manages to write a file containing known malicious code, it will be detected and quarantined within seconds.
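A basic scheduled scan with ClamAV can be sketched as below; the scanned path, quarantine directory and schedule are all illustrative.

```shell
# Update virus definitions, then recursively scan /home,
# reporting only infected files and moving them to quarantine.
freshclam
clamscan -r -i --move=/var/quarantine /home

# Example root crontab entry to run the scan nightly at 2am:
# 0 2 * * * freshclam && clamscan -r -i --move=/var/quarantine /home
```

Pair the cron job with a mail alert on non-zero exit status so that detections are investigated promptly.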
Although it is impossible to fully secure a Linux system, we can significantly reduce the number of vulnerabilities, and by extension the chance of a compromise, by being security conscious and implementing these hardening tips. There is always a trade off between security and usability; where that line is drawn in your environment is up to you.
Do you have any other security tips that you use in your Linux environment? Let me know in the comments and I’ll be happy to update the post so that we can improve upon it and have a useful and up to date community resource.