The security field is growing fast, and so is the vocabulary that comes with it. If you’re new to the security space, you may be wondering where to start, how to do it, what you need, why you need it, and so on. The truth of the matter is, before you do anything else you need to ensure you and your organization have a combination of these five fundamental best practices in place:

1. Asset inventory
2. Multi-factor authentication
3. Patch management
4. Decentralization
5. Network segmentation
In most cases, simply doing these things well will greatly reduce your overall risk. Here’s what you need to know:
An accurate asset inventory is the bedrock of every successful security program. Without it, the other best practices in this article would be far less effective.
Having a solid asset inventory comes down to a few simple things: knowing what assets you have, where they are on your network, what software and configurations they run, and which users and systems have access to them.
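To make those “simple things” concrete, here’s a minimal sketch of what one inventory record might look like in Python. The field names and values are illustrative, not a standard schema; real inventory tools track far more.

```python
# Minimal inventory record sketch: one entry per asset, capturing what it is,
# where it sits, what runs on it, and who/what can access it.
from dataclasses import dataclass, field

@dataclass
class Asset:
    hostname: str
    ip_address: str
    network_zone: str                                  # where it sits on the network
    installed_software: list = field(default_factory=list)
    configuration_notes: str = ""
    authorized_users: list = field(default_factory=list)
    accessing_systems: list = field(default_factory=list)

# Example entry (all values are placeholders)
laptop = Asset(
    hostname="hr-laptop-07",
    ip_address="10.0.1.57",
    network_zone="corporate-wifi",
    installed_software=["os 14.2", "edr-agent 3.1"],
    authorized_users=["j.doe"],
)
print(laptop)
```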
What counts as an “asset” from a security perspective? For starters, any kind of network-accessible electronic system, including: laptops, desktops, servers, firewalls, switches, routers, phones, printers, cloud applications, and more.
If your asset inventory has gaps, your security program will have gaps too. If you require full-disk encryption on every laptop before your IT team hands it to an employee, but neither you nor IT knows about the five new laptops HR just purchased on a corporate credit card, those laptops likely won’t get encrypted (until someone finds out about them).
Network and vulnerability scanning solutions can help you maintain your organization’s asset inventory and identify gaps in it. Using a combination of network scans and endpoint agents will provide rich, near real-time asset data for that inventory.
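As a rough illustration of the network-scan side, here’s a minimal Python sketch that probes a small subnet for a handful of common ports and writes whatever responds into a CSV inventory snapshot. The subnet, port list, and file name are placeholders, the scan is slow and serial, and a real scanner or endpoint agent will give you far richer data.

```python
# Minimal asset-discovery sketch: TCP connect scan of a small subnet,
# written out as a CSV "inventory snapshot".
import csv
import ipaddress
import socket
from datetime import datetime, timezone

SUBNET = ipaddress.ip_network("10.0.1.0/24")   # placeholder subnet
PORTS = [22, 80, 443, 3389]                    # SSH, HTTP, HTTPS, RDP
TIMEOUT = 0.5                                  # seconds per connection attempt

def open_ports(host):
    """Return the subset of PORTS that accept a TCP connection."""
    found = []
    for port in PORTS:
        try:
            with socket.create_connection((host, port), timeout=TIMEOUT):
                found.append(port)
        except OSError:
            pass  # closed, filtered, or host is down
    return found

with open("asset_inventory.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["ip", "open_ports", "last_seen"])
    for ip in SUBNET.hosts():
        ports = open_ports(str(ip))
        if ports:  # only record hosts that responded on at least one port
            writer.writerow([str(ip), " ".join(map(str, ports)),
                             datetime.now(timezone.utc).isoformat()])
```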
Any good security program starts with multi-factor authentication for accessing critical personal or business data. Forms of authentication fall into three categories: something you know (such as a password or PIN), something you have (such as a phone or hardware token), and something you are (such as a fingerprint or other biometric).
Passwords are fundamentally flawed: they can easily be stolen via phishing, password-guessing attacks, or malware. If a password is the only thing safeguarding your data, an attacker needs to jump through just one hoop to compromise your account. Requiring multiple forms of authentication makes gaining user credentials (and therefore access) much more difficult and expensive for attackers.
One important thing to note here is that requiring two forms of authentication from the same category will not suffice from a security perspective. For example, if you require users to enter a password and then answer a security question, such as “What’s your mother’s maiden name?”, that doesn’t count as two-factor authentication. Since those are both “something you know,” it’s simply single-factor authentication, twice. Requiring a password (something you know) and then a 6-digit code generated by an app on a smartphone (something you have) does count, however.
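To show what’s happening behind that 6-digit code, here’s a small sketch of time-based one-time passwords (the TOTP scheme, RFC 6238, that most authenticator apps implement), using only the Python standard library. The base32 secret below is a placeholder; in practice the secret is provisioned to the user’s app, usually via a QR code.

```python
# Sketch of the "something you have" factor: a time-based one-time password.
import base64
import hashlib
import hmac
import struct
import time

SECRET_B32 = "JBSWY3DPEHPK3PXP"  # placeholder shared secret

def totp(secret_b32, at=None, step=30, digits=6):
    """Compute the TOTP code for a given moment in time."""
    key = base64.b32decode(secret_b32)
    counter = int((at if at is not None else time.time()) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def verify(secret_b32, submitted):
    """Accept the current 30-second window plus one step either side for clock drift."""
    now = time.time()
    return any(hmac.compare_digest(totp(secret_b32, now + drift * 30), submitted)
               for drift in (-1, 0, 1))

print(totp(SECRET_B32))  # the same value an authenticator app would show right now
```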
Simply put, patch management means making sure all of your software is installed, up to date, and configured correctly. This involves obtaining, testing, and installing patches (i.e. software updates) to your organization’s systems and devices. To do this effectively, you’ll need to stay continuously aware of available patches, determine which patches are needed on which systems, oversee their installation, and test for issues afterwards. This is typically handled as a partnership between IT and DevOps teams rather than the security team.
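As a small example of the “stay aware of available patches” step, here’s a sketch that lists packages with pending updates, assuming a Debian or Ubuntu host. In practice your patch management tooling does this across the whole fleet; this just shows the kind of data it collects.

```python
# Rough sketch: list packages with pending updates on a Debian/Ubuntu host,
# so they can be fed into whatever prioritization process you use.
import subprocess

def pending_updates():
    """Return (package, candidate_version) pairs with an update available."""
    out = subprocess.run(
        ["apt", "list", "--upgradable"],
        capture_output=True, text=True, check=True,
    ).stdout
    updates = []
    for line in out.splitlines():
        if "upgradable from" not in line:
            continue  # skip the "Listing..." header and blank lines
        name_and_repo, candidate, *_ = line.split()
        updates.append((name_and_repo.split("/")[0], candidate))
    return updates

for package, version in pending_updates():
    print(f"{package}: update available -> {version}")
```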
Patch management goes hand in hand with vulnerability management, the process of determining whether you have any vulnerabilities in your IT environment. There are three elements behind effective patch management: prioritizing vulnerability remediation, evaluating compensating controls (i.e. existing security techniques or systems that lower the risk of a vulnerability), and making sure any patch you implement is installed correctly.
Here’s why these elements matter: applying a patch will sometimes break another part of the software you’re using, causing more harm than good. Understanding this inherent risk will play a large role in how you prioritize which patches to apply. And if a patch does break something and you have to remove it, having compensating controls in place will make it harder for an attacker to exploit any vulnerabilities that re-emerge. One example of a compensating control is a set of firewall rules that limits which systems can communicate with a vulnerable system that can’t easily be patched.
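Here’s a sketch of one such compensating control, assuming the vulnerable service runs on a Linux host with iptables: allow a single known application server to reach the vulnerable port and drop everything else. The address and port are hypothetical, the script needs root privileges, and an nftables policy or a network firewall would serve the same purpose.

```python
# Compensating-control sketch for an unpatched service listening on TCP 8443:
# permit only one known application server to reach it, drop all other traffic.
import subprocess

ALLOWED_SOURCE = "10.0.1.20"   # placeholder: the one system that legitimately needs access
VULNERABLE_PORT = "8443"       # placeholder: port the unpatched service listens on

RULES = [
    ["iptables", "-A", "INPUT", "-p", "tcp", "--dport", VULNERABLE_PORT,
     "-s", ALLOWED_SOURCE, "-j", "ACCEPT"],
    ["iptables", "-A", "INPUT", "-p", "tcp", "--dport", VULNERABLE_PORT,
     "-j", "DROP"],
]

for rule in RULES:
    subprocess.run(rule, check=True)  # raises if a rule fails to apply
```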
To help mitigate potential fallout, test patches on non-critical systems, or in test environments that mirror production, before rolling them out across your entire fleet.
Decentralization means keeping data spread out across your networks and cloud services, so that if one user or server in your organization’s network is compromised, the attacker doesn’t automatically gain access to company data stored everywhere else. For example, if an attacker finds a way into one office’s internal file share in a decentralized environment, they’ll likely only be able to access that office’s shared files, not everything in your cloud storage provider. In a centralized environment, by contrast, an attacker who compromises one server may find it easy to move from that server to additional company systems and data, such as email servers, financial statements, or user directories.
Decentralization provides two benefits:
1. The benefit of decentralized security responsibility, contingent on a good vendor management process: If you have a small or moderately sized security team, it can be incredibly difficult to monitor the dozens of cloud applications your company uses. Luckily, well-established cloud service providers usually invest heavily in their own security teams and programs, focused on protecting their environments in depth. Keeping a vendor’s application separate from the rest of your network lets your security team focus on your organization’s core environment, while the vendor’s security team focuses on protecting the application or service they host on your behalf.
2. The benefit of containing a breach’s impact if one specific application or user is compromised: If one vendor application is compromised in a decentralized environment, the breach’s impact is contained to that one application or vendor. This makes it more difficult (though, as recent breaches have shown, not impossible) for an attacker to reach the rest of your systems and information. The more difficult it is for an attacker to reach a central server, the more time and money they’ll need to invest in the attack, and the more likely they are to abandon it or get caught.
Network segmentation takes decentralization one step further: it’s the practice of figuring out which systems and devices on your network actually need to talk to each other, and then allowing only that communication and nothing else.
For example, consider a nurse working on a hospital laptop. In a securely segmented network, the laptop would only be able to talk to one or two other systems, such as a print server (for printing patient records) and the patient record application itself. However, in a “flat network,” i.e., a network with no segmentation between systems, this laptop could talk to every other system on the network. If an attacker compromises that laptop, they’ll be able to attack every other system on the network through completely unchecked lateral movement.
To segment your network effectively, it’s essential to inventory your most critical assets, understand where they sit on your network, and know which systems and users can access them. If an asset is accessible by more than the specific systems and users who actually need that access, remove the excess access. Access should always be granted based on the principle of least privilege to minimize a system or application’s overall attack surface. You’ll also want to ensure that nothing on the network, other than the applications that genuinely need to, can communicate directly with your database servers, which is where critical application data is typically stored.
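A quick way to spot-check that last point is to test reachability from a host that shouldn’t be able to reach the database tier. Something like the sketch below, with placeholder host and port values, run from that host, will tell you whether the segmentation is actually doing its job.

```python
# Segmentation spot-check: run from a host that should NOT be able to reach
# the database tier and confirm the connection is refused or times out.
import socket

DB_HOST = "10.0.5.10"  # placeholder database server
DB_PORT = 5432         # e.g. PostgreSQL

try:
    with socket.create_connection((DB_HOST, DB_PORT), timeout=3):
        print(f"REACHABLE: {DB_HOST}:{DB_PORT} is open from here, investigate this segmentation gap")
except OSError:
    print(f"blocked: {DB_HOST}:{DB_PORT} is not reachable from this host")
```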
Once you’ve incorporated these fundamental best practices into your environment, your security foundation is set. Not only will it be more difficult for an attacker to move around your network, but it’ll be more costly, too. The more expensive and time-intensive an attack is, the more likely the attacker will be to abandon their attempt or to get caught if they persist.