Designing an Effective IT Solution: Part 2

In the previous article, we discussed how the ISP modem, the router(s), and the switch(es) make up the primary components of a small business network infrastructure. It is through these devices that information flows in and out of desktop PCs and laptops, as well as tablets and other mobile devices. The next step in building out this infrastructure is to consider the security components needed to control which traffic is allowed or blocked on the business side of the network. In this article, we will delve more deeply into the required and optional layers of security.

When designing the security layers, one should be mindful of who needs access to the network and which assets they need permission to access. Access can then be controlled by how users log in, by verifying who is logging in, and by what they are trying to access. Tools to consider include Active Directory for domain access, single sign-on (SSO) or multi-factor authentication (MFA) for identity management, or possibly a combination of all three.

Active Directory, as the name implies, keeps a directory of users who have been identified as legitimate users of the network's assets. It can also limit which devices, files, and folders each user in that directory may access.
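
As a rough illustration only, the short Python sketch below uses the third-party ldap3 library (our choice for the example, not something prescribed here) to look up which groups a hypothetical user belongs to in Active Directory; group membership is how these access limits are typically enforced. The server, service account, and user names are placeholders.

    # A minimal sketch of querying Active Directory for a user's group
    # memberships, using the third-party "ldap3" library. The server name,
    # service account, and user name are hypothetical placeholders.
    from ldap3 import Server, Connection, NTLM

    server = Server("dc01.example.local")
    conn = Connection(server, user="EXAMPLE\\svc_reader", password="********",
                      authentication=NTLM, auto_bind=True)

    conn.search("dc=example,dc=local",
                "(sAMAccountName=jdoe)",
                attributes=["memberOf"])
    for entry in conn.entries:
        print(entry.memberOf)   # the groups that control what this user may access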

Single sign-on (SSO) is an authentication method that enables users to securely authenticate with multiple applications and websites by using just one set of credentials.

Multi-factor Authentication (MFA) is an authentication method that requires the user to provide two or more verification factors to gain access to a resource or service such as an application, online account, or a VPN. MFA’s additional verification factors decrease the likelihood of a successful cyber-attack.
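
For illustration, here is a minimal Python sketch of one common second factor: a time-based one-time password of the kind generated by authenticator apps. It assumes the third-party pyotp library, which is an assumption made for the example rather than anything prescribed above.

    # A minimal sketch of a time-based one-time password (TOTP), a common
    # MFA factor. Assumes the third-party "pyotp" library is installed.
    import pyotp

    # Shared secret, normally provisioned once via a QR code in an authenticator app.
    secret = pyotp.random_base32()
    totp = pyotp.TOTP(secret)

    code_from_user = totp.now()          # in practice, typed in by the user
    print(totp.verify(code_from_user))   # True only within the short validity window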

Once users have access to the network, it is imperative to ensure that all transmissions are clean and free of viruses and malware. This is accomplished by means of anti-virus, anti-malware, and other endpoint protection tools, which scan network traffic and files for anomalies and quarantine them when possible. Because bad actors are constantly updating their attacks, these tools must also be updated with new definitions on a regular basis.
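
As a simplified sketch of the "definitions" idea, the Python example below hashes files and compares them against a hypothetical list of known-bad signatures; real endpoint protection adds heuristics and behavioral analysis on top of this, so treat it only as an illustration. The signature value and folder name are invented.

    # A simplified sketch of signature-based scanning: hash each file and
    # compare it against known-bad signatures. The signature value and the
    # "downloads" folder are hypothetical placeholders.
    import hashlib
    from pathlib import Path

    KNOWN_BAD_SHA256 = {"0" * 64}        # placeholder for vendor-supplied definitions

    def scan(folder: str) -> None:
        for path in Path(folder).rglob("*"):
            if path.is_file():
                digest = hashlib.sha256(path.read_bytes()).hexdigest()
                if digest in KNOWN_BAD_SHA256:
                    print(f"Quarantine candidate: {path}")

    scan("downloads")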

Some of these tools can send alerts across the internet to remote monitoring locations, where the alerts can be configured to draw immediate attention to known issues. Locations that monitor 24/7 are known as security operations centers (SOCs). They are set up to take immediate mitigation steps to protect the critical assets on the network, either by isolating the affected device from the rest of the network or by taking steps to resolve the issue directly.
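
As a toy illustration of alert forwarding, the sketch below posts an alert to a hypothetical SOC endpoint using only Python's standard library; the URL, device name, and payload shape are invented for the example, and commercial tools handle this automatically.

    # A toy sketch of forwarding an alert to a monitoring service. The URL,
    # device name, and payload shape are hypothetical placeholders.
    import json
    import urllib.request

    alert = {"device": "RECEPTION-PC", "event": "malware quarantined"}
    request = urllib.request.Request(
        "https://soc.example.com/alerts",
        data=json.dumps(alert).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request)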

In a reverse fashion, dark web monitoring can be used to identify whether personal or business information is being exposed or traded on 'dark' networks. This is done by listing critical information, such as credit account numbers, and allowing the system to scan those networks for that same information. This is where data encryption becomes an important tool for protecting critical data. Data encryption translates data from plaintext (unencrypted) into ciphertext (encrypted); encrypted data can only be read by someone who holds the corresponding decryption key.
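
To make the plaintext and ciphertext idea concrete, here is a minimal sketch using the third-party cryptography library for Python (an assumption for the example; any vetted encryption tool serves the same purpose). The sample plaintext is hypothetical.

    # A minimal sketch of symmetric data encryption using the third-party
    # "cryptography" library. The sample plaintext is hypothetical, and the
    # key itself must be stored securely.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()                    # without this key, decryption fails
    cipher = Fernet(key)

    ciphertext = cipher.encrypt(b"Client trust account 12345")
    plaintext = cipher.decrypt(ciphertext)
    print(plaintext.decode())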

The most critical function to consider above all else is backing up sensitive data and applications. Identifying critical assets is the most important step in designing a backup and recovery plan. What data must your business have in order to operate? How long can you afford to be without it? How much of the data is required to operate? How long will it take to restore that required data? What technology is available to expedite the process? These are the questions that should be answered to help design your disaster recovery response plan.
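
As a bare-bones sketch of the idea, assuming the critical folders have already been identified (the folder names below are hypothetical), the following Python example archives them into dated files. A real plan would also copy the archives off-site and automate the schedule.

    # A bare-bones sketch of a scheduled backup: archive each critical folder
    # into a dated file. Folder names are hypothetical placeholders.
    import shutil
    from datetime import date
    from pathlib import Path

    CRITICAL_FOLDERS = ["case_files", "billing"]
    DESTINATION = Path("backups")
    DESTINATION.mkdir(exist_ok=True)

    for folder in CRITICAL_FOLDERS:
        archive_name = DESTINATION / f"{folder}-{date.today().isoformat()}"
        shutil.make_archive(str(archive_name), "zip", folder)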

Additionally, the ability to validate the restoration process is just as critical as backing up the data. Nothing could be worse than thinking you have good backups but finding out they are corrupted when you need them most. It is always a good idea to plan a restoration test a couple of times a year to ensure the process is sound and that the data is good when you need it.
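
One simple way to sanity-check a restoration test, sketched below in Python with hypothetical folder names, is to compare checksums of the original data and the restored copy.

    # A simple restore check: compare checksums of the original data and the
    # restored copy. Folder names are hypothetical placeholders.
    import hashlib
    from pathlib import Path

    def checksums(folder: str) -> dict:
        return {
            path.relative_to(folder): hashlib.sha256(path.read_bytes()).hexdigest()
            for path in Path(folder).rglob("*") if path.is_file()
        }

    if checksums("case_files") == checksums("restored/case_files"):
        print("Restore verified: contents match.")
    else:
        print("Mismatch found: do not rely on this backup until it is investigated.")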

Larry McKinley

Larry McKinley earned a BSEE degree from Texas Tech University, where he began his career in industrial automation and controls, working primarily with agricultural, manufacturing, and petrochemical customers. After industrial automation, Larry became a process control engineer, working in the Pulp & Paper industry. While in the industry, he worked as a Process Information Engineer, IT Project Manager, IT Business Relationship Manager, and Business Application Support Manager. Larry also earned certification as a Six Sigma Black Belt, working for several years in Manufacturing Optimization. Interests outside of work include music, cooking, gardening, and sports.
