Security Support Done Right

In my previous life/position, I was in charge of all things Technology, including Desktop Support. One of the challenges of good desktop support, beyond the obvious (great customer service!), was keeping the machines patched, clean, and in good working order. In the last 10 years this was made a little easier by better imaging software and better malware support from antivirus vendors, but two changes did the most to tamp down the constant churn. The first was the Web Filter. Web filtering is necessary to keep folks on the straight and narrow and to intercept the bad stuff people are exposed to, either intentionally or by chance. You don’t want to be Big Brother, but you wouldn’t believe the stuff this technology catches.

The second strong idea was VDI, or Virtual Desktops. This was made possible by the VM revolution and the idea of accessing your applications and data everywhere! Smartphones and corporate Google Drive/Dropbox solutions drove home the idea that where you kept or accessed your stuff, and on what device, became trivial: it was everywhere! VDI isn’t for everyone, but it has a lot of value from support, security, and usability standpoints. The advances that have been made in performance really make it hard to even tell you’re on a VM.

I’ve been out of that game for almost 2 years now, but sometimes I see an article or point of view that aligns with my thinking, and I wanted to share this one:

Any security tech worth their salt will tell you the same thing: the network needs to be protected from the users themselves. They are the primary way bad things enter the environment. To that end, you need to do several things.

1. Segment off the entire gamut of user PCs and apply the same access restriction methodology you do to the Internet feed. Use a white list approach. Yes, they can reach more services internally. No, they cannot obtain administrative access. The user in front of the PC has no bearing on the PC’s access.
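The default-deny logic behind step 1’s whitelist approach can be sketched in a few lines. The subnets and services below are hypothetical (a real deployment would express this in firewall policy rather than application code), but the decision rule is the same:

```python
import ipaddress

# Hypothetical whitelist for the user-PC segment: (network, port) pairs.
# These addresses and services are illustrative, not from the original post.
ALLOWED = [
    (ipaddress.ip_network("10.20.0.0/24"), 443),   # internal web apps
    (ipaddress.ip_network("10.20.0.53/32"), 53),   # internal DNS
]

def is_allowed(dst_ip, dst_port):
    """Default deny: a connection passes only if it matches a whitelist entry."""
    addr = ipaddress.ip_address(dst_ip)
    return any(addr in net and dst_port == port for net, port in ALLOWED)
```

Anything not explicitly listed is dropped, which is exactly how the Internet feed is already treated.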

2. Remove the ability to administer anything directly. Create a set of ‘jump’ or ‘hop’ boxes which employ some form of two-factor authentication, from which all administrative functions originate. And this includes everything from networking gear to application administration. No PC should be able to obtain any form of administrative access to anything, anywhere.
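The jump-box rule in step 2 boils down to a two-condition gate: the session must originate from a jump box, and it must have cleared a second authentication factor. The subnet here is made up for illustration; the point is that neither condition alone, and certainly not the user’s identity alone, grants access:

```python
import ipaddress

# Hypothetical jump-box subnet; every administrative session must originate here.
JUMP_SUBNET = ipaddress.ip_network("10.99.99.0/28")

def admin_session_permitted(source_ip, passed_two_factor):
    # Both conditions are required: right origin AND a completed 2FA challenge.
    return passed_two_factor and ipaddress.ip_address(source_ip) in JUMP_SUBNET
```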

3. Use end node segmentation. Every server and network device must have a separate, non-routable management interface. The primary IP address, the one with the configured default gateway, is the one used to provide services. The management interface has a disjoint IP address, as in it can’t be derived from the schema used to create the primary addresses. It has no routing capability, as in it can’t communicate outside of its configured subnet. The Hop-box through which it is managed is housed on the same subnet. Hop-boxes provide the service of ‘management’ to the environment and employ the same addressing and routing scheme. In this way remote, or off-site administration is accomplished through normal routing to the hop-box, not to the device’s management interface.
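Step 3’s “disjoint, non-routable” requirement is easy to sanity-check with Python’s ipaddress module. The ranges below are assumptions for illustration: the management block must not overlap (or be derivable from) the primary schema, and a compliant management interface has no default gateway at all:

```python
import ipaddress

PRIMARY_SCHEMA = ipaddress.ip_network("10.0.0.0/8")      # service addresses (assumed)
MGMT_SUBNET = ipaddress.ip_network("172.31.200.0/24")    # management addresses (assumed)

# Disjoint: the management range cannot overlap the primary addressing schema.
assert not MGMT_SUBNET.overlaps(PRIMARY_SCHEMA)

def mgmt_interface_ok(mgmt_ip, default_gateway):
    """A compliant management interface sits in the management subnet and has
    NO configured gateway, so it cannot communicate outside its own subnet."""
    return default_gateway is None and ipaddress.ip_address(mgmt_ip) in MGMT_SUBNET
```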

4. Management applications use a VDI methodology housed on the hop box. This includes even SSH clients to the networking devices. They only display on the PC; they don’t run in its memory space. As a best practice, all of your applications similarly run as VDI services for the same reason. The end PC becomes much closer to a ‘terminal’ or portal to the applications, and its memory space and CPU are used only to draw on the screen and communicate with the VDI service. There is a financial advantage as well to loading software only onto VDI servers instead of a set of desktops. This also aids in writing the firewall rules for user PCs, as the only services they need are Internet access and the VDI protocol itself. This is a thin-client kind of design without using actual thin-client hardware.
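The firewall simplification at the end of step 4 might look like this. The VDI server subnet and the port number (4172, a PCoIP-style display protocol) are assumptions for illustration; the structure is what matters: web ports go anywhere, the VDI protocol goes only to the VDI farm, and everything else is denied:

```python
import ipaddress

VDI_HOSTS = ipaddress.ip_network("10.50.0.0/24")  # assumed VDI server subnet
WEB_PORTS = {53, 80, 443}                         # DNS + web for Internet access
VDI_PORT = 4172                                   # PCoIP-style port; an assumption

def pc_egress_allowed(dst_ip, dst_port):
    """User PCs need exactly two things: Internet access and the VDI protocol."""
    addr = ipaddress.ip_address(dst_ip)
    if dst_port in WEB_PORTS:
        return True  # Internet access
    return dst_port == VDI_PORT and addr in VDI_HOSTS  # VDI traffic to the farm only
```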

5. Eliminate the use of local storage. This includes thumb drives but is really focused on documents. For the most part, laptop hard drives are not part of any backup process, and at some point some middle manager will complain about a key spreadsheet they lost because the only copy was on their laptop hard drive that just went belly up. Avoid that. Put everything onto a file server which has access controls and a backup schedule. If you need transfer capabilities, use any number of secured file transfer methodologies. Yes, you will require a network connection to access your files. No, this isn’t really a problem anymore, and why would you be updating your business-critical spreadsheet held on a thumb drive you can lose?

Among other things, this alleviates the need for draconian Internet filtering policies. Let the users browse Facebook or even dark web sites. The PCs are treated as the security cesspool they are, and they cannot achieve a secure stance no matter what is running on them.

Another thing this eliminates is the need to control local admin rights on the PCs. Let anyone load whatever software they like. Heck, let the web link load malware. It won’t accomplish anything. You can keylog all you want; it won’t get you any access.

The final advantage this has is more operational in nature. Given that there is nothing critical contained on the PC, then any PC will do. If one goes belly up or is compromised by malware, then simply replace it with another from spares and the user continues on their way. Mean Time To Resolution becomes the time it takes to dispatch a replacement, and the failed/corrupted device can be examined offline and without impact to the user.

I copied it here because things have a way of disappearing as people come and go.
