For the past few years I was puzzled by the concept of Zero Trust (ZT). I thought it was this big, nebulous thing that I just could not wrap my mind around. Every time I asked a vendor partner for a definition I received a different one, and each one just led me to more questions. I finally sat down with a group of trusted practitioners and vendor partners and forced a meeting that lasted as long as it took for me to understand what is meant by ZT. Here is what I came up with. Hopefully you will find it somewhat useful if you are just now starting your look into ZT for your organization; and if you are not, you should be.
ZT is not a single architecture but a set of guiding principles for workflow, system design, and operations that can be used to improve security posture (NIST SP 800-207). ZT follows three core principles: assume breach, verify explicitly, and least privilege. Core parts of ZT include privileged access management, placing cybersecurity controls at the perimeter and on the endpoints, and building out your defense-in-depth architecture. ZT means having items such as the following in place:
Mobile Device Security
Privileged Access Management
Clean Admin Workstations
Tools to protect or secure legacy infrastructure and apps
Going passwordless (something that should be strongly considered)
IPS and AV at the Perimeter level and server host level
Management of special accounts through their life cycle
Removing admin rights from the users that don't need them; removing unnecessary services, ports, protocols, or applications (principle of least privilege)
Just in Time (JIT) access – providing elevated privileges only when required and then removing them
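The JIT item above can be illustrated as a bounded-lifetime grant that expires on its own. This is a minimal in-memory sketch, not any particular PAM product's API; the names (`JitGrants`, `grant`, `is_elevated`) and the TTL mechanics are illustrative assumptions.

```python
import time
from dataclasses import dataclass, field

@dataclass
class JitGrants:
    """Illustrative sketch of Just-in-Time elevation: grants carry an expiry."""
    _grants: dict = field(default_factory=dict)  # user -> expiry timestamp

    def grant(self, user: str, ttl_seconds: int) -> None:
        # Elevate the user for a bounded window only.
        self._grants[user] = time.time() + ttl_seconds

    def is_elevated(self, user: str) -> bool:
        # Auto-revoke once the window closes; no standing privilege remains.
        expiry = self._grants.get(user)
        if expiry is None or time.time() >= expiry:
            self._grants.pop(user, None)
            return False
        return True

grants = JitGrants()
grants.grant("alice", ttl_seconds=1)
print(grants.is_elevated("alice"))  # True while the window is open
time.sleep(1.1)
print(grants.is_elevated("alice"))  # False after expiry; privilege is gone
```

The key design point is that elevation is never a permanent state to be remembered and cleaned up later; it simply stops working when the window closes.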
In reading through this list, most of you will find that you have already put most or all of these items in place, which means you are well on your way to completing, or have pretty much completed, your ZT journey.
Other items of possible interest:
Start communications with regional and country counsel as early as possible, and keep them continuously informed of the ZT journey and how and why the environment is changing.
When first arriving at an organization that has not invested in a major cybersecurity program, looking at the sheer number of computer vulnerabilities in the environment can be overwhelming; in many cases there can be upwards of one million vulnerabilities that need to be remediated. My advice is to keep a couple of concepts in mind: kaizen, or continuous improvement over time, and the old saying that you eat an elephant one bite at a time. As a practitioner who has faced this task many times over the past 30 years, I can tell you that it is not as daunting to complete, or to get alignment and buy-in from your Information Technology (IT) colleagues to complete, as it first appears. The National Vulnerability Database (NVD) is the US government repository of standards-based vulnerability data, maintained by the National Institute of Standards and Technology (NIST). It enables automation of vulnerability management, security measurement, and standards compliance (Wang 2009). The NVD contains a list of vulnerabilities and exposures (CVEs), each with an associated risk score produced by the Common Vulnerability Scoring System (CVSS). Most commonly used vulnerability scanning tools and services use this scoring system to assign a risk number or indicator to the vulnerabilities they find; the higher the score, the greater the risk and, in general, the more urgent the remediation. I recall arriving at one company in an operational IT security role, and during my first morning one of the cybersecurity team members came by with a cart. In the cart was an extremely large stack of paper, about 10,000 pages to be exact.
The guy looked at me, said good morning, and then started to leave without taking the cart. I asked him why he was leaving the cart and he said, "Oh, those are all the vulnerabilities you need to tell IT to fix." I did take a quick look; there were about 30 vulnerabilities per page, of all criticalities. I took the cart to the loading dock, found a dumpster, and dumped the pages into it. I then scheduled a meeting with the cybersecurity vulnerability team to talk about a process for handling vulnerabilities that would actually work. The process we aligned on looked something like this, and it is one that has proven effective across multiple companies. As the ultimate goal is to reduce overall risk, we started by agreeing that the vulnerabilities listed in our current tool as Catastrophic would be the most important to fix first, as remediating them would reduce the highest level of risk in the shortest amount of time. As a bonus, many of the patches that resolved those risks overlapped in resolving some of the lower-level risks as well. At first there was always a staggering number of Catastrophic risks to resolve, though not as staggering as a 10,000-page hardcopy report. We then agreed that patches would be deployed in order of the number of systems they hit: a patch needed across all the Windows servers, for example, would come before one that impacted only one or two systems running an older version of the server operating system. Once the Catastrophic vulnerabilities were eliminated, we agreed that we would move to Critical, then High, then Medium, and eventually Low.
As our tool provided an executive summary of each vulnerability and a recommended patching solution, along with a breadth of details that were interesting but not relevant to the IT patching team, we further aligned that the electronic report the vulnerability remediation team received would show: system name, IP address, MAC address, OS level, vulnerability, and the recommended patch, along with status and comment columns. Lastly, we aligned on the number of patches that IT was willing to deploy during their weekly or monthly maintenance window. Following this approach, in every case that I participated in, the number of Catastrophic vulnerabilities across the infrastructure was reduced to near zero within 12 months. Some of you may question whether that is good, or whether it took too long, but keep in mind that in every case these same patches had, in general, been left unattended for years because the sheer volume of patching IT was being asked to do was simply overwhelming. Some readers are most likely wondering why IT would not just patch systems regularly and why systems would be left in vulnerable states. Yes, automatic patching should be strongly encouraged and used across the majority of organizational systems, but the simple truth is that patching can and does break IT systems; not as often in modern days as in the past, but I can recall reading recently about this OS patch or that application patch being pulled back because it caused a major system failure. IT's job is to keep the organization up and running, so they are hesitant to complete broad-based patching for every issue on a weekly basis. By aligning on a risk-based approach that provides a manageable and workable solution for the IT patching team and support staff, we are far more likely to get the cooperation needed to drive vulnerabilities downward and reduce overall risk.
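The ordering logic we aligned on, fix the highest severity first, and within a severity level deploy the patches that hit the most systems, capped at what IT will take in one maintenance window, can be sketched as follows. The field names, severity labels, and window size are illustrative assumptions, not output from any particular scanning tool.

```python
# Hedged sketch of the risk-based patch-ordering process described above.
SEVERITY_RANK = {"catastrophic": 0, "critical": 1, "high": 2, "medium": 3, "low": 4}

def prioritize(findings, window_capacity):
    """Order findings by severity, then by breadth of impact (most systems
    first), and return only what fits in one maintenance window."""
    ordered = sorted(
        findings,
        key=lambda f: (SEVERITY_RANK[f["severity"]], -len(f["systems"])),
    )
    return ordered[:window_capacity]

findings = [
    {"patch": "KB-A", "severity": "critical",     "systems": ["srv1", "srv2"]},
    {"patch": "KB-B", "severity": "catastrophic", "systems": ["srv1"]},
    {"patch": "KB-C", "severity": "catastrophic", "systems": ["srv1", "srv2", "srv3"]},
]

batch = prioritize(findings, window_capacity=2)
print([f["patch"] for f in batch])  # ['KB-C', 'KB-B']
```

The broad Catastrophic patch (KB-C) leads the batch, the narrow Catastrophic one follows, and the Critical finding waits for a later window, which mirrors the agreement that reduced our Catastrophic count to near zero within a year.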
IT leaders in the organization are very security conscious and really do want to be great security stewards, but they are also driven to keep the business systems up and running and therefore making money for the organization. To simply dictate an unreasonable course of action and then throw our hands up in disgust when they don't engage and do what we ask is not being a trusted and helpful cybersecurity partner to IT or to the business. If we want to reduce risk (Vildal 2012), we need to work towards the win-win solution that Stephen Covey described in The Seven Habits of Highly Effective People (Covey 1989), and then aggressively drive it forward while monitoring and adjusting the results as needed. By driving this one-team, one-goal approach to reducing risk through the reduction of cybersecurity vulnerabilities, the chances of a successful outcome are maximized. Do this work right and you end up with a metric that looks like the one below, which is a great story to share with your executive leadership team.
Covey, S. (1989). The Seven Habits of Highly Effective People.
Vildal, M. (2012). "A systems thinking approach for project vulnerability management." Kybernetes, Emerald.
Wang (2009). OVM: An Ontology for Vulnerability Management. Marietta, GA: Southern Polytechnic State University.
The bad guys are now starting to target mid-tier companies with their hacking activities. In many cases these companies have not yet realized that having a Certified Chief Information Security Officer (C|CISO) or a virtual Certified CISO (vC|CISO) needs to be accepted as a part of doing business. The recent Presidential report on how cyber attacks impact the economy of the U.S.A. makes it clear that the cost of having a security expert on staff or on retainer is well worth it if a smaller company wants to remain in business. Cyber crime is putting many of these smaller companies out of business simply because they cannot recover from the post-breach losses. At the same time, these companies may not have the dollars to pay for a full-time C|CISO, so their alternative is to explore the vC|CISO concept. The vC|CISO model can be leveraged to effectively manage security risk for small to mid-tier companies.
If a company is going to explore hiring a vC|CISO, it needs to do its due diligence and ask about the certifications (such as the one in the graphic below) and experience of the vC|CISO who will be giving it advice. While there are a lot of good vC|CISO options available, there are just as many vendors and security professionals touting themselves as vCISOs who lack the background or experience to be of much help. I lack quantifiable data on this point, but I would say you should expect to pay between $300 and $400 per hour for a vC|CISO. The good news is you can choose what percentage of their time you want them to work for you (10%, 20%, on retainer, etc.) and then pay only for that portion of time.
Successful ransomware attacks are at an all-time high; we are losing the cyberwar; cyber criminals are making more money than ever before and it is only going to get worse; a cyber attack could be as damaging as a nuclear war. The headlines abound with comments such as these. Yet there are still a lot of postings about how information security should not be the office of No. Really? Information security is a security function. As an information security professional, are you really going to say, "Yes, sure, go ahead and port that un-scanned software code into our production environment"? I would hope you are going to say no, you cannot port that un-scanned software code into our production environment. No is not a bad word; it is one the job requires us to say. If you are offended when someone says you are the office of No, don't be. For a security professional, no is a good response. I train my team members to say "yes, but" in order to soften the perceived impression of the word no (here's a clue: "yes, but" is still a no, no matter how you spin it).
However, I have yet to sit in a business meeting and see any of my peers simply say no without being willing to engage in dialogue that would lead to a good and secure solution. So for those who don't like to say no, do it this way: "Yes, you can put that server with two catastrophic exploitable vulnerabilities into the environment, after we help you remediate the vulnerabilities or put adequate additional controls in place that will allow us to use the server in a safe and secure manner." For me, information security professionals can and should say no, but after doing so we need to be helpful and smart enough to engage in conversations that help the business figure out how to do what they want to do in a way that is cost effective, safe, and secure.
In many ways, given the number and types of successful attacks we are experiencing across the U.S. infrastructure, being the office of Yes is a far scarier response than saying No to items that put your business at risk.
One item that really bugs me is hearing IT and cybersecurity professionals espouse that the perimeter is dead and that cybersecurity professionals should stop focusing on tools that protect the non-existent perimeter. I was at lunch with a fellow CISO a few months back and he had invited his CIO to join us. The CIO had recently attended a seminar where they talked about the perimeter no longer existing, and he was truly wondering if he could just get rid of his firewalls. It was a fun conversation, but also a bit scary to me that the conversation had to take place at all. The reality is that while the perimeter has changed, we still host most of our systems and data in one or more data centers, and whether those data centers are on premise or in the cloud, they still have a perimeter that needs to be protected. Bad actors, both external and internal, need to be kept out of areas they have no business being in. To do that requires a strong perimeter consisting of next-generation firewalls such as the ones Palo Alto Networks or Cisco provide. These first-line perimeter defense tools should alert into your security alerting tool (such as QRadar or Splunk) and should also be running intrusion prevention and WildFire-type technologies.
The perimeter exists; heck, in most organizations an argument could be made that multiple perimeters exist. Let's quit saying there is no perimeter, because the people we are tasked with protecting don't need to be walking around thinking they can get rid of their perimeter protection tools.
I have been lucky enough to spend most of my Cyber Security career doing startup operations for large companies. I thrive on the energy and passion that teams get when they are given the opportunity and support to design and implement security protections for their company.
One of the things I am often asked is why I focus some of my first efforts on locking down the end user systems before locking down the servers and databases. This is a great question and one that can spark many hours of debate. Please don't send me a lot of comments telling me that databases and servers are where the information is; of course I know that.
For me, and remember my work has mostly been in very large global enterprises with a mix of blue-collar and professional staff, it is a matter of evaluating risk. In large companies it is often hard to know where all the servers are and who owns them, but it is a pretty safe bet that the person running a server is a techie who has been running and protecting these types of systems for years, and who therefore knows far more about how to protect a system and how not to fall for a bad actor's attack than most end users. I need to consider whether 50,000 or 100,000 attack vectors (i.e., end user systems), all with email accounts, usually with admin access to their machines, and eager to open attachments and PDFs, pose more of a risk than 4,000 to 5,000 attack vectors (i.e., servers) that typically don't surf the web or get email. My choice in the first 12 to 18 months is to say that the end user systems pose more overall risk than the servers. This leads to aggressively putting in controls and protections for those systems first. Please don't take this out of context: of course in parallel I drive initiatives to patch server vulnerabilities, get servers logging into a security alerting system, set up a SOC, and so on, but given choices with limited resources and time, I choose to deploy endpoint encryption, good AV and HIPS, removal of admin rights, software such as Tanium, and two-factor authentication for email on the end user systems first.
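The comparison above is a back-of-the-envelope risk calculation: population size times a per-asset exposure weight. The weights below are purely illustrative assumptions (end user systems read email, browse the web, and often run with admin rights; servers usually do none of those), not measured values.

```python
# Illustrative sketch of the endpoint-vs-server risk comparison.
def aggregate_risk(count: int, exposure_weight: float) -> float:
    """Crude aggregate risk: how many assets, times how exposed each one is."""
    return count * exposure_weight

# Assumed exposure weights: endpoints face phishing, drive-by downloads,
# and malicious attachments; servers typically do not browse or read mail.
endpoint_risk = aggregate_risk(count=50_000, exposure_weight=0.8)
server_risk = aggregate_risk(count=5_000, exposure_weight=0.3)

print(endpoint_risk > server_risk)  # True under these assumptions
```

Even if you argue the weights, the population gap alone (tens of thousands of endpoints versus a few thousand servers) drives the conclusion that the end user fleet carries more aggregate exposure in the early months.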
Our job is to enable the business to do neat stuff such as this in a secure manner: our approach is to help them design their solution in a secure way and make recommendations that allow them to continue. We need to be smart enough to help them while at the same time getting them to put reasonable controls in place. – Kevin L. McLaughlin
One common item that information security professionals working in Critical Manufacturing environments have to deal with is legacy systems. In Critical Manufacturing environments it is very common for the systems that run and control factory lines to remain in place for a very long time. Some of these systems can be running operating systems (OS) that are 10 to 15 years out of date. In many cases these operating systems are no longer vendor supported and cannot be patched to remediate known exploitable vulnerabilities. These older systems are often used to run production lines, and they still do a great job at what they were purchased to do. It is difficult, rightly so, to convince the leadership team at a factory to spend money to replace something that is old but still doing its job. Replacing these outdated systems with new ones costs money and reduces potential profits, and because they continue to do the job they were purchased to do, justifying the new spend can be difficult.
From an information security point of view these systems pose a large risk to the overall manufacturing environment and, if hacked, could cause a large-scale production outage. In smaller companies this type of major cyber attack can result in no longer being able to conduct business and permanently closing the doors. Legacy systems commonly found on the shop floor are often 3, 4, or even 10 years out of date when it comes to standard Information Technology patching. Information security professionals look at these systems as attack vectors, while the people working in the Critical Manufacturing environment view them as cost-effective workhorses that are getting the job done. While cyber attacks on networks at Sony, Target, Home Depot, and the US Government get all the press, the greatest cyber vulnerability is in manufacturing. "By raw numbers, and by the numerous manners of attacks, manufacturing is the most targeted area now, even compared to financial services," Chet Namboodri, senior director of Global Private Sector Industries at Cisco, told Design News. "Financial services gets more press, but industrial networks get more attacks." Attacks and warnings such as Stuxnet, Aramco, SolarWorld, and U.S. Steel, along with U.S. regulators and security experts issuing an official warning that hackers could now access critical medical equipment, including pacemakers and insulin pumps, with potentially deadly results, make the threat to Critical Manufacturing a real one. Determining how to lock down and protect the legacy systems, while at the same time allowing them to continue doing the work they have been doing, is a major part of an information security professional's job.
"You can't scan that system (or you can't put AV on that system) because it is old and fragile, and if you bring it down we will not be able to produce our product" is far too common a phrase in the Critical Manufacturing environment. In many cases information security professionals are asked or told to please just leave the systems alone: do not run vulnerability scans, do not put antivirus on them, do not put a light firewall on them, do not patch them, do not put updates on them, and so on. This type of thinking by Information Technology and factory leadership teams is shortsighted and puts their entire production capability at huge risk of catastrophic failure. Because these legacy systems are outdated and no longer supported by the vendor, they are hugely exploitable by any blackhat or hacker who wants to take advantage of them. The reality is that the risk is real and the risk is great, and from past events we know that these systems pose easy-to-use attack vectors for blackhats, fraudsters, and competitors seeking to cause negative business impact to the company.
Information security professionals working in Critical Manufacturing should take the approach shown in Table 1 for dealing with the computer systems residing in the factory environment and on the plant floor. By following this methodology the legacy systems will be protected while continuing to do the job they are good at and were purchased to do. In most cases this approach will also reduce the overall risk these systems pose to an acceptable level.
This approach, when combined with the network segmentation and smart firewall approach discussed in my previous blog on Critical Manufacturing, is the start of a successful recipe for securing a Critical Manufacturing environment.