What the Heck is Zero-Trust Security?

Have you ever wondered why the state of cybersecurity is so screwed up?  Why is it so easy for bad actors and cyber-criminals to hijack systems and steal information?  Would you be surprised to learn the answer is because we designed it that way?  Computers, networks, operating systems, and software were designed to work together as easily as possible, and were inherently “trusted” by each other.  In the beginning, most systems were stand-alone affairs that worked in isolation and weren’t connected to any other computers.  What connections existed were low-speed dial-up modem connections or dedicated private line services such as ISDN BRI or T1.  Communications were sent over networks in clear text.  Computer users were “trusted” to follow the rules.  Basically, we had a happy little technology hamlet where everyone “trusted” everyone else and no one locked their doors.  When the barbarians got connected to the Internet, all hell broke loose.

By the late 20th and early 21st century, as we began to connect computers into networks and connect networks to the Internet, we were inundated with a barrage of email and network worms (Melissa, ILOVEYOU, Anna Kournikova, Lirva, Nimda, and Klez).  At Microsoft, all resources were diverted to fixing security holes in Windows XP.  Around 2003-2004, organized criminal enterprises began moving their activities from the real world to the virtual world, selling “Canadian” pharmaceuticals, other drugs, and pornography online.  They began deploying ever more sophisticated cyber-exploits.  We have really been playing catch-up ever since.

All computer communications currently rely on some form of the Trust Model.  In the beginning, trust was implicit: if you were connected, you were trusted.  Then we moved to more restrictive trust models and developed the concept of perimeter defense.  This is the moat-and-castle model that says everything outside the perimeter is untrusted until proven otherwise.  Its weakness was that everything inside the castle walls was trusted.  We created “defense-in-depth,” adding layers to the security stack.  Currently, most of us are using one of the following models:

  • Peer-to-peer – In the peer-to-peer or workgroup trust model, computers and users are basically untrusted unless they can authenticate to a resource.  “Sharing” provides access to shared resources such as file servers, printers, and Internet connections.  No one is trusted until they are identified, authenticated, and authorized.  But once authorized, the trust is granted indefinitely or permanently.  This allows an attacker with stolen credentials to log into systems remotely and use them as if they were local to the network.
  • Client-server – Also known as a Windows Domain.  As in the peer-to-peer network, users and systems are untrusted until identified, authenticated, and authorized.  But by using tools such as LDAP, Active Directory, Users, Groups, and Group Policy, along with the concept of Least Privilege, it became possible to provide more granular levels of trust (see the sketch after this list).  But again, with stolen credentials, an attacker can appear to be an authorized user and access resources on the network.  If the credential is for a Domain Admin, they can access and do anything.
  • Defined Trust – I am thinking specifically of trust models like Windows Homegroup.  Devices can connect to a Homegroup using a special network identity key, and once connected they are fully trusted by other devices already part of the Homegroup.  This trust model was developed for home users and consumers, providing the ability to share and stream content between Homegroup clients.  It has also replaced the workgroup on small business networks.  The high level of implicit trust makes this a very insecure network.  Microsoft has decided to remove the Homegroup feature in its next big Windows 10 upgrade.
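
To make the least-privilege idea above concrete, here is a minimal Python sketch.  The users, groups, and permissions are hypothetical examples, not any particular directory service’s API.  It also shows why stolen credentials defeat this model: the check only sees the credential, not who is actually holding it.

```python
# Minimal sketch of group-based, least-privilege authorization.
# All names and permissions here are hypothetical examples.

GROUP_PERMISSIONS = {
    "Domain Admins": {"read", "write", "admin"},
    "Accounting":    {"read", "write"},
    "Everyone":      {"read"},
}

USER_GROUPS = {
    "alice": {"Domain Admins", "Everyone"},   # a Domain Admin
    "bob":   {"Accounting", "Everyone"},      # an ordinary user
}

def permissions_for(user):
    """Union of the permissions granted by every group the user belongs to."""
    perms = set()
    for group in USER_GROUPS.get(user, set()):
        perms |= GROUP_PERMISSIONS.get(group, set())
    return perms

def authorize(user, action):
    # The check only sees the credential, not who presents it, so a stolen
    # credential passes just as easily as the legitimate user would.
    return action in permissions_for(user)

print(authorize("bob", "write"))    # True  - within his least-privilege role
print(authorize("bob", "admin"))    # False - not a Domain Admin
print(authorize("alice", "admin"))  # True  - a stolen admin credential gets everything
```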

I recently came across the concept of the Zero-Trust Security Model.  The more I looked into it, the more it appears to offer a way out of the cybersecurity fire-swamp.  The basic premise is: trust nothing, verify everything.  The model was originally created in 2010 by John Kindervag.  This cybersecurity framework says that nothing, whether outside or inside the organization, should ever be trusted, and anything or anyone attempting to connect to a resource must be verified before access is granted.
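
As a rough illustration of “trust nothing, verify everything,” here is a minimal Python sketch of a deny-by-default gate: every request, whether it originates inside or outside the network, must pass every check before access is granted.  The tokens, device IDs, and resources are hypothetical.

```python
# Toy deny-by-default access gate: no request is trusted because of where
# it comes from; every request must pass every check. Data is hypothetical.

VALID_TOKENS      = {"token-abc"}              # e.g. issued after MFA
ENROLLED_DEVICES  = {"laptop-042"}             # previously identified machines
ALLOWED_RESOURCES = {"alice": {"/payroll"}}    # per-user policy

def verify_identity(request):
    return request.get("token") in VALID_TOKENS

def verify_device(request):
    return request.get("device_id") in ENROLLED_DEVICES

def verify_policy(request):
    allowed = ALLOWED_RESOURCES.get(request.get("user"), set())
    return request.get("resource") in allowed

def handle(request):
    # No implicit trust for "internal" traffic: all checks must pass.
    for check in (verify_identity, verify_device, verify_policy):
        if not check(request):
            return "403 Forbidden"
    return "200 OK"

print(handle({"user": "alice", "token": "token-abc",
              "device_id": "laptop-042", "resource": "/payroll"}))  # 200 OK
print(handle({"user": "alice", "token": "token-abc",
              "device_id": "unknown-pc", "resource": "/payroll"}))  # 403 Forbidden
```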

The advent of cloud computing has compromised the old model of fortified bastion networks.  You can’t really defend a perimeter if there is no perimeter, or if you need to defend contracted services provisioned half a world away.  Cyber-attacks may start outside the network, but once inside, they may persist for months or even years.  So is that an outsider attack, or an insider attack?

The Zero-Trust Model makes insider attacks more difficult because, for a change, nothing and no one is trusted until verified.  It employs some new security concepts, such as:

  • Authentication and Authorization – Users or systems requesting access are verified before access is granted.  Verification is achieved through multi-factor authentication, geo-location, and similar technologies.  Perhaps a certain user is only granted access when working from a specific, previously secured and identified computer.  Trust, if established, lasts only for the duration of the session; it is withdrawn at the end and must be re-established for subsequent sessions (a session-scoped sketch follows this list).
  • Network Information Secrecy – This means preventing the disclosure of IP addresses, fully qualified domain names, DNS records, and machine names of network assets.
  • Application Layer Access – Users get access to the application layer of the OSI model, but not to the session, transport, data link, or network layers.
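
Below is a minimal Python sketch of the session-scoped trust described in the first bullet: verification happens up front, the resulting trust carries an expiry, and a new session must re-verify from scratch.  The verifier functions are hypothetical stand-ins for real password, MFA, and device checks.

```python
import time

# Stub verifiers standing in for real password, MFA, and device checks
# (hypothetical; a real deployment would call an identity provider).
def check_password(user, password):  return (user, password) == ("alice", "hunter2")
def check_mfa(user, code):           return code == "123456"
def device_is_enrolled(device_id):   return device_id == "laptop-042"

SESSION_LIFETIME = 15 * 60        # seconds; trust is withdrawn after this
sessions = {}                     # session_id -> expiry timestamp

def start_session(user, password, mfa_code, device_id):
    """Grant a short-lived session only if every factor checks out."""
    if not (check_password(user, password) and check_mfa(user, mfa_code)
            and device_is_enrolled(device_id)):
        return None
    session_id = f"{user}-{time.time()}"              # toy identifier
    sessions[session_id] = time.time() + SESSION_LIFETIME
    return session_id

def access(session_id):
    """Trust lasts only for the session; afterwards it must be re-established."""
    expiry = sessions.get(session_id)
    if expiry is None or time.time() > expiry:
        sessions.pop(session_id, None)                # withdraw expired trust
        return False
    return True

sid = start_session("alice", "hunter2", "123456", "laptop-042")
print(access(sid))          # True while the session is alive
print(access("forged-id"))  # False - unknown sessions are never trusted
```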

This model can be combined with other security concepts, such as “encrypt everything”: encrypting data in motion across the network, data stored on hard drives and storage arrays, and even data in use, through a newer technology known as “homomorphic encryption.”  Currently, data has to be decrypted to be used by the CPU, and this plaintext data is first stored in system RAM, where it is vulnerable to RAM-scraping exploits.  Homomorphic encryption allows the data to be manipulated while still encrypted.
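
As a toy illustration of the “compute on encrypted data” idea, textbook RSA happens to be multiplicatively homomorphic: multiplying two ciphertexts yields the encryption of the product of the plaintexts.  This is not a secure or practical scheme (real homomorphic encryption systems such as Paillier or CKKS are far more involved), and the tiny key below is purely for readability.

```python
# Toy demonstration that some ciphertexts can be computed on directly:
# textbook RSA satisfies E(a) * E(b) mod n == E(a * b).  NOT secure -
# illustration only, with a deliberately tiny key.

p, q = 61, 53
n = p * q            # 3233
e = 17               # public exponent
d = 2753             # private exponent (e * d = 1 mod lcm(p-1, q-1))

def encrypt(m): return pow(m, e, n)
def decrypt(c): return pow(c, d, n)

a, b = 7, 6
ca, cb = encrypt(a), encrypt(b)

# The "server" multiplies the ciphertexts without ever seeing a or b.
c_product = (ca * cb) % n

print(decrypt(c_product))   # 42 == a * b, computed while still encrypted
```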

This should provide a security framework that has the potential to finally protect computer users, businesses, and other organizations from the sorts of cyber-attacks we have had to deal with for the last two decades.  You can expect me to report more on this model as I continue researching this promising security framework.


About the Author:

I am a cybersecurity and IT instructor, cybersecurity analyst, pen-tester, trainer, and speaker. I am an owner of the WyzCo Group Inc. In addition to consulting on security products and services, I also conduct security audits, compliance audits, vulnerability assessments and penetration tests. I also teach Cybersecurity Awareness Training classes. I work as an information technology and cybersecurity instructor for several training and certification organizations. I have worked in corporate, military, government, and workforce development training environments. I am a frequent speaker at professional conferences such as the Minnesota Bloggers Conference, the Secure360 Security Conference in 2016, 2017, 2018, and 2019, the (ISC)2 World Congress 2016, and the ISSA International Conference 2017, and at many local community organizations, including Chambers of Commerce, SCORE, and several school districts. I have been blogging on cybersecurity since 2006 at http://wyzguyscybersecurity.com
