Four critical cloud DLP requirements from the field: What’s easy and what’s hard

Based in the Pacific Northwest, where I manage colleagues across the western United States, I serve some of the most demanding customers in the retail, financial, high-technology, and manufacturing industries. Many of these companies are at the forefront of cloud adoption, yet they also have a lot to lose when it comes to sensitive data across a variety of data types.

Three out of four of Netskope’s customers have adopted cloud DLP, and my customers are among its most sophisticated users. Here are four observations about what’s easy and what’s hard to do, and why I believe Netskope customers are best positioned to address their sensitive data concerns.

Separate signal from noise

What’s easy: It’s easy to cast a wide net, catching anything that “could be” personally identifiable information (PII), payment card information (PCI), protected health information (PHI), and so on. But when your false positives dwarf your actual detections, DLP becomes useless.

What’s hard: It’s hard to filter down to what’s important using context (user, group, device, location, app or category, activity, and so on), separating signal from noise to dramatically reduce false positives and increase detection accuracy.
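
To make this concrete, here’s a minimal sketch (in Python) of contextual filtering. The field names, app names, and rules are my own illustrative assumptions rather than Netskope’s actual policy engine; the point is that the same raw detection is treated very differently depending on app, device, and activity.

```python
# Minimal sketch: using context to decide whether a raw DLP hit becomes an alert.
# The field names, app names, and rules below are illustrative assumptions, not
# any vendor's actual schema or policy engine.

RISKY_ACTIVITIES = {"upload", "share", "post"}
SANCTIONED_APPS = {"corporate-box", "corporate-o365"}

def is_actionable(detection: dict) -> bool:
    """Keep a detection only when its context suggests real exposure."""
    # A match inside a sanctioned app on a managed device is usually noise.
    if detection["app"] in SANCTIONED_APPS and detection["device_managed"]:
        return False
    # Data leaving the organization via a risky activity is signal.
    if detection["activity"] in RISKY_ACTIVITIES and not detection["recipient_internal"]:
        return True
    # Unmanaged devices raise the risk regardless of destination.
    if not detection["device_managed"]:
        return True
    return False

raw_hits = [
    {"app": "corporate-box", "device_managed": True,
     "activity": "download", "recipient_internal": True},
    {"app": "personal-webmail", "device_managed": False,
     "activity": "upload", "recipient_internal": False},
]
kept = [h for h in raw_hits if is_actionable(h)]
print(f"{len(kept)} of {len(raw_hits)} raw detections kept")  # -> 1 of 2
```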

Find data when it’s hidden

What’s easy: It’s easy to detect sensitive data when it’s in simple files such as documents, spreadsheets, and text files.

What’s hard: It’s not so easy to find sensitive data when it’s housed in obscure file formats or in files that have been compressed or zipped. Whereas the prior point helps my customers decrease their false positives, this capability has enabled them to markedly decrease their false negatives.
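
Here’s a minimal sketch of what “not stopping at the container” looks like: recursing into ZIP archives, including zips nested inside zips. The SSN-style regex is a deliberately crude placeholder for real content inspection, and none of this reflects any vendor’s actual implementation.

```python
# Minimal sketch: scanning inside ZIP archives instead of stopping at the container.
# The SSN-style regex is a deliberately simple placeholder, not production detection.
import io
import re
import zipfile

PII_PATTERN = re.compile(rb"\b\d{3}-\d{2}-\d{4}\b")  # e.g. 123-45-6789

def scan_bytes(name: str, data: bytes, findings: list) -> None:
    """Scan raw bytes; if they are themselves a ZIP, recurse into each member."""
    if zipfile.is_zipfile(io.BytesIO(data)):
        with zipfile.ZipFile(io.BytesIO(data)) as zf:
            for member in zf.namelist():
                scan_bytes(f"{name}/{member}", zf.read(member), findings)
    elif PII_PATTERN.search(data):
        findings.append(name)

def scan_file(path: str) -> list:
    with open(path, "rb") as fh:
        findings = []
        scan_bytes(path, fh.read(), findings)
        return findings

# Usage: scan_file("quarterly_export.zip") returns the paths of any nested
# members (including zips inside zips) that contain the placeholder pattern.
```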

See portions of files

What’s easy: It’s easy to look for “exact” copies of sensitive files (such as via a file hash match).

What’s hard: Things get tricky when you need to pick up “portions” of sensitive files. Being able to do this well has enabled my customers to find sensitive content even when it’s embedded somewhere else.
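
To illustrate the difference, here’s a minimal sketch contrasting a whole-file hash match with chunk-level fingerprinting, one common way partial matching can work. The chunk size and 10% overlap threshold are illustrative assumptions, not Netskope’s actual algorithm.

```python
# Minimal sketch: exact-file matching versus partial (fingerprint) matching.
# The chunking scheme and 10% threshold are illustrative assumptions.
import hashlib

CHUNK = 64          # bytes per fingerprint chunk (illustrative)
THRESHOLD = 0.10    # flag when at least 10% of the protected file's chunks reappear

def exact_match(protected: bytes, observed: bytes) -> bool:
    """Whole-file hash: defeated by changing a single byte or copying an excerpt."""
    return hashlib.sha256(protected).digest() == hashlib.sha256(observed).digest()

def chunk_fingerprints(data: bytes) -> set:
    """Hash fixed-size chunks of the protected file."""
    return {hashlib.sha256(data[i:i + CHUNK]).hexdigest()
            for i in range(0, len(data), CHUNK)}

def window_fingerprints(data: bytes) -> set:
    """Hash every CHUNK-sized window of the observed content, so an embedded
    excerpt is recognized no matter where it lands (slow, but fine for a sketch)."""
    return {hashlib.sha256(data[i:i + CHUNK]).hexdigest()
            for i in range(max(len(data) - CHUNK, 0) + 1)}

def partial_match(protected: bytes, observed: bytes) -> bool:
    source = chunk_fingerprints(protected)
    overlap = len(source & window_fingerprints(observed)) / max(len(source), 1)
    return overlap >= THRESHOLD

# A document that quotes a few hundred bytes of the protected file will fail
# exact_match() but can still trip partial_match().
```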

Be thorough in what you look for

What’s easy: It’s easy to look for specific “words” or “characters” in a file with a simple regular expression. But this is useless if you don’t have an army of people to comb through the thousands (or millions) of detections, and it also means you’ll miss whole swaths of important data, such as source code.

What’s hard: It’s hard to detect more sophisticated data, such as source code structures, when they’re in a file. But if you’re a high-technology company or have any in-house development, this is a critical area that you don’t want to miss. This concept also applies to other important content such as CAD/CAM files and proprietary product or process diagrams.
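
As a rough illustration, here’s a minimal sketch contrasting a naive keyword check with a structural heuristic for spotting source code. The token list and threshold are illustrative assumptions, not a production classifier or Netskope’s detection logic.

```python
# Minimal sketch: a naive keyword/regex check versus a rough structural heuristic
# for spotting source code. Token list and threshold are illustrative only.
import re

# Naive approach: flag any file containing the word "confidential".
NAIVE = re.compile(r"confidential", re.IGNORECASE)

# Structural tokens that rarely co-occur at high density outside of source code.
CODE_TOKENS = re.compile(
    r"^\s*(def |class |import |#include|public |private |return |if\s*\()",
    re.MULTILINE,
)

def looks_like_source_code(text: str, min_hits_per_100_lines: int = 10) -> bool:
    """Count code-like line starts and compare their density to a threshold."""
    lines = text.count("\n") + 1
    hits = len(CODE_TOKENS.findall(text))
    return hits * 100 / lines >= min_hits_per_100_lines

def classify(text: str) -> list:
    labels = []
    if NAIVE.search(text):
        labels.append("keyword-match")          # noisy: catches every NDA footer
    if looks_like_source_code(text):
        labels.append("probable-source-code")   # catches what keywords alone miss
    return labels
```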

Doing DLP in the cloud right will help you separate the signal from the noise, detect sensitive content no matter where it is, and be thorough with all of your sensitive data requirements.


About the Author

Tony Luque serves as Netskope’s director of systems engineering covering the west region of the United States. Luque has consulted on information security and privacy for 15 years in the healthcare, financial services, and retail industries. He earned his bachelor of business administration from the Michael G. Foster School of Business at the University of Washington and holds CISSP, CISM, and CCSK certifications.
