Why data masking is now key for every business

Data is the lifeblood of every business today, but organisations need to balance increasing regulation and consumer concerns around privacy, compliance and security with their desire to extract maximum value from their growing stores of information.

This is particularly true when it comes to using copies of the production database in development, which most developers prefer because proposed changes and updates can be tested thoroughly and realistically. To protect sensitive data in those database copies, some organisations provision them using a limited dataset of anonymous data.

This rarely works, however, because changes are tested against a database that is neither realistic nor of a size at which the impact on performance can be assessed.

Instead, data should be pseudonymised and masked to provide database copies that are truly representative of the original and retain its referential integrity and distribution characteristics for testing and development.

Perhaps unsurprisingly, Gartner’s 2018 Market Guide for Data Masking predicts that the percentage of companies using data masking or practices like it will increase from 15% in 2017 to 40% in 2021.

Data masking protects sensitive data by replacing it with fictitious but still realistic data.

This guards against insider and outsider threats and enables compliance with regulations such as GDPR, SOX and HIPAA, while developers can still be confident that they can work effectively and will not miss potential issues when changes are deployed.
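To make that concrete, here is a minimal sketch in plain Python – not any particular masking product – of the kind of replacement involved. The column names, salt and replacement lists are purely illustrative; the point is that values are replaced deterministically, so the same original always maps to the same fictitious value and relationships between tables remain intact.

```python
import hashlib
import random

def masked_value(original: str, kind: str, salt: str = "rotate-this-salt") -> str:
    """Deterministically derive a fictitious but realistic-looking replacement.

    Hashing the original (plus a salt) seeds the generator, so the same input
    always produces the same masked output and referential integrity is kept.
    """
    seed = hashlib.sha256((salt + original).encode()).hexdigest()
    rng = random.Random(seed)
    if kind == "name":
        return f"{rng.choice(['Alex', 'Sam', 'Jo', 'Chris'])} {rng.choice(['Smith', 'Jones', 'Khan', 'Taylor'])}"
    if kind == "email":
        return f"user{rng.randrange(10**6):06d}@example.com"
    if kind == "phone":
        return "07" + "".join(str(rng.randrange(10)) for _ in range(9))
    raise ValueError(f"no masking rule for {kind!r}")

# Hypothetical row copied from production; the column names are illustrative only.
customer = {"id": 101, "name": "Jane Doe", "email": "jane.doe@corp.com", "phone": "07123456789"}
masked = {
    "id": customer["id"],  # keys are left intact so joins still work
    "name": masked_value(customer["name"], "name"),
    "email": masked_value(customer["email"], "email"),
    "phone": masked_value(customer["phone"], "phone"),
}
print(masked)
```

A full masking tool would apply the same principle at database scale, with format-preserving rules for each data type and column.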

While big enterprises have relied on data masking for years, it is now crucial for every business working with customer data, particularly given the increased speed of development and deployments that DevOps enables.

Organisations adopting data masking need to focus on three key areas if they are to gain full advantage from the technology without risking their data or slowing down testing and development.

Look for the right data masking solution

In its recent market guide, Gartner outlined the two main families of data masking technologies – static data masking (SDM), which is applied ahead of data use, and dynamic data masking (DDM), which is performed as data is accessed.

Both have advantages in specific use cases – for example, SDM is particularly suited to development and test environments, where users need representative data that doesn’t have to be ‘real’.

DDM applies masking in real time to data in a repository. If users or applications have the correct authorisation, they can access the sensitive data unmasked; if they don’t have the necessary clearance, they receive a masked version of the information.

DDM is particularly suited to production databases, where real data is necessary for business operations, but needs to be hidden from unauthorised eyes.
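As a rough, tool-agnostic sketch of that access-time behaviour, the Python below returns real values to authorised callers and a partially masked view to everyone else. The role names, fields and masking function are assumptions for illustration, not any vendor’s API.

```python
SENSITIVE_FIELDS = {"email", "phone"}
AUTHORISED_ROLES = {"dba", "support_manager"}

def partial_mask(value: str) -> str:
    """Hide all but the last two characters, e.g. 'jane@corp.com' -> '***********om'."""
    return "*" * max(len(value) - 2, 0) + value[-2:]

def fetch_customer(row: dict, caller_role: str) -> dict:
    """Return the row unmasked for authorised roles, otherwise mask sensitive fields on the fly."""
    if caller_role in AUTHORISED_ROLES:
        return row
    return {key: partial_mask(val) if key in SENSITIVE_FIELDS else val
            for key, val in row.items()}

record = {"id": 42, "name": "Jane Doe", "email": "jane@corp.com", "phone": "07123456789"}
print(fetch_customer(record, "dba"))      # sees the real values
print(fetch_customer(record, "analyst"))  # sees a masked version of the same record
```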

Increasingly, companies require a combination of SDM and DDM. Gartner also recommends that organisations focus on vendors providing out-of-the-box rules and templates that can be customised to their particular needs.

Manage the complete data masking lifecycle

In the past, many organisations have deployed data masking on an ad hoc, manual basis. That can lead to inefficient processes that slow down testing and development, while potentially putting data at risk.

Instead, look at adopting an enterprise solution that provides tools to manage the full data masking life cycle from a central user interface. That means rules can be defined in one place, rather than at a database level, ensuring consistency across the business.

The solution needs to be able to deploy rules, schedule or trigger a masking job, and then monitor the performance of data masking operations across the organisation. Given the high cost of data breaches in terms of revenue, reputation and regulatory fines, taking an end-to-end approach is vital for businesses of all sizes.
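A minimal sketch of what that life cycle can look like when rules live in one place, assuming nothing about any particular product: a shared rule registry, a job that applies it to a batch of rows, and simple metrics a central console could collect. All names here are hypothetical.

```python
import time

# Central rule registry: defined once, consumed by every masking job in the business.
MASKING_RULES = {
    "email": lambda v: "masked@example.com",
    "card_number": lambda v: "**** **** **** " + v[-4:],
}

def run_masking_job(job_name: str, rows: list) -> dict:
    """Apply the shared rules to a batch of rows and return metrics for central monitoring."""
    started = time.time()
    for row in rows:
        for column, rule in MASKING_RULES.items():
            if column in row:
                row[column] = rule(row[column])
    return {
        "job": job_name,
        "rows_masked": len(rows),
        "duration_seconds": round(time.time() - started, 3),
    }

# A scheduler (cron job, CI pipeline, etc.) would trigger this for each database refresh.
metrics = run_masking_job(
    "nightly-staging-refresh",
    [{"id": 1, "email": "jane@corp.com", "card_number": "4111111111111111"}],
)
print(metrics)
```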

Use cloning to reduce overheads

The third piece of advice from Gartner is to take advantage of innovative data masking products at the data virtualisation or application tier. This is because of the enormous time and disk space overhead data masking can introduce, with everyone from testers to developers frequently requesting updated, masked copies of the latest production database.

Creating these copies takes time, and, given the growing size of production databases, duplicating them occupies enormous amounts of disk space within the organisation.

As an alternative, look at data masking tools that have integrated disk cloning technologies. These can create masked copies of databases in seconds, using just a few megabytes of storage even for a 1TB database, yet they work in development just like the original database.
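The toy Python class below illustrates the copy-on-write principle behind such clones: every clone shares one read-only parent image and stores only the entries it changes, which is why a clone of a very large database can occupy just megabytes. Real products work at the disk or filesystem level; the class and names here are purely illustrative.

```python
class VirtualClone:
    """Toy copy-on-write clone: shares a read-only parent image, stores only local changes."""

    def __init__(self, parent: dict):
        self._parent = parent   # the full masked image, shared by every clone
        self._changes = {}      # only what this particular clone has modified

    def read(self, key):
        # A local change wins; otherwise fall through to the shared parent.
        return self._changes.get(key, self._parent.get(key))

    def write(self, key, value):
        self._changes[key] = value   # the shared parent is never touched

    def local_size(self) -> int:
        return len(self._changes)

# One large masked image, then near-zero-cost clones for each developer or tester.
masked_image = {f"row{i}": f"value{i}" for i in range(1_000_000)}
dev_clone = VirtualClone(masked_image)
dev_clone.write("row7", "changed in development")
print(dev_clone.read("row7"), dev_clone.read("row8"), dev_clone.local_size())
```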

Summary

Software developers need realistic data if they are to ensure code is tested correctly and prevent breaking changes reaching production, while companies rightly have to protect sensitive data. Adopting an end-to-end data masking strategy using the right tools is therefore key to meeting both of these needs and deploying code faster, while keeping data safe.

Matt Hilbert is a technology writer at Red Gate