When the technology is out there to enable you to know more about your customer than your competitors, how do you harness the tides of the Big Data ocean for competitive advantage?
Over the past twenty years, the leading edge in IT has moved on from hardware and software to the data they process. As Big Data sets have grown exponentially in size, they have until recently outstripped organisations' ability to harness their power – the software, knowledge-discovery tools and best practices needed to get the most from the data. That is changing quickly as the enterprise gets to grips with its data assets and, looking over its shoulder at the competition, with where it needs to get to.
Fuelling these developments, particularly in the financial services sector, is deep regulatory change that is requiring the rewriting of the processes underpinning large areas of business. As MiFID II extends the equity trading regime of MiFID I to most other financial asset classes, trading practices (and the data they generate) will in some cases change beyond recognition – somewhat counterintuitively, making it easier to carry out large Big Data projects.
The legal analysis and the legal team each have central roles in Big Data projects. In this Part II of our two-part blog on the legal aspects of Big Data, we’ll be applying the data legal analysis outlined in Part I to the legal team’s role in Big Data projects.
Diagram 2: The Big Data Engine - Input, Processing & Output
Please click here to view the diagram.
Organisations get their data from many sources (see diagram). Most of this data will be subject to a wide variety of legal rights and obligations of some kind – the kinds of IP, contractual and regulatory rights and obligations outlined in Part I. Breach of these duties (even if inadvertent) can give rise to extensive damages and other remedies (IP rights and contract) and fines and other sanctions (breach of regulatory duty): legally compliant use of data across the organisation is a key driver for Big Data projects.
The input data is then processed, using a mix of self-developed 'secret sauce' algorithms and software and third party applications. The output from these operations is then sent to where it is used – internally, to different parts of the business like R&D, product development, sales and marketing and to management, and externally, to customers, partners and for further analysis.
If the legal model outlined in Part I provides the analytical framework for Big Data, then it is these three steps – input, processing and output – that provide the structure for the role of the legal group in Big Data management projects.
Here, the objective is a structured approach to legally compliant data use across the organisation in a technically enhanced way that allows the business to gain maximum advantage from its data assets. The structure is based on four work streams – risk assessment, strategy statement, policy statement, and process and procedures.
Risk assessment. The first work stream in a Big Data project is the risk assessment as to how the business is currently using its data along the normal lines of review > assess > report > remediate:
- reviewing where all data comes from and the terms under which it is supplied and how it’s being used;
- assessing where the business is acting outside the scope of any licences, or in breach of IP or regulatory duties, for any of that data;
- reporting to senior management; and
- putting right any areas of non-compliance.
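To make the review > assess > report > remediate cycle concrete, the sketch below shows one way a data-source inventory might drive the assessment step. It is illustrative only – the record fields (`licence_scope`, `actual_use`, `regulated`) and source names are hypothetical, not drawn from any particular organisation's inventory.

```python
from dataclasses import dataclass

# Hypothetical sketch of a data-source inventory record. Field names
# are illustrative assumptions, not a standard schema.
@dataclass
class DataSource:
    name: str
    licence_scope: set   # uses permitted under the supply terms
    actual_use: set      # uses actually observed in the business
    regulated: bool      # whether regulatory duties attach (e.g. personal data)

def assess(sources):
    """Flag sources whose actual use exceeds the licensed scope."""
    findings = []
    for s in sources:
        out_of_scope = s.actual_use - s.licence_scope
        if out_of_scope:
            findings.append((s.name, sorted(out_of_scope)))
    return findings

# Review: catalogue where data comes from and how it is being used.
inventory = [
    DataSource("market-feed", {"internal-analytics"},
               {"internal-analytics", "resale"}, True),
    DataSource("crm-export", {"marketing", "sales"}, {"marketing"}, True),
]

# Assess and report: findings go to senior management for remediation.
report = assess(inventory)
print(report)  # [('market-feed', ['resale'])]
```

The same inventory can then feed the reporting and remediation steps: each finding identifies a source and the specific out-of-scope uses to be put right or re-licensed.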
The key roles here are for the legal team and the CIO’s (Chief Information Officer) team. Many organisations are taking as the template for the Big Data risk assessment the work they have already done in the data protection area.
Strategy statement. In parallel with the risk assessment, and again in line with other legal, IP and regulatory policy work, the second work stream is around Big Data strategy. The strategy statement is a high level articulation of the organisation’s rationale, goals and governance for Big Data prepared by an inclusive group consisting of senior management, the legal team, the CIO’s team and all stakeholders.
Policy statement. The strategy group will generally name a steering group who will be responsible for the third work stream, preparation of the Big Data policy statement, basically a project plan setting out scope, responsibilities, dependencies, deliverables, timelines and the tools that the project will be using.
Processes and procedures. The policy statement will drill down to the level of the fourth work stream, the detailed processes and procedures around project methodology and the data modelling to be used. Here, frameworks like TOGAF (The Open Group Architecture Framework) have developed comprehensive data models, with tags potentially available for any type of data the organisation uses or may want to use. It is at this level of data modelling in Big Data projects where the rubber hits the road, as comprehensively modelling input, processing and output dataflows, and then tagging that data, requires significant organisation and work.
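As a minimal sketch of what such tagging might look like in practice, the example below attaches compliance metadata to a dataset record so that downstream processing and output steps can check permitted use. The tag vocabulary here (`licence`, `ip_owner`, `regulatory`, `permitted_outputs`) is a hypothetical illustration, not drawn from the TOGAF content metamodel or any other standard.

```python
# Illustrative only: tag a dataset record with legal/compliance metadata
# so output steps can be checked against permitted destinations.

def tag_dataset(record, **tags):
    """Return a copy of the record with compliance tags attached."""
    return {**record, "_tags": tags}

def may_output(record, destination):
    """Check an output destination against the record's permitted-output tags."""
    return destination in record.get("_tags", {}).get("permitted_outputs", ())

trade_data = tag_dataset(
    {"dataset": "eu-equity-trades", "rows": 1_000_000},
    licence="internal-use-only",
    ip_owner="Exchange X",                      # hypothetical supplier
    regulatory=("MiFID II",),
    permitted_outputs=("r_and_d", "management"),
)

print(may_output(trade_data, "r_and_d"))    # True
print(may_output(trade_data, "customers"))  # False
```

Even a simple scheme like this makes the compliance question machine-checkable at the output stage, which is the point of doing the dataflow modelling and tagging work up front.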
The processes and procedures will also include awareness training – the key ‘do’s’ and ‘don’ts’ of compliant data usage.
As gaining unique competitive insight from Big Data becomes an increasingly important strategic goal of large businesses, the effort and resources applied to Big Data projects will grow significantly over the next few years. A sound analytical legal model for understanding the rights and duties that arise in relation to data in order to manage risk, and the development of a structured approach to legally compliant and IT-enhanced data input, processing and output, will be essential for successful Big Data projects.