Today, every time a customer wants to access a new financial service, sometimes even within the same company, they need to physically produce the same set of documentation and answer many of the same questions: passport, driver’s license, proof of address, years living at that address, transaction history, proof of insurance, and so on. In a world where more and more services are provided digitally, reintroducing analogue steps that the customer must execute makes little sense.
These requirements exist for good reason, but the net result is an annoying and confusing experience, as well as a high cost of service provision: $18 billion for anti-money laundering processes alone, according to Goldman, and hundreds of billions of dollars more for data-centric financial analytics. Enabling a new kind of trusted identity data sharing is the key for banks to improve the customer experience while lowering costs and improving operational metrics.
These processes have not caught up with the digital age for several reasons. From a technical perspective, the problem is that:
(1) identity data is currently stored in multiple databases across financial institutions,
(2) data does not translate easily across databases, and
(3) security and privacy concerns, which manifest as regulations, need to be addressed “somehow” in technical solutions.
From a non-technical, organizational perspective, it is also difficult and costly to create a bilateral legal framework for a problem as generalized as data sharing.
Theoretically, within a single financial institution (FI), the solution should be easy: create a central data warehouse or, to suit today’s data structures and analytics, a data lake. The beauty of this solution is that it makes data the central focus and means that any process applied to the data could update all of it at once, lowering the cost of reconciliations and other similar check/cross-check exercises that are carried out today.

Think about how Wikipedia works. In the old world, encyclopedias were the end product of a single oracle (a trusted publisher) that was pushed out to users. The data was in a static state that was the truth as determined by a process controlled by a single entity, and any subsequent enrichment by a downstream user was only accessible to that user’s own audience. In the new world, we have Wikipedia, a product that is developed by many oracles. The data is in a constant state of flux, and the truth is contributed to and enriched by many users. The result is an always up-to-date set of information for all users to access and utilize that is incredibly cheap to maintain vis-à-vis an old-school bound book.
Although some banks have been somewhat successful at creating data lakes, it is not a straightforward exercise. First, there is the execution problem of fixing the jet plane while it is in flight: the underlying data in each database keeps morphing in structure as new processes or requirements are bolted on. Second, and more importantly, regulators want data separated and housed in locations where they can guarantee privacy and confidentiality for their citizens. This data separation problem is amplified across the financial ecosystem: sharing between separate financial institutions requires competitive concerns to be weighed alongside regulatory ones.
Blockchain seemed to be the technical breakthrough that would allow this architecture of data as the central focus to be replicated across many separate firms. And with this architecture comes the potential for massive savings through the elimination of unnecessary processes. Imagining the cost savings across multiple financial institutions is exactly why bankers have become so animated about blockchain technology (and also why non-bankers get frustrated that bankers don’t seem to understand what blockchain is and how it differs from other types of databases: bankers don’t care about blockchain purism, they care about cost savings!). However, the existing manifestations of blockchain don’t quite work, for reasons that have been discussed at length elsewhere, such as privacy, scalability, and so on.
Ohalo has approached this problem differently. We assumed that, for the short to medium term, the data needs to sit exactly where it is now, due to privacy, regulation, or other reasons, in whatever existing format and structure it is currently held. If this is the case, but another entity wants to use that data, how can this other entity (whether internal or external) know who has the data, access it with the appropriate permissions (including the permission of the final end customer), and know if the source data changes over time? The solution is to leverage existing permissioned blockchains to centrally store hashes of either the data itself or combinations of the data, the data fields, and the formats that each entity uses to store the data.
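As a minimal sketch of what such a fingerprint could look like (the function name, format label, and canonical JSON serialization below are illustrative assumptions, not Ohalo’s actual scheme):

```python
import hashlib
import json

def record_fingerprint(record: dict, format_label: str) -> str:
    """Hash a record together with its field names and storage format.

    Serializing with sorted keys makes the digest independent of field
    ordering, so two holders of the same data in the same format
    derive the same fingerprint without revealing the data itself.
    """
    canonical = json.dumps(
        {"format": format_label, "fields": sorted(record), "data": record},
        sort_keys=True,
        separators=(",", ":"),
    )
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

passport = {"passport_no": "X1234567", "name": "A. Customer"}
print(record_fingerprint(passport, "iso-passport-v1"))
```

Note that a production scheme would almost certainly salt these hashes, since plain digests of low-entropy fields such as passport numbers are vulnerable to dictionary attacks.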
Storing hashes solves the privacy problem and avoids issues around ownership and scalability. On top of the chain sit Ohalo Apps (similar to the Microsoft concept of “Cryptlets”), which provide the on-chain/off-chain interface. If any of the source data changes, the relevant hashes on the permissioned blockchain are updated, and entities relying on that data know immediately that action is required: checking the new data, pausing their services, or something else. The methodology that Ohalo developed works both across firms and, in the simpler case, within a single firm. The advantage of using something like this inside a single firm is that it solves the problem where regulation or other concerns require the data to be physically separate and/or private, while also providing the optionality to participate in a system across firms as needs and data regulations evolve.
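A toy illustration of that change-detection loop, with an in-memory dictionary standing in for the on-chain hash store (all class and method names here are hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class HashRegistry:
    """Stand-in for the on-chain store: record id -> latest digest."""
    entries: dict[str, str] = field(default_factory=dict)

    def publish(self, record_id: str, digest: str) -> None:
        self.entries[record_id] = digest

@dataclass
class RelyingParty:
    """An entity that consumed data earlier and watches for changes."""
    seen: dict[str, str] = field(default_factory=dict)

    def remember(self, record_id: str, digest: str) -> None:
        self.seen[record_id] = digest

    def stale_records(self, registry: HashRegistry) -> list[str]:
        """Records whose on-chain hash no longer matches what we saw."""
        return [rid for rid, digest in self.seen.items()
                if registry.entries.get(rid) != digest]

registry = HashRegistry()
registry.publish("cust-42/passport", "abc123")

party = RelyingParty()
party.remember("cust-42/passport", "abc123")

registry.publish("cust-42/passport", "def456")  # source data changed
print(party.stale_records(registry))  # ['cust-42/passport']
# The relying party can now re-verify the data, pause services, etc.
```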
In addition to providing a method of sharing data from separate databases, the Ohalo ecosystem would also enable a new model for consortia in the digital world. Through the use of contract Cryptlets (smart contracts), firms could bilaterally or multilaterally draw up agreements governing their data sharing that ensure the data is shared only in the specified manner. These agreements could be valid for long or short periods of time and could be extended as agreements are reached with other firms. In effect, consortia can be formed without everyone having to agree on everything in lockstep.
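A sketch of what such an agreement might encode, with hypothetical field names; a real contract Cryptlet would enforce this logic on-chain rather than in application code:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class SharingAgreement:
    """Illustrative on-chain data-sharing agreement between firms."""
    parties: frozenset[str]
    permitted_fields: frozenset[str]
    valid_from: date
    valid_until: date

    def permits(self, requester: str, fields: set[str], on: date) -> bool:
        """True if this requester may read these fields on this date."""
        return (requester in self.parties
                and fields <= self.permitted_fields
                and self.valid_from <= on <= self.valid_until)

# A bilateral agreement; a third firm can join later via a separate
# agreement without renegotiating this one, so the consortium grows
# piecewise rather than in lockstep.
agreement = SharingAgreement(
    parties=frozenset({"BankA", "BankB"}),
    permitted_fields=frozenset({"proof_of_address", "passport_hash"}),
    valid_from=date(2017, 1, 1),
    valid_until=date(2017, 12, 31),
)
print(agreement.permits("BankB", {"passport_hash"}, date(2017, 6, 1)))  # True
```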
Finally, Ohalo enables a potential new revenue source for FIs. Ohalo technology will give banks the ability to store valuable data for their customers and, at the request of the end customer, provide that data (for a fee) to other entities that need it. This is a win-win-win situation: the customer wins because they get a better experience and no longer waste time manually assembling and presenting the same data repeatedly; the bank wins because it can monetize a cost it has already incurred; and the data-receiving firm wins by avoiding the costs of collecting and processing the relevant data.
- Author: Rhomaios Ram