What’s the Big Deal About Privacy?


How Artificial Intelligence Is Making It Critical to Control Transactions of Data

By Adam Bialek, Henry Croydon, and Jonathon Croydon

As technology expands rapidly into every field of business, manufacturers and service providers are presented with previously unconsidered opportunities to extract value from the reuse and repurposing of data originally collected for other reasons. Learned intelligence, built up inside artificial intelligence (AI) systems, gives the processor value that has not previously been realized or recognized in transactions. This is particularly true of AI companies that work with insurers to optimize claims processing and are left with a valuable resource after the data collection is complete. This article addresses how the value of a trained neural network has been overlooked and why an insurer should weigh it when considering outsourcing its claims processing.

The Unrealized Value of Learned Intelligence

Over the past decade, data security and privacy have become common subjects of discussion among the business community, civil liberties organizations, and even sovereign nations. Massive data breaches have exposed personal information, government hacks have resulted in the release of sensitive information, and website hacks have exposed photos of naked celebrities and revealed the use of discreet dating sites for extramarital activities. Facebook’s founder was called to testify before Congress on privacy. The European Union and California, among others, have enacted sweeping legislation governing individuals’ rights to privacy and the use of their data. Yet the business community still strives for the nirvana of efficiency: using data for targeted tasks. The insurance industry, long regulated and monitored on privacy matters, has recognized the value of data and seeks to capitalize on it. As AI increasingly drives the processing of such data, the industry must weigh several factors when entering into transactions that rely on that data.

For more than 400 years, people have followed Sir Francis Bacon’s philosophy of scientia potentia est, a Latin aphorism meaning, “knowledge is power.” Now more than ever, companies rely on analytics to inform their processes, management, and strategy to increase their power in the market. The advancement of AI over the past 60 years is having a significant effect on our everyday lives. When we open a Facebook newsfeed, perform a Google search, or get a recommendation from a prominent website, AI is lurking in the background. Consumers who use robotic vacuums or mapping apps to assist in securing the best route are relying on AI. Amazon, a staple in the lives of most Americans, has grown its business in large part based on AI. So what’s the fuss? AI seems to be improving our lives and is poised to offer better health care, better financial management, and a better social environment.

The challenge arises when AI is combined with concerns over data privacy and data ownership. At bottom, we are talking about the value of data. Data is the “new” oil. The concept of data as a commodity, comparable in value and utility to oil, is generally credited to Clive Humby, the British mathematician behind Tesco’s Clubcard loyalty program and My Kroger Plus. Humby noted that while data is inherently valuable, in its raw state it needs processing, just as oil needs refining, before it reaches its maximum potential. Like oil, which has powered industrial growth, data powers transformative technologies. Data has advantages of its own, too: it can be transported easily and reused at very low cost. And unlike oil, which is consumed as it is used, data can become more useful the more it is used. In many ways, then, data is superior to and more valuable than oil.

That said, until recently the exploitation of data was tangible and measurable. Company data is generated through processes, siloed by department, and held in a database or in filing cabinets. This data generally is easy to find, audit, and control, especially when it is used and managed via contracts and outsourcing arrangements. The data can be picked up, reused, and moved to a new process or outsourced again. For example, the process of customers buying and renewing insurance via a website can, with some planning, be moved into an application for a better or different customer experience. While the substance of data has not changed, the recognition that data is interrelated is driving new uses of analytics, from equities trading to professional sports. Data has a primary purpose, and its interrelationship with other data has a separate utility.

When a motorist has an automobile accident, the motorist’s insurer captures data about the type of vehicle, the damage, the location of the accident, the driver, the passenger, and the circumstances, to name a few of the data points collected. All of this data is used to process the property damage claim and the potential claim for personal injuries and medical benefits. But this data also can be used for other purposes when it is subjected to analytics and artificial intelligence.

Theoretically, when a motorist has an automobile accident, data can be aggregated to form certain conclusions when the data is related to other data. For example, the type of vehicle and the damage caused to it can be used to draw certain conclusions about the safety of vehicles involved in crashes generally, the cost of property damage, and even the likelihood of being in an “accident.” The location of the accident can help identify dangerous areas and road conditions. The identity of drivers and passengers and the injuries sustained can be used to determine whether vehicles are properly outfitted with safety mechanisms. And the list goes on. The point is, the data that is collected is not useful just to the specific accident, but it has numerous other purposes when interrelated with other data.

The information that is learned from these analytics can be useful to the collector and analyst. It has value and is the “oil.” Who owns the information and its utility, however, is up for grabs.

Computer Vision and AI: Who Owns the AI Neural Network?

In today’s business environment, companies commonly outsource tasks, preferring to focus on what they do best and to hand other work to companies that specialize in it and perform it well, often at a cost savings. So, what is changing with the introduction of AI into these processes? Why is AI so different from the technology developed up to now (web and mobile)? The fundamental change is the value derived from using the AI technology. AI is capable of changing businesses and improving processes dramatically, just as the steam engine and the assembly line did for manufacturing. The ability to scale up without huge human costs is where business sees potential value. This unrealized efficiency and “power of information” is becoming commoditized and recognized as a real asset.

Businesses need to be aware that they could be under threat if they outsource incorrectly and lose control over the data and processes they outsource. Today, it still is possible for these companies to retain control because the commoditization of data through AI is at a nascent stage. With mainstream understanding of data and AI growing rapidly, waiting one, two, or three years to address control over such data and its commoditization could result in a huge setback. It could be too late. Businesses might never reclaim control from the very companies they thought of as friends and as part of their support network.

It is a simple concept that when you have one item and someone gives you a second item, you have two items. If a person receives water and salt, that person can make salt water. If, however, the salt is taken away, while the person no longer has the ingredients to make salt water, the person still knows how to make salt water. Similarly, when using an AI system, if a neural network receives data, and the machine learns information from its analysis, even if the data is removed, the machine still has the benefit of the learned intelligence.
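The salt-water analogy can be made concrete with a toy model. Below is a minimal sketch, with entirely illustrative data and field names, in which a tiny perceptron is trained on a few labeled examples; the training data is then deleted, yet the model still classifies new inputs, because the learned intelligence lives in the weights, not in the data.

```python
# Illustrative only: a minimal perceptron trained on toy "claims" data.
# After the training data is deleted, the learned weights (the "salt water
# recipe") remain, so the model can still classify new inputs.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights from (features, label) pairs; labels are 0 or 1."""
    n = len(samples[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for features, label in samples:
            pred = 1 if sum(w * x for w, x in zip(weights, features)) + bias > 0 else 0
            error = label - pred
            weights = [w + lr * error * x for w, x in zip(weights, features)]
            bias += lr * error
    return weights, bias

def predict(weights, bias, features):
    """Apply the learned weights to a new, unseen input."""
    return 1 if sum(w * x for w, x in zip(weights, features)) + bias > 0 else 0

# Toy dataset: [scaled damage estimate, scaled vehicle age] -> 1 if total loss
training_data = [
    ([0.9, 0.8], 1), ([0.8, 0.9], 1),
    ([0.1, 0.2], 0), ([0.2, 0.1], 0),
]
weights, bias = train_perceptron(training_data)

del training_data  # the "salt" is taken away ...

# ... yet the learned intelligence persists in the weights:
print(predict(weights, bias, [0.85, 0.9]))  # prints 1 (severe claim)
print(predict(weights, bias, [0.15, 0.1]))  # prints 0 (minor claim)
```

Deleting or returning the raw records, as a service contract might require, leaves the weights untouched; that is the contractual gap the article goes on to describe.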

The ability to move data is changing and the ability to move and tailor a company’s processes easily to how it wants things done, or how its customers want things done, could be under threat from new outsourcing arrangements.

The Value of Learned Intelligence in the Insurance Industry

Let’s look at the journey of a typical insurance company over the past few years and how it has dealt with updating and changing the claims process, particularly the use, value, and management of data within that process. While every company has its own process, this analysis is based on a generic approach and understanding.

Until around 60 years ago, insurance companies typically handled policy issuance and placement as well as all claims. Then, companies such as Crawford and Co. and Gallagher Bassett began to take over the management of the claims process. These third-party administrators (TPAs) brought process and scale to the settlement of claims. This was coupled with outsourcing to India, the Philippines, and other areas where costs are lower. The concept of outsourcing has grown substantially since the 1980s.

Fortunly, a company “committed to demystifying financial procedures, interpreting terminology and reducing complex transactions into simple steps,” reports the following: almost 54 percent of all companies use third-party support teams to connect with customers, and more than 93 percent of organizations either have adopted or are considering cloud services to improve outsourcing. And while data security is a top concern for 68 percent of outsourcing companies that are considering moving to cloud technology, more than “44 percent of chief intelligence officers say that they are more likely to use outsourcing suppliers than they were just five years ago.” [1]

With respect to claims handling, on the whole, outsourcing to a TPA or offshore still allowed strong control over process. From an insurance company point of view, it still owned and could control the data, and it could make the process its own and leverage the benefits of scale and cost. The insurer could fly to India, or visit the TPA, and point at the team processing its claims. The data still was file based (in a database) and managed through teams within the TPA or outsourcing organization.

The TPA controlled the personnel and the core process, but the data could be picked up and moved to another TPA or even brought back in-house or outsourced again. Claims are being outsourced to cheaper locations or companies routinely. The insurance company at this point still has control over the data and the process, largely because it is manual, or at best, computer based, via some level of process automation. The ability of the insurance company to move the process or influence the process is still intact.

The main value that a TPA offers insurers lies in cost, scale, and the management of the people and teams processing the claims, along with the claim-settlement skills those teams bring. There is a separation between the value of the TPA and the value of the insurance company.

The incorporation of AI and machine learning into the claims process gives rise to a new breed of digital TPA: the AI company. On its face, the value proposition looks the same as before, but with further potential for scale and greater cost reductions. The new company could drastically change the way an insurer looks at risk. However, there is a catch that may not have been seen until now. While the TPA-AI company may be performing a service, it is gaining an asset never realized before and perhaps not accounted for when the relationship with the insurance company began. Insurance companies should recognize the shift in value when their data is processed by an AI-powered TPA: the value of the knowledge learned from the data transfers to the TPA, and the insurer could be losing a valuable asset.

To become competent at a specific task, a TPA’s AI technology needs to be trained on an insurer’s claim data. This could be the process of taking in data in the form of photos, locations, and words, to train the AI, building knowledge into the neural network. The neural network is populated from the insurance company “data.” This training allows the AI technology to work and become successful. The more data, the better the AI will be at doing the specific job. This is highly scalable and very valuable technology. Similar to the adage, “Give a man a fish, and you feed him for a day; teach a man to fish, and you feed him for a lifetime,” the value in learned information could ultimately dwarf the value of the individual data.
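As a rough illustration of the training pipeline just described, the sketch below shows how claim records (damage figures, locations, adjuster notes) might be flattened into numeric feature vectors paired with settled outcomes, the raw material that populates a neural network. Every field name, scaling factor, and category here is a hypothetical assumption, not any insurer's actual schema.

```python
# Hypothetical sketch: turning claim records into neural-network training
# examples. Field names, scaling constants, and categories are illustrative.

def claim_to_features(claim):
    """Flatten one claim record into numbers a neural network can train on."""
    return [
        claim["damage_estimate"] / 10_000,               # scaled repair estimate
        claim["vehicle_age"] / 20,                       # scaled vehicle age
        1.0 if claim["location_type"] == "highway" else 0.0,
        1.0 if "airbag deployed" in claim["adjuster_notes"] else 0.0,
    ]

def build_training_set(claims):
    """Pair each claim's features with its settled outcome (the label)."""
    return [(claim_to_features(c), c["total_loss"]) for c in claims]

claims = [
    {"damage_estimate": 9_000, "vehicle_age": 12, "location_type": "highway",
     "adjuster_notes": "airbag deployed, frame bent", "total_loss": 1},
    {"damage_estimate": 1_500, "vehicle_age": 3, "location_type": "parking lot",
     "adjuster_notes": "rear bumper scrape", "total_loss": 0},
]

for features, label in build_training_set(claims):
    print(features, "->", label)
# [0.9, 0.6, 1.0, 1.0] -> 1
# [0.15, 0.15, 0.0, 0.0] -> 0
```

The pool of such vectors, accumulated across thousands of an insurer's claims, is what trains the network; this is why more data makes the TPA's model better, and why the resulting weights embody the insurer's history.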

The shift from building a process with a TPA (old school) to populating a neural network with an AI-powered TPA is subtle but very significant. Building a project with a TPA to train an AI on the promise of cheaper claims processing is fine and works well, as long as the insurance company understands the value transfer as well. That transfer should be simple and very clear: the insurer gets an automated process, and the AI company gets to build AI software that can scale on the insurer’s data, provided this is clearly agreed in the contract. But does the insurer know that while it is benefitting from the AI, it is building a potentially powerful brain for the outsourced operation, one that could make it easier for others to compete with the insurer?

If the TPA uses a common neural network, then it is building this “brain” on the back of the insurance company data, and this learned intelligence can never be returned to the insurance company without the TPA disposing of its hard work and its neural network. When a TPA processes claims for more than one insurer or for self-insured entities, the knowledge and the process of one insurance company or self-insured entity is used to the benefit of the other companies. While the images and other data that were input and stored by the TPA can be returned or destroyed (which is often required in a service contract), the data that populated and trained the neural network leaves learned intelligence with the TPA that cannot easily be removed. If this learned intelligence is not returned or destroyed, then the processed information can be resold through the offer of services by the TPA to other insurers and self-insureds. When TPAs can boast that they have intelligence greater than any single insurer, perhaps it changes the leverage from a TPA merely offering an outsourced opportunity to an insurer to a TPA offering an insurer more insight and expertise using “industry” information that was created using the initial insurer’s processed information.

The existence of a common neural network also lowers the barrier to entry for new insurers, which can simply engage an AI-powered TPA to process claims without providing any initial data. A new entrant no longer needs the skills required to build a large claims operation to inform its underwriting and claims resolution. This ease of entry can allow well-capitalized financial and technology companies to become new players in the insurance market, challenging established companies that have spent decades investing in their own processes.

Negotiate with Understanding

Many insurers may decide that they do not want to invest resources or skilled staff to build an automated AI claims process and therefore will need to outsource this function. An AI company without insurers’ large datasets and learned intelligence will struggle to provide the claims automation that an insurer needs and wants; as such, the AI company and the insurer will need to work together. An insurer’s use of an AI-powered TPA arrangement may be a perfectly acceptable approach. But the parties to this transaction should know whether the TPA is using a common neural network, whether the learned intelligence can or should be deleted, and whether the insurer is entitled to the residual value of the learned intelligence gained by the TPA as a result of the data processing. The parties must know the right questions to ask, reach a joint understanding of the value of the learned intelligence, and make each party’s intentions transparent and contractual.

Regardless of whether an insurer uses a common neural network or a segregated neural network, the benefits of AI in the claims-handling process and in underwriting may become even more critical in the years ahead. Rather than waiting for an arrangement to end, an insurer and its AI-powered TPA should be proactive and recognize these issues and address them up front.

About the Authors

Adam Bialek is a partner of Wilson Elser in New York City, where he co-chairs the firm’s intellectual property practice and is a member of the firm’s information governance leadership committee. His nationwide team of highly qualified attorneys offers clients a full range of intellectual property and cyber and media legal services. Mr. Bialek is chair of the DRI Intellectual Property Litigation Committee’s Trademarks Specialized Litigation Group.

Henry Croydon is the CEO of TonkaBI, a software development company that specializes in artificial intelligence and computer vision with a focus on the insurance and automotive industries.

Jonathon Croydon is TonkaBI’s product director.


