In my last article, I discussed a new era of bots. That technology has greatly advanced automation, with more to come. Yet bots represent just one segment of a wider umbrella of technology: artificial intelligence, or AI.

The term AI elicits different visions and has been a core subject of science fiction virtually since the genre began. To some it conjures up images of the end of the world (think The Terminator); to others, fears that it will replace humans in the workplace (or just replace us); and in some cases, it's simply regarded as mysterious.

AI can trigger a number of improbable and unrealistic beliefs, but realistically, the technology's true purpose is to improve the efficiency, speed, and accuracy of tasks, to the benefit of all.

What is artificial intelligence?

Simply put, AI is ‘the use of machines to simulate human behavior in order to accelerate the accurate execution of tasks and actions’.

This holds true especially in the realm of computer technology, where AI is an entire discipline devoted to the principle of non-human autonomous decision-making. Many of these decisions involve the execution of tasks — complex or simple, often highly repetitive, but with the key goal being to remove human error from the mix.

Fundamentally, the key tenets of AI include:

  • Evaluation: The ability to inspect incoming information and assign a relative value to that data.
  • Reasoning: The ability to perform analytical combinations leading to decisions based on supporting inputs.
  • Decision-Making: The choice of executing a specific action out of multiple options.
  • Correction: The ability to identify and resolve situations that have resulted from erroneous actions or decisions.
  • Learning: The ability to catalog and retain the various results of actions to be further utilized as guidance for the next similar objective.
  • Retrospectivity: The ability at a macro level to evaluate a history of actions and learnings to improve the next set of occurrences.
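
To make these tenets less abstract, here is a minimal sketch in Python of how they might fit together in a single decision loop. Every name and the toy scoring rule are assumptions for illustration, not any particular product or library:

```python
# Hypothetical sketch of the tenets above as one decision loop.

class Agent:
    def __init__(self):
        self.history = []  # retained outcomes, used for learning and retrospectivity

    def evaluate(self, options):
        """Evaluation: assign a relative value to each piece of incoming data."""
        return {name: len(data) for name, data in options.items()}  # toy scoring

    def decide(self, scores):
        """Reasoning + decision-making: choose one action out of multiple options."""
        return max(scores, key=scores.get)

    def correct(self, action, outcome):
        """Correction: detect an erroneous result and substitute a safe action."""
        return "fallback" if outcome == "error" else action

    def learn(self, action, outcome):
        """Learning: catalog the result as guidance for the next similar objective."""
        self.history.append((action, outcome))

    def retrospect(self):
        """Retrospectivity: review the whole history to improve future runs."""
        return {a for a, o in self.history if o == "error"}  # actions to avoid
```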

While the potential applications of artificial intelligence are numerous, virtually all share these characteristics. AI, however, can be further classified by focusing on the level of machine intelligence.

Simpler implementations apply a set of discrete rules to a task or set of tasks and, when confronted with ambiguity or a lack of data, often engage a human partner to cover the gap.

Rule-based systems have been part of information technology in virtually all industries for quite some time and are more commonplace than one might realize.
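
As an illustration of this pattern, here is a hedged sketch in Python. The claim-routing scenario, field names, and threshold are all invented for the example; what matters is the shape: discrete rules first, and a human partner whenever the rules meet ambiguity or missing data.

```python
# Hypothetical rule-based routing with a human fallback. The claim fields,
# threshold, and action names are invented for illustration.

RULES = [
    (lambda claim: claim["amount"] > 10_000, "manual_review"),
    (lambda claim: claim["amount"] <= 10_000, "auto_approve"),
]

def escalate_to_human(claim):
    """Ambiguity or missing data: hand the gap to a human partner."""
    print(f"Escalating to a human reviewer: {claim!r}")
    return "human_review"

def route(claim):
    if "amount" not in claim:                       # lack of data
        return escalate_to_human(claim)
    matches = [action for rule, action in RULES if rule(claim)]
    if len(matches) != 1:                           # ambiguous rule set
        return escalate_to_human(claim)
    return matches[0]

print(route({"amount": 2_500}))  # auto_approve
print(route({}))                 # human_review: no data to apply the rules to
```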

However, there is a significantly more complex form of AI that seeks what most think is the Holy Grail of the technology: general intelligence, or “the fluid ability to integrate multiple cognitive abilities in the service of solving a novel problem and thereby accumulating crystalized knowledge that facilitates further higher-level reasoning”.

The Human Factor

Much of the AI community agrees on one fundamental: that humans are imperfect. One must contemplate the challenge of building a technology based on such imperfection.

We are prone to errors in judgement, can be biased in our thinking, may harbor ulterior motives, and change our positions on subjects all the time. If we allow these biases to define AI and its applications, we invite the possibility of flawed results and perhaps harm to the very process we are trying to improve.

Additionally, any technology can be bent to nefarious goals; there will always be people attempting to use technology like AI for personal gain.

As such, abuse is a risk that comes with the human development of AI, the rules used to seed its function, and its operational implementation.

An example is the ongoing discussion regarding personal data privacy. Today, virtually every interaction you have online involves some form of personally identifiable data.

Consumers provide their names, addresses, and billing information to third parties on a regular basis. The gathering, mining, re-use, and outright sale of this data create opportunities for fraud, especially if it falls into the wrong hands.

The potential implications get worse with healthcare information, intellectual property, and scientific data.

The Shared Responsibility Model

The good news is that most substantial organizations can't afford to compromise their business through the illicit use of the data they gather from you; one way or another, they would eventually be brought to justice. To some degree, they can't even afford to appear to be misusing such data, even if they have your approval.

However, choosing whom you deal with, and doing the diligence to make sure they are worthy of holding your data, is your responsibility. Ultimately, you decide who receives your information.

At the same time, the ultimate responsibility for the safe handling of your data falls on the company you share it with. So too does the burden of implementing AI in a manner that's in line with the ‘proper use of data’.

How, then, does a company implement AI in a responsible manner? The foundation is in the ethics of the organization.

The Ethical Mission Statement/Roadmap

Organizations need to be honest with themselves regarding the purposes of the AI solutions they foster. AI has multiple applications, most often at a simple level that focuses on the automation of tasks — even multi-step processes — to build efficiency. These solutions will require data to meet that objective.

The following efforts represent a roadmap for the proper governance of AI activities:

  • An ethical mission statement (EMS) for the use of data and the purpose of the associated AI solution should be clear, specific, and limiting.
  • If there is an alternative opportunity to use data gathered under such an EMS, that use should go through the same vetting process before being adopted for the new purpose.
  • Implement AI in accordance with highly governed implementation standards.
  • Audit the final implementation; verify that it does what it was intended to do.
  • AI solution outputs and actions should be recorded and audited on a consistent basis to ensure behavior remains accurate (a minimal logging sketch follows this list).
  • Clarify with your users, partners, and other constituents the use of AI with respect to their data.
  • Be open to the necessity of showing how data has been used versus how it was intended to be used.
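
To sketch what the recording and auditing items above could look like in practice, the snippet below records each AI decision alongside its declared purpose, so a later audit can compare how data was actually used against what the EMS intended. The file name, fields, and purpose strings are assumptions for illustration, not a prescribed tool.

```python
# Hypothetical decision recording and auditing against a stated purpose.

import json
from datetime import datetime, timezone

AUDIT_LOG = "ai_audit.jsonl"  # illustrative path

def record_decision(purpose, inputs, output):
    """Record one AI decision with its inputs and declared purpose."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "purpose": purpose,  # should match the ethical mission statement (EMS)
        "inputs": inputs,
        "output": output,
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(entry) + "\n")

def audit(expected_purpose):
    """Return every recorded decision whose purpose drifted from the EMS."""
    with open(AUDIT_LOG) as f:
        entries = [json.loads(line) for line in f]
    return [e for e in entries if e["purpose"] != expected_purpose]

record_decision("invoice_classification", {"vendor": "ACME"}, "approved")
print(audit("invoice_classification"))  # [] when usage matches the stated purpose
```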

As with most emerging technologies, there's a great amount of ethical and scientific analysis on the subject of AI.

But as such concepts enter mainstream industries, very few regulatory and governance initiatives have emerged to provide a foundation for their use, other than the standard compliance frameworks already in existence.

The one silver lining is that for AI to work, it must have data. And over the past decade, a significant number of data privacy and protection regulations have surfaced.

Thus, we all benefit from those data protections because of the relationship between AI and data.

The combination of data privacy regulation and the roadmap above leads to an ethical basis for the implementation and eventual use of AI technology, and sets a much-needed foundation of accountability and responsibility.

Justin Somaini is an experienced Global Security executive with a long record of transforming companies’ security functions and driving security as a competitive differentiator. He has served as Chief Security Officer at SAP and is an advisor to My Ally.