In the ‘Era of Bots’, when you automate tasks previously handled by humans, how can individuals and businesses keep data safe?
Depending on your exposure to recent technology trends – and, arguably, the daily news – you’ve probably encountered coverage of how ‘bots’ are taking over the internet. To put this into perspective, you have to apply some guidelines to define the term bot and the scope of what most bots do.
Essentially, a bot is a “semi-autonomous, persistent program resident within a network or technology platform that can interact with other systems or users”. For all the technical jargon above, it is simply a program that interacts with others (people or systems) in a targeted and apparently intelligent way.
To a degree, they fall into a segment of Artificial Intelligence (AI) whose actions are governed by a set of defined rules.
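To make that "set of defined rules" concrete, here is a minimal, hypothetical Python sketch (the rules and replies are illustrative, not drawn from any real product): each rule maps a text pattern to an action, and the bot applies the first rule that matches the incoming message.

```python
import re

# Hypothetical rule table: each rule pairs a pattern with an action.
# The bot is "governed by defined rules" in the simplest sense --
# it applies the first rule whose pattern matches the message.
RULES = [
    (re.compile(r"\bhours\b", re.I),
     lambda m: "We are open 9am-5pm, Mon-Fri."),
    (re.compile(r"\border (\d+)\b", re.I),
     lambda m: f"Looking up order {m.group(1)}..."),
]

def handle(message):
    for pattern, action in RULES:
        match = pattern.search(message)
        if match:
            return action(match)
    return "Sorry, I don't understand that yet."

print(handle("What are your hours?"))  # matches the 'hours' rule
print(handle("Where is order 42?"))    # matches the order rule
```

Even this toy version shows why rule-governed bots appear intelligent within a narrow scope yet fail abruptly outside it.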
Bots have been prevalent across the internet age for decades, with most of their initial implementations designed to have them crawl the internet to identify, catalog, and index web resources. Today, however, the bot has been experiencing an evolution where the same logic is applied to a plethora of tasks – many very targeted and specific – for both good and bad purposes.
The latest statistics set bot traffic at approximately 52% of all internet traffic, which may be surprising to most. The reality is that this ‘trend’ is not going away. The reasons are simple.
The Internet continues to grow – 3.6 billion people in the world still do not have access to it; AI is trending upward across all industries and walks of life; and, most importantly, organizations are actively leveraging these technologies to drive human performance more efficiently by automating tasks once performed by people.
New Bots Have Arrived — And Are Here To Stay
Bots aren’t what they used to be, though; they’ve evolved greatly over the past two decades. There are now bots that interpret language and translate requests, regardless of their content, into likely actions. Many even know when there’s a high chance they haven’t recognized your request accurately and will ask clarifying questions; some can even recognize that they don’t have the answer at all and bring a human into the conversation.
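That fallback behavior can be sketched in a few lines of hypothetical Python – the `interpret` function and its confidence scores below are stand-ins for a real language-understanding model, and the threshold is an arbitrary illustrative value:

```python
# Hypothetical sketch of the behavior described above: interpret a
# request with a confidence score, ask a clarifying question when
# confidence is low, and escalate to a human when the bot has no
# answer at all.
CONFIDENCE_THRESHOLD = 0.7  # illustrative cutoff, not a real product's value

def interpret(request):
    """Stand-in for a real language model: returns (intent, confidence)."""
    known = {
        "reset my password": ("reset_password", 0.95),
        "password thing": ("reset_password", 0.4),
    }
    return known.get(request.lower(), (None, 0.0))

def respond(request):
    intent, confidence = interpret(request)
    if intent is None:
        return "Let me connect you with a human agent."  # no answer: escalate
    if confidence < CONFIDENCE_THRESHOLD:
        return f"Did you mean '{intent}'?"                # unsure: clarify
    return f"Handling intent: {intent}"                   # confident: act
```

The design point is the ordering of the checks: escalation and clarification are explicit branches, not error states, which is what makes such bots feel cooperative rather than brittle.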
There are bots that coordinate tasks to ensure completion of complex objectives. We even let them manage parts of our lives through access to our calendars and email. Yet with all this good, there are also bots that actively seek private information to exploit systems and data.
So far, we’ve established that:
- Bots exist
- They can be utilized for good and bad
- This trend isn’t going to level off for some time (though the proportion of bot-to-human traffic may)
- And organizations are actively leveraging bots to interact with us in regular, everyday situations
Enter the issues of privacy and data security. If you’re already interacting with bots – which you likely are, perhaps without even knowing it – and you’re doing so in the context of a commercial relationship, chances are you have registered yourself and already provided data.
Perhaps even confidential personal or personnel data. Imagine working with an organization that utilizes Personally Identifiable Information (PII) to help you fulfill one of your key business functions.
So if bots have access to your personal information, what about privacy? What’s to stop that data from being used in questionable ways?
This is where corporate responsibility and privacy governance step in, both championed by governments and organizations alike. Let’s be clear: a government (regardless of jurisdiction) cannot directly control how your data is used; it can only establish rules for its protection and penalties for non-compliance.
This approach works for managing multinational enterprises that have vested interests in the monetary rewards of being present in a global market. But it has less of an impact on smaller organizations, and virtually none on the ‘bad bots’ on the Internet. Organizations take on most of the burden and responsibility for the proper and protected use of such data.
Identifying High-Quality Privacy Solutions Involving Bots
Herein lies the challenge: like most advancing technology, bots will bring growing pains, manifesting as issues that impact society in a broad range of ways. Assuming the goal is a ‘safe’ bot experience, the reality is that it’s up to the individual organizations you deal with not only to comply, but to advance the privacy goal.
A diligent and invested organization and its solution(s) will demonstrate four distinct behaviors:
- Their bots will be specific in intent and action. Bots should be designed to accomplish a succinct set of tasks with very specific goals. If you can quantify exactly how data is being used, you will know how it can be misused. Be willing to go the extra mile when evaluating vendors on this basis – most importantly, verify that they can explain exactly what their bots do with your data and why.
- They have purposefully aligned their solution with privacy regulations. When an organization is willing to unequivocally state that their solution is compliant with privacy regulations, you’ve got yourself a good candidate. But go further – ask them to show you.
- They will tell you what analytics (e.g., AI) are being applied, and how. Once again, transparency is a good indicator of an organization that lacks a nefarious objective. Telling you where they use AI carries little risk for an organization, but stating how it works is generally protected intellectual property; expect the former, not the latter. A good explanation will be easy to understand, not drowned in legal terminology.
- They will tell you how your data is being protected. Organizations dedicated to data privacy and security will be specific about how they guard your information. This transparency will include people, processes, and technology. Once again, don’t expect in-depth algorithms or intellectual property details, but do expect coverage and clarity.
Risk is a reality that accompanies almost every technological advancement, and there are always two collective positions on that risk: those who wish to take advantage of it, and those who want to use it to its fullest positive benefit. Additionally, technology cannot police itself; it is just a tool, and how it is used is still in the hands of humans.
Partners that meet the above four criteria will generally be worth working with, but this should not create a false sense of security. “Trust but verify” is a key behavior that organizations with your best interests in mind will support. And in turn, they will accelerate your business in a manner that displays a concern for privacy rivaling that of your own organization.