
Governments, businesses and societies as a whole benefit enormously from Artificial Intelligence (AI).

AI helps organisations reduce operational costs, improve user experience, raise efficiency and grow revenue. But it also creates a number of security challenges for personal data and poses many ethical dilemmas for organisations. For information security professionals, these challenges mean re-calibrating their approaches to data security, data classification and privacy. For regulators, they translate into the General Data Protection Regulation (GDPR).

AI’s lifeline is data, and one source of that data is the Internet of Things (IoT), which is fed by personal data. It is therefore apparent that the use of AI has implications for privacy, data protection and the rights of individuals. Such rights are strengthened by the GDPR and its stricter rules governing the collection and use of personal data. In addition to being transparent, organisations will need to be more accountable for what they do with personally identifiable information.

This article will discuss some of the challenges of AI, such as the ethical, cyber security and privacy issues that have arisen from its upsurge.

What is Artificial Intelligence and its capabilities?

For some people, the term “AI” conjures up memories of scenes from science fiction films such as The Terminator. However, AI has more to offer than robots.

AI has been researched and studied for decades and is still one of the most abstract subjects in computer science. One of the fathers of AI, John McCarthy, defined the objective of artificial intelligence as developing machines that behave intelligently by learning from experience and performing tasks as a human would. The capabilities of AI have increased greatly due to the massive expansion in the volumes of data and in data storage capabilities, the steady increase in computing power and, most importantly, advances in research on machine learning algorithms.

However, after years of research, instinctive human intellect still appears to be beyond the abilities of computers or reasoning machines. Computers and machines are yet to take over, but they influence and affect our lives – from Siri to more vital technologies such as behavioural algorithms and self-driving cars. Some of the most powerful AI technologies we currently use are Alexa, Netflix, Google and Tesla.

Why should we be worried about AI?

There are many people in various fields who are expressing their anxiety about AI. The late Professor Stephen Hawking gave one of the most colourful warnings about AI, predicting that it could wreak havoc on the world. He warned that we are not prepared to manage the potentially devastating power of AI, which could end mankind as we know it.

Soon, AI will be used in all systems and platforms, and we may see a world without systems administrators, software developers or business analysts. The problem arises if and when AI is applied without thorough and careful planning and consideration. The most important consideration is understanding the decision-making process and ensuring the responsibilities of people are understood. This can only be achieved if the data that feeds the AI, and so influences the decisions it makes, comes from an impartial source. Imagine how this could go wrong if we used AI to control the safety of a nuclear power plant.

We may reach the point at which human intervention cannot reverse a decision, which may lead to total disaster. For example, in recent years many organisations have used customer data to direct automatic calling systems. The AI works to gather and use customer data such as location, phone numbers and dates of birth.
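This kind of customer data is exactly what the GDPR expects organisations to protect before it is fed into analytics or AI pipelines. A minimal sketch of one common safeguard, pseudonymisation, is shown below: direct identifiers are replaced with keyed hashes so that a leaked dataset no longer exposes the raw values. The field names and key handling here are hypothetical, not taken from any specific system:

```python
import hashlib
import hmac

# Hypothetical secret key; in practice it would live in a key vault,
# not in source code.
SECRET_KEY = b"replace-with-a-securely-stored-key"

def pseudonymise(value: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    Without the key, the original value cannot practically be recovered,
    but equal inputs still map to equal tokens, so records remain
    linkable for analytics.
    """
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

# A hypothetical customer record of the kind an automated calling system holds.
record = {
    "phone": "+44 20 7946 0000",
    "date_of_birth": "1980-01-31",
    "location": "London",
}
safe_record = {field: pseudonymise(value) for field, value in record.items()}
print(safe_record)
```

Note that under the GDPR, pseudonymised data is still personal data; a technique like this reduces the impact of a compromise but does not remove the obligation to protect the data.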

Aren’t these familiar forms of personally identifiable information under the GDPR? How can organisations ensure that the data and the decision-making process of AI are not compromised by criminals? After all, adversaries are already employing machine learning tools and predictive software to produce malware and swarmbots that feed on automated vulnerability detection and complex data analysis.

Whilst there are socio-technical issues with AI, organisations should also consider the psychological impacts of AI systems on their customers and employees. Large-scale use of personal data throughout social media and other platforms that use IoT and big data could create long-lasting psychological effects for individuals.

The GDPR is intended to form a regulatory regime in which personal data is kept safe from powerful AI systems that process big data to make decisions. Those decisions can be a lifeline for businesses and their investments. However, the challenges are big, and some serious ethical and legal implications threaten both customers and businesses.

Without doubt, every industry, service and technology will be touched by Artificial Intelligence.

Self-learning algorithms carry a big advantage: their scope is unbounded because, unlike computer programs designed to implement a single task, they can learn from all types of data. Their behaviour is based on recognising patterns in data, so feeding them more data enables them to build a new program based on that new data.
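The contrast with a fixed program can be sketched in a few lines. Below is a toy nearest-neighbour classifier (a hypothetical illustration, not any specific product’s algorithm): its behaviour is defined entirely by the labelled examples it is given, and appending new data changes its decisions without any change to the code.

```python
import math

def nearest_neighbour(train, point):
    """Classify `point` with the label of its closest training example."""
    label, _ = min(
        ((lbl, math.dist(features, point)) for features, lbl in train),
        key=lambda pair: pair[1],
    )
    return label

# The "program" is just data: two labelled examples.
train = [((1.0, 1.0), "benign"), ((9.0, 9.0), "malicious")]
print(nearest_neighbour(train, (2.0, 2.0)))  # -> benign

# Feeding in new data changes the behaviour with no code change.
train.append(((2.5, 2.5), "malicious"))
print(nearest_neighbour(train, (2.0, 2.0)))  # -> malicious
```

The same property that makes such systems flexible is also what makes the impartiality of their training data so important: whoever controls the data controls the behaviour.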

The benefits of such a phenomenon are great – from medicine to security and business. But there are questions and concerns, among them the role of adversaries, the misuse of data, privacy and security.

However, the main question is whether self-learning algorithms are gaining control of societies and people. How much will the drive to manage the explosion of data damage our ethics and morality? Is the current regulatory regime capable of dealing with big data, IoT and the issue of privacy? All of these questions remain open to discussion.


About the Author: Reza Alavi has worked in various IT positions over the last 27 years and is currently an information security consultant. He worked as an International Marketing Manager in two companies specialising in a wide range of consultancy services, such as information security, risk management, business continuity and IT governance, in the Middle East. His current work as a security consultant specialises in information security coaching, helping his clients become more effective and efficient, typically through the strategic use of information systems, risk management and security governance.

His significant experience of the commercial and financial sectors in various parts of the globe, working with a variety of cultures and work ethics, enables him to understand current security requirements and the threat landscape and to achieve better outcomes in GRC environments. Reza is the Managing Director of the “Information Security and Audit Control Consultancy (ISACC)”, chairs the “Information Risk Management and Assurance (IRMA)” specialist group in BCS and sits on the RM/1 Risk Management Committee at the “British Standards Institution (BSI)”.

Editor’s Note: The opinions expressed in this guest author article are solely those of the contributor, and do not necessarily reflect those of Tripwire, Inc.