Published 17:55 IST, July 18th 2024

Black Box AI: An ethical paradox?

Black Box AI is also prone to biases, as it depends on the quality of the data supplied to it, and developers can instil bias.

Reported by: Shalini Verma
Black Box AI: An ethical paradox? | Image: Freepik

Yes, self-driving cars rely on complex AI systems, including Black Box AI models.

Here we need to understand very carefully what Black Box AI is. In the simplest terms, Black Box AI refers to artificial intelligence systems whose internal workings are hidden from users and even from developers. It is opaque, complex, and difficult to interpret; it is neither transparent nor explainable. These systems are typically based on neural networks or deep learning algorithms, and they make decisions or predictions without revealing the underlying logic or decision-making process. The inputs and outputs are visible, but the process in between is a ‘black box’, making it challenging to understand why a particular decision was made. A black box, in the general sense, is an impenetrable system: Black Box AI models arrive at conclusions or decisions without providing any explanation of how they were reached.
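To see what “visible inputs and outputs, hidden middle” means in practice, here is a minimal Python sketch. It is my own illustration, assuming the scikit-learn library and synthetic data; the point is simply that the model predicts happily while its learned weights carry no human-readable meaning.

from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Visible input: a synthetic dataset of 1,000 examples with 20 features.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# A small neural network: the "box" in the middle.
model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(X, y)

# Visible output: a prediction and its confidence for one example.
print(model.predict(X[:1]))
print(model.predict_proba(X[:1]))

# The middle: thousands of learned weights with no human-readable
# meaning. Inspecting them does not tell us WHY the model decided.
print(sum(w.size for w in model.coefs_), "learned weights")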


Amazing, isn’t it? And scary? We get results, and the results are fast and often more accurate thanks to these complicated algorithms, yet there is still no way to trace how they were reached!

Imagine how children pick things up by associating pictures and sounds with objects, slowly developing understanding and reasoning. We do not know exactly how they do it, but slowly they do, and then they correctly name the objects around them. Black Box AI is similarly fed large sets of data and uses a similarly opaque form of understanding and reasoning to arrive at its accurate decisions. The only issue is that we have no clue how! The level of complexity is such that even the developers are unable to trace the path its deep neural networks take to arrive at these decisions. Perhaps in future it can be trained to leave clues to its path of reasoning, but so far there has been no success in achieving this.


This, however, raises serious trust issues and can be dangerous if its decisions produce wrong interpretations that affect human lives!

Black Box AI is also prone to biases, as it depends on the quality of the data supplied to it. Developers can instil bias knowingly, to produce certain results, or unknowingly; over time, something can also cause the model to hallucinate and give out biased decisions. But there will be no way, even for the developers, to trace how and when the bias or hallucination set in! Imagine the repercussions if this technology is used in criminal justice procedures, or worse, in diagnosing medical conditions. Financial institutions can also find it challenging to use this AI: how will they justify why a loan was approved or rejected? They have no way of providing the reasoning behind the Black Box decision. Hence it is also found lacking in accountability!
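As a hedged illustration of how skew in historical data quietly becomes skew in a model’s decisions, consider this toy loan-approval sketch. Everything here is synthetic and hypothetical (the feature names, the numbers, the scenario), chosen only to make the point.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 5000
income = rng.normal(50, 15, n)   # a legitimate feature (hypothetical units)
group = rng.integers(0, 2, n)    # e.g. a neighbourhood code

# Historical labels: past decisions were skewed against group 1,
# independently of income. The model is never told this.
approved = (income > 45) & ~((group == 1) & (rng.random(n) < 0.4))

model = RandomForestClassifier(random_state=0)
model.fit(np.column_stack([income, group]), approved)

# The learned model quietly reproduces the historical skew, and its
# output offers no trace of where the disparity came from.
test_income = rng.normal(50, 15, 1000)
for g in (0, 1):
    X_test = np.column_stack([test_income, np.full(1000, g)])
    print(f"group {g}: approval rate {model.predict(X_test).mean():.0%}")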


Many have argued that data is also not protected when using Black Box AI, as its developers can share the data with third-party vendors for analysis! Who knows whether those vendors will secure the data appropriately? How would one even trace that? Remember, Black Box AI has a traceability problem!

We started with self-driving cars, and we know of several cases where such systems have failed and caused accidents due to malfunction. Black Box AI is also prone to attacks from threat actors.


It can be argued, then, that this type of AI must be discarded, and indeed some laws are being made to hold developers responsible for the actions of Black Box AI and for how it uses and secures data. Do we then consider this advanced deep tech a wasted effort? Black Box AI uses deep learning models whose learning algorithms take millions of data points and correlate specific data features to produce an output. It is extremely useful in fraud detection, where its machine learning models are used to spot fraudulent transactions. Its neural networks are used for image classification, object detection and facial recognition, though the complexity of the models makes them difficult to interpret. Language models (like BERT) and transformer-based architectures are used for language translation, sentiment analysis and text summarization. Black Box AI models are also used by recommender systems, such as online platforms recommending products or content based on user behaviour. Just do not expect to understand how they arrive at the underlying logic; as mentioned before, there is still no way of knowing. Some AI-powered diagnostic tools do use Black Box AI models to analyse medical images and patient data to diagnose diseases, but they do not give you the reasoning behind it. Rather, they cannot. Self-driving cars still use it too, for navigation decisions, obstacle detection and control.
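To pick one of these uses, fraud detection: a bare-bones sketch might look like the following. The isolation forest is my illustrative choice, not something the article prescribes, and the transactions are invented. The flag comes out, the reason does not.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Synthetic transactions: [amount, hour of day, merchant category code]
normal = np.column_stack([rng.normal(80, 30, 2000),
                          rng.normal(14, 4, 2000),
                          rng.integers(1, 20, 2000)])
fraud = np.array([[4500.0, 3.0, 99.0]])   # one very unusual transaction

detector = IsolationForest(random_state=1).fit(normal)

# The output is visible: -1 flags an anomaly, 1 means "looks normal".
print(detector.predict(fraud))            # expected: [-1]
# But no human-readable reason accompanies the flag; a bank could not
# cite this raw score to justify blocking a customer's card.
print(detector.score_samples(fraud))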

Let us now look at its fairer brother for a bit: White Box AI, also called Glass Box AI. It is linear and clear, transparent and highly traceable. This type of AI is used by data analysts, who can easily visualise and prove how and why a White Box model arrived at its decisions. Interestingly, the results it arrives at are often not too far from Black Box AI results. But White Box AI is not being employed for the same use cases. There are discussions in influential AI circles suggesting that not enough effort has been made to apply White Box AI to the same use cases as Black Box AI, though some feel the use cases for the two are simply different. Many believe White Box AI cannot handle the levels of complexity that Black Box AI can.
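By contrast with the neural network above, a minimal white-box sketch, say a logistic regression on a standard public dataset (my choice of example), exposes its entire decision rule as a set of readable weights:

from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(data.data, data.target)

# Every feature's weight is visible: an analyst can state exactly how
# much each measurement pushed a prediction one way or the other.
coefs = model.named_steps["logisticregression"].coef_[0]
ranked = sorted(zip(data.feature_names, coefs), key=lambda p: -abs(p[1]))
for name, w in ranked[:5]:
    print(f"{name:25s} {w:+.2f}")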

Explainable AI (XAI) is another approach, the antithesis of Black Box AI. It aims to make complex AI decisions more interpretable and understandable, though it does not necessarily require the entire system to be transparent. Its results are easy for an average person to understand, which makes it more reliable and relatable. Arguably, the choice of AI model falls to the developers, who determine the purpose, and to the users, who determine the ease of use.
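One widely used XAI technique, permutation importance, probes a black-box model from the outside: shuffle one input at a time and measure how much the accuracy drops. A short sketch, again assuming scikit-learn and a standard dataset of my choosing:

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

# The model itself stays a black box...
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# ...but shuffling one feature at a time, and measuring how much the
# test accuracy drops, reveals which inputs the model actually relies on.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[i]:25s} {result.importances_mean[i]:.3f}")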

When we look at the risks of Black Box AI, also labelled ‘emerging vulnerabilities’ in the US, we have to consider its ethical risks. Governments in the US, and now the European AI Act, have come up with several legally binding rules that require tech companies to disclose the use of chatbots, biometric categorization, emotion recognition, deepfakes and AI-generated content. The higher the risk, the stricter the rules. The EU is set to become the premier AI police and will establish a new European AI Office to coordinate compliance.

The next logical question is: ‘What about ChatGPT and similar apps, what type of AI are they?’

Interestingly, I took the question to the horse itself. And prompt came the answer: “I am Grey Box AI, I use elements of both White Box AI and Black Box AI. I use complex algorithms and neural networks, making me partially a Black Box. However, my responses are generated based on patterns and associations learned from large datasets, making my decision-making process somewhat transparent and interpretable. While I am not fully transparent, I am designed to provide helpful and informative responses and my inner workings are continually being improved to make me more explainable and trustworthy!”

Did you know that? An interesting answer from it. Enlightening too.

So then, is it ethical to use Black Box AI? We have been using it occasionally, have we not, even if we use just the ‘Grey’ part! I feel the companies using Black Box AI will have to make that decision, and so do the users. We must keep in mind, though, that we use cars for ease of travel despite knowing they can cause accidents; we use the internet, exposing ourselves to many vulnerabilities; we use all kinds of tech despite knowing the risks; we simply use them as carefully as possible. Maybe this is the correct answer for Black Box AI too. We need to be careful and cautious and make sure we control it, and not the other way around, especially where national defence is concerned or the human impact is high. Maybe the answer lies neither in Black nor White but in Grey Box AI! We might have to wait a bit, though. Recall, it’s still learning!
