
Inherent Risks of Artificial Intelligence

Updated: Apr 22, 2023




Artificial Intelligence - Revolutionary and world-changing but not without its challenges.


Frontier technologies no doubt promise to be a game changer in the fight against economic crime, with the potential to significantly increase efficiency and effectiveness in detection, prevention and risk management.

Algorithmic learning, artificial intelligence (AI), and big data analytics have been employed to improve efficiency and accuracy in predicting and identifying corruption risks or regulatory loopholes.
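To illustrate the kind of analytics referred to here, below is a minimal sketch of flagging unusual transactions for human review with an anomaly-detection model (using the open-source scikit-learn library; the column names, sample values and contamination rate are assumptions for illustration, not a description of any specific deployed system).

```python
# Illustrative only: flag statistically unusual transactions for human review.
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical transaction data: amount, vendor tenure (days), approvals count
transactions = pd.DataFrame({
    "amount":        [120.0, 95.5, 88.0, 15000.0, 110.0, 99.0, 102.5, 97.0],
    "vendor_tenure": [900,   750,  820,  3,       640,   700,  880,   760],
    "approvals":     [2,     2,    2,    0,       2,     2,    2,     2],
})

# Isolation Forest scores points by how easily they can be isolated;
# fit_predict() returns -1 for outliers and 1 for inliers.
model = IsolationForest(contamination=0.1, random_state=42)
labels = model.fit_predict(transactions)

flagged = transactions[labels == -1]
print("Transactions flagged for review:")
print(flagged)
```

In practice such a model would only surface candidates for investigation; human experts still decide whether a flagged transaction actually indicates corruption or a regulatory loophole.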


Chart: Time to one million users (Statista) - https://www.statista.com/chart/29174/time-to-one-million-users/


The rapid development of OpenAI's ChatGPT has attracted attention from lawmakers in several countries around the world. The chatbot is currently banned in Italy and unavailable in mainland China, Hong Kong, Iran, Russia and parts of Africa. Many experts say new regulations are needed to govern AI because of its potential impact on national security, jobs and education. Earlier this week, key figures in tech, including Elon Musk and over 1,000 tech experts, called for the development of these types of AI systems to be suspended amid fears that the race to develop them was out of control.


⚡ ROME, April 6 (Reuters) - OpenAI plans to present measures to Italy's authorities on Thursday to remedy concerns that led to a ban last week on the ChatGPT chatbot in the country, Italy's Data Protection Authority said. https://lnkd.in/efsDu4Bh

⚡ According to The Independent, Germany and Ireland are considering a ChatGPT ban. After Italy, OpenAI faces bans in multiple European countries, fundamentally due to privacy concerns. https://www.independent.co.uk/tech/chatgpt-ban-germany-ai-privacy-b2314487.html

⚡ Victoria, April 7 (The Independent) - ChatGPT faces world’s first defamation lawsuit in Australia. "Victorian Mayor Brian Hood claims the AI chatbot was telling users that he had served time in prison as a result of a foreign bribery scandal. He said he would sue OpenAI if the incorrect information was not removed, which would mark the first defamation lawsuit against an artificial intelligence chatbot" https://www.independent.co.uk/tech/chatgpt-lawsuit-australia-openai-b2315525.html



The advent of new technologies, not just artificial intelligence but also Web3, quantum computing and the metaverse, to mention a few, is ushering in the Fourth Industrial Revolution.

According to Sebastian Buckup, Head of Networks and Partnerships, C4IR - Rather than simply waiting for regulators to set the guardrails, businesses must move beyond compliance and engrain responsible and human-centred technology principles into their DNA to remain competitive, withstand disruption and build resilience.

Although AI has been hailed as revolutionary and world-changing, it is not without drawbacks. A few concerns, among others, are:


📌Privacy

📌Freedom of speech

📌Biases

📌Freedom of choice

📌Ethics

📌AI-fuelled autonomous weapons



In a recent blog post, OpenAI acknowledged the “real risks” inherent in the technology but said its artificial intelligence systems were subject to “rigorous safety evaluations”. “Despite extensive research and testing, we cannot predict all of the beneficial ways people will use our technology, nor all the ways people will abuse it,” the post stated.



Technologies have supported humanity throughout the centuries, and this is no different. We all have an obligation to mitigate the adverse effects of frontier technologies by engaging at the national level (regulation and standardisation), at the corporate level (strategy, research, operations and training) and as individuals.

An example is the EU AI Act: the regulation proposes to lay down harmonised rules on AI and amend certain Union legislative acts. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex:52021PC0206

"The cornerstone of the AI Act is a classification system that determines the level of risk an AI technology could pose to the health and safety or fundamental rights of a person. The framework includes four risk tiers: unacceptable, high, limited and minimal. AI systems with limited and minimal risk—like spam filters or video games—are allowed to be used with little requirements other than transparency obligations. Systems deemed to pose an unacceptable risk—like government social scoring and real-time biometric identification systems in public spaces—are prohibited with little exception".

It is our duty to shape our world!


Maycode is a boutique consulting firm offering unique services that combine AI technologies with human expertise to provide comprehensive solutions. What is your organisation's strategy on AI? Maycode experts are happy to support you all the way.


