AI in the Wrong Hands: How Unregulated Technology Could Fuel Cybercrime

Artificial Intelligence, Cybersecurity

AI’s potential as a cybersecurity threat is being overlooked amid regulatory debates and innovation hype. As AI becomes more integrated into business operations, it also creates new vulnerabilities that existing security measures may not be prepared to handle. Aras Nazarovas of Cybernews reports…

The recent AI summit in Paris pushed an optimistic vision of the technology’s potential, focusing on how AI can solve big problems in medicine, climate science, and beyond rather than prioritizing security. But the world can’t afford to be blissfully excited. It’s crucial to remember that AI is also a powerful tool for malicious actors – one that’s already being used in cyberattacks and could evolve into a much bigger threat.

Today, AI is being deployed to amplify cyberattacks in various ways. A study from the University of Cambridge showed how AI-driven cyberattacks are becoming more sophisticated. Attackers are increasingly using machine learning algorithms to automate phishing attacks, targeting individuals and organizations with highly personalized content. These AI-driven systems can analyze vast amounts of data – social media profiles, browsing history, and even email patterns – to create convincing attacks that are harder to detect than traditional ones.

AI tools lower the barrier to entry for cybercrime by enabling less experienced attackers to launch attacks they wouldn’t otherwise have the skills or knowledge to carry out. For instance, individuals who lack programming skills can now simply ask AI tools like ChatGPT to write bots that automate the process of breaching servers. While these attacks may not be novel, they still increase the volume of potential threats companies need to defend against, draining the resources of already underfunded security teams.

Striking a Balance Between Innovation and Security

As AI tools become more embedded in business operations, the stakes grow even higher. For instance, KPMG’s recent survey of financial leaders revealed that 84% plan to increase their investments in generative AI (GenAI). 

While finance – and presumably other industries – accelerates its adoption of AI tools, the World Economic Forum reports that nearly 47% of surveyed organizations already cite adversarial advances powered by GenAI, which enable more sophisticated and scalable attacks, as their primary concern. Moreover, the same report states that only 37% of organizations have processes in place to assess the security of AI tools before deployment.

Meanwhile, the EU’s AI Act, which aims to regulate high-risk AI systems, is being phased in over several years, with full implementation not expected until 2027. However, there is a growing debate in Europe about how to balance regulation with fostering innovation. During the Paris AI summit, French President Emmanuel Macron remarked that Europe might reduce regulatory burdens to allow AI to flourish in the region. 

This presents a potential challenge: while Europe struggles with over-regulation concerns, its wait-and-see approach could leave it behind as AI technology evolves at an incredible speed. By the time the AI Act is fully in place, we could be facing an entirely new wave of AI-powered cyberattacks, many beyond the scope of current regulations.

So, what does this mean for cybersecurity if AI is governed by a light-touch regulatory framework? While innovation is essential, the absence of security-focused regulation means AI tools are already in the hands of cybercriminals who can weaponize them with minimal oversight.

At the moment, the capacity of AI systems to automate and optimize cyberattacks already extends far beyond phishing. AI-powered tools can be used to exploit vulnerabilities in critical infrastructure systems, launch larger Distributed Denial of Service (DDoS) attacks, or even manipulate financial markets. In 2023, the US Department of Homeland Security warned that AI-powered systems could soon be capable of launching autonomous cyberattacks that are difficult to counteract using conventional defense mechanisms. Such threats present a security nightmare that policymakers can’t afford to ignore.

If AI systems evolve to the point where they can autonomously compromise digital infrastructure, we could see an escalation in both the frequency and severity of cyberattacks, potentially crippling global systems.


Cybersecurity Must Evolve – Now

Whether AI is robustly regulated or not, businesses should do more than the bare minimum for cybersecurity. First, it’s essential to invest in additional, AI-driven security tools rather than replacing existing ones with AI-powered solutions. While AI and machine learning can be incredibly useful for detecting and preventing attacks in real time, they can also make incorrect decisions.

AI should serve as an additional resource to enhance cybersecurity efforts, not as a replacement for traditional tools. By analyzing patterns in network traffic, AI can identify anomalies that may signal a breach. As cyberattacks become more automated, AI can help security teams identify threats faster and more efficiently, allowing them to do more with the same amount of resources.
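At its simplest, anomaly detection of this kind means comparing current traffic against a learned baseline and flagging large deviations. The sketch below is a deliberately minimal stand-in for that idea, using a plain z-score over request volumes; real systems use far richer features (ports, payload sizes, timing) and learned models, and the traffic figures here are invented for illustration.

```python
import statistics

def find_anomalies(requests_per_minute, threshold=2.5):
    """Flag minutes whose request volume deviates sharply from the baseline.

    A toy stand-in for AI-driven anomaly detection: we compute a z-score
    for each minute's request count and flag anything beyond the threshold.
    Note that a single extreme outlier inflates the standard deviation, so
    robust baselines (median/MAD) are preferred in practice.
    """
    mean = statistics.mean(requests_per_minute)
    stdev = statistics.stdev(requests_per_minute)
    if stdev == 0:
        return []  # perfectly flat traffic: nothing deviates
    return [i for i, count in enumerate(requests_per_minute)
            if abs(count - mean) / stdev > threshold]

# Mostly steady traffic with one sudden spike (e.g. the start of a DDoS burst)
traffic = [100, 102, 98, 101, 99, 103, 100, 950, 97, 101]
print(find_anomalies(traffic))  # → [7], the index of the spike
```

The point of the sketch is the division of labor it implies: the model surfaces candidate anomalies quickly, and the human analyst decides whether index 7 is an attack or a legitimate traffic surge.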

Another step is to start incorporating AI threat modeling into security protocols. AI can be leveraged to predict and prevent attacks. Security teams need to think like attackers, using AI to simulate how their systems might be breached and proactively patching those vulnerabilities before they can be exploited.
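One concrete way to "think like an attacker" is to model the environment as an attack graph and enumerate the paths an intruder could take from an entry point to a crown-jewel asset. The sketch below is a minimal, hypothetical example (all node names are invented) using a breadth-first search; dedicated attack-graph tools add exploit likelihoods and cost scoring on top of this basic traversal.

```python
from collections import deque

# Hypothetical attack graph: an edge A -> B means "an attacker who has
# compromised A can attempt to pivot to B".
ATTACK_GRAPH = {
    "internet": ["web-server", "vpn-gateway"],
    "web-server": ["app-server"],
    "vpn-gateway": ["app-server", "workstation"],
    "app-server": ["database"],
    "workstation": ["database"],
}

def attack_paths(graph, start, target):
    """Enumerate every simple path an attacker could take from start to target."""
    paths, queue = [], deque([[start]])
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            paths.append(path)
            continue
        for nxt in graph.get(path[-1], []):
            if nxt not in path:  # skip nodes already on this path (no cycles)
                queue.append(path + [nxt])
    return paths

for path in attack_paths(ATTACK_GRAPH, "internet", "database"):
    print(" -> ".join(path))
```

Each printed path is a candidate kill chain: cutting any edge on a path (patching the web server, segmenting the workstation subnet) removes that route to the database, which is exactly the "patch before exploitation" step the text describes.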

Finally, companies must invest in continuous training for their security teams. As AI-driven attacks evolve, it’s not enough to simply rely on firewalls and antivirus software. Security professionals need to be prepared to deal with more sophisticated, AI-powered threats. This includes staying ahead of trends, understanding how AI tools are being used against them, and developing strategies that go beyond traditional defenses.

Undoubtedly, AI has the potential to revolutionize cybersecurity and every other industry, but it also introduces a new wave of risks. While policymakers may be caught up in the AI race, cybersecurity professionals must act now. AI can be an ally in the fight against cybercrime and in enabling business operations, but it can also become an adversary if left unchecked. As we race toward a future shaped by AI, securing our systems against its darker side should be a top priority.

ABOUT THE EXPERT

Aras Nazarovas is an Information Security Researcher at Cybernews, a research-driven online publication. Aras specializes in cybersecurity and threat analysis. He investigates online services, malicious campaigns, and hardware security while compiling data on the most prevalent cybersecurity threats. 

Chris Price
For the latest tech stories go to TechDigest.tv