AI worm exposes security flaws in AI tools like ChatGPT

AI worm exposes security flaws in AI tools like ChatGPT

If you use AI assistant tools, you'll want to follow this new update from researchers

by Hana LaRock

You’d think keeping things secure would be easy, with AI getting sharper every day. I mean, if it can crank out intricate code in no time, fending off cyber crooks should be a breeze, right? But hey, it’s not all black and white.

It’s easy to overlook that AI-assistant tools like ChatGPT and Gemini are vulnerable to malware threats, but this may precisely be one reason that malware worms can get through more easily, which might just be the welcome mat for malware to waltz right in, catching you off guard.

So, if you’re using ChatGPT or Gemini, here’s what you need to know about this new malware worm. Though not an actual threat right now, a new research study and report tell us a lot about the potential security issues and headaches facing AI down the road.

The researchers did disclose the paper with OpenAI and Google and the fact that “the worm exploits bad architecture design for the GenAI ecosystem and is not a vulnerability in the GenAI service.”

 

What is the Morris II computer worm?

The particular computer worm in question is a type of malware called Morris II, named after the Morris worm, a malware discovered in 1988 after crashing about 10% of all computers connected to the internet at that time.

To back up a bit, though, it’s important to understand that a computer worm is a type of standalone malware that can replicate itself to spread to other computers, poisoning everything in its path.

In this circumstance, the worm we’re talking about was designed by researchers to understand some of the vulnerabilities that AI-assistant tools—like AI booking calendars or email services—have. Although it’s not a direct threat right now, it could be coming for your AI tools sooner than you think.

 

MORE: HOW SCAMMERS USE AI TOOLS TO FILE PERFECT-LOOKING TAX RETURNS IN YOUR NAME

 

How does this computer worm work?

Morris II is a “zero-click” worm that infects Generative AI (GenAI) systems without requiring user interaction. GenAI platforms rely on prompts, which are essentially instructions given in text format.

However, Morris II can manipulate these prompts. It injects malicious prompts that trick the GenAI system into performing harmful actions without the user or even the GenAI itself being aware. For instance, the worm might use a compromised GenAI email assistant to send phishing emails or spam, potentially stealing or compromising your data.

 

MORE: CREEPY EMBODIED AI AVATAR GIVES A FACE AND A VOICE TO CHATGPT INTERACTION

 

 Steps to shield against the Morris II cyber threat

To protect yourself from potential cybersecurity threats like the Morris II computer worm, here are some steps you can take:

Be cautious with emails: Avoid opening email attachments or clicking on links from unknown or untrustworthy sources.

Use antivirus software: Invest in reliable antivirus software that can detect and remove malware, including computer worms. The best way to protect yourself from clicking malicious links that install malware that may get access to your private information is to have antivirus protection installed on all your devices. This can also alert you of any phishing emails or ransomware scams. 

My top pick is TotalAV, and you can get a limited-time deal for CyberGuy readers: $19 your first year (80% off) for the TotalAV Antivirus Pro package.  

Get my picks for the best 2024 antivirus protection winners for your Windows, Mac, Android & iOS devices.

Best Antivirus Protection 2024

Keep systems updated: Regularly update your operating system and applications to patch any security vulnerabilities.

Use strong passwords: Create complex passwords that are difficult to guess and use different passwords for different accounts. Consider using a password manager to generate and store complex passwords.

Backup your data: Regularly back up important data on an external drive or cloud storage to prevent loss in case of an infection.

Limit file-sharing: Be wary of downloading files from peer-to-peer networks or file-sharing platforms, as they can be sources of malware.

Enable security features: Turn on security features like two-factor authentication for an added layer of protection.

Remember, while AI tools can be incredibly helpful, they are not immune to cyber threats. It’s essential to be proactive about your digital security to safeguard your personal information and devices.

 

MORE: HOW AI COULD MANIPULATE VOTERS AND UNDERMINE ELECTIONS, THREATENING DEMOCRACY  

 

Kurt’s key takeaways

While there’s no need to abandon these AI tools yet, these researchers have taken it upon themselves to understand what type of threats we may be seeing with them in the very near future. With this information, we can prepare for potential malware threats in the future and thereby mitigate them.

Considering the potential vulnerabilities in AI tools, what measures do you think users and developers should take? Let us know in the comments below. 

FOR MORE OF MY SECURITY ALERTS, SUBSCRIBE TO MY FREE CYBERGUY REPORT NEWSLETTER HERE

 

 

Copyright 2024 CyberGuy.com.  All rights reserved.  CyberGuy.com articles and content may contain affiliate links that earn a commission when purchases are made.


   

🛍️ SHOPPING GUIDES:


KIDS   |    MEN    |    WOMEN    |   TEENS   |    PETS   | 


FOR THOSE WHO LOVE:

COOKING    |    COFFEE   |    TOOLS    |    TRAVEL    |    WINE    |


DEVICES:

 

LAPTOPS    |    TABLETS    |    PRINTERS    |    DESKTOPS    |    MONITORS  |   EARBUDS   |   HEADPHONES   |     KINDLES    |    SOUNDBARS    | KINDLES    |    DRONES    |


ACCESSORIES:

CAR   |    KITCHEN    |   LAPTOP    |   KEYBOARDS   |    PHONE   |    TRAVEL    | KEEP IT COZY    |


PERSONAL GIFTS:

PHOTOBOOKS    |   DIGITAL PHOTO FRAMES    |


SECURITY

ANTIVIRUS    |    VPN   |    SECURE EMAIL    |


CAN'T GO WRONG WITH THESE:

GIFT CARDS



   

1 comment

Joe April 2, 2024 - 5:40 am

AI should have never been used and should be stop before it’s to late. We can’t even stop data theft or id theft why on Earth should we let this AI stuff continue.

Reply

Leave a Comment

GET MY FREE CYBERGUY REPORT
Subscribe to receive my latest Tech news, security alerts, tips and deals newsletter. (We won't spam or share your email with anyone else.)

By signing up, you agree to our Terms of Service and Privacy Policy. You may unsubscribe at any time.

Tips to avoid our newsletters going to your junk folder