## **The most popular AI software today is making malware creation easier than ever**

According to *TechRadar*, [ChatGPT](https://www.24h.com.vn/chagpt-c55e7117.html) is being harnessed by bad actors to create new strains of malware. Cybersecurity firm WithSecure has confirmed that it has found several examples of malware created with OpenAI's chatbot in the wild.

What makes ChatGPT especially dangerous is its ability to generate countless variants of a piece of malware, making them very difficult to detect. Attackers simply feed ChatGPT samples of existing malware source code and instruct the chatbot to produce new code based on them, allowing the malware to evade detection for longer without the time, effort, and expertise previously required.

This news arrives amid growing debate about regulating AI to prevent its use for malicious purposes. In practice, there have been no regulations governing the use of ChatGPT since the service launched in late 2022. Although OpenAI has safeguards in place to stop the chatbot from executing nefarious commands, bad actors have still found ways to bypass them.

WithSecure CEO Juhani Hintikka told Infosecurity that AI has long been used by cybersecurity defenders to find and remove malware. However, now that powerful AI tools like ChatGPT are freely available, the situation is changing: just as remote access tools have been repurposed for illegal ends, so has AI.

Furthermore, ransomware attacks are increasing at an alarming rate. Threat actors are reinvesting their profits, becoming more organized, expanding operations through outsourcing, and deepening their understanding of AI, enabling them to carry out attacks on a larger scale and with greater success.

Ultimately, Hintikka concluded that the future landscape of cybersecurity will be a great game between good AI and bad AI.