Analysts share 8 ChatGPT security predictions for 2023 

The release of GPT-4 last week shook the world, but the jury is still out on what it means for the data security landscape. On one side of the coin, generating malware and ransomware is easier than ever before. On the other, there is a range of new defensive use cases. 

Recently, VentureBeat spoke to some of the world’s top cybersecurity analysts to gather their predictions for ChatGPT and generative AI in 2023. The experts’ predictions include: 

  • ChatGPT will lower the barrier to entry for cybercrime. 
  • Crafting convincing phishing emails will become easier. 
  • Organizations will need AI-literate security professionals. 
  • Enterprises will need to validate generative AI output.
  • Generative AI will upscale existing threats.
  • Companies will define expectations for ChatGPT use. 
  • AI will augment the human element.
  • Organizations will still face the same old threats. 

Below is an edited transcript of their responses. 

1. ChatGPT will lower the barrier to entry for cybercrime 

“ChatGPT lowers the barrier to entry, making technology that traditionally required highly skilled individuals and substantial funding available to anyone with access to the internet. Less-skilled attackers now have the means to generate malicious code in bulk. 

“For example, they can ask the program to write code that will generate text messages to hundreds of individuals, much as a non-criminal marketing team might. Instead of taking the recipient to a safe site, it directs them to a site with a malicious payload. The code in and of itself isn’t malicious, but it can be used to deliver dangerous content. 

“As with any new or emerging technology or application, there are pros and cons. ChatGPT will be used by both good and bad actors, and the cybersecurity community must remain vigilant to the ways it can be exploited.”

— Steve Grobman, senior vice president and chief technology officer, McAfee 

2. Crafting convincing phishing emails will become easier

“Broadly, generative AI is a tool, and like all tools, it can be used for good or nefarious purposes. There have already been a number of use cases cited where threat actors and curious researchers are crafting more convincing phishing emails, generating baseline malicious code and scripts to launch potential attacks, or even just querying better, faster intelligence. 

“But for every misuse case, there will continue to be controls put in place to counter them; that’s the nature of cybersecurity: a never-ending race to outpace and outgun the adversary. 

“As with any tool that can be used for harm, guardrails and protections must be put in place to protect the public from misuse. There’s a very fine ethical line between experimentation and exploitation.” 

— Justin Greis, partner, McKinsey & Company 

3. Organizations will need AI-literate security professionals  

“ChatGPT has already taken the world by storm, but we’re still barely in the infancy stages regarding its impact on the cybersecurity landscape. It signifies the beginning of a new era for AI/ML adoption on both sides of the dividing line, less because of what ChatGPT can do and more because it has forced AI/ML into the public spotlight. 

“On the one hand, ChatGPT could potentially be leveraged to democratize social engineering — giving inexperienced threat actors the newfound capability to generate pretexting scams quickly and easily, deploying sophisticated phishing attacks at scale. 

“On the other hand, when it comes to creating novel attacks or defenses, ChatGPT is much less capable. This isn’t a failure, because we are asking it to do something it was not trained to do. 

“What does this mean for security professionals? Can we safely ignore ChatGPT? No. As security professionals, many of us have already tested ChatGPT to see how well it could perform basic functions. Can it write our pen test proposals? Phishing pretext? How about helping set up attack infrastructure and command-and-control (C2)? So far, there have been mixed results.

“However, the bigger conversation for security is not about ChatGPT. It’s about whether or not we have people in security roles today who understand how to build, use and interpret AI/ML technologies.” 

— David Hoelzer, SANS fellow at the SANS Institute 
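
To ground the kind of hands-on testing Hoelzer describes, here is a minimal sketch of one such experiment: asking a chat model to triage a suspicious email. It assumes the OpenAI Python SDK as it existed in early 2023 (the openai package and its ChatCompletion.create call); the model name, prompt and email text are illustrative placeholders, and later SDK versions expose a different interface.

```python
# Minimal sketch: asking a chat model to triage a suspicious email.
# Assumes the early-2023 OpenAI Python SDK; later versions use a different client interface.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

SUSPICIOUS_EMAIL = """Subject: Urgent: verify your payroll details
Please confirm your banking information at the link below within 24 hours..."""

def triage_email(body: str) -> str:
    """Ask the model whether the email looks like phishing and which indicators it sees."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[
            {"role": "system", "content": "You are assisting a SOC analyst."},
            {"role": "user", "content": f"Does this email look like phishing? "
                                        f"List the indicators you see.\n\n{body}"},
        ],
    )
    return response["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(triage_email(SUSPICIOUS_EMAIL))
```

As Hoelzer notes, results from this kind of probing have been mixed; the output is a starting point for an analyst, not a verdict.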

4. Enterprises will need to validate generative AI output 

“In some cases, when security staff do not validate its outputs, ChatGPT will cause more problems than it solves. For example, it will inevitably miss vulnerabilities and give companies a false sense of security.

“Similarly, it will miss phishing attacks it is told to detect. It will provide incorrect or outdated threat intelligence.

“So we will definitely see cases in 2023 where ChatGPT will be responsible for missing attacks and vulnerabilities that lead to data breaches at the organizations using it.”

— Avivah Litan, Gartner analyst 
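
One way to operationalize Litan's warning is to treat model output as a suggestion until it is corroborated. The sketch below is a hypothetical validation gate, not any vendor's API: the Finding structure, the scanner input and the analyst sign-off flag are placeholders for whatever scanning and review workflow an organization already runs.

```python
# Hypothetical validation gate: an AI-reported finding is accepted only if
# a conventional scanner saw the same issue or an analyst signed off on it.
from dataclasses import dataclass

@dataclass
class Finding:
    identifier: str               # e.g. a CVE ID or internal ticket number
    source: str                   # "llm" or "scanner"
    confirmed_by_analyst: bool = False

def accept_finding(llm_finding: Finding, scanner_findings: list[Finding]) -> bool:
    """Return True only when the LLM-reported finding is corroborated or reviewed."""
    corroborated = any(f.identifier == llm_finding.identifier for f in scanner_findings)
    return corroborated or llm_finding.confirmed_by_analyst

# Example: the model reports an issue, no scanner agrees and nobody has
# reviewed it, so it is held for manual triage rather than acted on.
suggestion = Finding("CVE-2023-0001", source="llm")
print(accept_finding(suggestion, scanner_findings=[]))  # False
```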

5. Generative AI will upscale existing threats 

“Like a lot of new technologies, I don’t think ChatGPT will introduce new threats — I think the biggest change it will make to the security landscape is scaling, accelerating and enhancing existing threats, specifically phishing.

“At a basic level, ChatGPT can provide attackers with grammatically correct phishing emails, something that we don’t always see today.

“While ChatGPT cannot yet browse the internet on its own, it’s only a matter of time before threat actors start combining internet access, automation and AI to create persistent advanced attacks.

“With chatbots, you won’t need a human spammer to write the lures. Instead, they could write a script that says ‘Use internet data to gain familiarity with so-and-so and keep messaging them until they click on a link.’

“Phishing is still one of the top causes of cybersecurity breaches. Having a natural language bot use distributed spear-phishing tools to work at scale on hundreds of users simultaneously will make it even harder for security teams to do their jobs.” 

— Rob Hughes, chief information security officer at RSA 

6. Companies will define expectations for ChatGPT use

“As organizations explore use cases for ChatGPT, security will be top of mind. The following are some steps to help get ahead of the hype in 2023:

  1. Set expectations for how ChatGPT and similar solutions should be used in an enterprise context. Develop acceptable use policies; define a list of all approved solutions, use cases and data that staff can rely on; and require that checks be established to validate the accuracy of responses.
  2. Establish internal processes to review the implications and evolution of regulations regarding the use of cognitive automation solutions, particularly the management of intellectual property, personal data, and inclusion and diversity where appropriate.
  3. Implement technical cyber controls, paying special attention to testing code for operational resilience and scanning for malicious payloads. Other controls include, but are not limited to: multifactor authentication and enabling access only to authorized users; application of data loss-prevention solutions; processes to ensure all code produced by the tool undergoes standard reviews and cannot be directly copied into production environments; and configuration of web filtering to provide alerts when staff accesses non-approved solutions.”

— Matt Miller, principal, cyber security services, KPMG 
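
To illustrate the web-filtering control Miller lists, here is a minimal sketch that checks an outbound destination against a list of approved generative AI solutions and raises an alert otherwise. The domain list and the print-based alert are placeholders; in practice this policy would live in a proxy or secure web gateway rather than in application code.

```python
# Illustrative allowlist check for generative AI destinations.
# The approved domains and the alerting mechanism are placeholders.
from urllib.parse import urlparse

APPROVED_AI_DOMAINS = {"chat.openai.com", "api.openai.com"}  # example policy list

def review_destination(url: str) -> bool:
    """Return True if the destination is approved; otherwise raise an alert."""
    host = urlparse(url).hostname or ""
    if host in APPROVED_AI_DOMAINS:
        return True
    print(f"ALERT: non-approved generative AI destination accessed: {host}")
    return False

review_destination("https://api.openai.com/v1/chat/completions")  # approved
review_destination("https://unvetted-ai.example/api")             # triggers an alert
```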

7. AI will augment the human element 

“Like most new technologies, ChatGPT will be a resource for adversaries and defenders alike, with adversarial use cases including reconnaissance, and defenders seeking best practices as well as threat intelligence markets. And as with other ChatGPT use cases, mileage will vary as users test the fidelity of the responses while the system is trained on an already large and continually growing corpus of data.

“While use cases will expand on both sides of the equation, sharing threat intel for threat hunting and updating rules and defense models amongst members in a cohort is promising. ChatGPT is another example, however, of AI augmenting, not replacing, the human element required to apply context in any type of threat investigation.”

— Doug Cahill, senior vice president, analyst services and senior analyst at ESG 

8. Organizations will still face the same old threats  

“While ChatGPT is a powerful language generation model, this technology is not a standalone tool and cannot operate independently. It relies on user input and is limited by the data it has been trained on. 

“For example, phishing text generated by the model still needs to be sent from an email account and point to a website. These are both traditional indicators that can be analyzed to aid detection.

“Although ChatGPT has the capability to write exploits and payloads, tests have revealed that the features do not work as well as initially suggested. The platform can also write malware; while such code is already available online and can be found on various forums, ChatGPT makes it more accessible to the masses. 

“However, the variation is still limited, making it simple to detect such malware with behavior-based detection and other methods. ChatGPT is not designed to specifically target or exploit vulnerabilities; however, it may increase the frequency of automated or impersonated messages. It lowers the entry bar for cybercriminals, but it won’t introduce completely new attack methods for already-established groups.” 

— Candid Wuest, VP of global research at Acronis 
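
Wuest's point that AI-written lures still carry conventional indicators can be checked with ordinary tooling. The sketch below pulls the sender domain and the linked hosts out of a raw message and compares them against a blocklist; the blocklist and sample message are placeholders for whatever threat-intelligence feed and mail pipeline a team already uses.

```python
# Extracting traditional indicators (sender domain, linked hosts) from an email,
# regardless of whether a human or a model wrote the body text.
import re
from email import message_from_string

BLOCKLISTED_DOMAINS = {"bad-payload.example"}  # placeholder threat-intel feed

RAW_EMAIL = """From: payroll@bad-payload.example
Subject: Action required

Please verify your account at http://bad-payload.example/login
"""

def extract_indicators(raw: str) -> tuple[str, list[str]]:
    """Return the sender's domain and the hosts of any linked URLs."""
    msg = message_from_string(raw)
    sender_domain = msg.get("From", "").rsplit("@", 1)[-1].strip().lower()
    url_hosts = re.findall(r"https?://([^/\s]+)", msg.get_payload())
    return sender_domain, url_hosts

def is_suspicious(raw: str) -> bool:
    sender_domain, url_hosts = extract_indicators(raw)
    return sender_domain in BLOCKLISTED_DOMAINS or any(
        host.lower() in BLOCKLISTED_DOMAINS for host in url_hosts
    )

print(is_suspicious(RAW_EMAIL))  # True: both indicators match the blocklist
```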


