In 2024, CrowdStrike surveyed 1,022 security researchers and practitioners from the US, APAC, EMEA, and other regions. The findings revealed that cyber professionals are deeply concerned about the challenges associated with AI. While 64% of respondents have either purchased generative AI tools for work or are researching them, the majority remain cautious.
What are security researchers looking for from generative AI?
According to the report:
- The top motivation for adopting generative AI isn't addressing a skills shortage or fulfilling a leadership mandate; it's improving the ability to respond to and defend against cyberattacks.
- General-purpose AI doesn't necessarily appeal to cybersecurity professionals. Instead, they want generative AI paired with security expertise.
- 40% of respondents said the rewards and risks of generative AI are "comparable." Meanwhile, 39% said the rewards outweigh the risks, and 26% said the risks outweigh the rewards.
“Security teams want to deploy GenAI as part of a platform to get more value from existing tools, elevate the analyst experience, accelerate onboarding and eliminate the complexity of integrating new point solutions,” the report said.
Measuring ROI has been a persistent challenge when adopting generative AI products. CrowdStrike found quantifying ROI to be the top economic concern among its respondents, followed by the cost of licensing AI tools and unpredictable or confusing pricing models.
CrowdStrike divides the expected sources of AI ROI into four categories, in order of importance:
- Cost optimization (31%) from platform consolidation and more effective use of security tools.
- Reduction in security incidents (30%).
- Less time spent managing security tools (26%).
- Shorter training cycles and lower associated costs (13%).
Adding AI to an existing platform, rather than buying a freestanding AI product, can “realize additional savings associated with broader platform consolidation efforts,” CrowdStrike said.
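To make these categories concrete, here is a minimal sketch of how a team might tally estimated annual savings against tooling costs. Every figure below is a hypothetical assumption for illustration, not CrowdStrike data.

```python
# Hypothetical ROI tally for a generative AI security tool.
# All dollar figures are illustrative assumptions, not CrowdStrike data.

estimated_annual_savings = {
    "cost_optimization": 120_000,    # platform consolidation, fewer point tools
    "fewer_incidents": 95_000,       # reduced incident response spend
    "tool_management_time": 60_000,  # analyst hours reclaimed from tool upkeep
    "faster_training": 25_000,       # shorter onboarding cycles
}

annual_cost = 150_000  # licensing plus integration, also hypothetical

savings = sum(estimated_annual_savings.values())
roi = (savings - annual_cost) / annual_cost

print(f"Estimated savings: ${savings:,}")
print(f"Annual cost:       ${annual_cost:,}")
print(f"ROI:               {roi:.0%}")
```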
SEE: In late November, a ransomware group claimed responsibility for a cyberattack that disrupted operations at Starbucks and other organizations.
Will generative AI introduce more security problems than it solves?
Conversely, generative AI itself needs to be protected. CrowdStrike's survey found that security professionals were most concerned about data exposure to the LLMs behind AI products and about attacks launched against generative AI tools.
Other concerns include:
- Lack of guardrails or controls in generative AI tools (a toy guardrail is sketched after this list).
- AI hallucinations.
- Insufficient public policy regulating generative AI use.
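As a toy illustration of the guardrails concern above, the sketch below screens prompts and model responses against a small deny-list before they reach users. The patterns and policy are assumptions for illustration; real guardrail products are considerably more sophisticated.

```python
import re

# Toy guardrail: block prompts/responses that match sensitive patterns.
# The patterns and policy here are illustrative assumptions only.
DENY_PATTERNS = [
    re.compile(r"\b(?:api[_-]?key|password|secret)\s*[:=]\s*\S+", re.I),
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # US SSN-like strings
]

def violates_policy(text: str) -> bool:
    return any(p.search(text) for p in DENY_PATTERNS)

def guarded_reply(prompt: str, model_call) -> str:
    if violates_policy(prompt):
        return "[blocked: prompt contains sensitive material]"
    reply = model_call(prompt)
    if violates_policy(reply):
        return "[blocked: response withheld by output filter]"
    return reply

# Stubbed model calls for demonstration.
print(guarded_reply("summarize today's alerts", lambda p: "3 alerts, none critical"))
print(guarded_reply("my password: hunter2", lambda p: ""))
```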
Almost all (9 in 10) respondents said their organizations have implemented new security policies or are developing policies to govern generative AI within the next year.
How organizations can leverage AI to protect against cyber threats
Generative AI can be used for brainstorming, research, or analysis with the understanding that its output should be double-checked. Generative AI can pull data from different sources in different formats into a single window, reducing the time it takes to investigate an incident. Many automated security platforms offer generative AI assistants, such as Microsoft Security Copilot.
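As a rough sketch of that consolidation step, the code below normalizes events from two invented feeds (JSON from an EDR tool, CSV from a firewall) into one timeline that could be handed to an assistant in a single prompt. All field names and sample records are assumptions.

```python
import csv
import io
import json

# Hypothetical raw events in two different formats.
edr_json = '[{"ts": "2024-11-30T10:02:00Z", "host": "srv-01", "event": "suspicious process"}]'
fw_csv = "ts,src_ip,action\n2024-11-30T10:03:00Z,203.0.113.7,blocked\n"

events = []
for rec in json.loads(edr_json):
    events.append({"ts": rec["ts"], "source": "EDR", "detail": f'{rec["host"]}: {rec["event"]}'})
for row in csv.DictReader(io.StringIO(fw_csv)):
    events.append({"ts": row["ts"], "source": "firewall", "detail": f'{row["src_ip"]} {row["action"]}'})

# ISO-8601 timestamps sort correctly as strings.
events.sort(key=lambda e: e["ts"])

# One consolidated view, suitable for a single assistant prompt.
prompt = "Summarize this incident timeline:\n" + "\n".join(
    f'{e["ts"]} [{e["source"]}] {e["detail"]}' for e in events
)
print(prompt)
```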
GenAI can help protect against cyber threats through:
- Threat detection and analysis.
- Automated incident response.
- Phishing detection (see the sketch after this list).
- Improved security analytics.
- Synthetic data for training.
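To illustrate the phishing detection item, here is a minimal sketch that builds a classification prompt for an LLM and includes a crude keyword heuristic as a fallback. The keywords and scoring are illustrative assumptions, not a production detector.

```python
# Toy phishing triage: build an LLM classification prompt, plus a crude
# keyword fallback. Keywords and scoring are illustrative assumptions.

SUSPICIOUS = ("verify your account", "urgent", "password", "click here", "wire transfer")

def llm_prompt(subject: str, body: str) -> str:
    return (
        "Classify this email as PHISHING or LEGITIMATE and explain why.\n"
        f"Subject: {subject}\nBody: {body}"
    )

def keyword_score(subject: str, body: str) -> float:
    text = f"{subject} {body}".lower()
    hits = sum(k in text for k in SUSPICIOUS)
    return hits / len(SUSPICIOUS)

subject, body = "Urgent: verify your account", "Click here to avoid suspension."
print(llm_prompt(subject, body))
print("heuristic score:", keyword_score(subject, body))  # 0.6 for this sample
```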
However, organizations should consider security and privacy controls as part of any generative AI purchase. Doing so can protect sensitive data, ensure compliance with regulations, and mitigate risks such as data breaches or misuse. Without adequate safeguards, AI tools can expose vulnerabilities, produce harmful output, or violate privacy laws, causing financial, legal, and reputational harm.
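One concrete control along these lines is redacting sensitive fields before any text leaves the organization for a third-party model. The sketch below uses simple regex redaction; the patterns are assumptions, and a real deployment would rely on a vetted DLP library.

```python
import re

# Redact common sensitive patterns before sending text to an external model.
# Patterns are illustrative assumptions, not a complete privacy solution.
REDACTIONS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD]"),
]

def redact(text: str) -> str:
    for pattern, label in REDACTIONS:
        text = pattern.sub(label, text)
    return text

ticket = "User jane.doe@example.com reported card 4111 1111 1111 1111 declined."
print(redact(ticket))
# -> "User [EMAIL] reported card [CARD] declined."
```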