
Eight emerging areas of opportunity for AI in security

Cyberattackers are inventing new tradecraft that tilts the AI war in their favor faster than anyone predicted, forcing every cybersecurity vendor to double down on improving its arsenal quickly.

But what if that isn’t enough, given how quickly every business is adopting AI and how urgently new generative AI-based security technologies are needed? That question is core to the thesis of how Menlo Ventures chose to evaluate eight areas where gen AI is having an outsized impact.

Getting ahead of emerging threats now 

VentureBeat recently sat down (virtually) with Menlo Ventures’ Rama Sekhar and Feyza Haskaraman. Sekhar is Menlo Ventures’ new partner, focusing on cybersecurity, AI and cloud infrastructure. Haskaraman is a principal focusing on cybersecurity, SaaS, supply chain and automation. They have collaborated on a series of blog posts that illustrate why closing the security-for-AI gaps is crucial for generative AI to reach scale across organizations.


Throughout the interview, Sekhar and Haskaraman explained that for AI to reach its full potential across enterprises, it requires an entirely new tech stack, one with security designed in from the start, beginning with software supply chains and model development. In choosing the eight factors below, the focus is on how best to secure large language models (LLMs) and other models while reducing risk, increasing compliance and scaling model and LLM development.

Predicting where gen AI will have the greatest impact 

The eight factors Sekhar and Haskaraman predict will have the most outsized impact include the following:

Vendor risk management and compliance automation. Cybersecurity now involves securing the entire third-party application stack as companies communicate, collaborate and integrate with third-party vendors and customers, according to Menlo Ventures’ prediction of how risk management will evolve. Sekhar and Haskaraman say that many of today’s vendor security processes are laborious and error-prone, making them ideal candidates to automate and improve with gen AI. Menlo Ventures cites Dialect, an AI assistant that auto-fills security questionnaires and other forms based on a company’s data for fast, accurate responses, as an example of a leading vendor in this space.
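The retrieval step behind questionnaire auto-fill can be illustrated with a minimal sketch: match each new question against previously answered ones by word overlap and reuse the closest prior answer. This is a hypothetical toy, not Dialect’s actual method — production assistants typically use embeddings plus an LLM to draft the response.

```python
def best_match(question, answered):
    """Pick the previously answered question with the highest word overlap.

    A crude stand-in for the retrieval step behind questionnaire
    auto-fill. `answered` is a list of (question, answer) pairs.
    """
    q_words = set(question.lower().split())

    def score(prev_q):
        return len(q_words & set(prev_q.lower().split()))

    prev = max(answered, key=lambda qa: score(qa[0]))
    return prev if score(prev[0]) > 0 else None

# Hypothetical knowledge base of prior questionnaire responses
kb = [
    ("Do you encrypt data at rest?", "Yes, AES-256 across all stores."),
    ("Do you perform annual penetration tests?", "Yes, by a third party."),
]
print(best_match("Is customer data encrypted at rest?", kb))
```

With no overlapping words at all, the function returns `None` rather than guessing, which is the safe default for a compliance document.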

Security training. Security training is often criticized for its lack of results, with breaches still happening at companies that invest heavily in this area. Menlo Ventures believes that gen AI will enable more tailored, engaging and dynamic employee training content that better simulates real-world scenarios and risks. Immersive Labs, for example, uses generative AI to simulate attacks and incidents for security teams. Riot’s security co-pilot leads employees through interactive security awareness training in Slack or online. Menlo Ventures believes these types of technologies will increase security training’s effectiveness.

Penetration testing (“pen testing”). With gen AI being used for attacks, penetration testing must adapt and flex in response, simulating more attacks in rapid succession and automating them with AI. Menlo Ventures believes gen AI can enhance many pen-testing steps, including searching public and private databases for criminal characteristics, scanning customers’ IT environments, exploring potential exploits, suggesting remediation steps and summarizing findings in auto-generated reports.

Anomaly detection and prevention. Sekhar and Haskaraman believe gen AI will also improve anomaly detection and prevention by automatically monitoring event logs and telemetry data for anomalous activity that could signal intrusion attempts. Gen AI also shows potential for scaling across vulnerable endpoints, networks, APIs and data repositories, adding further security across broad networks.
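The underlying idea — baselining event-log activity and flagging statistical outliers — can be sketched in a few lines. This is a hypothetical illustration using a simple z-score over per-source event counts, not any vendor’s detection logic:

```python
from collections import Counter
from statistics import mean, stdev

def anomalous_sources(events, threshold=3.0):
    """Flag source IPs whose event counts deviate strongly from the mean.

    `events` is a list of (source_ip, action) tuples, e.g. parsed from
    an auth log. A z-score above `threshold` marks the source anomalous.
    """
    counts = Counter(src for src, _ in events)
    values = list(counts.values())
    if len(values) < 2:
        return []
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []
    return [src for src, n in counts.items() if (n - mu) / sigma > threshold]

# Example: one noisy source amid mostly quiet ones
log = [("10.0.0.%d" % i, "login_ok") for i in range(1, 20)]
log += [("10.0.0.99", "login_fail")] * 50
print(anomalous_sources(log))  # ['10.0.0.99']
```

Real systems model far richer features (time of day, sequence of actions, peer groups), but the baseline-then-deviate pattern is the same.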

Synthetic content detection and verification. Cyberattackers use gen AI to create convincing, high-fidelity digital identities that can bypass ID verification software, document verification software and manual reviews. Cybercrime gangs and nation-state actors use stolen data to create synthetic, fraudulent identities. The FTC estimates that a single fraud event costs over $15,000. Wakefield and Deduce found that 76% of companies have extended credit to synthetic customers, and AI-generated identity fraud has increased 17% in the past two years. 

Next-gen verification is helping businesses combat synthetic content. Deduce created a multi-context, activity-backed identity graph of 840 million U.S. profiles to baseline authentic behavior and identify malicious actors. DeepTrust developed API-accessible models to detect voice clones, verify articles and transcripts and identify synthetic images and videos.

Code review. The “shift left” approach to software development prioritizes testing earlier in the cycle to improve software quality, security and time to market. To “shift left” effectively, security needs to be core to the CI/CD process. Too many automated security scans and static application security testing (SAST) tools produce noisy or failed results that burn security operations center (SOC) analysts’ time. SOC analysts also tell VentureBeat that custom rule writing and validation are time-consuming and challenging to maintain. Menlo Ventures says startups are making progress in this area; examples include Semgrep’s customizable rules, which help security engineers and developers find vulnerabilities and suggest organization-specific fixes.
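Custom rules like Semgrep’s are written in a declarative pattern language; as a hedged stand-in for the idea, here is a hand-written toy check that flags calls to Python’s `eval`, a classic SAST finding. Real tools express this as a pattern rather than a visitor:

```python
import ast

def find_eval_calls(source: str):
    """Return (line, snippet) for each call to eval() in `source`.

    A toy stand-in for a custom SAST rule: walk the syntax tree and
    report every direct call to the built-in eval().
    """
    findings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "eval"):
            findings.append((node.lineno, ast.unparse(node)))
    return findings

code = "x = eval(user_input)\ny = len(user_input)\n"
print(find_eval_calls(code))  # [(1, 'eval(user_input)')]
```

An organization-specific rule would follow the same shape — match a pattern, report a location — but target internal APIs and coding standards.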

Dependency management. According to the Synopsys 2023 Open Source Security and Risk Analysis (OSSRA) report, 96% of codebases contained open-source code, and projects often involve hundreds of third-party dependencies. Sekhar and Haskaraman told VentureBeat that this is an area where they expect to see significant improvements thanks to gen AI. They pointed out that external dependencies, which are harder to control than internal code, need better traceability and patch management. An example of a vendor helping to solve these challenges is Socket, which proactively detects and blocks over 70 supply chain risk signals in open-source code, detects suspicious package updates and builds a security feedback loop into the dev process to secure supply chains.
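One of the simpler supply-chain risk signals — dependencies that are not pinned to an exact version, so a compromised upstream release would be pulled in automatically — can be checked mechanically. This is a hypothetical toy, not Socket’s detection logic:

```python
import re

def unpinned_requirements(requirements_text: str):
    """Flag requirement lines that are not pinned to an exact version.

    Scans pip-style requirements text; lines without an `==` pin are
    one simple supply-chain risk signal.
    """
    flagged = []
    for line in requirements_text.splitlines():
        line = line.split("#")[0].strip()  # drop comments and blanks
        if not line:
            continue
        if not re.search(r"==\s*[\w.]+", line):
            flagged.append(line)
    return flagged

reqs = """
requests==2.31.0
flask>=2.0        # floating minimum version
pyyaml
"""
print(unpinned_requirements(reqs))  # ['flask>=2.0', 'pyyaml']
```

Commercial tools layer dozens of such signals — install scripts, maintainer changes, typosquatting — on top of this kind of static check.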

Defense automation and SOAR capabilities. Gen AI has the potential to streamline much of the work going on in SOCs, starting with improving the fidelity and accuracy of alerts. SOCs generate more false alarms than analysts can follow up on, and the hours lost could otherwise go toward more complex projects. Add to that the risk that false negatives can let a data breach go unnoticed, and gen AI can deliver significant value in a SOC. The first goal needs to be reducing alert fatigue so analysts can get more high-value work done.
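Reducing alert fatigue often starts with deduplication and severity filtering before anything reaches an analyst. A minimal sketch, assuming a hypothetical alert format of dicts with `rule`, `host` and `severity` fields:

```python
from collections import defaultdict

SEVERITY = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def triage(alerts, min_severity="medium"):
    """Collapse duplicate alerts and drop those below a severity floor.

    Duplicates (same rule on the same host) are merged with a count,
    so analysts see one enriched alert instead of a flood of repeats.
    """
    floor = SEVERITY[min_severity]
    merged = defaultdict(lambda: {"count": 0, "severity": "low"})
    for a in alerts:
        entry = merged[(a["rule"], a["host"])]
        entry["count"] += 1
        if SEVERITY[a["severity"]] > SEVERITY[entry["severity"]]:
            entry["severity"] = a["severity"]
    return [
        {"rule": r, "host": h, **v}
        for (r, h), v in merged.items()
        if SEVERITY[v["severity"]] >= floor
    ]

noise = [{"rule": "port_scan", "host": "web1", "severity": "low"}] * 40
signal = [{"rule": "priv_esc", "host": "db1", "severity": "high"}]
print(triage(noise + signal))  # only the high-severity alert survives
```

Gen AI enters a stage later, summarizing the surviving alerts and suggesting response playbooks; the mechanical filtering above is what keeps the queue short enough for that to matter.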

Planning for a new threatscape now 

Sekhar and Haskaraman believe that for gen AI to see enterprise-level growth, the security challenges every organization faces in committing to an AI strategy need to be solved first. Their eight areas of impact show how far behind many organizations are in being ready for an enterprise-wide AI strategy. Gen AI can remove the drudgery and time-consuming work that keeps SOC analysts from delving into more complex projects. The eight areas of impact are a start, and more is needed for organizations to better protect themselves against the onslaught of gen AI-based attacks.

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings.
