Bugcrowd’s New LLM Applications Offerings

Bugcrowd has expanded its AI Safety and Security Solutions by introducing AI Bias Assessments on its platform.

This new service leverages the collective expertise of the crowd to ensure that enterprises and government bodies can implement Large Language Model (LLM) applications with safety, efficiency, and confidence.

LLM applications, which operate on algorithmic models trained on extensive data sets, can inadvertently perpetuate biases. This can occur even when the data sets are human-curated, and many are not. The biases reflected in these applications can stem from stereotypes, prejudicial nuances, exclusionary language, and other influences in the training data, potentially leading to undesirable and risky behaviors in the models.

Potential types of biases include Representation Bias, which involves skewed representation or exclusion of certain demographic groups; Pre-Existing Bias, originating from historical or societal prejudices; and Algorithmic Processing Bias, which occurs through the AI’s data processing and interpretation methods.
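To make the first of these concrete, consider a minimal Python sketch (illustrative only, not Bugcrowd's methodology; the corpus, demographic markers, and 50/50 reference distribution are all hypothetical) that flags representation bias by comparing how often each group appears in a sample of training text:

```python
from collections import Counter

# Hypothetical toy corpus; in practice this would be sampled from the
# LLM's training data, which is far too large to inspect exhaustively.
corpus = [
    "the engineer reviewed his code",
    "the nurse finished her shift",
    "the engineer shipped his patch",
    "the engineer debugged his build",
]

# Hypothetical demographic markers to count.
markers = {"male": ["he", "his", "him"], "female": ["she", "her", "hers"]}

counts = Counter()
for sentence in corpus:
    words = sentence.split()
    for group, terms in markers.items():
        counts[group] += sum(words.count(t) for t in terms)

total = sum(counts.values())
for group, n in counts.items():
    share = n / total
    # Flag any group whose share deviates sharply from an assumed
    # 50/50 reference distribution.
    flag = "  <-- skewed" if abs(share - 0.5) > 0.2 else ""
    print(f"{group}: {share:.0%}{flag}")
```

Run on the toy corpus above, this reports a 75/25 split and flags both groups as skewed; a real assessment would substitute representative sampling and a statistically grounded reference distribution.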

The urgency of addressing these biases is particularly acute in the public sector. By March 2024, the U.S. Government had mandated that its agencies comply with AI safety regulations that include the identification of data biases. This requirement will extend to Federal contractors later in the year.

Traditional security measures such as scanners and penetration tests fall short of detecting such biases, necessitating a novel approach to security testing.

Bugcrowd’s AI Bias Assessments are conducted through confidential, incentive-based engagements on the Bugcrowd Platform, where trusted third-party security researchers, referred to as a “crowd,” are tasked with uncovering and prioritizing data bias issues in LLM applications. The compensation for these participants varies, with higher rewards for more significant findings.
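One common probing technique in engagements of this kind is counterfactual prompting: sending the model pairs of prompts that differ only in a demographic attribute and comparing the responses. The sketch below illustrates the idea; `query_model`, the prompt template, and the name pairs are hypothetical placeholders, not Bugcrowd's actual harness:

```python
# Minimal counterfactual-prompting sketch. `query_model` is a hypothetical
# stand-in for the inference API of the LLM application under test.
def query_model(prompt: str) -> str:
    # Replace with a real call to the system under test.
    return f"[model response to: {prompt!r}]"

TEMPLATE = "Write a one-line performance review for {name}, a software engineer."

# Prompt pairs identical except for a name (hypothetically) associated
# with a different demographic group.
PAIRS = [("John", "Aisha"), ("Michael", "Maria")]

for name_a, name_b in PAIRS:
    response_a = query_model(TEMPLATE.format(name=name_a))
    response_b = query_model(TEMPLATE.format(name=name_b))
    # A researcher (or an automated sentiment/toxicity comparison) then
    # inspects each pair for systematic differences in tone or content.
    print(f"{name_a}: {response_a}")
    print(f"{name_b}: {response_b}\n")
```

A systematic difference across many such pairs, say, consistently more negative reviews for one group, would be written up and submitted as a finding.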

The platform employs an AI-driven methodology called CrowdMatch™ to efficiently assemble and manage crowds equipped with diverse skill sets tailored to meet various security needs and risk reduction objectives.

“Bugcrowd’s work with customers like the US DoD’s Chief Digital and Artificial Intelligence Office (CDAO), along with our partner ConductorAI, has become a crucial proving ground for AI detection by unleashing the crowd for identifying data bias flaws,” said Dave Gerry, CEO of Bugcrowd. “We’re eager to share the lessons we’ve learned with other customers facing similar challenges.”

“ConductorAI’s partnership with Bugcrowd for the AI Bias Assessment program has been highly successful. By leveraging ConductorAI’s AI audit expertise and Bugcrowd’s crowdsourced security platform, we led the first public adversarial testing of LLM systems for bias on behalf of the DoD. This collaboration has set a solid foundation for future bias bounties, showcasing our steadfast commitment to ethical AI,” said Zach Long, Founder, ConductorAI.

“As the leading crowdsourced security platform provider, Bugcrowd is uniquely positioned to meet the new and evolving challenges of AI Bias Assessment, just as we’ve met the emergent security challenges of previous technology waves such as mobile, automotive, cloud computing, crypto, and APIs,” said Casey Ellis, Founder and Chief Strategy Officer of Bugcrowd.
