Thursday, June 13, 2024

OpenAI announces bug bounty program to address AI security risks



OpenAI, a leading artificial intelligence (AI) research lab, today announced the launch of a bug bounty program to help address growing cybersecurity risks posed by powerful language models like its own ChatGPT.

The program — run in partnership with the crowdsourced cybersecurity company Bugcrowd — invites independent researchers to report vulnerabilities in OpenAI’s systems in exchange for financial rewards ranging from $200 to $20,000, depending on severity. OpenAI said the program is part of its “commitment to developing safe and advanced AI.”

Concerns have mounted in recent months over vulnerabilities in AI systems that can generate synthetic text, images and other media. Researchers found a 135% increase in AI-enabled social engineering attacks from January to February, coinciding with the adoption of ChatGPT, according to AI cybersecurity firm Darktrace.

While OpenAI’s announcement was welcomed by some experts, others said a bug bounty program is unlikely to fully address the wide range of cybersecurity risks posed by increasingly sophisticated AI technologies.



The program’s scope is limited to vulnerabilities that could directly impact OpenAI’s systems and partners. It does not appear to address broader concerns over malicious uses of such technologies, like impersonation, synthetic media or automated hacking tools. OpenAI did not immediately respond to a request for comment.

A bug bounty program with limited scope

The bug bounty program comes amid a spate of security concerns, with GPT-4 jailbreaks emerging that enable users to develop instructions on how to hack computers, and researchers discovering workarounds that let “non-technical” users create malware and phishing emails.

It also comes after a security researcher known as Rez0 allegedly used an exploit to hack ChatGPT’s API and discover over 80 secret plugins.

Given these controversies, launching a bug bounty platform gives OpenAI an opportunity to address vulnerabilities in its product ecosystem, while positioning itself as an organization acting in good faith to address the security risks introduced by generative AI.

Unfortunately, OpenAI’s bug bounty program is very limited in the scope of threats it addresses. For instance, the program’s official page notes: “Issues related to the content of model prompts and responses are strictly out of scope, and will not be rewarded unless they have an additional directly verifiable security impact on an in-scope service.”

Examples of safety issues considered out of scope include jailbreaks and safety bypasses, getting the model to “say bad things,” getting the model to write malicious code, or getting the model to tell you how to do bad things.

In this sense, OpenAI’s bug bounty program may be good for helping the organization improve its own security posture, but it does little to address the security risks introduced by generative AI and GPT-4 for society at large.

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings.

