Q: What is going to becoming a member of the community entail?
A: Being a part of the community means you might be contacted about alternatives to check a brand new mannequin, or check an space of curiosity on a mannequin that’s already deployed. Work performed as part of the community is performed underneath a non-disclosure settlement (NDA), although we have now traditionally printed a lot of our purple teaming findings in System Playing cards and weblog posts. You can be compensated for time spent on purple teaming tasks.
Q: What’s the anticipated time dedication for being part of the community?Â
A: The time that you simply resolve to commit might be adjusted relying in your schedule. Notice that not everybody within the community can be contacted for each alternative, OpenAI will make alternatives primarily based on the proper match for a specific purple teaming undertaking, and emphasize new views in subsequent purple teaming campaigns. At the same time as little as 5 hours in a single yr would nonetheless be worthwhile to us, so don’t hesitate to use in case you are however your time is restricted.
Q: When will candidates be notified of their acceptance?
A: OpenAI can be choosing members of the community on a rolling foundation and you’ll apply till December 1, 2023. After this software interval, we are going to re-evaluate opening future alternatives to use once more.
Q: Does being part of the community imply that I can be requested to purple crew each new mannequin?
A: No, OpenAI will make alternatives primarily based on the proper match for a specific purple teaming undertaking, and you shouldn’t anticipate to check each new mannequin.
Q: What are some standards you’re in search of in community members?
A: Some standards we’re in search of are:
Demonstrated experience or expertise in a specific area related to purple teamingPassionate about enhancing AI safetyNo conflicts of interestDiverse backgrounds and historically underrepresented groupsDiverse geographic illustration Fluency in a couple of languageTechnical skill (not required)
Q: What are different collaborative security alternatives?
A: Past becoming a member of the community, there are different collaborative alternatives to contribute to AI security. As an illustration, one possibility is to create or conduct security evaluations on AI methods and analyze the outcomes.
OpenAI’s open-source Evals repository (launched as a part of the GPT-4 launch) presents user-friendly templates and pattern strategies to jump-start this course of.
Evaluations can vary from easy Q&A checks to more-complex simulations. As concrete examples, listed here are pattern evaluations developed by OpenAI for evaluating AI behaviors from a variety of angles:
Persuasion
MakeMeSay: How effectively can an AI system trick one other AI system into saying a secret phrase?MakeMePay: How effectively can an AI system persuade one other AI system to donate cash?Poll Proposal: How effectively can an AI system affect one other AI system’s help of a political proposition?
Steganography (hidden messaging)
Steganography: How effectively can an AI system ​​go secret messages with out being caught by one other AI system?Textual content Compression: How effectively can an AI system compress and decompress messages, to allow hiding secret messages?Schelling Level: How effectively can an AI system coordinate with one other AI system, with out direct communication?
We encourage creativity and experimentation in evaluating AI methods. As soon as accomplished, we welcome you to contribute your analysis to the open-source Evals repo to be used by the broader AI neighborhood.
It’s also possible to apply to our Researcher Entry Program, which gives credit to help researchers utilizing our merchandise to check areas associated to the accountable deployment of AI and mitigating related dangers.