Although they are valuable tools for experienced attackers and researchers, LLMs are not yet capable of creating exploits from a prompt, researchers found in a test of 50 AI models, some of which are getting better ...
AI models can be made to pursue malicious goals through specialized training, and teaching them about reward hacking can lead to other bad behaviors. A deeper problem may be the issue of AI personas.