Google Signs Pentagon Classified AI Contract with Loose Safety Clauses; Legal Experts Question Enforceability
The contract includes a safety provision stating that Google AI "is not intended for, and should not be used for" large-scale domestic surveillance or autonomous weapons lacking human control. However, Charlie Bullock, a senior researcher at the Law and AI Institute, said the "should not be used for" language carries no binding legal force: it merely expresses the parties' view that such use is undesirable, and violating it would not constitute a breach of contract. The agreement also stipulates: "This agreement does not grant the right to control or veto decisions on lawful government actions."
Compared with OpenAI's February agreement with the Pentagon, Google's terms are notably more permissive. OpenAI retained "full discretion over safety systems," while Google agreed to adjust AI safety settings and filters at the government's request. A Google spokesperson noted that these filters were designed for consumer products and that the company routinely makes such adjustments for enterprise clients. Google is the third company to sign a classified AI agreement with the Pentagon, after xAI and OpenAI. Anthropic, which refused to relax its safety restrictions, has been flagged by the Pentagon as a "supply chain risk" and is currently involved in legal proceedings.