I want to be very clear on the messaging that is coming from OpenAI, and the mendacious nature of it. This is an example of who they really are, and I want to make sure everyone sees it for what it is. Although there is a lot we don’t know about the contract they signed with DoW [shorthand for the Department of Defense] (and that maybe they don’t even know as well — it could be highly unclear), we do know the following:
Sam [Altman]’s description and the DoW description give the strong impression (although we would have to see the actual contract to be certain) that the contract works like this: the model is made available without any legal restrictions (“all lawful use”), but there is a “safety layer”, which I think amounts to model refusals, that prevents the model from completing certain tasks or engaging in certain applications.
“Safety layer” could also mean something that partners such as Palantir [Anthropic’s business partner for serving U.S. agency customers] tried to offer us during