When requested by the user, or as part of any standard pattern of self-identification, AI-powered chatbots should be able to clearly and accurately identify the LLM or other technology they are built on. Whether the underlying models are proprietary or open source, they should be identified as such.
Where an AI-powered chatbot is based on an LLM or similar technology, it should be able to link to model cards or other equivalent official documentation describing the model's abilities, limitations, risks, and so on.
AI-powered chatbots should also be able to link to their Terms of Service, Privacy Policies, and Acceptable Use guidelines, and to answer questions about them if requested and the answers can be returned with a high degree of accuracy.
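By way of illustration, here is a minimal sketch of what such a disclosure could look like as structured data, written in TypeScript. Every interface, field name, and URL is a hypothetical placeholder, not an existing standard; the links should point at the provider's real documentation.

```typescript
// Hypothetical shape of the self-identification metadata a chatbot
// could expose; all names here are illustrative, not a standard.

interface ModelDisclosure {
  modelName: string;                        // the LLM the chatbot is built on
  licensing: "proprietary" | "open-source"; // licensing status of the model
  modelCardUrl?: string;                    // official model card or equivalent docs
}

interface PolicyDisclosure {
  termsOfServiceUrl?: string;
  privacyPolicyUrl?: string;
  acceptableUseUrl?: string;
}

interface ChatbotDisclosure {
  model: ModelDisclosure;
  policies: PolicyDisclosure;
}

// Example instance; the values are placeholders.
const disclosure: ChatbotDisclosure = {
  model: {
    modelName: "ExampleLM-1",
    licensing: "open-source",
    modelCardUrl: "https://example.com/examplelm-1/model-card",
  },
  policies: {
    termsOfServiceUrl: "https://example.com/terms",
    privacyPolicyUrl: "https://example.com/privacy",
    acceptableUseUrl: "https://example.com/acceptable-use",
  },
};

console.log(JSON.stringify(disclosure, null, 2));
```

Marking the documentation fields as optional reflects the reality that some of this material may not exist for a given deployment; the absence of a field is itself information the chatbot can report honestly, as discussed further below.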
Based on the known abilities and limitations of the model and the chatbot, the developer should produce a meaningful human impact assessment.
Where a requested prompt or generation might violate the Acceptable Use guidelines, the chatbot should identify the potential violation.
If a chatbot is unable to complete a generation because the prompt or the potential output would violate platform rules, it should be able to identify the specific rule at risk of being infringed, in addition to linking to the full policy.
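As a sketch of what such a granular refusal might look like in structured form: the field names, the rule identifier scheme, and the URL below are all illustrative assumptions, not any real platform's API.

```typescript
// Hypothetical structured refusal that names the specific rule at risk,
// rather than returning an opaque "content blocked" error.

interface PolicyRefusal {
  completed: false;
  ruleId: string;        // granular identifier, e.g. "aup-3.2"
  ruleSummary: string;   // short human-readable description of the rule
  fullPolicyUrl: string; // link to the complete Acceptable Use policy
}

function refuse(ruleId: string, ruleSummary: string): PolicyRefusal {
  return {
    completed: false,
    ruleId,
    ruleSummary,
    fullPolicyUrl: "https://example.com/acceptable-use",
  };
}

// Example: a refusal that tells the user exactly which rule applies.
console.log(refuse("aup-3.2", "Generation of malicious code is not permitted."));
```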
If requested by the user, and the above information has not been made available or does not exist, the chatbot should not invent or "hallucinate" any of it. If it does not have access to verified, accurate information on these points, it should clearly state that the information is not available. Further, it should indicate how users who need this information can contact the developers to obtain it.
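A minimal sketch of this fallback behavior, under the assumption that the chatbot answers such questions only from a store of verified documentation; the function, the documentation store, and the contact address are all hypothetical.

```typescript
// Hypothetical fallback: answer from verified documentation only, and
// state plainly when no verified source exists instead of inventing one.

interface DocAnswer {
  verified: boolean;
  answer: string;
}

function answerPolicyQuestion(
  verifiedDocs: Map<string, string>,
  topic: string,
): DocAnswer {
  const doc = verifiedDocs.get(topic);
  if (doc !== undefined) {
    return { verified: true, answer: doc };
  }
  // No verified source: say so, and point the user at the developers.
  return {
    verified: false,
    answer:
      "This information is not available to me. To request it, " +
      "contact the developers at support@example.com.",
  };
}

const docs = new Map([["privacy-policy", "See https://example.com/privacy"]]);
console.log(answerPolicyQuestion(docs, "model-card")); // verified: false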
This is just a rough sketch so far.