IEEE Standards Working Group
I am chairing the new IEEE standards working group, P3953™ Standard for Safety, Reliability, and Explainable Deployment of User-Facing Conversational Artificial Intelligence (AI) Applications that Use Large Language Models.
Register your interest to join the working group:
https://development.standards.ieee.org/myproject-web/public/view.html#/interest/11723

WHY GET INVOLVED?
IEEE Standard Working Group Name: P3953™ Standard for Safety, Reliability, and Explainable Deployment of User-Facing Conversational Artificial Intelligence (AI) Applications that Use Large Language Models.
Scope: This standard is Large Language Model- (LLM) and industry-agnostic. It serves as a foundation standard that defines criteria, requirements, and evaluation frameworks for the controllable and predictable use of LLMs. The standard describes techniques to bound a deployed LLM in ways that promote safety and reliability:
- Safety: The standard reduces, or helps to prevent, harm to the user and establishes baseline measures and crisis-escalation pathways to help protect the user’s well-being. It addresses accountability for preserving privacy and for delivering on the intended use of the LLM while adhering to the objectives of the system. The standard helps an LLM perform and behave in a consistent and predictable way so that it operates as intended and within its contextual bounds of use.
- Reliability: The standard addresses safety risks to individuals, enterprises, and society by specifying architectural layers for targeted intervention and control. It considers adaptive human-computer interaction modes, model deployment, and operational best practices, including techniques to bound the LLM in ways that promote safety and reliability, as described above.
The standard does not cover the malicious use of deployed LLMs by users who intend to commit crimes such as, but not limited to, conducting cyberattacks, creating weapons, or running disinformation campaigns.
Purpose: This standard helps organizations deploy LLMs in diverse contexts and provides actionable guidance, audit trail requirements, and governance frameworks that increase safety and reliability for users. The standard clarifies the duties of stakeholders (e.g., developers, deployers, ethicists, regulators) in relation to user safety in the context of deployed, user-facing LLMs, and it supports global harmonization through best practices in transparency, control, and the implementation of protective mechanisms such as dynamic, real-time safety layers.
Standards Working Group Chair: Lydia Kostopoulos, PhD
Standards Working Group Co-Chair: Gabriele Maddaloni
