Generative AI (GenAI) systems can 'hallucinate', generating content that is misleading or incorrect. This presents a challenge for communications service providers (CSPs) looking to employ the technology across an increasingly diverse set of use cases and touchpoints with customers, employees and partners.
The second phase of the Responsible AI Catalyst will explore the ethical and governance issues that need to be solved to successfully industrialize and scale the use of GenAI by CSPs, with a particular focus on safety and reliability. Building on the demonstrations in the first phase, the Catalyst will employ TM Forum’s AI and data governance principles and Open APIs.
To ensure GenAI systems produce trustworthy, relevant and consistent output, the Catalyst will consider how to detect and prevent hallucinations, while addressing potential issues around regulated and protected data, objectionable content and adversarial attacks. The project team will produce a white paper that examines these challenges and gives CSPs practical guidance on emerging frameworks and solutions to help scale adoption of GenAI, drive growth and improve operational efficiency.
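One common family of hallucination checks verifies that a generated answer is grounded in a trusted source document. The sketch below is illustrative only and is not drawn from the Catalyst's design: it uses a crude token-overlap grounding score, where production systems typically rely on retrieval-augmented verification or natural-language-inference models. The function names and the 0.5 threshold are hypothetical.

```python
# Minimal sketch of a grounding check for hallucination detection.
# Token overlap is a deliberately simple stand-in for stronger methods
# (NLI entailment, retrieval-augmented fact verification).

def grounding_score(answer: str, context: str) -> float:
    """Fraction of answer tokens that also appear in the source context."""
    answer_tokens = set(answer.lower().split())
    context_tokens = set(context.lower().split())
    if not answer_tokens:
        return 0.0
    return len(answer_tokens & context_tokens) / len(answer_tokens)

def flag_hallucination(answer: str, context: str, threshold: float = 0.5) -> bool:
    """Flag an answer whose grounding score falls below the threshold."""
    return grounding_score(answer, context) < threshold

# A customer-care style example: the context is trusted account data.
context = "Your plan includes 20 GB of data and unlimited domestic calls."
grounded = "Your plan includes 20 GB of data."
ungrounded = "Your plan offers free international roaming in 90 countries."

print(flag_hallucination(grounded, context))    # False: answer is grounded
print(flag_hallucination(ungrounded, context))  # True: likely hallucination
```

In practice such a check would sit between the model and the customer-facing channel, routing low-scoring answers to a fallback response or human review.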
The Catalyst also plans to demonstrate a solution for monitoring and managing GenAI models throughout their lifecycle, addressing questions of explainability, drift and bias. The solution will support typical telecoms use cases, such as customer care, software development, sales and marketing, and research and development. The goal is to give CSPs visibility and a more concrete link to the AI lifecycle, from idea to production deployment and beyond. In doing so, the Catalyst aims to support the management of compliance requirements, including regulatory requirements and standards applied to deployments.
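Drift monitoring of the kind described above is often implemented by comparing a live window of model scores against a reference window captured at deployment. As a hedged illustration (not the Catalyst's actual solution), the sketch below computes the population stability index (PSI) over model confidence scores; the bucket count and the 0.2 alert threshold are common rules of thumb, not a mandated standard.

```python
# Illustrative drift check: population stability index (PSI) between a
# reference sample and a live sample of scores in [0, 1].
import math

def psi(reference, live, buckets=10):
    """PSI between two score samples; larger values indicate more drift."""
    def proportions(sample):
        counts = [0] * buckets
        for x in sample:
            idx = min(int(x * buckets), buckets - 1)
            counts[idx] += 1
        # Small epsilon avoids log(0) for empty buckets.
        return [(c / len(sample)) or 1e-6 for c in counts]
    ref_p = proportions(reference)
    live_p = proportions(live)
    return sum((l - r) * math.log(l / r) for r, l in zip(ref_p, live_p))

# Hypothetical confidence scores from a deployed GenAI classifier.
reference = [0.12, 0.23, 0.34, 0.45, 0.51, 0.62, 0.73, 0.84, 0.91, 0.95]
stable    = [0.14, 0.26, 0.33, 0.41, 0.55, 0.67, 0.72, 0.88, 0.93, 0.97]
shifted   = [0.05, 0.07, 0.09, 0.11, 0.13, 0.12, 0.08, 0.14, 0.06, 0.15]

print(psi(reference, stable) < 0.2)    # True: distribution looks stable
print(psi(reference, shifted) >= 0.2)  # True: drift alert would fire
```

A lifecycle-management platform would run a check like this on a schedule, recording the metric against each model version so that compliance reviews can trace when and why a model began to drift.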