
Infosys Unveils Open-Source AI Safety Toolkit, Advancing Global Standards for Ethical AI Development
BENGALURU, India - Global technology leader Infosys (NYSE: INFY) launched its open-source Responsible AI Toolkit today, marking a significant step toward establishing industry-wide standards for ethical AI development [1].
The toolkit, a core component of the Infosys Topaz Responsible AI Suite, implements the company's AI3S framework (Scan, Shield, and Steer) to provide enterprises with defensive technical guardrails against common AI risks [2]. These include privacy breaches, security attacks, bias, harmful content, and deepfakes.
"As AI becomes central to driving enterprise growth, its ethical adoption is no longer optional," said Balakrishna D. R., Executive Vice President at Infosys [1]. "By making the toolkit open source, we are fostering a collaborative ecosystem that addresses the complex challenges of AI bias, opacity, and security."
The toolkit offers several key technical features:
- Specialized AI models and shielding algorithms
- Enhanced model transparency capabilities
- Compatibility with diverse AI systems
- Seamless integration across cloud and on-premise environments

The initiative has garnered support from global technology leaders and government officials. Meta's Public Policy Director Sunil Abraham emphasized the importance of open-source code and datasets in empowering AI innovators [3].
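To make the Scan and Shield stages of the AI3S framework concrete, the following is a minimal illustrative sketch of how a guardrail pipeline of this kind can work. The release does not document the toolkit's actual API, so every name here (`scan_prompt`, `shield`, `PII_PATTERN`) is a hypothetical stand-in, and a simple regex takes the place of the toolkit's specialized detection models.

```python
# Hypothetical sketch of a Scan/Shield guardrail pipeline.
# Not the Infosys toolkit's real API -- all names are illustrative.
import re

# "Scan": detect one common risk class (leaked personal data) with a
# simple pattern check standing in for a specialized detection model.
PII_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # US SSN-like pattern

def scan_prompt(text: str) -> list:
    """Return risk labels detected in the given text."""
    risks = []
    if PII_PATTERN.search(text):
        risks.append("pii")
    return risks

# "Shield": redact flagged content before it reaches the model.
def shield(text: str) -> str:
    return PII_PATTERN.sub("[REDACTED]", text)

prompt = "My SSN is 123-45-6789, please summarise my account."
if "pii" in scan_prompt(prompt):
    prompt = shield(prompt)
print(prompt)  # -> My SSN is [REDACTED], please summarise my account.
```

In a production guardrail, the Steer stage would then route or rewrite the request based on the scan results; here the point is only the two-step detect-then-redact pattern.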
Infosys' commitment to ethical AI development is further demonstrated by its recent ISO 42001:2023 certification for AI management systems and participation in global initiatives including the NIST AI Safety Institute Consortium and AI Alliance [4].