
Apple has signed the voluntary artificial intelligence (AI) safety pact, joining a growing list of tech companies committed to the safe and responsible development of AI, the White House announced on July 26.
The pact builds on guidelines first unveiled in July 2023 by US President Joe Biden, which call on companies to test their AI systems for potential risks, including security vulnerabilities and national security concerns.
Companies such as OpenAI, Google, Microsoft, Meta, and Nvidia had already signed on; Apple is the 16th tech company to join the list.
Notably, Apple signed the voluntary guidelines as it prepares to integrate OpenAI's ChatGPT chatbot into its voice assistant, Siri.
The Cupertino-based company has also released the first beta of iOS 18.1 featuring Apple Intelligence to developers.
The partnership has drawn scrutiny from some industry figures, including Elon Musk, CEO of Tesla and X (formerly Twitter), who called the integration of OpenAI's technology into iOS 18 a security risk and vowed to ban Apple devices from his companies.
Because the AI safety guidelines have not been enacted into law, the commitments are not legally enforceable: even signatories face no legal consequences for violating them.