July 18 (UPI) -- Meta said Thursday it won't release Llama, its most advanced artificial intelligence model, in the European Union due to concerns over stronger EU privacy protections and AI regulations.
"We will release a multimodal Llama model over the coming months, but not in the EU due to the unpredictable nature of the European regulatory environment," Meta said in a statement to Axios.
As a result, European companies won't be able to use the multimodal AI models, and Meta could also bar companies outside the EU from offering services built on those models to European customers.
While that would limit AI products available to individuals in the EU, the data privacy laws in Europe also extend greater protections to users than in other parts of the world.
Meta, however, has said that a text-only version of its Llama 3 model will be made available for EU customers and companies.
Meta has been ordered to stop training AI using Facebook and Instagram user posts in the EU due to privacy concerns.
Meta's decision not to release the Llama AI system in the EU is related to the General Data Protection Regulation, considered by the EU to be the strongest privacy and security law in the world.
The GDPR governs how the personal data of individuals in the EU can be processed and transferred.
According to the European Council, the GDPR defines "individuals' fundamental rights in the digital age, the obligations of those processing data, methods for ensuring compliance and sanctions for those in breach of the rules."
The EU AI Act, which subjects AI systems deemed "high risk" to stronger regulations, is also set to take effect in August.
Under that law, AI providers in the EU must "establish a risk management system throughout the AI systems lifecycle and conduct data governance making sure AI training, validation and testing datasets are relevant, sufficiently representative and, to the best extent possible, free of errors and complete according to the intended purpose."
Meta and other AI makers must also design their systems for record-keeping "to enable it to automatically record events relevant for identifying national level risks and substantial modifications throughout the system's lifecycle."
Human oversight must also be built into the AI systems, and they have to be designed "to achieve appropriate levels of accuracy, robustness, and cybersecurity."
On July 1, the European Commission said Meta had violated the Digital Markets Act and potentially could face massive fines because Meta doesn't allow users to exercise their right to freely consent to use of their data.