Privacy law calls as Meta admits farming Aussie photos

Facebook and Instagram posts dating back almost two decades are being used to train AI tools. (Brent Lewin/AAP PHOTOS)

Australians need greater legal protection to prevent tech giants harvesting their personal information, including photos of their children, to train generative AI tools.

Politicians and academics issued the call on Wednesday after Meta executives revealed photos and posts Australians shared on Facebook and Instagram as far back as 2007 had been used to build its AI models. 

The US company confirmed its use of the data at the Senate inquiry into Adopting Artificial Intelligence in Canberra, with representatives also revealing that an opt-out offered to European users would not be extended to Australians.


The inquiry, which is expected to present a final report next week, is examining AI trends, opportunities and risks, as well as its impact on elections and the environment. 

Meta global privacy policy director Melinda Claybaugh told the Senate committee the company ingested content users had shared publicly on its platforms to train its generative AI tools, Llama and Meta AI.

Ms Claybaugh also said Meta did not use photos posted by children but, under questioning, conceded that any photos of children shared by adults were used to train AI.

“I want to be very clear that we are not using data from accounts of under 18-year-olds to train our models,” she said.

“We are using public photos posted by people over 18.”

Ms Claybaugh said Australian Facebook and Instagram users could avoid having their content used to train AI by hiding it from public view, but said they would not be offered the opt-out available to users in some other nations.

“We are offering an opt-out to users in Europe, however that is not a settled legal situation,” she said.

“The solution in Europe is specific to Europe.”

But Labor Senator Tony Sheldon, who chaired the inquiry, described the tech giant’s use of personal photos as “an unprecedented violation” and called for legal restrictions on its behaviour.

“Meta must think we’re mugs if they expect us to believe someone uploading a family photo to Facebook in 2007 consented to it being used 17 years later to train AI technology that didn’t even exist at the time,” he said. 

“If our privacy laws allow this, they need to be changed.”


RMIT University technology and information associate dean Dana McKay said Meta’s use of personal content would probably shock many users and demonstrated the need for stronger regulation. 

“This is a clear sign we need new privacy laws,” she said.

“In this case, Australian people were unaware and it’s not clear the (data) scraping has benefited them.”

Meta Asia Pacific public policy vice-president Simon Milner defended the company’s use of Australians’ data, telling senators AI risks such as bias could be addressed by harvesting more local information.

He admitted the company’s 20,000-word privacy policy was onerous for users but said requiring them to opt in to every use of their data would be a frustrating experience.

“You’re trying to get that balance right all the time but a kind of compulsory opt-in at all times, it would be extremely annoying for most people across the internet,” Mr Milner said. 

“We know that for a fact.” 

The Senate committee, which has also heard from tech firms including Amazon, Microsoft and Google, is expected to present a final report by September 19.
