A joint initiative under the banner of the Christchurch Call has been announced between New Zealand, the United States, Twitter and Microsoft to create new technology to understand the impact of algorithms on users' online experiences.
Once developed, it is hoped the new software tools could help overcome barriers to researching how algorithms drive individuals towards terrorist and violent content, work that is currently described as complex and costly.
"We simply won’t make the progress we need on these important issues, without better understanding how they are operating in the real world in the first place," Prime Minister Jacinda Ardern said.
"Companies, governments, civil society, we will all benefit from this initiative. It will help us create the free, open and secure internet we are all driving for."
The announcement was made by Ardern in New York, where she is this week for the annual United Nations General Assembly.
Earlier on Wednesday she met with several world leaders, including French President Emmanuel Macron and Canadian Prime Minister Justin Trudeau, to discuss the Christchurch Call.
The Christchurch Call was developed in the aftermath of the March 15 shootings to coordinate the work of governments, technology companies and other organisations on eliminating online violent extremist content.
"Through the Christchurch Call to Action, we have committed to work together to better understand the impacts that algorithms and other processes may have on terrorist and violent extremist content," a statement released by the Prime Minister's office on Wednesday said. "Leaders and the Call community regard this algorithmic work as a top priority."
But this work faces multiple challenges, including privacy considerations, the difficulty of investigating impacts holistically across society, and the cost to independent researchers.
The statement said artificial intelligence algorithms play a growing role in people's lives, including in how we organise information and experience the internet. A majority of the content encountered and viewed online is curated by algorithms in some form.
"Working with an open-source non-profit organisation called OpenMined, the Algorithms Initiative will develop and test ground-breaking privacy-enhancing software infrastructure to address those challenges and help us move forward work under the Call.
"While this initiative won’t tell us all we need to know about the outcomes algorithms are driving online, it will help us better access data so researchers can answer these very questions."
The goal is to give researchers the tools to build "safer platforms and more effective interventions to protect people both online and offline".
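The statement doesn't detail how OpenMined's privacy-enhancing infrastructure works, but one widely used technique in this space is differential privacy, where statistical noise is added to aggregate answers so researchers can study patterns without any individual's data being exposed. The sketch below is a generic, hypothetical illustration of that idea in Python; the dp_count helper and the sample records are invented for illustration and are not part of the initiative's actual software or OpenMined's API.

```python
import numpy as np


def dp_count(records, predicate, epsilon=1.0, rng=None):
    """Return a differentially private count of records matching a predicate.

    Laplace noise scaled to 1/epsilon masks any single person's contribution,
    so an aggregate question (e.g. how often a recommender surfaced flagged
    content) can be answered without revealing individual viewing histories.
    """
    rng = rng or np.random.default_rng()
    true_count = sum(1 for r in records if predicate(r))
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise


# Hypothetical usage: each record notes whether a recommended item was later
# flagged as violent extremist content.
views = [
    {"recommended": True, "flagged": False},
    {"recommended": True, "flagged": True},
    {"recommended": False, "flagged": False},
]

noisy_flagged = dp_count(views, lambda r: r["recommended"] and r["flagged"])
print(f"Approximate flagged recommendations: {noisy_flagged:.1f}")
```

A smaller epsilon adds more noise and gives stronger privacy at the cost of accuracy; real deployments combine techniques like this with access controls and remote execution so raw data never leaves the platform.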
New Zealand is providing financial assistance to establish the initiative and is coordinating with the wider Christchurch Call community. The support of the US will ensure the technology can be tested across different types of online service providers.
Twitter and Microsoft will help test the technology. Other tech companies, such as Facebook and Google, aren't mentioned in the statement about the scheme.
Ardern told media on Wednesday the Christchurch Call leaders summit in New York is a "centrepiece for New Zealand".
"It has been operating for several years now and we continue to make progress on the ambition there... it was designed to try and make sure no other country experienced what we did after March 15 where a horrific terrorist attack was amplified and shared online.
"We were also trying to address some of the issues that may allow really harmful extremist content to proliferate online. If you really drill into that, you are getting into areas like algorithms and the way they curate content for us.
"That has a number of harmful outcomes if it is not responsibly used."
If the technology is successfully developed, meaning it has been tested and proven by those involved in the Christchurch Call, it will be made more widely available and could be applied to other fields of algorithmic research.
"The Christchurch Call is about bringing governments, tech companies, and civil society together to make meaningful progress to stop the spread and amplification of violent extremist content online," said Brad Smith, Microsoft's vice-chair and president.
"The responsible use of AI, including explaining how algorithms recommend content to people on social media platforms, is an important challenge we must address."
Twitter's head of legal Vijaya Gadde said this would "significantly expand the ability of researchers to understand the role of algorithms in content discovery and amplification while protecting the privacy of people's data".
"There is significant potential to provide a far more robust evidence base for a policy debate of critical importance to the future of online services."
Andrew Trask, who leads the open-source research organisation OpenMined, said the proposed technology could help address privacy, security and logistical challenges.
Earlier on Wednesday, Coalition for a Safer Web vice president Eric Feinberg told AM that advertisers should pull money from social media companies to put pressure on them to take action on violent content.
"Not enough attention is being paid and not enough resources and money is being thrown at it, including the social media platforms. One of the arguments I make is the advertisers aren't doing enough.
"A few years ago, the advertising community came up with GARM, which is the Global Alliance for Responsible Media. They should be putting more pressure on social media platforms and I don't see it, it's all talking points and all rhetoric and they're not getting to the root of the problem."