Police are using drones that have been partly banned in the US over fears that China is using the technology to spy.
The world's leading drone maker Da Jiang Innovations, or DJI, based in Shenzhen, supplies all the New Zealand police's drones.
DJI is fighting back in the US against claims its drones can send information to Chinese government servers, and a ban by the US Army and other agencies.
Police use of drones in New Zealand is detailed in more than 120 pages of information about law enforcement use of new technologies sent to RNZ under the Official Information Act.
Several of these technologies are potentially highly invasive but police say they don't use them to run facial recognition or identify people.
"For example, they were not used to monitor compliance during the COVID-19 lockdown," said police director of assurance Mike Webb in the OIA response.
In fact, drones were currently used very little - "flight time ... is measured in minutes rather than hours" - perhaps once a week or less in each district, and two-thirds of the time for arson, crime scene and crash scene photography, he added.
Internal police reports envisage this use expanding, but not into routine surveillance.
International controversy
The controversy internationally is not so much about how the drones might spy for the police, or other agencies, but how they might spy on them.
DJI controls three quarters of the world drone market. The stoush hurting it stateside has been ratcheting up since 2017 under the US-China trade war.
It escalated recently when DJI donated 100 drones to help fight the COVID-19 pandemic and US state agencies snapped them up.
But various federal agencies have claimed that DJI drones give backdoor access to communist authorities in Beijing.
The Homeland Security Department has warned this is likely being used to gather critical infrastructure and law enforcement data from the US.
A 2017 advisory quoted an industry source saying some drones automatically upload GPS imagery and locations, or facial recognition data, to remote servers "to which the Chinese government most likely has access", and can access users' phone data - and carry on doing so even when a user thinks they are turned off.
Homeland Security, the US Army and Navy, and the Department of the Interior have instituted bans of one kind or another, with the Interior Department grounding its fleet of 800 Chinese-made drones in January 2020.
In 2019, Homeland Security warned that the drones "contain components that can compromise your data and share your information on a server accessed beyond the company itself".
Laws updated in 2017 give the Chinese government added power to compel Chinese businesses to cooperate with it, and provide access to intelligence and security services.
Just last month Reuters reported the US federal government's purchasing agency said it would no longer purchase drones from any Chinese manufacturers, and an executive order prioritised removing Chinese-made drones from naval fleets.
All this is a boon for US manufacturers and others, who typically have been unable to compete with the Chinese product, and whose output has been dropping.
Three months ago, DJI was among four Chinese companies added to a US trade blacklist for allegedly enabling "wide-scale human rights abuses within China through abusive genetic collection and analysis or high-technology surveillance".
However, the US Air Force recently bought more DJI drones.
It is a stoush that has barely registered in Europe, unlike the earlier high-tech dispute over government agencies in various countries using services from Chinese telco Huawei, which reverberated in Australia and in New Zealand.
French and German militaries use DJI drones, for instance, though the British and Dutch militaries do not, for security reasons. British police use DJI drones a lot.
DJI has been pushing back, saying its drones don't spy, as well as adding more controls over the data for users.
"The security of our technology has been independently verified by the US government and leading US businesses," DJI told CNN.
"For government and critical infrastructure customers that require additional assurances, we provide drones that do not transfer data to DJI or via the internet, and our customers can enable all the precautions DHS recommends."
RNZ has approached DJI for comment.
'Small scale' use
RNZ has asked police what due diligence they did on DJI and its drones, and is waiting to hear back.
Police have a short approved list to choose from, all of them DJI drones - the Mavic, Spark, Phantom 4 and above, and Matrice ranges - and 20 pages of instructions on using drones, including what's legally allowed, documents released under the OIA show.
Police kept control over any images from drones and did not use a mode built into the software that enabled onward streaming via a smartphone to social media platforms, Mike Webb said.
"Data is captured directly to a microSD card inside the ... drone and physically transferred to local storage on police devices after deployment.
"This data is handled in the same way as other police forensic photography."
However, the lack of visibility users have into DJI's proprietary software has raised concerns in the US, particularly at a 2019 Senate hearing.
No police drone was "understood to be capable of any other embedded or live facial recognition application," Webb said.
Some drones had a tracking mode, but it tracked movement and colour contrast, not faces.
"Hypothetically", still suspect images captured by drone could be downloaded and later searched against police photo collections for potential matches "however, this is not an inherent [drone] capability and nor is capturing suspect facial images a purpose for which NZ police [drones] are deployed".
Police also played down the potential invasive use of their other recent high-tech acquisitions, such as Nuix, Cellebrite and BriefCam.
"No future strategic requirements for the technology have been canvassed," the OIA response said.
"These software products are used ... on a relatively small scale as investigative analytical tools, and are not used for any form of surveillance."
Bodycams
The police undertook an "unofficial trial" of a small number of body-cameras in 2013, the OIA documents say.
Some international studies showed use of bodycams cut down complaints about taser use and increased conviction rates in family violence cases.
But the batteries on the ageing taser fleet were also deteriorating, sometimes corrupting the footage the taser-cams shot and potentially undermining its "evidential credibility" - so bodycams might be more reliable, a report said.
However, having begun to look into bodycams, the police then put that on pause.
When RNZ asked why, police in the OIA said: "The instruction to pause work on body-worn cameras is understood to have been a verbal instruction. No written directive exists.
"No further documentation or correspondence in respect of the decision to pause work has been identified.
"No proof of concept trial went ahead" though one report suggested rolling out 90 bodycams as a trial in Bay of Plenty.
The bodycams would capture a much wider shot, and start recording sooner, than the existing cameras fitted to each taser, the report said.
Digital info management
Police have been looking at buying an overall digital information management product, without yet committing to it.
"The project does not seek to introduce new facial recognition capability," Webb said.
Instead, the aim was to improve analysis of data from video interviews, CCTV footage, taser videos, digital photos and forms, social media, Eagle helicopter footage "and much more" by putting it in one repository, said a 2020 tender document.
The data volumes were growing by almost a third each year.
"The data can be used for evidential purposes (recorded from victim interviews for example) as well as non-evidential purposes (investigative or analytical). Non-evidential data is by far the largest and growing source of information being presented to police."
Unused powers
Three new technologies police have acquired - Nuix, Cellebrite and BriefCam - boast facial recognition capabilities, but the police said they don't use this power.
The three products were bought "off the shelf", so did not need to go through notified procurement processes; the purchases were spurred simply by "operational need identified in the course of specific investigations".
Data was stored locally on police servers.
A small specialist team of just five investigators got to use the three products.
"Any evidence obtained is subject to the usual oversight of criminal justice processes."
Where police did use facial recognition in other tools, it was applied only to still photos in their databases that had been legally obtained, not to live footage or CCTV footage imported wholesale.
At the same time, the police stressed they were bringing in extra oversight of tech rollouts. Last year a trial of Clearview AI facial recognition tech was exposed by RNZ.
They have promised to publish an Emergent Technology report every three months, and appoint an expert panel on new tech, with external input.
Researchers recently urged far greater control over police use of high-tech tools, and legislation to enshrine constraints over it and give the public much more power to bring complaints.
RNZ