A recent study investigated how to monitor ecosystem health using audio recording and artificial intelligence (AI).

A research team led by Sarab Sethi deployed solar-powered audio recorders positioned over 150 ft above the ground in Borneo rainforests. Sethi said in late August that the recorders had thus far captured 17,000 hours of audio.

The work is part of the SAFE Project, a large-scale effort to determine the effects of human activity on biodiversity and ecosystem function. SAFE is managed by the South East Asia Rainforest Research Partnership (SEARRP) in close association with Imperial College London.

The acoustic monitoring study is part of SAFE Acoustics, which continuously records the so-called soundscape of a section of Borneo rainforest. To evaluate rainforest health from sound, the research team drew on Google’s AudioSet – a dataset of more than 2 million labeled audio clips.
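The article does not spell out the team's exact pipeline, but a common way to turn raw recordings into AudioSet-derived features is to run them through a convolutional network pretrained on AudioSet, such as Google's VGGish model, which maps roughly one-second frames of audio to 128-dimensional embedding vectors. The sketch below assumes the VGGish release on TensorFlow Hub; the model choice and file names are illustrative, not the study's published method.

```python
# Sketch: computing AudioSet-derived embeddings for a soundscape recording.
# Assumes the VGGish model published on TensorFlow Hub; the study's exact
# pipeline may differ.
import numpy as np
import tensorflow_hub as hub
import soundfile as sf

# VGGish expects a mono 16 kHz float32 waveform and returns one
# 128-dimensional embedding per ~0.96 s frame of audio.
vggish = hub.load("https://tfhub.dev/google/vggish/1")

def embed_recording(path: str) -> np.ndarray:
    """Return an array of shape (num_frames, 128) of acoustic embeddings."""
    waveform, sample_rate = sf.read(path, dtype="float32")
    if waveform.ndim > 1:                      # collapse stereo to mono
        waveform = waveform.mean(axis=1)
    if sample_rate != 16000:
        # naive linear resampling for illustration; a real pipeline would
        # use a proper resampler (e.g. librosa.resample)
        target_len = int(len(waveform) * 16000 / sample_rate)
        waveform = np.interp(
            np.linspace(0, len(waveform), target_len, endpoint=False),
            np.arange(len(waveform)),
            waveform,
        ).astype(np.float32)
    return vggish(waveform).numpy()

# A whole recording can then be summarized as the mean of its frame
# embeddings -- one "fingerprint" vector per soundscape sample, e.g.:
# fingerprint = embed_recording("dawn_chorus.flac").mean(axis=0)
```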

The team used a neural network to compute acoustic fingerprints of soundscapes from a variety of ecosystems. According to the study, these fingerprints could be used to accurately predict habitat quality and biodiversity across multiple spatial scales, and to automatically flag problematic human-made sounds such as chainsaws or gunshots.
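To make those two uses concrete, here is a hedged sketch of how per-recording fingerprints might feed both tasks: a standard classifier for habitat quality, and a distance-based outlier detector for flagging sounds that do not belong to the normal soundscape. The data, model choices, and thresholds are placeholders, not the study's published pipeline.

```python
# Illustrative downstream uses of soundscape fingerprints. X holds one
# embedding vector per recording (e.g. mean 128-D VGGish frames) and y a
# habitat-quality label per recording site; both are synthetic here.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 128))      # placeholder fingerprints
y = rng.integers(0, 3, size=200)     # placeholder habitat classes
                                     # (e.g. logged / recovering / old-growth)

# 1. Habitat quality: predict a site's habitat class from its fingerprint.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("habitat accuracy:", cross_val_score(clf, X, y, cv=5).mean())

# 2. Anomalous sounds: frames whose embeddings sit far from the normal
#    soundscape distribution (e.g. chainsaws, gunshots) score as outliers.
detector = LocalOutlierFactor(n_neighbors=20, novelty=True).fit(X)
new_frames = rng.normal(size=(5, 128))   # placeholder incoming audio frames
print("anomaly flags:", detector.predict(new_frames))  # -1 marks an outlier
```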

The team said the approach generalized well across ecosystems, offering promise as a backbone technology for global monitoring efforts. Compared with the traditional method of sending an observer into the forest to listen for birdcalls, the new approach is significantly more powerful and far easier to automate.

The SAFE Acoustics website features audio streams for users to listen to, taken from multiple points in the forest.