BirdCLEF 2023
Introduction
Birds are excellent indicators of biodiversity change since they are highly mobile and have diverse habitat requirements. Changes in species assemblage and the number of birds can thus indicate the success or failure of a restoration project. However, frequently conducting traditional observer-based bird biodiversity surveys over large areas is expensive and logistically challenging. In comparison, passive acoustic monitoring (PAM) combined with new analytical tools based on machine learning allows conservationists to sample much greater spatial scales with higher temporal resolution and explore the relationship between restoration interventions and biodiversity in depth.
For this competition, we will use machine learning to identify Eastern African bird species by sound. Specifically, we will develop computational solutions that process continuous audio data and recognize species by their calls. The best entries will be able to train reliable classifiers with limited training data. If successful, this work will help advance ongoing efforts to protect avian biodiversity in Africa, including those led by the Kenyan conservation organization NATURAL STATE. Ready for this ride? Let's get started!
Context
NATURAL STATE is working in pilot areas around Northern Mount Kenya to test the effect of various management regimes and states of degradation on bird biodiversity in rangeland systems. By using the machine learning algorithms developed within the scope of this competition, NATURAL STATE will be able to demonstrate the efficacy of this approach in measuring the success of restoration projects, as well as its cost-effectiveness. In addition, the ability to monitor the impact of restoration efforts on biodiversity cost-effectively will allow NATURAL STATE to test and build some of the first biodiversity-focused financial mechanisms to channel much-needed investment into the restoration and protection of this landscape upon which so many people depend. These tools are necessary to scale this approach cost-effectively beyond the project area and to achieve the vision of restoring and protecting the planet at scale.
Analysis ideas 💡
In this article, we will follow these steps:
- Import the libraries
- Explore the training data
- Match the model’s output with the bird species in the competition
- Preprocess the data
- Make predictions
- Generate a submission
Step 1: Imports
In this step we import the libraries we will use throughout the project.
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_io as tfio
import pandas as pd
import numpy as np
import librosa
import glob
import csv
import io
from IPython.display import Audio
Step 2: Explore the training data
We will start by loading a couple of training examples and using the IPython.display.Audio class to play them.
audio_abe, sr_abe = librosa.load("/kaggle/input/birdclef-2023/train_audio/abethr1/XC128013.ogg")
audio_abh, sr_abh = librosa.load("/kaggle/input/birdclef-2023/train_audio/abhori1/XC127317.ogg")
Audio(data=audio_abe, rate=sr_abe)
Audio(data=audio_abh, rate=sr_abh)
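Listening is useful, but visualizing the audio can reveal call structure at a glance. The snippet below is an optional exploration sketch, not part of the original pipeline; it assumes matplotlib is available (it is on Kaggle) and plots a log-mel spectrogram of the first example:
import librosa.display
import matplotlib.pyplot as plt

## Optional: visualize the first example as a log-mel spectrogram.
mel = librosa.feature.melspectrogram(y=audio_abe, sr=sr_abe)
mel_db = librosa.power_to_db(mel, ref=np.max)
plt.figure(figsize=(12, 4))
librosa.display.specshow(mel_db, sr=sr_abe, x_axis="time", y_axis="mel")
plt.colorbar(format="%+2.0f dB")
plt.title("abethr1 / XC128013")
plt.show()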
Step 3: Match the model’s output with the bird species in the competition
The competition includes 264 classes of birds, 261 of which exist in this model. We’ll set up a way to map the model’s output logits to our competition.
model = hub.load('https://kaggle.com/models/google/bird-vocalization-classifier/frameworks/tensorFlow2/variations/bird-vocalization-classifier/versions/1')
labels_path = hub.resolve('https://kaggle.com/models/google/bird-vocalization-classifier/frameworks/tensorFlow2/variations/bird-vocalization-classifier/versions/1') + "/assets/label.csv"
def class_names_from_csv(class_map_csv_text):
    """Returns the list of class names corresponding to the score vector."""
    with open(class_map_csv_text) as csv_file:
        csv_reader = csv.reader(csv_file, delimiter=',')
        class_names = [mid for mid, desc in csv_reader]
    return class_names[1:]  # drop the CSV header row
classes = class_names_from_csv(labels_path)
train_metadata = pd.read_csv("/kaggle/input/birdclef-2023/train_metadata.csv")
train_metadata.head()
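Since the competition emphasizes training reliable classifiers with limited data, it is worth checking how many recordings each species has. A quick, hypothetical check (not in the original notebook):
## Recordings per species: rare classes may need augmentation or
## extra care during validation.
counts = train_metadata.primary_label.value_counts()
print(counts.head())  # most common species
print(counts.tail())  # rarest species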
competition_classes = sorted(train_metadata.primary_label.unique())
forced_defaults = 0
competition_class_map = []
for c in competition_classes:
    try:
        i = classes.index(c)
        competition_class_map.append(i)
    except ValueError:
        competition_class_map.append(0)
        forced_defaults += 1

## this is the count of classes not supported by our pretrained model
## you could choose to simply not predict these, set a default as above,
## or create your own model using the pretrained model as a base.
forced_defaults
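To see exactly which competition species fall back to the default index, you can list the classes absent from the model's label set. This is a hypothetical sanity check, not part of the original notebook:
## Competition classes the pretrained model does not know about.
missing_classes = [c for c in competition_classes if c not in classes]
print(missing_classes)  # expect 3 species, matching forced_defaults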
Step 4: Preprocess the data
The following functions are one way to load the provided audio and split it into the five-second windows, sampled at 32,000 Hz, that the competition requires.
def frame_audio(
    audio_array: np.ndarray,
    window_size_s: float = 5.0,
    hop_size_s: float = 5.0,
    sample_rate: int = 32000,
) -> np.ndarray:
    """Helper function for framing audio for inference, using tf.signal."""
    if window_size_s is None or window_size_s < 0:
        return audio_array[np.newaxis, :]
    frame_length = int(window_size_s * sample_rate)
    hop_length = int(hop_size_s * sample_rate)
    framed_audio = tf.signal.frame(audio_array, frame_length, hop_length, pad_end=True)
    return framed_audio
def ensure_sample_rate(waveform, original_sample_rate, desired_sample_rate=32000):
    """Resample the waveform to the desired sample rate if required."""
    if original_sample_rate != desired_sample_rate:
        waveform = tfio.audio.resample(waveform, original_sample_rate, desired_sample_rate)
    return desired_sample_rate, waveform
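As a quick sanity check (a sketch of my own, not in the original notebook), framing 12 seconds of silence at 32 kHz with 5-second windows should yield 3 frames, since pad_end=True zero-pads the final partial window:
## 12 s of silence -> ceil(12 / 5) = 3 frames of 160,000 samples each.
dummy = np.zeros(12 * 32000, dtype=np.float32)
framed = frame_audio(dummy)
print(framed.shape)  # expected: (3, 160000)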
Below we load one training sample. Use the Audio function to listen to samples inside the notebook!
audio, sample_rate = librosa.load("/kaggle/input/birdclef-2023/train_audio/afghor1/XC156639.ogg")
sample_rate, wav_data = ensure_sample_rate(audio, sample_rate)
Audio(wav_data, rate=sample_rate)
Step 5: Make predictions
Each test sample is cut into 5-second chunks. We use the pretrained model to return probabilities for all ~10,000 bird species it covers, then pull out the classes used in this competition to build each final submission row. Note that we are NOT doing anything special to handle the 3 missing classes; handling those requires fine-tuning or transfer learning, which will be covered in a separate notebook.
fixed_tm = frame_audio(wav_data)
logits, embeddings = model.infer_tf(fixed_tm[:1])
probabilities = tf.nn.softmax(logits)
argmax = np.argmax(probabilities)
print(f"The audio is from the class {classes[argmax]} (element:{argmax} in the label.csv file), with probability of {probabilities[0][argmax]}")
def predict_for_sample(filename, sample_submission, frame_limit_secs=None):
    file_id = filename.split(".ogg")[0].split("/")[-1]

    audio, sample_rate = librosa.load(filename)
    sample_rate, wav_data = ensure_sample_rate(audio, sample_rate)
    fixed_tm = frame_audio(wav_data)

    frame = 5
    all_logits, all_embeddings = model.infer_tf(fixed_tm[:1])
    for window in fixed_tm[1:]:
        if frame_limit_secs and frame >= frame_limit_secs:
            break  # no need to score windows beyond the frame limit
        logits, embeddings = model.infer_tf(window[np.newaxis, :])
        all_logits = np.concatenate([all_logits, logits], axis=0)
        frame += 5

    frame = 5
    for frame_logits in all_logits:
        probabilities = tf.nn.softmax(frame_logits).numpy()
        ## set the appropriate row in the sample submission
        sample_submission.loc[
            sample_submission.row_id == file_id + "_" + str(frame),
            competition_classes,
        ] = probabilities[competition_class_map]
        frame += 5
Step 6: Generate a submission
Now we process all of the test samples as discussed above, filling in the rows of the provided sample_submission.csv template and saving the result as submission.csv.
test_samples = list(glob.glob("/kaggle/input/birdclef-2023/test_soundscapes/*.ogg"))
test_samples
sample_sub = pd.read_csv("/kaggle/input/birdclef-2023/sample_submission.csv")
sample_sub[competition_classes] = sample_sub[competition_classes].astype(np.float32)
sample_sub.head()
## only score the first 15 seconds when running against the public test stub
frame_limit_secs = 15 if sample_sub.shape[0] == 3 else None

for sample_filename in test_samples:
    predict_for_sample(sample_filename, sample_sub, frame_limit_secs=frame_limit_secs)
sample_sub.to_csv("submission.csv", index=False)
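As an optional final check (my addition, not in the original notebook), reload the file and confirm it has the expected shape before submitting:
## Verify the saved submission: one row per 5-second window,
## one row_id column plus one column per competition species.
check = pd.read_csv("submission.csv")
print(check.shape)
check.head()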