BiodiverseNet: Multitask Learning on Fused Multispectral and Radar Data for Scalable Ecosystem Monitoring

Prasanth Yadla

Published in the 2025 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW)

ABSTRACT

Monitoring global biodiversity is a pressing challenge given accelerating rates of ecosystem degradation and loss. Earth observation (EO) offers a scalable solution, but existing methods struggle with ecosystem heterogeneity, label sparsity, and modality limitations. We propose BiodiverseNet (code: https://github.com/TransformerTitan/BiodiverseNet), a multitask learning framework that leverages fused multispectral (Sentinel-2) and radar (Sentinel-1) imagery to jointly predict key biodiversity metrics: canopy cover, habitat fragmentation, and landscape connectivity. Our model employs a Vision Transformer (ViT) backbone with DINOv2 pre-training, combined with task-specific heads and auxiliary objectives including land cover classification and NDVI prediction. Evaluated on a global benchmark spanning 15 ecoregions across five continents, BiodiverseNet achieves competitive performance with R² scores of 0.76 for canopy cover, 0.62 for fragmentation, and 0.67 for connectivity, showing modest but consistent improvements of 2-2.5% over transformer baselines. The model demonstrates reasonable robustness across diverse biomes, though performance varies with ecosystem complexity and data availability.
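The abstract describes early fusion of Sentinel-2 and Sentinel-1 channels feeding a shared backbone with per-metric regression heads and an auxiliary NDVI target. The numpy sketch below illustrates that data flow only; the band counts, embedding size, band indices, and the pooled-linear stand-in for the ViT/DINOv2 backbone are all illustrative assumptions, not details from the paper.

```python
import numpy as np

# Assumed band counts (illustrative, not from the paper):
# Sentinel-2 multispectral: 10 bands; Sentinel-1 radar (VV, VH): 2 bands.
S2_BANDS, S1_BANDS = 10, 2
H = W = 8  # toy patch size

rng = np.random.default_rng(0)
s2 = rng.random((S2_BANDS, H, W))
s1 = rng.random((S1_BANDS, H, W))

# Early fusion: stack the two modalities along the channel axis.
fused = np.concatenate([s2, s1], axis=0)          # shape (12, H, W)

# Stand-in for the ViT/DINOv2 backbone: global-average-pool + linear map.
feat = fused.mean(axis=(1, 2))                    # shape (12,)
W_backbone = rng.standard_normal((16, feat.size))
embedding = W_backbone @ feat                     # shared embedding, shape (16,)

# Task-specific heads: one scalar regression per biodiversity metric.
heads = {name: rng.standard_normal(16)
         for name in ("canopy_cover", "fragmentation", "connectivity")}
preds = {name: float(w @ embedding) for name, w in heads.items()}

# Auxiliary NDVI target computed from red and near-infrared reflectance
# (band indices are placeholders): NDVI = (NIR - Red) / (NIR + Red).
red, nir = s2[2], s2[6]
ndvi = (nir - red) / (nir + red + 1e-8)           # values lie in (-1, 1)
```

In a trained model the three head losses and the auxiliary land-cover and NDVI losses would be summed into one multitask objective; the sketch stops at the forward pass.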
