Final journal acceptance of this paper is in progress, and we will release the full dataset once it is published. In the interim, we have released a sample dataset, and you can register your interest in the complete dataset using the link at the bottom of this page.

Welcome to the SeePerSea Dataset from the Dartmouth Reality and Robotics Lab!

Copyright

All datasets and benchmarks on this page are copyrighted by us.

Citation

When using this dataset in your research, please cite the following:

@article{jeong2024multi,
    title={Multi-modal Perception Dataset of In-water Objects for Autonomous Surface Vehicles},
    author={Jeong, Mingi and Chadda, Arihant and Ren, Ziang and Zhao, Luyang and Liu, Haowen and
        Roznere, Monika and Zhang, Aiwei and Jiang, Yitao and Achong, Sabriel and Lensgraf, Samuel
        and Quattrini Li, Alberto},
    journal={arXiv preprint arXiv:2404.18411},
    year={2024}
}

Changelog

2024.12.07
The SeePerSea dataset goes online.

Privacy

This dataset was collected in public spaces and is for non-commercial use only. We take privacy very seriously and implemented strategies to remove any identifying information. However, if you find yourself or your personal belongings in this dataset and are not comfortable with it, please contact us and we will take immediate steps to resolve the issue.

System

Catabot 2 ASV.
Human-driven ship equipped with sensors.
Catabot 1 ASV.
Catabot 5 ASV.

Sequence

Sea - Barbados 1.
Lake - Mascoma 1.
Lake - Sunapee 1.
Sea - Busan.
Sea - Barbados 2.
Lake - Mascoma 2.
Lake - Sunapee 2.

Format

Folder structure of the dataset.
To view a sample of the dataset, please visit our GitHub repo.
Thank you for your interest! To download the dataset, please fill out the following Google form.