SeePerSea: Multi-modal Perception Dataset of In-water Objects for Autonomous Surface Vehicles

Welcome to the SeePerSea Dataset from the Dartmouth Reality and Robotics Lab!

Copyright

All datasets and benchmarks on this page are copyrighted by us.

Citation

When using this dataset in your research, please cite the following paper:
    @article{jeong2024multi,
        title={Multi-modal Perception Dataset of In-water Objects for Autonomous Surface Vehicles},
        author={Jeong, Mingi and Chadda, Arihant and Ren, Ziang and Zhao, Luyang and Liu, Haowen and Roznere, Monika and Zhang, Aiwei and Jiang, Yitao and Achong, Sabriel and Lensgraf, Samuel and Quattrini Li, Alberto},
        journal={arXiv preprint arXiv:2404.18411},
        year={2024}
    }

Changelog

2024.12.07
The SeePerSea dataset goes online.

Privacy

This dataset was collected in public spaces and is for non-commercial use only. We take privacy very seriously and have implemented strategies to remove any identifying information. However, if you find yourself or your personal belongings in this dataset and are not comfortable with it, please contact us and we will take immediate steps to resolve the issue.

The configuration of the sensor suite in the SeePerSea dataset

Data were collected with the following platforms: the Catabot 1, Catabot 2, and Catabot 5 ASVs, and a human-driven ship equipped with sensors.

The locations of the dataset

Data were collected at the following sites:

Sea: Barbados 1, Barbados 2, Busan.

Lake: Mascoma 1, Mascoma 2, Sunapee 1, Sunapee 2.

Data format

Folder structure.
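
As a rough illustration, the sketch below shows how one might read a paired camera image and LiDAR scan in Python after downloading the dataset. The folder names ("camera", "lidar"), file extensions, and KITTI-style point-cloud encoding used here are assumptions for this example only, not the dataset's documented format; adapt them to the actual layout shown in the folder structure above.

    from pathlib import Path

    import cv2
    import numpy as np

    # Hypothetical sequence folder; replace with a sequence from your download.
    DATASET_ROOT = Path("SeePerSea/lake_mascoma_1")

    def load_frame(frame_id):
        """Load one camera image and its matching LiDAR scan (assumed layout)."""
        image_path = DATASET_ROOT / "camera" / (frame_id + ".png")
        lidar_path = DATASET_ROOT / "lidar" / (frame_id + ".bin")
        image = cv2.imread(str(image_path))  # H x W x 3 BGR array
        # Assumed encoding: float32 (x, y, z, intensity) per point, KITTI-style.
        points = np.fromfile(str(lidar_path), dtype=np.float32).reshape(-1, 4)
        return image, points

    image, points = load_frame("000000")
    print("image shape:", image.shape, "| lidar points:", points.shape[0])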

Download

Thank you for your interest! To download the dataset, please fill out the following Google form.