Autonomous Robotic Drilling System for Mice Cranial Window Creation
Enduo Zhao, M. M. Marinho, Kanako Harada
Published 2024 in IEEE Transactions on Automation Science and Engineering

ABSTRACT
Robotic assistance for experimental manipulation in the life sciences is expected to enable favorable outcomes regardless of the skill of the scientist. Experimental specimens in the life sciences are subject to individual variability and hence require intricate algorithms for successful autonomous robotic control. As a use case, we study cranial window creation in mice. This operation requires the removal of an 8-mm circular patch of the skull, which is approximately 300 $\mu$m thick, but the shape and thickness of the mouse skull vary significantly with the strain, sex, and age of the animal. In this work, we develop an autonomous robotic drilling system with no offline planning, consisting of a trajectory planner with execution-time feedback provided by drilling completion level recognition based on image and force information. In the experiments, we first evaluate the image-and-force-based drilling completion level recognition by comparing it with state-of-the-art deep learning image processing methods, and we conduct an ablation study in eggshell drilling to evaluate the impact of each module on system performance. Finally, the system is evaluated in postmortem mice, achieving a success rate of 70% (14/20 trials) with an average drilling time of 9.3 min.

Note to Practitioners—This paper addresses the challenge of drilling along a trajectory to a specified depth using image and force information. The proposed strategy compensates at execution time for unknown characteristics such as shape, thickness, and operational noise, which are common to organic matter such as eggs and mouse skulls. The trajectory is general, but in this work it is evaluated as circular. Preoperatively, sufficient training data (e.g., videos) must be obtained and manually annotated with the pixel-wise completion level. At the training stage, the completion annotation can be discretized into classes (e.g., 0%, 25%, 50%, 75%, 100%) to facilitate labeling. These annotations are used to train the entire multimodal model, covering both image and force, so no additional force annotation is needed. This single round of data collection generalizes over individual differences because the visual features that correlate with pixel-wise completion do not depend on the overall shape and size of the surface. We apply this strategy to fragile targets with individual variation, such as raw chicken eggs and postmortem mouse skulls. Nonetheless, a success rate of 100% has not yet been achieved because even human annotators struggle to differentiate completion levels in mouse skulls. The inclusion of other sensing modalities might be needed for further progress.
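The abstract describes a multimodal (image and force) completion-level recognizer trained from discretized completion classes and used as execution-time feedback for the trajectory planner. The sketch below only illustrates that idea; the layer choices, feature sizes, fusion-by-concatenation scheme, and depth-adjustment rule are assumptions for illustration and not the authors' implementation.

```python
# Minimal sketch (assumed architecture, not the paper's model): fuse an image crop
# with a short force window to classify the drilling completion level, then use the
# predicted level as a toy execution-time feedback rule for the drilling depth.
import torch
import torch.nn as nn

NUM_LEVELS = 5  # discretized completion classes, e.g. 0%, 25%, 50%, 75%, 100%


class CompletionLevelNet(nn.Module):
    """Predicts a completion-level class from an image crop and a force window."""

    def __init__(self):
        super().__init__()
        # Image branch: small CNN over a local crop around the drilling point.
        self.image_encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> (B, 32)
        )
        # Force branch: 1-D CNN over a recent window of axial force samples.
        self.force_encoder = nn.Sequential(
            nn.Conv1d(1, 8, 5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),  # -> (B, 8)
        )
        # Late fusion by concatenation, then a linear classifier over levels.
        self.classifier = nn.Linear(32 + 8, NUM_LEVELS)

    def forward(self, image: torch.Tensor, force: torch.Tensor) -> torch.Tensor:
        feats = torch.cat([self.image_encoder(image), self.force_encoder(force)], dim=1)
        return self.classifier(feats)  # logits over completion levels


def adjust_depth(current_depth_mm: float, level: int, step_mm: float = 0.02) -> float:
    """Toy feedback rule: keep descending at a point until it looks complete."""
    if level >= NUM_LEVELS - 1:        # 100% complete -> stop lowering at this point
        return current_depth_mm
    if level == NUM_LEVELS - 2:        # 75% complete -> descend more cautiously
        return current_depth_mm + 0.5 * step_mm
    return current_depth_mm + step_mm  # otherwise continue at the nominal step


if __name__ == "__main__":
    net = CompletionLevelNet()
    img = torch.randn(1, 3, 64, 64)  # local image crop around one trajectory point
    frc = torch.randn(1, 1, 64)      # recent axial force window for the same point
    level = int(net(img, frc).argmax(dim=1))
    print("predicted level:", level, "next depth:", adjust_depth(1.00, level))
```

Training both branches from the same image-derived completion labels is consistent with the abstract's remark that no additional force annotation is needed; the fusion and feedback details above remain illustrative assumptions.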
PUBLICATION RECORD
- Publication year: 2024
- Venue: IEEE Transactions on Automation Science and Engineering
- Publication date: 2024-06-20
- Fields of study: Biology, Computer Science, Engineering
- Source metadata: Semantic Scholar