Note: These are working notes used for a course being taught at MIT. They will be updated throughout the Fall 2023 semester.
So far in these notes, we've assumed that cameras and proprioceptive (e.g. joint) sensing were our primary sources of information about the world. There is another obvious sensing modality that we must discuss -- the sense of touch. My first goal for this chapter is to explore recent advances in tactile sensing hardware, but also the computational frameworks that can leverage this information. (Hearing, the perception of sound, is likely the next most important sensing modality for humans performing manipulation, but so far this direction is relatively unexplored in robotics.)
Another important trend in manipulation research is the design and fabrication of robots that are fundamentally soft. Manipulation requires rich contact interactions with the environment. Most of the robots we've discussed so far are quite rigid, though they almost always have something like rubber pads on the fingertips (the only place we traditionally expected to make contact with the environment). My second goal for this chapter is to explore advances in soft robot hardware, but also in the computational frameworks that can deal with soft robots.
These might seem to be two separable ideas, so why am I putting them together into a single chapter? It turns out that being soft can enable tactile sensing. One might even argue that it's required for the richest forms of tactile sensing. Conversely, one could argue that tactile sensing becomes the natural sensing modality for soft-skinned robots -- the natural extension of proprioception. So these two topics are intimately connected!
FEM, MPM, ...
Here is a video from a recent paper describing some of the advances enabling high-performance, reliable FEM soft-body simulation in Drake (including interactions with rigid bodies).
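To give a feel for what the finite element method (FEM) actually computes, here is a minimal sketch -- not Drake's API, and with made-up material parameters -- of static linear FEM for a 1D elastic rod clamped at one end and loaded at the tip. Real soft-body simulators assemble much larger 3D stiffness matrices and handle dynamics and contact, but the core discretize-assemble-solve pattern is the same.

```python
import numpy as np

def rod_tip_displacement(E=1e5, A=1e-4, L=1.0, n_elements=10, tip_force=5.0):
    """Static 1D linear FEM: axial deflection of a clamped rod under a tip load.

    E (Young's modulus), A (cross-section area), L (length), and tip_force
    are illustrative values, not from any particular robot or material.
    """
    h = L / n_elements
    k = E * A / h  # axial stiffness of each linear element
    n_nodes = n_elements + 1
    K = np.zeros((n_nodes, n_nodes))
    for e in range(n_elements):
        # Assemble the 2x2 element stiffness into the global stiffness matrix.
        K[e:e + 2, e:e + 2] += k * np.array([[1.0, -1.0], [-1.0, 1.0]])
    f = np.zeros(n_nodes)
    f[-1] = tip_force  # axial load applied at the free end
    # Boundary condition: node 0 is clamped; solve K u = f on the free nodes.
    u = np.zeros(n_nodes)
    u[1:] = np.linalg.solve(K[1:, 1:], f[1:])
    return u[-1]

print(rod_tip_displacement())
```

For this linear problem, the FEM solution matches the analytic answer u = FL/(EA) = 0.5 exactly; the payoff of the machinery only shows up for the geometrically complex, nonlinear 3D bodies that simulators like Drake's handle.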
One of the strongest cases in favor of whole-body tactile skins comes from the field of contact estimation. Thanks to a series of nice works, we understand fairly well how to use joint-torque sensing to extract an estimate of the location on a robot arm where contact was made. But the problem is ill-posed; particularly in the case of multiple points of contact, joint-torque sensing alone seems woefully inadequate as a sensor.
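The basic idea, and its ill-posedness, can be sketched on a toy two-link planar arm (my own hypothetical example, not from the text): a single external force $f$ at a contact point produces external joint torques $\tau_{ext} = J_c(q)^T f$, where $J_c$ is the translational Jacobian of the contact point. Given measured $\tau_{ext}$, we can scan candidate contact locations along the links and solve a least-squares problem for the force at each; candidates that fit with zero residual are consistent explanations.

```python
import numpy as np

L1, L2 = 1.0, 1.0  # link lengths (illustrative)

def contact_jacobian(q, link, s):
    """Translational Jacobian of a point a fraction s along `link` (1 or 2)."""
    q1, q2 = q
    if link == 1:
        # A point on link 1 is moved only by joint 1.
        return np.array([[-s * L1 * np.sin(q1), 0.0],
                         [ s * L1 * np.cos(q1), 0.0]])
    r = s * L2
    return np.array([[-L1*np.sin(q1) - r*np.sin(q1+q2), -r*np.sin(q1+q2)],
                     [ L1*np.cos(q1) + r*np.cos(q1+q2),  r*np.cos(q1+q2)]])

def scan_contacts(q, tau_ext, n=20):
    """For each candidate contact point, least-squares fit a force to tau_ext.

    Returns a list of (link, s, residual, f_hat) tuples.
    """
    results = []
    for link in (1, 2):
        for s in np.linspace(0.1, 1.0, n):
            J = contact_jacobian(q, link, s)
            f_hat, *_ = np.linalg.lstsq(J.T, tau_ext, rcond=None)
            residual = np.linalg.norm(J.T @ f_hat - tau_ext)
            results.append((link, s, residual, f_hat))
    return results

q = np.array([0.3, 0.8])
# Simulate a contact at s = 0.5 along link 2 with a known force.
f_true = np.array([1.0, -2.0])
tau_ext = contact_jacobian(q, 2, 0.5).T @ f_true

# Every candidate point on link 2 explains the torques exactly (zero
# residual): with two torque measurements and a two-dimensional force, the
# contact location is not identifiable from joint torques alone.
good = [r for r in scan_contacts(q, tau_ext) if r[2] < 1e-9]
print(len(good))
```

Note that all twenty candidates on the distal link fit the measurements perfectly; extra assumptions (e.g. that contact forces push rather than pull, or that contact occurs on the surface geometry) are what make the published estimators work for a single contact, and multiple simultaneous contacts compound the ambiguity.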