Extracting shape information from acoustic shadows

Ensonified objects cast shadows on the background in the same way as objects illuminated by a light source. Depending on the angle of the incident sound, the position of the target, and the angle and position of the background, the shadow can reveal features of the object’s shape in the dimension that is otherwise not resolved in the primary acoustic image (i.e. the image created by echoes reflected from the object itself). This is potentially useful for fish identification, because many distinguishing features of fish (fins, head and body profile) lie in the vertical plane, where they generate echoes that are poorly resolved in typical side-looking sonar configurations. The effect is demonstrated below with a 24-inch fish model ensonified by a side-looking DIDSON sonar. The model casts a dramatic shadow on the background plane, and unlike the primary echoes, which show up as a bright line at 1.6 m range, the shadow provides an excellent view of the fish’s profile.

The acoustic image (left) of a 24-inch fish silhouette model casting a shadow onto a 45° background plane. Unlike the primary echoes, seen as a bright line at 1.6 m range, the shadow gives an excellent representation of the fish silhouette’s shape. Compare with the photo of the model on the right.

We conducted a series of experiments to test the extent to which target shape and depth can be derived from shadow images. These experiments required an even background plane of sufficient reflectivity, tilted at a known angle and marked with a reference scale, and a known transducer position. When the relative geometry of the transducer and the background plane is known, one can correct the distortion that the projection introduces into the shadow and calculate the depth of the target. Unfortunately, the potential for real-world applications is probably limited, since the method requires a fairly flat background of known geometry and a high cross-range resolution that can only be achieved over a short range. Another limitation is that shadows are well defined when objects are close to the background plane but quickly blur as the distance between the object and the background increases.

The shadow of a 7-inch fish silhouette model projected onto a 24° background plane (top), the corrected image of the shadow (center) and a photo of the model used (bottom). The shape of the corrected image is a good reproduction of the shape of the silhouette model. Note that, because this background plane is tilted at a shallower angle, the uncorrected shadow of the small fish model is more distorted than the shadow of the large fish model shown above.
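The correction can be sketched in a few lines. Assuming a horizontal incident beam and a flat background plane tilted at a known angle, the shadow’s extent in range and the target’s vertical extent are related through the tangent of the tilt angle. The function names and the simplified horizontal-beam geometry below are our own illustrative assumptions, not part of any sonar toolkit:

```python
import math

def shadow_range_extent(height_m, plane_tilt_deg):
    """In-range extent of the shadow cast by an object of vertical
    extent height_m onto a flat background plane tilted at
    plane_tilt_deg, assuming a horizontal incident beam."""
    return height_m / math.tan(math.radians(plane_tilt_deg))

def height_from_shadow(range_extent_m, plane_tilt_deg):
    """Invert the projection: recover the object's vertical extent
    from the measured in-range extent of its shadow."""
    return range_extent_m * math.tan(math.radians(plane_tilt_deg))

# On the 45-degree plane the shadow is undistorted in range: a
# 0.61 m (24-inch) silhouette casts a 0.61 m shadow. On the
# shallower 24-degree plane the same geometry stretches the shadow
# of a 0.18 m (7-inch) model to about 0.40 m.
```

The distortion factor is 1/tan(tilt), which grows as the plane becomes shallower, consistent with the more strongly distorted shadow on the 24° plane above.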

However, even though practical applications of acoustic shadows may be somewhat limited, understanding how they are generated highlights some fundamental similarities and (more importantly) differences between sonar images and photographs. This can be illustrated with a striking National Geographic award-winning photograph of camels casting dramatic shadows in the sand. Without the shadows we could be looking at a herd of long-necked desert turtles; with the shadows we immediately recognize the characteristic shapes of camels, one with its head down on the ground, all the others with their heads held high.

Camels casting shadows in the sand: camel shadows without the shadow of a doubt.
Photograph by National Geographic Explorer George Steinmetz.

The photograph and the acoustic image have in common that, for maximum effect, the shadow has to be cast on a background at a shallow angle: in the photograph it is the sunlight, in the sonar image the acoustic beam, that has to strike the background at a low angle. There are, however, two important differences. First, unlike the camera, the sonar serves a dual role as both the source of illumination and the observer; in the photograph, the source and the observer are separate and have to be in different positions to produce dramatic shadows in the image. Second, one of the two dimensions resolved in the acoustic image differs from one of the two dimensions resolved in the photograph. In the sonar image the shadows are resolved in range, which happens automatically for shadows cast on a background at a low angle; for the photograph, the camera has to look down from above to resolve them.
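The “shallow angle for maximum effect” point can be put in numbers for the optical case: on flat ground, a shadow’s length is the object’s height divided by the tangent of the source’s elevation angle, so a low sun stretches shadows dramatically. A small sketch, with heights and angles that are purely illustrative rather than taken from the photograph:

```python
import math

def shadow_length(object_height_m, elevation_deg):
    """Length of the shadow cast on flat ground by an object of the
    given height when the light source sits at elevation_deg above
    the horizon."""
    return object_height_m / math.tan(math.radians(elevation_deg))

# A 2 m camel under a high sun (60 degrees elevation) casts a short
# shadow, while the same camel under a low sun (10 degrees) casts a
# shadow several times its own height long.
```

The same 1/tan dependence governs the acoustic case, except that there the relevant angle is the grazing angle of the beam on the background plane.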

Understanding this difference also explains why images recorded with a side-looking sonar “look” as if they were taken from above. Why? This is one of the most common questions we are asked. We are so familiar with photographs, or for that matter with the way we see the world with our own eyes, that we intuitively interpret the superficially similar sonar images the same way: an image that “looks like this” must have been recorded from above. To capture the same scene with an imaging sonar, however, sea level would have to rise enough to cover the camels, and, contrary to our intuition, the sonar would have to listen for the camels’ echoes and, more importantly, for the silence of their shadows from an oblique angle rather than from above.