I know, the title of this post sounds a little bit like an episode of The Big Bang Theory, but you’re just going to have to live with that. Yes, you guessed it, it’s time to talk a little bit about Daredevil’s radar sense – for the umpteenth time – but this isn’t your typical “Daredevil Science” post. It has more to do with philosophy than the natural sciences, and represents my own take on what I think might be the closest you can get to rendering the radar sense in two dimensions. Actually, make that three dimensions; this little thought experiment involves stereograms.
Before we get to the actual images I want you to look at, I’ll just briefly explain why I thought this was a neat idea. You see, the way I personally picture what Matt’s radar sense looks like, from his point of view, is as a world of black-on-black shapes. The reason I arrive at that conclusion is that I can’t fathom what else it could be. There is no color, but there is a three-dimensional awareness of where things are and how far objects extend in those dimensions (i.e. their shape).
Of course, you might ask (and some of you have) why I think of the radar experience as visual-like at all. Isn’t it supposed to be like “touching everything at once”? On the one hand, this is a compelling idea, which has the added bonus of really bringing home the point that Daredevil, for all his powers, truly is blind. It also makes sense from a real-world perspective in terms of how human echolocation is often described. The phenomenon of object perception among the blind used to be known as “facial vision,” and it wasn’t until 1944 that a study proved definitively that it depended on sound, not some other mechanism. However, the experience is often described as tactile, as feeling like pressure on the skin. In fact, one of the original subjects of the 1944 study found the idea that his ability was sound-based so absurd that it took several failed trials with his ears plugged to convince him that the perception of sound echoes alone accounted for his experience.
However, a recent study of two highly skilled echolocators has shown that their visual cortex is activated in echolocation tasks, whereas such activation is completely absent in sighted controls. This in no way proves that echolocation is experientially “vision-like” in these experts. After all, the visual cortex in the blind is activated in everything from braille reading to the understanding of ultra-fast synthetic speech (it represents vast available neural real estate, after all). Still, it makes sense to me that the more refined the ability becomes, the more difficult it would be for a tactile experience to encompass it. Vision, on the other hand, is unique in its ability to let us process an entire scene simultaneously. In order for the radar sense (be it sound-based or something more exotic) to be useful for the more complex object identification tasks that Matt Murdock apparently uses it for, it makes more sense to me that it’s processed in a way that mimics some of the properties of sight.
Anyway, let’s get to the simulation portion of this post. In order to get anything out of this at all, you have to be able to generate a 3D image from an autostereogram. Not everyone can do this, but most people should be able to. I have a very easy time with stereograms, and find them pretty fascinating. The trick for me is to look at the image as if I’m looking through it into infinity. These images were made using the online service easystereogrambuilder.com. I then altered the images on my computer to get them as dark as possible without losing the image. Remember that you can enlarge the images by clicking on them once (click again to close), which will make the task much easier. For an answer key to what you’re looking at, just hover over the image and the image’s title text will appear after a second or two. Have fun and don’t forget to comment! 😉
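(For the technically curious: I made my images with easystereogrambuilder.com, so I can’t say exactly what that site does under the hood, but the classic random-dot autostereogram technique behind images like these can be sketched in a few lines. Everything here – the function name `autostereogram` and the `pattern_width` and `max_shift` parameters – is my own illustrative naming, not anything from that service.)

```python
import numpy as np

def autostereogram(depth, pattern_width=60, max_shift=12, rng=None):
    """Build a random-dot autostereogram from a depth map.

    depth: 2D array with values in [0, 1], where 1 is "nearest"
    (largest shift). Returns a 2D uint8 image of black/white dots.
    """
    rng = np.random.default_rng(rng)
    h, w = depth.shape
    img = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        # Seed the row with random black/white dots.
        row = (rng.random(w) < 0.5).astype(np.uint8) * 255
        # Each pixel copies the dot roughly one pattern-width to its
        # left; nearer depths shrink that distance, which is what the
        # two eyes later decode as the object popping out.
        for x in range(pattern_width, w):
            shift = int(depth[y, x] * max_shift)
            row[x] = row[x - pattern_width + shift]
        img[y] = row
    return img
```

The “as dark as possible” step I did by hand could be mimicked by scaling the result toward black, e.g. `(img * 0.15).astype(np.uint8)`, which turns the white dots into very dark gray on black – the shape only emerges once your eyes fuse the repeating pattern.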