Bees do this little dance that tells the other bees where the flowers are. But algorithmically that's probably nowhere near as sophisticated as visual perception. For a smart car's sensors to have true depth perception - not just radar's ping, ping, ping - it would need two separate cameras and the brains required to process parallax virtually instantly across the whole field of view. Assuming it could accomplish this, it would then need further brains to interpret that processed depth map. Maybe at that point the car would be ready to follow a small set of decision rules. But we and other animals have visual perception hard-wired. It happens "in hardware" in the background - we take it for granted while the "software" side of our thinking controls the vehicle. The next time you're in a self-driving car, keep in mind that compared to you it's barreling forward in a world of darkness, collecting impulses from a suite of widgets.
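The core of that parallax trick is surprisingly small. Here's a minimal sketch (not any real car's pipeline - the focal length and camera spacing below are made-up illustrative values) showing how the horizontal shift of a point between two cameras translates into distance:

```python
# Depth from stereo parallax: a point's horizontal shift between the
# left and right images (its "disparity") encodes how far away it is:
#     depth = focal_length * baseline / disparity
# The hard part a car faces is doing this matching for every pixel,
# dozens of times a second. The numbers here are illustrative assumptions.

def depth_from_disparity(disparity_px: float,
                         focal_length_px: float = 700.0,  # assumed focal length, in pixels
                         baseline_m: float = 0.5) -> float:  # assumed spacing between the two cameras
    """Return depth in meters for one matched point."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# A nearby object shifts a lot between the two views; a distant one barely moves.
print(depth_from_disparity(70.0))   # large disparity -> close: 5.0 m
print(depth_from_disparity(10.0))   # small disparity -> far: 35.0 m
```

The formula is the easy part; the "brains" the paragraph mentions are what's needed to find which pixel in the left image matches which pixel in the right one, across the entire scene, in real time.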