In stereopsis, the mind uses binocular disparity to extract depth information from the two-dimensional retinal images. When the two eyes fixate on a point, its image falls on corresponding locations of the two retinas; because of the slightly different viewpoints of the left and right eye, however, many other points in space do not fall on corresponding retinal locations, and this offset constitutes their disparity.

In computer vision, binocular disparity refers to the difference in coordinates of similar features within two stereo images. The distance between the two cameras of a stereo pair, called the baseline, affects the disparity of a given point on their respective image planes. Unlike in human vision, however, disparity in computer vision is expressed as the coordinate difference of the point between the right and left images rather than as a visual angle.

In experiments, one possibility for presenting stimuli with different disparities to an observer is to place objects at varying depths in front of the eyes.

Disparity maps are commonly computed by comparing small image patches between the two images and selecting, for each pixel, the offset with the lowest correlation (matching cost) score. With large patch and/or image sizes, this technique can be very time consuming, as pixels are constantly re-examined to find the lowest correlation score. Techniques that save previously computed information can greatly increase the algorithmic efficiency of this image-analysis process.[4]

Disparity calculations are also used in mobile-robot navigation: a rover, for example, captures a pair of images with its stereoscopic navigation cameras, and disparity calculations are performed to detect elevated objects (such as boulders).[5] Additionally, location and speed data can be extracted from subsequent stereo images by measuring the displacement of objects relative to the rover. In some cases this is the best source of such information, as the encoder sensors in the wheels may be inaccurate due to tire slippage.
Figure 2. Simulation of disparity from depth in the plane (relates to Figure 1).
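The patch-matching procedure described above can be summarized in a short sketch. The following Python code is an illustrative assumption rather than an implementation from the cited sources: it computes a dense disparity map for two rectified grayscale images by comparing fixed-size patches along corresponding rows and keeping, for each pixel, the offset with the lowest sum-of-squared-differences cost, then converts disparity to depth from the baseline and focal length. The function names, the SSD cost, and all parameter values are assumptions made for illustration.

```python
# Illustrative sketch of block-matching disparity (not from the cited sources).
import numpy as np

def block_matching_disparity(left, right, max_disparity=64, patch_radius=3):
    """Dense disparity map for two rectified grayscale images (2-D arrays)."""
    height, width = left.shape
    r = patch_radius
    disparity = np.zeros((height, width), dtype=np.float32)

    for y in range(r, height - r):
        for x in range(r, width - r):
            ref_patch = left[y - r:y + r + 1, x - r:x + r + 1].astype(np.float32)
            best_cost, best_d = np.inf, 0
            # Search along the same row of the right image (positive disparity).
            for d in range(0, min(max_disparity, x - r) + 1):
                cand_patch = right[y - r:y + r + 1, x - d - r:x - d + r + 1]
                cost = np.sum((ref_patch - cand_patch) ** 2)
                if cost < best_cost:  # keep the lowest correlation (cost) score
                    best_cost, best_d = cost, d
            disparity[y, x] = best_d
    return disparity

def depth_from_disparity(disparity, focal_length_px, baseline_m):
    """Convert disparity (pixels) to depth using depth = focal_length * baseline / disparity."""
    with np.errstate(divide="ignore"):
        return np.where(disparity > 0,
                        focal_length_px * baseline_m / disparity,
                        np.inf)
```

This naive search re-examines every candidate patch from scratch, which is the source of the cost noted above; in practice, techniques that reuse previously computed information, such as running sums or integral images for the patch costs, reduce this redundancy considerably.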