We propose a methodology that synthesizes a large-scale stereo system with a long theoretical baseline by combining small-baseline stereo pairs captured from multiple viewpoints. Because stereo measurement accuracy depends on the ratio of the baseline to the camera-to-object distance, enlarging this ratio improves accuracy even when the distance is large. We achieve this through camera pose and position estimation and 3D model-based tracking with 3D object recognition. We verified that our concept improves stereo measurement accuracy from distant viewpoints, and a verification experiment confirmed that it is effective for real stereo data such as aerial images. Thus, our approach allows a single stereo camera mounted on a moving object to achieve both wide-range observation and high accuracy.
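The benefit of a longer baseline can be seen from the standard stereo depth-error model (a minimal sketch, not the authors' implementation; the focal length, baseline, and disparity-error values below are hypothetical): depth from disparity is Z = fB/d, so a disparity error Δd propagates to a depth error ΔZ ≈ Z²Δd/(fB), which shrinks as the baseline B grows.

```python
def depth_error(Z, f, B, disparity_error):
    """Approximate depth uncertainty for object distance Z (same units
    as baseline B), focal length f (pixels), baseline B, and disparity
    error (pixels), using dZ ~= Z**2 * dd / (f * B)."""
    return Z ** 2 * disparity_error / (f * B)

# Hypothetical numbers: f = 1000 px, disparity error = 0.5 px, Z = 100 m.
# A 0.2 m physical baseline vs. a 10 m baseline combined from motion:
short = depth_error(Z=100.0, f=1000.0, B=0.2, disparity_error=0.5)
long_ = depth_error(Z=100.0, f=1000.0, B=10.0, disparity_error=0.5)
print(short, long_)  # 25.0 0.5
```

Under these assumed values, the 50x longer combined baseline reduces the depth uncertainty by the same factor, which is the motivation for merging small-baseline stereo pairs into one long-baseline measurement.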