In this paper, we propose a method for reconstructing a 3D model from a single 2D image. Current state-of-the-art methods for 3D reconstruction use generative adversarial networks (GANs) to generate the model, but they require multiple 2D images, because a single view cannot capture all the information about a real object; in particular, the back of the object is invisible. Reconstructing a 3D model from a single view is nevertheless important in practical applications, since it allows a system to perceive the surrounding environment of an object quickly without having to move around it. We therefore propose a method that reconstructs 3D models from one image by learning the relationship between 3D models and 2D images. The method consists of three parts. The first part, the view layer, observes real-world objects, captures 2D images, and searches a 3D model library for the 2D views associated with each 3D model. The second part, the corresponding layer, compares these library views with the 2D image of the real-world object and selects the 2D cross-section of a 3D model that is most similar to it. The third part, the generative layer, retrieves the corresponding 3D model from the library and uses a GAN to reconstruct a 3D model that matches the real object.
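The three-layer pipeline above can be sketched in code. The following is a minimal, illustrative sketch, not the paper's implementation: all names (`MODEL_LIBRARY`, `view_layer`, `corresponding_layer`, `generative_layer`) are hypothetical, short hand-made feature vectors stand in for 2D images, and a dictionary lookup stands in for the GAN-based generative layer.

```python
from math import sqrt

def cosine_similarity(a, b):
    # Similarity between two feature vectors; a real system would compare
    # learned image features rather than raw vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Toy "3D model library": each model is stored with feature vectors of its
# rendered 2D views (hypothetical data standing in for real renderings).
MODEL_LIBRARY = {
    "chair": [[1.0, 0.1, 0.0], [0.9, 0.2, 0.1]],
    "table": [[0.0, 1.0, 0.2], [0.1, 0.9, 0.3]],
}

def view_layer(library):
    """Collect candidate (model, view) pairs from the 3D model library."""
    return [(name, view) for name, views in library.items() for view in views]

def corresponding_layer(image_feature, candidates):
    """Pick the library view most similar to the real-world 2D image."""
    return max(candidates, key=lambda c: cosine_similarity(image_feature, c[1]))

def generative_layer(model_name):
    """Stand-in for the GAN: return the reconstructed 3D model."""
    return f"3D model reconstructed from '{model_name}'"

def reconstruct(image_feature, library=MODEL_LIBRARY):
    candidates = view_layer(library)
    best_model, _ = corresponding_layer(image_feature, candidates)
    return generative_layer(best_model)

print(reconstruct([0.95, 0.15, 0.05]))  # a chair-like input view
```

The design choice mirrored here is that retrieval (view and corresponding layers) narrows the search to one library model before generation, so the generative layer only has to refine a known shape rather than hallucinate the unseen back of the object from scratch.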