Current technology enables highly accurate 3D reconstructions of real scenes from data acquired with a wide variety of sensors. High-precision modelling that allows any element of the environment to be simulated on virtual interfaces has likewise become feasible. This paper presents a methodology for modelling a scene reconstructed in 3D with either an RGB-D camera or a laser scanner, describes how to integrate and display the result in Unity-based virtual reality environments, and compares the results obtained with both sensors. The main interest of this line of research lies in automating the entire process, from map generation to visualisation through the VR headset, although this first approach obtains its results by running several programs manually. The long-term objective is real-time immersion in Unity, interacting with the scene captured by the camera.