This paper introduces a method for achieving a seamless 3D scene view through a NeRF-generated Digital Twin environment and its implementation in Unity. NeRF (Neural Radiance Fields), a neural network technique for novel-view synthesis, is used to create a photorealistic Digital Twin from sparse point cloud data derived from Structure-from-Motion processing of environmental images. The NeRF Digital Twin is subsequently exported to Unity and rendered in a virtual environment in real time, which is then displayed on a screen. The proposed methodology uses camera geometry and world-coordinate transformations to build a control system that dynamically aligns the virtual environment with the user’s perspective, so that the screen presents a seamless 3D view without the use of VR gear. The proposed approach offers a practical solution for creating a view-through effect on a screen without glass or mirrors, and provides a future-proof foundation for mixed reality applications.