One of some 500 registered AR sandbox builds worldwide, the Augmented Reality Sandbox uses colour and interpolated contour lines to model a landscape, as on a topographic map.
How does the Augmented Reality Sandbox work?
The AR Sandbox combines a computer projector and a motion-sensing input device (a Microsoft Kinect 3D camera) mounted above a box of sand. When you shape the sand, the Kinect measures the distance to the sand surface below, and a visualization of an elevation model, with contour lines and a colour map keyed to elevation, is projected onto the surface of the sand from above. Move the sand, and the Kinect perceives the change in distance to the sand surface, and the projected colours and contour lines update accordingly.
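The mapping from measured distance to projected colours and contours can be sketched in a few lines of Python. This is an illustrative toy, not the actual AR Sandbox rendering code: the depth range, colour ramp, and contour spacing below are all assumptions chosen for the example.

```python
import numpy as np

def elevation_to_colour(depth_mm, min_mm=900.0, max_mm=1200.0, n_contours=10):
    """Map a Kinect-style depth frame (distance to the sand, in mm) to an
    RGB elevation image with contour lines, as a sandbox projector might
    display it. Higher sand = shorter distance = higher elevation.
    The depth range and colour ramp here are illustrative assumptions."""
    # Normalise: 0.0 = lowest sand, 1.0 = highest sand.
    elev = np.clip((max_mm - depth_mm) / (max_mm - min_mm), 0.0, 1.0)

    # Simple blue -> green -> brown -> white ramp, linearly interpolated.
    stops = np.array([[0.0, 0.2, 0.8],   # low / water (blue)
                      [0.1, 0.6, 0.2],   # low land (green)
                      [0.6, 0.4, 0.2],   # hills (brown)
                      [1.0, 1.0, 1.0]])  # peaks (white)
    idx = elev * (len(stops) - 1)
    lo = np.floor(idx).astype(int)
    hi = np.minimum(lo + 1, len(stops) - 1)
    frac = (idx - lo)[..., None]
    rgb = stops[lo] * (1 - frac) + stops[hi] * frac

    # Darken pixels where the elevation crosses a contour band boundary
    # (np.roll wraps at the edges, which is acceptable for a sketch).
    band = np.floor(elev * n_contours)
    on_contour = (band != np.roll(band, 1, axis=0)) | (band != np.roll(band, 1, axis=1))
    rgb[on_contour] *= 0.3
    return rgb

# A toy "hill" depth frame: distance decreases towards the centre.
y, x = np.mgrid[0:64, 0:64]
depth = 1200 - 250 * np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / 300.0)
img = elevation_to_colour(depth)
print(img.shape)
```

Reshaping the sand changes the depth frame, and re-running the mapping on each new frame is what makes the projected map follow the sand in real time.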
When you make a ‘rain’ gesture above the surface of the sand, virtual rain appears as a blue, shimmering visualization on the surface below. The water appears to flow down the slopes to lower ground. The water-flow simulation is based on real models of fluid dynamics: a depth-integrated version of the Navier-Stokes equations, also known as the shallow water equations.
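In their standard form, the depth-integrated (shallow water, or Saint-Venant) equations can be written as below. The notation is the conventional one and is given here for orientation; the sandbox software's exact formulation may differ in details such as source terms for rain and drainage.

```latex
% h = water depth, \mathbf{u} = depth-averaged velocity,
% g = gravitational acceleration, b = sand (bed) elevation.
\begin{align}
  \frac{\partial h}{\partial t}
    + \nabla \cdot (h\,\mathbf{u}) &= 0
    && \text{(conservation of mass)} \\
  \frac{\partial (h\,\mathbf{u})}{\partial t}
    + \nabla \cdot \!\left( h\,\mathbf{u} \otimes \mathbf{u}
      + \tfrac{1}{2}\,g\,h^{2}\,\mathbf{I} \right)
    &= -\,g\,h\,\nabla b
    && \text{(conservation of momentum)}
\end{align}
```

Because the sand surface supplies the bed elevation b, reshaping the sand immediately changes where the simulated water flows.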
Pressing and holding the “Drain” button drains the virtual water away.
This version of the sandbox uses a Microsoft Kinect camera, the same camera used in video games. The Kinect uses an infrared projector, an infrared camera and a dedicated microchip to track the position of objects in 3D. The depth data are then processed by the modelling program on a computer equipped with a powerful graphics card, and the resulting image is projected onto the sand by a short-range Promethean 35 projector.
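One step in that pipeline deserves a note: a raw Kinect depth stream is noisy, so frames are typically smoothed before rendering. The per-pixel exponential moving average below is a simplified stand-in for this kind of filtering, an assumption for illustration rather than the actual AR Sandbox filter.

```python
import numpy as np

class DepthFilter:
    """Per-pixel exponential moving average to steady a noisy depth stream.
    A simplified stand-in for the statistical filtering a sandbox pipeline
    needs before rendering; not the actual AR Sandbox implementation."""

    def __init__(self, alpha=0.1):
        self.alpha = alpha   # smoothing factor: higher = faster response
        self.state = None    # running per-pixel estimate

    def update(self, frame):
        frame = np.asarray(frame, dtype=float)
        if self.state is None:
            self.state = frame.copy()
        else:
            # Move each pixel a fraction of the way towards the new reading.
            self.state += self.alpha * (frame - self.state)
        return self.state

# Feed 200 noisy readings of a flat surface 1000 mm below the camera.
f = DepthFilter(alpha=0.2)
rng = np.random.default_rng(0)
true_depth = np.full((4, 4), 1000.0)
for _ in range(200):
    smoothed = f.update(true_depth + rng.normal(0.0, 5.0, size=(4, 4)))
print(np.abs(smoothed - true_depth).max())
```

The trade-off is responsiveness versus stability: a larger alpha lets the projection react faster when the sand moves, at the cost of more flicker from sensor noise.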
Contact firstname.lastname@example.org for more information.
The original model for the Augmented Reality Sandbox was developed in 2012 by the UC Davis W.M. Keck Center for Active Visualization in the Earth Sciences (Keck CAVES), using software made available by Dr Oliver Kreylos. The project is supported in the USA by the National Science Foundation under Grant No. DRL 1114663.
For more information, please visit https://arsandbox.ucdavis.edu.