CEO Vision generation 2 is a complete rebuild of CEO Vision with a wide range of improvements. The system now provides a fully interactive, precise working environment.
CEO Vision Kit
- 2 x Microsoft Kinect
- HD Camera
- Vuzix Star 1200 Glasses
CEO Vision setup
The Kinects provide hand and finger tracking, while the HD camera and the camera on the Vuzix Star 1200 provide a coordinated augmented reality experience.
We use fully markerless augmented reality, so CEO Vision can merge the computer world onto the real world. In this case CEO Vision recognises the box of Korn Flakes that the CEO wants to know more about. For the first time ever, your vision is the search engine and your hands are completely free.
Hands tracked using Kinects
Using dual-Kinect hand tracking, we can precisely track individual fingers. With this in place, CEO Vision uses the pinch as its core interaction, with multi-hand gestures built on top. On a tablet such as an iPad, a two-finger pinch zooms and rotates; here, a two-hand pinch makes the world bigger (approx 1:30) and a pinch scrolls. There is no touching of a surface: the user's hands are in the air above the product, interacting with the augmented data.
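The pinch gestures described above can be sketched in a few lines. This is a minimal illustration, not the CEO Vision implementation: the distance threshold and function names are assumptions, and a pinch is simply detected when thumb and index fingertips come close together.

```python
import math

# Assumed threshold for a pinch, in millimetres (illustrative, not from CEO Vision).
PINCH_THRESHOLD_MM = 25.0

def distance(a, b):
    """Euclidean distance between two 3D points (x, y, z)."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

def is_pinching(thumb_tip, index_tip, threshold=PINCH_THRESHOLD_MM):
    """A hand is pinching when its thumb and index fingertips are close together."""
    return distance(thumb_tip, index_tip) < threshold

def two_hand_pinch_scale(left_before, right_before, left_after, right_after):
    """Scale factor for a two-hand pinch: the ratio of the hand separation
    after the gesture to the separation before it."""
    return distance(left_after, right_after) / distance(left_before, right_before)
```

Moving both pinching hands apart then yields a scale factor greater than 1, which the framework could map onto the "make the world bigger" gesture.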
Pinch To Zoom
Combining these features makes CEO Vision one of the first Spatial Operating Systems ever. In this example we were interacting with 1.2 billion rows of data in SAP HANA in the cloud, using SAP NetWeaver Cloud as the backend powerhouse. So on top of the Korn Flakes box we can show the sales, literally show the sales across the globe, and provide information about your sales team.
How does it work?
CEO Vision is initially calibrated so that the hand-tracking and augmented worlds align. This is done with a single pinch performed at a known position in both coordinate systems. Once this position is known, a series of transformations maps any point in the augmented reality coordinate system into the hand-tracking coordinate system and vice versa. The 3D augmented objects are rendered on the GPU to keep the CPU available for interaction calculation and image recognition.

A set of objects that CEO Vision knows about is parsed. When an object is recognised, it can request data and set up the interactions available to it, such as a pinch or a multi-hand gesture. The data requests are RESTful requests to SAP NetWeaver Cloud, which marshals each request through to SAP HANA in the cloud; results are returned as JSON. The interactions are passed from the object to the CEO Vision framework, which then starts looking for them. When a user interaction is triggered, it can request more data or set up new interactions through the framework.
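The single-pinch calibration can be sketched as follows. This is a simplified model under an explicit assumption: the two coordinate systems differ only by a translation (a real calibration would likely also have to solve for rotation and scale), and all names here are illustrative.

```python
# Sketch of single-pinch calibration, assuming the hand-tracking and
# augmented-reality coordinate systems differ only by a translation.
# (A real system would likely also solve for rotation and scale.)

def calibrate(pinch_in_hand, pinch_in_ar):
    """One pinch seen in both systems yields the offset between them."""
    return tuple(a - h for a, h in zip(pinch_in_ar, pinch_in_hand))

def hand_to_ar(point, offset):
    """Map a hand-tracking point into the augmented reality system."""
    return tuple(p + o for p, o in zip(point, offset))

def ar_to_hand(point, offset):
    """Map an augmented reality point back into the hand-tracking system."""
    return tuple(p - o for p, o in zip(point, offset))
```

Once the offset is known from that one pinch, any point can be carried between the two systems in either direction, which is what lets a tracked fingertip hit an augmented object.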
You can watch the complete SAP TechEd Live Session here or click on the image below.