This is something Microsoft Research showed at this year's SIGGRAPH, called "KinectFusion". As the name suggests, it uses the Kinect (introduced here earlier) to process the captured real-world scene and fuse it with virtual objects. The official video and description page are at http://research.microsoft.com/apps/video/default.aspx?id=152815; below is the demo video on YouTube:
Essentially, the program reconstructs a 3D model from the depth data it is currently capturing; as the Kinect moves, it continuously tracks the sensor's pose (full 6DOF, and without needing feature points) and merges the new data into the current model. All of this is done in real time using GPGPU techniques! Also, according to the official description, the captured space seems to be stored in a volumetric representation, which is quite interesting~ Judging from the video, both the continuous Kinect tracking and the 3D reconstruction look remarkably good!
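To give a rough picture of the volumetric idea: this kind of fusion is commonly done with a truncated signed distance function (TSDF), where every voxel keeps a weighted running average of its distance to the observed surface. Below is only a minimal CPU sketch under assumed names and parameters, not the actual GPU implementation:

```cpp
// Minimal CPU sketch of TSDF fusion: every voxel keeps a weighted
// running average of its truncated signed distance to the surface.
// All names and constants are illustrative, not the KinectFusion code,
// which runs these loops in parallel on the GPU.
#include <cmath>
#include <cstddef>
#include <vector>

struct TsdfVolume {
    int dim;                   // voxels per side
    float voxelSize;           // metres per voxel
    float truncation;          // truncation band in metres
    std::vector<float> tsdf;   // signed distance, clamped to [-1, 1]
    std::vector<float> weight; // accumulated confidence per voxel

    TsdfVolume(int d, float vs, float tr)
        : dim(d), voxelSize(vs), truncation(tr),
          tsdf(static_cast<size_t>(d) * d * d, 1.0f),
          weight(static_cast<size_t>(d) * d * d, 0.0f) {}

    // Fuse one depth map taken by a camera at the origin looking down +Z.
    // fx, fy, cx, cy: pinhole intrinsics; depth[v * width + u] in metres.
    void integrate(const float* depth, int width, int height,
                   float fx, float fy, float cx, float cy) {
        for (int z = 0; z < dim; ++z)
        for (int y = 0; y < dim; ++y)
        for (int x = 0; x < dim; ++x) {
            // Voxel centre in camera coordinates (volume centred on the axis).
            float px = (x + 0.5f) * voxelSize - 0.5f * dim * voxelSize;
            float py = (y + 0.5f) * voxelSize - 0.5f * dim * voxelSize;
            float pz = (z + 0.5f) * voxelSize;
            if (pz <= 0.0f) continue;

            // Project the voxel into the depth image.
            int u = static_cast<int>(fx * px / pz + cx);
            int v = static_cast<int>(fy * py / pz + cy);
            if (u < 0 || u >= width || v < 0 || v >= height) continue;

            float d = depth[static_cast<size_t>(v) * width + u];
            if (d <= 0.0f) continue;        // no measurement at this pixel

            // Signed distance along the ray, truncated to [-1, 1].
            float sdf = (d - pz) / truncation;
            if (sdf < -1.0f) continue;      // hidden well behind the surface
            float t = std::fmin(sdf, 1.0f);

            // Weighted running average fuses this frame into the model.
            size_t i = (static_cast<size_t>(z) * dim + y) * dim + x;
            tsdf[i] = (tsdf[i] * weight[i] + t) / (weight[i] + 1.0f);
            weight[i] += 1.0f;
        }
    }
};
```

The surface is then simply the zero crossing of this field, which is part of what makes adding and removing new measurements so cheap compared with stitching meshes together.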
In the middle of the video (3:50), they also show adding virtual objects and running physics simulation against the real environment, and the effect looks quite good too! Later on (7:00), there is also a demo of doodling on the virtual environment with a finger; the strokes don't seem perfectly registered, but it is still rather impressive~ Using this for AR games could be a real breakthrough, although the hardware requirements would probably be very high. ^^"
Finally, the official description:
We present KinectFusion, a system that takes live depth data from a moving depth camera and in real-time creates high-quality 3D models. The system allows the user to scan a whole room and its contents within seconds. As the space is explored, new views of the scene and objects are revealed and these are fused into a single 3D model. The system continually tracks the 6DOF pose of the camera and rapidly builds a volumetric representation of arbitrary scenes.
Our technique for tracking is directly suited to the point-based depth data of Kinect, and requires no feature extraction or feature tracking. Once the 3D pose of the camera is known, each depth measurement from the sensor can be integrated into a volumetric representation. We describe the benefits of this representation over mesh-based approaches. In particular, the representation implicitly encodes predictions of the geometry of surfaces within a scene, which can be extracted readily from the volume. As the camera moves through the scene, new depth data can be added or removed from this volumetric representation, continually refining the 3D model acquired. We describe novel GPU-based implementations for both camera tracking and surface reconstruction. These take two well-understood methods from the computer vision and graphics literature as a starting point, defining new instantiations designed specifically for parallelizable GPGPU hardware. This allows for interactive real-time rates that have not previously been demonstrated.
We demonstrate the interactive possibilities enabled when high-quality 3D models can be acquired in real-time, including: extending multi-touch interactions to arbitrary surfaces; advanced features for augmented reality; real-time physics simulations of the dynamic model; novel methods for segmentation and tracking of scanned objects.
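The feature-free tracking described above is, in essence, a dense point-to-plane ICP: each depth measurement is associated with the predicted model surface, and a small 6DOF pose update is solved in closed form. Here is a minimal sketch of one such update step, assuming Eigen and hypothetical names; the real system runs this densely on the GPU:

```cpp
// One point-to-plane ICP step: given matched 3D points from the new
// frame (src) and the predicted model surface (dst + normals), solve
// for the small 6DOF twist xi = (rx, ry, rz, tx, ty, tz) minimising
// sum ( n . (R*src + t - dst) )^2, linearised around the identity.
// Illustrative sketch only; names and structure are assumptions.
#include <Eigen/Dense>
#include <vector>

Eigen::Matrix<double, 6, 1> icpStep(
    const std::vector<Eigen::Vector3d>& src,
    const std::vector<Eigen::Vector3d>& dst,
    const std::vector<Eigen::Vector3d>& normal) {
    Eigen::Matrix<double, 6, 6> A = Eigen::Matrix<double, 6, 6>::Zero();
    Eigen::Matrix<double, 6, 1> b = Eigen::Matrix<double, 6, 1>::Zero();

    for (size_t i = 0; i < src.size(); ++i) {
        const Eigen::Vector3d& p = src[i];
        const Eigen::Vector3d& q = dst[i];
        const Eigen::Vector3d& n = normal[i];

        // Linearised residual: J . xi approximates n . (p - q).
        Eigen::Matrix<double, 6, 1> J;
        J.head<3>() = p.cross(n);   // rotational part
        J.tail<3>() = n;            // translational part

        double r = n.dot(p - q);
        A += J * J.transpose();     // accumulate normal equations
        b -= J * r;
    }
    return A.ldlt().solve(b);       // small twist to apply to the pose
}
```

In KinectFusion, the destination points and normals come from raycasting the volumetric model itself (frame-to-model rather than frame-to-frame alignment), which is part of why the tracking in the video looks so stable.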
Essentially, they have already done one of the things Heresy wanted to do with the Kinect, and done it better than Heresy expected… In a way, that was probably inevitable… Still, it is a bit depressing to think about: with globalization, unless you are truly top-tier, anything you can think of has already been done by someone else, and done better… orz
[…] This post steps away from the other core features of the Kinect for Windows SDK for a while, to look at how to use Kinect Fusion, the 3D model reconstruction feature (introduced long ago) […]
[…] Interactions"; the other is "Kinect Fusion", the very impressive feature Microsoft demonstrated long ago! For the detailed update contents, see the official Release […]
[…] Kinect Fusion: We present KinectFusion, a system that takes live depth data from a moving depth camera and in real-time creates high-quality 3D models. […]
[…] KinectFusion: 3D reconstruction and AR with the Kinect […]
But globalization is also what lets you and me see how big the world is, and what kind of research the top talents are doing… KEEP WALKING!
Indeed, it takes that kind of stimulus to keep making progress~ ^^"
Heresy, how exactly is the Kinect's depth image obtained? If it were binocular stereo, the baseline would be too short, wouldn't it?
Sorry, I'm not sure I understand your question?
That said, the Kinect's depth sensing is basically not binocular stereo; it uses an infrared light-coding technique instead.
For details, please refer directly to PrimeSense's official website.
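On the baseline point: the geometry is in fact the same triangulation as stereo (depth = focal length × baseline / disparity), except the second camera is replaced by a projector emitting a known IR dot pattern, so correspondences can be found even on textureless surfaces. A back-of-the-envelope sketch, using rough, unofficial constants:

```cpp
// Structured-light depth by triangulation: the IR projector and the IR
// camera form a stereo pair where one side is a known dot pattern.
// z = f * b / d, the same relation as two-camera stereo.
// The constants below are rough public estimates, not official specs.
#include <cstdio>

int main() {
    const double f = 580.0;   // IR camera focal length in pixels (approx.)
    const double b = 0.075;   // projector-camera baseline in metres (approx.)

    // A shift of d pixels between the observed and reference dot
    // pattern corresponds to a depth of z = f * b / d.
    for (double d = 5.0; d <= 40.0; d += 5.0)
        std::printf("disparity %5.1f px  ->  depth %.2f m\n", d, f * b / d);
    return 0;
}
```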