Massive Model Rendering Techniques
Andreas Dietrich · Enrico Gobbetti · Sung-Eui Yoon
Abstract
We present an overview of current real-time massive model visualization technology, with the goal of providing readers with a high level understanding of the domain, as well as with pointers to the literature.
I. INTRODUCTION
Interactive visualization and exploration of massive 3D models is a crucial component of many scientific and engineering disciplines and is becoming increasingly important for simulations, education, and entertainment applications such as movies and games. In all these fields we are observing a data explosion: the quantity of information is increasing exponentially. Typical sources of rapidly growing massive data include the following:
• Large-scale engineering projects. Today, complete aircraft, ships, cars, etc. are designed purely digitally. Usually, many geographically dispersed teams are involved in such a complex process, creating thousands of different parts that are modeled at the highest possible accuracy. For example, the Boeing 777 airplane seen in Figure 1a consists of more than 13,000 individual parts.
• Scientific simulations. Numerical simulations of natural real-world effects can produce vast amounts of data that need to be visualized to be scientifically interpreted. Examples include nuclear reactions, jet engine combustion, and fluid dynamics, to mention a few. Increased numerical accuracy as well as faster computation can lead to datasets of gigabyte or even terabyte size (Figure 1b).
• Acquisition and measuring of real-world objects. Apart from modeling and computing geometry, scanning of real-world objects is a common way of acquiring model data. Improvements in measuring equipment allow scanning in sub-mm accuracy range, which can result in millions to billions of samples per object (Figure 1c).
• Modeling natural environments. Natural landscapes contain an incredible amount of visual detail. Even for a limited field of view, hundreds of thousands of individual plants might be visible. Moreover, plants are made of highly complex structures themselves, e.g., countless leaves, complicated branchings, wrinkled bark, etc. Even modeling only some of these effects can produce excessive quantities of data. For example, the landscape model depicted in Figure 1d measures “only” a square area of 82 km × 82 km.
Handling such massive models presents important challenges to developers. This is particularly true for highly interactive 3D programs, such as visual simulations and virtual environments, with their inherent focus on interactive, low latency, and real-time processing.
In the last decade, the graphics community has witnessed tremendous improvements in the performance and capabilities of computing and graphics hardware. The question therefore naturally arises whether such a performance boost turns rendering performance problems into memories of the past. A single standard dual-core 3 GHz Opteron processor delivers roughly 20 GFlops, a PlayStation 3's CELL processor 180 GFlops, and recent GPUs, now fully programmable, around 340 GFlops. With the increasing use of hardware parallelism, e.g., in the form of multi-core CPUs or multi-pipe GPUs, these performance improvements, which tend to follow, and even outpace, Gordon Moore's exponential growth prediction, seem set to continue for the near future. For instance, Intel has already announced an 80-core processor capable of TeraFlop performance. Despite this observed and continuing increase in computing and graphics processing power, it is clear to the graphics community that one cannot rely on hardware developments alone to cope with arbitrary data sizes within the foreseeable future. This is not only because the increased computing power also allows users to produce ever more complex datasets, but also because memory bandwidth grows at a significantly slower rate than processing power and becomes the major bottleneck when dealing with massive datasets.
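The bandwidth argument above can be made concrete with a back-of-the-envelope calculation. The numbers below (model size, per-triangle storage, transfer rate) are illustrative assumptions chosen for this sketch, not figures from the article:

```python
# Why brute-force rendering of a massive model is bandwidth-bound:
# streaming the whole model over the bus every frame is far too slow.
triangles = 350_000_000           # hypothetical triangle count of a huge CAD model
bytes_per_triangle = 36           # 3 vertices x 3 floats x 4 bytes, unindexed
model_bytes = triangles * bytes_per_triangle
bandwidth = 8 * 2**30             # assume ~8 GiB/s host-to-GPU transfer rate
seconds_per_frame = model_bytes / bandwidth
fps = 1.0 / seconds_per_frame
print(f"{model_bytes / 2**30:.1f} GiB per frame -> {fps:.2f} fps")
```

Even with these generous assumptions, the result is well under one frame per second, orders of magnitude away from interactive rates.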
As a result, massive datasets cannot be rendered interactively by brute force. To overcome this limitation, researchers have proposed a wide variety of output-sensitive rendering algorithms, i.e., rendering techniques whose runtime and memory footprint are proportional to the number of image pixels rather than to the total model complexity. Besides out-of-core data management, needed to handle datasets larger than main memory or to let applications explore data stored on remote servers, these methods require the integration of techniques for filtering out, as efficiently as possible, the data that does not contribute to a particular image.
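A minimal sketch of the output-sensitive idea, using hypothetical data structures rather than any system described in the article: a bounding-volume hierarchy is traversed top-down, and any subtree whose bounding box lies outside the view volume is skipped entirely, so the work done scales with what is visible rather than with total model size (2D boxes are used here for brevity):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    bbox: tuple                      # (xmin, ymin, xmax, ymax)
    triangles: int = 0               # payload size at a leaf
    children: List["Node"] = field(default_factory=list)

def intersects(a, b):
    """Axis-aligned box overlap test."""
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    return ax0 <= bx1 and bx0 <= ax1 and ay0 <= by1 and by0 <= ay1

def render_visible(node, view, drawn):
    if not intersects(node.bbox, view):
        return                       # cull the whole subtree in one test
    if not node.children:
        drawn.append(node.triangles) # "draw" the visible leaf
        return
    for child in node.children:
        render_visible(child, view, drawn)
```

Real systems combine such visibility culling with level-of-detail selection and out-of-core fetching, but the traversal skeleton is the same: reject large invisible portions of the model as early and as cheaply as possible.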
This article provides an overview of current massive model rendering technology, with the goal of providing readers with a high level understanding of the domain, as well as with pointers to the literature. The main focus will be on rendering of large static polygonal models, which are by far the current main test case for massive model visualization. We will first discuss the two main rendering techniques (Section II) employed in rendering massive models: rasterization and ray tracing. We will then illustrate how rendering complexity can be reduced by employing appropriate data structures and algorithms for visibility or detail culling, as well as by choosing alternate graphics primitive representations (Section III). We will further focus on data management (Section IV) and parallel processing issues (Section V), which are increasingly important on current architectures. The article concludes with an overview of how the various techniques are integrated into representative state-of-the-art systems, and a discussion of the benefits and limitations of the various approaches (Section VII).

