ANR project reference: HDW ANR-16-CE33-0001
Producing massive 3D models representing large-scale virtual worlds with a high level of detail is a major challenge in computer graphics. In industry as well, there is a strong demand for efficient algorithms that reduce manual authoring tasks. Procedural modeling and texturing (PMT) is known to provide a good solution to the scalability problem: it has excellent compression properties, it can produce large amounts of data with low user effort and, by using stochastic processes, it can produce an almost infinite variety of data from a reduced set of parameters. In spite of all of these advantages, PMT still does not offer a suitable alternative to manual modeling, mainly because it is difficult to control and because ensuring realism at various scales is a hard task.
The goal of the HDWorlds research project is to overcome these limitations so as to synthesize huge detailed scenes using procedural modeling. The novelty of our approach is to develop multi-scale procedural techniques for generating both the shape (geometry) and the appearance (texture) of objects at different levels of detail. To improve realism, we take into account the changes in the appearance of objects over time, the impact of the external environment, as well as real-world data (photographs). The efficiency of our new models will be demonstrated by the creation of large realistic landscapes that will be editable at interactive rates using prototype tools.
Context and problem statement
Our modern society can no longer be imagined without image and media technologies, which hold a central place in our everyday life. Progress in 3D visualization technology results in an ever-increasing demand for rich and detailed 3D graphical content, i.e. geometric objects augmented with textures for appearance modeling (a simple color or more elaborate material information). With the increasing popularity and tremendous improvement of 3D data processing technologies, the production of such 3D scenes has undergone a quantum leap during the past few years. In particular, the target size of scenes has increased tremendously; examples include the Mega Meshes technology by Lionhead Studios and the MegaTexture technology by id Software. Whereas a few years ago millions of triangles / texels (texture elements) were enough to represent such scenes, virtual worlds must now contain tens of billions of triangles and texels in order to reach the desired quality.
Figure: Procedural model generation is able to scale up to huge scenes covering a wide range of scales (left). Realism is provided 1) by coherent geometry and texture (middle) and 2) by environmental factors such as season (right).
Developing novel tools that can scale up to this increased demand is of central importance for media content production industries, e.g. video games, serious games, and the motion picture industry. As pointed out by many artistic directors in the computer graphics industry, there is a strong demand for efficient computer algorithms that reduce manual editing tasks, which are directly linked to production costs.
A large number of commercial and freely available tools already exist for creating 3D scenes: general polygon-based tools like Autodesk's 3ds Max, and specialized tools for landscapes like TerraGen and GeoControl, for plants like XFrog and Plant Factory, for cities like CityEngine, etc. Some tools, like Houdini (Side Effects Software), let users build networks that define the geometry creation process. Besides storing the history of creation, Houdini allows one to copy / paste parts of a network into new networks to accelerate the user's work, and to program procedural networks that build scenes out of smaller template objects. Substance Designer (Allegorithmic) is a widely used tool dedicated to texture editing that also incorporates procedural networks.
The problem is that, in a “classical” production pipeline, the creation of massive virtual environments becomes a serious issue because huge amounts of intermediate data must be handled, and excessive computation / editing effort is necessary to generate these data. In end-user applications, virtual environments are pre-computed and explicitly stored as triangles and texture maps (arrays of texels). Being extremely memory-consuming, the whole pipeline requires setting up complex streaming technologies and virtual memory for both modeling and rendering.
In this context, our motivation is to introduce novel representations and algorithms that 1) improve content production tools so as to make content production easier and more efficient, while avoiding the need for conversions into triangles (and other low-level primitives), and 2) make massive scenes more realistic. We propose to keep high-level scene representations all along the production pipeline. Conversions are applied only when they are necessary: for instance, on the GPU, when rendering is required.
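The idea of deferring conversion can be sketched as follows. This is a minimal, hypothetical illustration (the class and parameter names are ours, not part of any project tool): the scene is stored only as a compact parametric description, and triangles are produced on demand at render time.

```python
# Hypothetical sketch: a procedural scene element kept as a high-level
# description, tessellated into triangles only when rendering requires it.
import math

class ProceduralTerrain:
    """Stores only a few parameters, never explicit geometry."""
    def __init__(self, amplitude, frequency):
        self.amplitude = amplitude
        self.frequency = frequency

    def height(self, x, z):
        # Analytic height field: evaluated on demand, not stored.
        return self.amplitude * math.sin(self.frequency * x) \
                              * math.cos(self.frequency * z)

    def tessellate(self, n):
        """Late conversion to triangles, e.g. just before GPU upload."""
        verts = [(x, self.height(x, z), z)
                 for x in range(n) for z in range(n)]
        tris = []
        for x in range(n - 1):
            for z in range(n - 1):
                i = x * n + z
                tris.append((i, i + 1, i + n))       # upper triangle of quad
                tris.append((i + 1, i + n + 1, i + n))  # lower triangle
        return verts, tris

terrain = ProceduralTerrain(amplitude=5.0, frequency=0.3)
verts, tris = terrain.tessellate(4)
# A 4x4 grid yields 16 vertices and 2 * (3 * 3) = 18 triangles,
# all derived from just two stored parameters.
```

The point of the sketch is the memory asymmetry: the persistent representation is two floats, while the triangle soup exists only transiently, at whatever resolution the renderer requests.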
Procedural modeling and texturing (PMT) provides a good solution to the scalability problem of scenes: it has excellent compression properties, it can produce large amounts of data with low user effort and, by using stochastic processes, it can produce an almost infinite variety of data from a single set of parameters. But the creation of huge realistic scenes using PMT is still a difficult problem in computer graphics. Four important challenges remain open research avenues:
- Models should handle the wide range of scales of natural scenes, for instance landscapes with details going from global mountain shape to individual blades of grass.
- Models should provide a framework for generating realistic scenes. This is difficult since the human eye is sensitive to repetition and inconsistencies between colors and shapes.
- Models should be user-friendly, allowing artists to rapidly design this kind of scene.
- Finally, there is a need to design efficient algorithms to generate and render the scenes.
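The compression and variety properties claimed above can be made concrete with a toy example. This is a hedged sketch, not any project's actual model: the function, its parameters, and the tree structure are invented for illustration. A small parameter set plus a random seed yields unlimited distinct yet reproducible instances.

```python
# Hypothetical sketch: stochastic procedural generation. A few parameters
# and a seed replace explicit storage of each object instance.
import random

def generate_tree(seed, trunk_height=5.0, branch_factor=3, spread=30.0):
    """Returns a list of branches (depth, length, angle) for a toy tree."""
    rng = random.Random(seed)  # seeding makes each instance reproducible
    branches = []
    def grow(depth, length):
        if depth > 2:          # three levels of recursion
            return
        for _ in range(branch_factor):
            angle = rng.uniform(-spread, spread)  # stochastic variation
            branches.append((depth, length, angle))
            grow(depth + 1, length * 0.6)         # children are shorter
    grow(0, trunk_height)
    return branches

# Same parameters, different seeds: distinct trees at near-zero storage cost.
tree_a = generate_tree(seed=1)
tree_b = generate_tree(seed=2)
# Each tree has 3 + 3*3 + 3*9 = 39 branches; the same seed always
# reproduces the identical tree (the "compression" property).
```

Each instance is fully determined by four numbers and a seed, which is exactly why PMT compresses so well; the difficulty named in the challenges above is that the same stochastic freedom makes the output hard to control and prone to visible repetition.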
Current PMT techniques still fail to provide practical solutions to these four challenges. Worse, as recently pointed out, PMT still does “not offer a suitable alternative to manual modeling”. Our goal is to propose new procedural models for geometry, appearance and textures, linked together within a processing pipeline, so as to offer a concrete alternative to manual modeling. Our approach has two original properties: 1) we define multi-scale procedural models, and 2) we treat geometry and appearance in a joint and coherent fashion.