3D Real-Time Plants Modeling with L-Systems

by Svetlin Alex Bostandjiev, Lyubov Kovaleva, Daniel Vaquero
University of California, Santa Barbara

Introduction

This project involves creating a real-time computer graphics animated simulation of a natural environment. The environment consists of 3D plants modeled with procedural techniques and a weather simulation (snow). The generated environment is then illuminated with image-based lighting techniques, using an HDR environment map that provides the light distribution coming from all directions around the scene.

Plant Modeling

We use Lindenmayer systems (L-systems) to model 3D plants such as trees, bushes, and weeds. L-systems share with fractals the idea of self-similarity at different levels of detail and are commonly used to model the growth processes of plant development. An L-system is a formal grammar,

         G = {V, S, w, P}

where,

         V (the alphabet) is a set of symbols containing elements that can be replaced (variables)
         S is a set of symbols containing elements that remain fixed (constants)
         w (start, axiom or initiator) is a string of symbols from V defining the initial state of the system
         P is a set of rules or productions defining the way variables can be replaced with combinations of constants and other variables

L-systems grew out of Chomsky's theory of formal languages. The main difference is that production rules are applied in parallel rather than sequentially, for an obvious biological reason: every module in an organism develops simultaneously. Although very complex structures can be captured by L-systems, many of the phenomena involved in the development of living organisms are beyond their scope. Parametric L-systems extend the original concept by associating numerical parameters with the symbols representing plant components, which allows easy quantification of the geometric attributes of a model.

Here is an example of a simple L-system production rule we have implemented:

         X -> F[+X]F[-X]+X
         F -> FF
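
To make the expansion concrete, here is a minimal C++ sketch of how such productions can be applied in parallel (rewrite, expand, and the rule table are illustrative names rather than our actual code):

    #include <map>
    #include <string>

    // One parallel rewriting step: every symbol of the current string is
    // replaced at once, using its production if one exists, or kept as-is.
    std::string rewrite(const std::string& current,
                        const std::map<char, std::string>& rules)
    {
        std::string next;
        for (char symbol : current) {
            std::map<char, std::string>::const_iterator it = rules.find(symbol);
            next += (it != rules.end()) ? it->second : std::string(1, symbol);
        }
        return next;
    }

    // Expanding the axiom "X" with the two productions above.
    std::string expand(int iterations)
    {
        std::map<char, std::string> rules;
        rules['X'] = "F[+X]F[-X]+X";
        rules['F'] = "FF";

        std::string state = "X";
        for (int i = 0; i < iterations; ++i)
            state = rewrite(state, rules);
        return state;
    }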

The resulting string can be interpreted geometrically in 2D where,

         F means extend forward
         + - mean rotate left/right
         [ ] mean push/pop current state

If these rules were strictly obeyed, the plants would be perfectly symmetrical and therefore look unnatural. Hence, we introduce parameters that add noise at each iteration. To make the plants look more realistic, we also apply RGBA textures to some of the branches, leaves, and fruit. The sketch below shows one possible turtle interpretation with this kind of angle noise; some resulting plants follow.
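
Below is a minimal 2D sketch of the turtle interpretation with per-turn angle noise; TurtleState, Segment, interpret, and the jitter parameter are illustrative names, not our actual implementation:

    #include <cmath>
    #include <cstdlib>
    #include <stack>
    #include <string>
    #include <vector>

    struct TurtleState { float x, y, angle; };
    struct Segment { float x0, y0, x1, y1; };

    // Interprets an expanded L-system string as 2D line segments.
    // 'baseAngle' is the nominal branching angle in degrees and 'jitter' is
    // the amount of random perturbation used to break the symmetry.
    std::vector<Segment> interpret(const std::string& s, float step,
                                   float baseAngle, float jitter)
    {
        std::vector<Segment> segments;
        std::stack<TurtleState> stack;
        TurtleState t = {0.0f, 0.0f, 90.0f};   // start at the origin, pointing up

        for (char c : s) {
            // random offset in [-jitter, +jitter] degrees
            float noise = jitter * (2.0f * (std::rand() / (float)RAND_MAX) - 1.0f);
            switch (c) {
                case 'F': {                     // extend forward and draw
                    float rad = t.angle * 3.14159265f / 180.0f;
                    float nx = t.x + step * std::cos(rad);
                    float ny = t.y + step * std::sin(rad);
                    Segment seg = {t.x, t.y, nx, ny};
                    segments.push_back(seg);
                    t.x = nx; t.y = ny;
                    break;
                }
                case '+': t.angle += baseAngle + noise; break;   // rotate left
                case '-': t.angle -= baseAngle + noise; break;   // rotate right
                case '[': stack.push(t); break;                   // push current state
                case ']': t = stack.top(); stack.pop(); break;    // pop current state
                default:  break;                                  // 'X' draws nothing
            }
        }
        return segments;
    }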

Raspberry and Blueberry

 

Weeds

Fall trees

Ground Modeling

The ground is modeled using subdivision surfaces with a smoothing filter. Surface subdivision gives us a quick way of generating random terrains. The ground starts as a grid of squares in which each vertex is associated with a height value; in other words, we have a height map. Initially the surface is flat, so all heights are equal. We then offset the heights of two opposite corners of the grid.

Then we start the recursive surface subdivision. We begin with the whole grid and subdivide it into smaller and smaller square patches. Consider the square defined by the corners UL, UR, BL, and BR. We introduce new points at the midpoints of its sides, called U, B, L, and R, plus the center point M. The height at each of U, B, L, and R is the average of the heights at the two endpoints of the side it lies on; the height at M is the average of the four corners of the square. We then repeat the process on the four new squares.

This surface is nice and smooth, but too regular to be natural, so we add a small random offset to each new height, as in the sketch below.
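
A minimal sketch of this subdivision step, assuming the height map is a square 2D array of side 2^n + 1 (HeightMap, subdivide, and smallNoise are illustrative names):

    #include <cstdlib>
    #include <vector>

    typedef std::vector<std::vector<float> > HeightMap;

    // Random offset in [-amount, +amount].
    float smallNoise(float amount)
    {
        return amount * (2.0f * (std::rand() / (float)RAND_MAX) - 1.0f);
    }

    // Recursively subdivides the square (x0, y0)-(x1, y1) of the height map.
    // Each new edge midpoint gets the average of the two endpoints of its
    // side, the center gets the average of the four corners, and every new
    // height receives a small random offset to break the regularity.
    void subdivide(HeightMap& h, int x0, int y0, int x1, int y1, float noise)
    {
        if (x1 - x0 < 2) return;          // nothing left to split
        int xm = (x0 + x1) / 2;
        int ym = (y0 + y1) / 2;

        h[xm][y0] = 0.5f * (h[x0][y0] + h[x1][y0]) + smallNoise(noise);  // U
        h[xm][y1] = 0.5f * (h[x0][y1] + h[x1][y1]) + smallNoise(noise);  // B
        h[x0][ym] = 0.5f * (h[x0][y0] + h[x0][y1]) + smallNoise(noise);  // L
        h[x1][ym] = 0.5f * (h[x1][y0] + h[x1][y1]) + smallNoise(noise);  // R
        h[xm][ym] = 0.25f * (h[x0][y0] + h[x1][y0] +
                             h[x0][y1] + h[x1][y1]) + smallNoise(noise); // M

        // recurse into the four new squares
        subdivide(h, x0, y0, xm, ym, noise);
        subdivide(h, xm, y0, x1, ym, noise);
        subdivide(h, x0, ym, xm, y1, noise);
        subdivide(h, xm, ym, x1, y1, noise);
    }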

Then we randomly pull some of the new points up by a larger offset. The resulting surface looks jagged and unnatural, so we apply a smoothing filter (sketched below) to soften the sudden changes in elevation.
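
The smoothing filter can be as simple as repeatedly averaging each height with its four direct neighbours; the sketch below is illustrative rather than our exact filter:

    #include <vector>

    typedef std::vector<std::vector<float> > HeightMap;

    // One pass of a simple smoothing filter: each interior height becomes the
    // average of itself and its four direct neighbours. A few passes soften
    // the sharp elevation changes left by the large random offsets.
    void smooth(HeightMap& heights)
    {
        HeightMap copy = heights;
        int n = (int)heights.size();
        for (int x = 1; x < n - 1; ++x) {
            for (int y = 1; y < n - 1; ++y) {
                heights[x][y] = (copy[x][y] +
                                 copy[x - 1][y] + copy[x + 1][y] +
                                 copy[x][y - 1] + copy[x][y + 1]) / 5.0f;
            }
        }
    }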

Animation

Plant Growth

At each frame the size of every geometric object is calculated. The animation uses a global time variable step and the function float growth(float start, float end). The function returns 0 if step is less than start, 1 if step is greater than end, and a value between 0 and 1 if step is between start and end. This value is then multiplied by the maximum size of each object, so an object grows from nothing to full size during the interval from start to end. For more realistic growth of branches, we raise the value returned by the growth function to different powers depending on the current iteration of the tree model; this ensures that bigger branches grow before smaller ones.
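
A minimal sketch of this growth interpolation; in our code step is a global time variable, but here it is passed as a parameter, and the linear ramp and the exact exponent in currentSize are illustrative assumptions:

    #include <cmath>

    // Returns 0 before 'start', 1 after 'end', and ramps from 0 to 1 in between.
    float growth(float start, float end, float step)
    {
        if (step <= start) return 0.0f;
        if (step >= end)   return 1.0f;
        return (step - start) / (end - start);
    }

    // Illustrative use: an object grows from nothing to maxSize during
    // [start, end]. Raising the growth value to a power that depends on the
    // branch's iteration makes bigger branches grow before smaller ones.
    float currentSize(float maxSize, float start, float end,
                      float step, int iteration)
    {
        float g = growth(start, end, step);
        return maxSize * std::pow(g, (float)(iteration + 1));
    }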

Falling Leaves

The rate of change of the position of a leaf at time t is calculated using the parametric curve:

         dx = (1/20) * sqrt(t) * cos((1/10)*t)
         dz = (1/20) * sqrt(t) * (-sin((1/10)*t))
         dy = (1/5) * ((rand()%10) / 10.0)

We derive the basis for these equations by taking the derivatives of the general parametric curve for a regular cone. The (1/20) parameter accounts for the width of the cone. The (1/10) parameter accounts for the speed of rotation of the leaves around the cone.

Notice that at larger cross sections of the cone the speed would increase, since the leaf has to cover a longer distance in the same amount of time. To reduce the speed at larger cross sections, we multiply by sqrt(t) instead of t, decreasing the contribution of time.

Now we have "flat-falling" leaves moving in a spiral down the surface of an imaginary cone. To give the leaves a gentle swing, we add height offsets (+sin(t/10) and -sin(t/10)) to two opposite corners of each leaf's rectangular texture.

Notice that in reality the height of a falling leaf does not change at a constant rate. To give the leaves a floating quality, we multiply each unit of height change by a random number between 0 and 1. (Special thanks to Dilyana Doycheva.)
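
Putting these pieces together, one update step of a falling leaf might look like the sketch below; the Leaf struct, the downward sign of dy, and updateLeaf are illustrative assumptions, and the corner swing offsets (applied to the leaf's textured quad) are not shown:

    #include <cmath>
    #include <cstdlib>

    struct Leaf { float x, y, z; };

    // One animation step for a falling leaf at time t: dx and dz trace a
    // spiral on the surface of an imaginary cone, and dy is scaled by a
    // random factor so the descent rate is not constant.
    void updateLeaf(Leaf& leaf, float t)
    {
        float dx = (1.0f / 20.0f) * std::sqrt(t) * std::cos(t / 10.0f);
        float dz = (1.0f / 20.0f) * std::sqrt(t) * (-std::sin(t / 10.0f));
        float dy = (1.0f / 5.0f) * ((std::rand() % 10) / 10.0f);

        leaf.x += dx;
        leaf.z += dz;
        leaf.y -= dy;   // move downward; the sign convention is an assumption
    }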

Snow Simulation

The snow is simulated by creating randomly placed, sphere-shaped snowflakes. The flakes are generated high above the ground and are acted upon by wind and gravity. The wind force can be changed in the horizontal directions as the user desires, while gravity stays constant and pulls the flakes toward the ground. Once a flake reaches the ground, the snow accumulates: we compute the horizontal position where the flake lands and raise the ground snow height at that point, so snow gradually appears on the ground.
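
A sketch of one simulation step under these rules; the Flake struct, the mapping from world coordinates to the snow height map, the respawn height, and flakeVolume are all illustrative assumptions:

    #include <cstdlib>
    #include <vector>

    struct Flake { float x, y, z; };

    // One simulation step: each flake is pushed by the user-controlled
    // horizontal wind and pulled down by gravity; when it reaches the ground,
    // the snow height map is raised at the corresponding cell and the flake
    // is respawned high above the ground at a random position.
    void updateSnow(std::vector<Flake>& flakes,
                    std::vector<std::vector<float> >& snowHeight,
                    float windX, float windZ, float gravity,
                    float worldSize, float flakeVolume)
    {
        int gridSize = (int)snowHeight.size();
        for (Flake& f : flakes) {
            f.x += windX;
            f.z += windZ;
            f.y -= gravity;

            if (f.y <= 0.0f) {
                // map the horizontal position (world centered at the origin)
                // to a cell of the snow height map
                int cx = (int)((f.x / worldSize + 0.5f) * (gridSize - 1));
                int cz = (int)((f.z / worldSize + 0.5f) * (gridSize - 1));
                if (cx >= 0 && cx < gridSize && cz >= 0 && cz < gridSize)
                    snowHeight[cx][cz] += flakeVolume;

                // respawn the flake high above the ground
                f.x = worldSize * ((std::rand() / (float)RAND_MAX) - 0.5f);
                f.z = worldSize * ((std::rand() / (float)RAND_MAX) - 0.5f);
                f.y = 50.0f;
            }
        }
    }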

The following screenshots show the progression of snow accumulation as time passes:

With all the trees present and snow accumulated, we can get an image such as this one:

High Dynamic Range Image-Based Lighting

Overview

Image-Based Lighting (IBL) techniques provide ways to use light from real-world scenes to illuminate computer-generated objects. In this project, we have implemented High Dynamic Range Image-Based Lighting. For this purpose, the light probes from Paul Debevec's Light Probe Image Gallery were used as light maps. The demo also includes some post-processing special effects: glare (glow) at high-luminance points and control of the exposure level (the amount of light captured by a camera).

HDR Imaging

The dynamic range of an image is defined as the ratio between the greatest and smallest values that can be represented by that image. Standard computer displays show colors in the RGB format, which is typically limited to 8-bit values (i.e., the [0, 255] range) per channel. However, this range is sometimes not sufficient to represent the entire variation of light in a scene; a typical example is a dark room with bright light coming through a window. HDR images were proposed as a solution to this problem: floating-point values are used to store the pixel colors, so a much wider range of intensities can be represented. In order to display a high dynamic range image, a tone mapping operator must be used to transform the floating-point values into colors in the displayable range.

High Dynamic Range environment maps are very useful for representing the light distribution around a scene. Such maps are usually represented as a spherical environment map captured by taking photographs of a mirrored sphere, or as a cube map. Below we have examples of HDR environment maps. The idea is to determine the amount of light coming from all directions around a scene, and then use that information to illuminate synthetic objects.

Paul Debevec's light probes in spherical and cube map formats

Design

The application begins by performing the proper initializations and reading the light maps from image files. After that, the main display loop first draws the skybox and the geometry using reflective environment mapping. The rendering is done off-screen into a floating-point texture. Then, if enabled, the glare generation is performed; this procedure produces another floating-point texture with the glare contribution added (writing to floating-point textures at intermediate steps). Finally, the texture is tone mapped to the [0, 255] RGB range in order to be displayed on the screen.

Implementation details

The project was implemented in C/C++ using OpenGL and Cg. The GLEW extension library was used to set up the necessary extensions. The demo also makes extensive use of off-screen rendering to floating point textures. In order to do that, we use frame buffer objects (FBOs).

The HDR images used as cube maps are in RGBE format. For creating the cube map textures, code from the NVIDIA SDK HDR Lighting demo was used. That code includes functions for breaking the images into faces of a cube map and creating the corresponding textures, and is based on Bruce Walter's code for handling RGBE images.

Our implementation was done on the GSL machines, which are equipped with NVIDIA GeForce FX 5200 graphics accelerators. Floating-point textures are supported as rectangles in the GL_TEXTURE_RECTANGLE_NV format, but floating-point cube maps are not supported on these cards. To overcome this limitation, we use the 16-bit integer HILO format, as NVIDIA's demo does. However, HILO textures have only two channels, so two cube maps were created for each image: one storing the red and green channels and another storing the blue channel. Converting from float to 16-bit integer loses some precision, but this approximation works well for most HDR images tested.

We then begin by drawing the scene to a floating-point buffer. We use shaders to draw a skybox (a cube centered at the camera's position with the cube map texture applied to it) and then use a reflective environment mapping shader to draw the objects in the scene. This shader blends the environment mapping result with the colors of a texture applied to each object, so the objects reflect the lighting environment while keeping the original colors given by their textures.

After the scene is rendered to a floating-point buffer, post-processing effects are applied. These are essentially image processing operations, in the sense that they receive one texture containing an image as input and output another texture. This is done by drawing a quad that spans the entire rendering buffer and performing per-pixel processing in the fragment shader, much as GPGPU algorithms do; a sketch is given below.
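
For reference, drawing such a full-screen quad with legacy OpenGL might look like the sketch below; since the buffers are rectangle textures (GL_TEXTURE_RECTANGLE_NV), the texture coordinates are unnormalized pixel coordinates:

    #include <GL/glew.h>
    #include <GL/glu.h>

    // Draws a quad covering the whole render target so that the fragment
    // shader runs exactly once per output pixel (the usual GPGPU pattern).
    // 'width' and 'height' are the dimensions of the rectangle texture.
    void drawFullScreenQuad(int width, int height)
    {
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        gluOrtho2D(0.0, (double)width, 0.0, (double)height);
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();

        glBegin(GL_QUADS);
            glTexCoord2f(0.0f, 0.0f);                   glVertex2f(0.0f, 0.0f);
            glTexCoord2f((float)width, 0.0f);           glVertex2f((float)width, 0.0f);
            glTexCoord2f((float)width, (float)height);  glVertex2f((float)width, (float)height);
            glTexCoord2f(0.0f, (float)height);          glVertex2f(0.0f, (float)height);
        glEnd();
    }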

For the glow effect, we first use a shader to extract the high-luminance regions of the image using a thresholding operation:

         y = max((image.rgb - threshold) / (1.0 - threshold), 0)

where we set threshold to 0.8. We then compute y's luminance by

         lum = 0.2125 * y.r + 0.7154 * y.g + 0.0721 * y.b

This generates a luminance buffer. That buffer is blurred by convolution with a separable Gaussian kernel, and the result is added back to the original scene. For the convolution we implemented the algorithm described in Greg James and John O'Rorke's "Real-Time Glow" article from the GPU Gems book.
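
In C++ terms, the per-pixel work of this bright-pass step is roughly the following (a transliteration of the shader math above, with illustrative names):

    #include <algorithm>

    struct RGB { float r, g, b; };

    // Suppresses values below the threshold, rescales the remainder, and
    // returns the luminance that goes into the glow buffer. We use a
    // threshold of 0.8.
    float brightPassLuminance(const RGB& pixel, float threshold)
    {
        RGB y;
        y.r = std::max((pixel.r - threshold) / (1.0f - threshold), 0.0f);
        y.g = std::max((pixel.g - threshold) / (1.0f - threshold), 0.0f);
        y.b = std::max((pixel.b - threshold) / (1.0f - threshold), 0.0f);

        return 0.2125f * y.r + 0.7154f * y.g + 0.0721f * y.b;
    }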

Finally, we must apply a tone mapping operator to display the image. Two methods were implemented: multiplying the pixel value by an exposure factor and then gamma correcting the result (as NVIDIA's demo does), and Erik Reinhard's tone mapping operator. The first method quickly saturates at high-luminance points as the exposure increases, while the second nonlinearly maps the scene luminances to a range around the average luminance, so bright points saturate much more slowly as the exposure increases.
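
For reference, the two operators written for a single value (whether they are applied per channel or to luminance only is glossed over here, and the function names are illustrative):

    #include <cmath>

    // 1. Exposure scaling followed by gamma correction (as in NVIDIA's demo);
    //    bright values saturate quickly as the exposure grows.
    float toneMapExposureGamma(float value, float exposure, float gamma)
    {
        return std::pow(value * exposure, 1.0f / gamma);
    }

    // 2. Reinhard's global operator: the exposed value L is compressed as
    //    L / (1 + L), so very bright points approach 1 instead of clipping.
    float toneMapReinhard(float value, float exposure)
    {
        float l = value * exposure;
        return l / (1.0f + l);
    }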

Some additional references were useful during the development of this project: 1 and 2.

Results

We show a red sphere illuminated by a few HDR environment maps. Currently we are still integrating the artificial plant and weather environment with the HDR lighting, but we also show a screenshot of a partial result.

Same scene rendered from different angles

Linear tone mapping vs. Reinhard's tone mapping

Image without glow vs. image with glow

High exposure vs. low exposure

Integration with the plant environment

References

http://algorithmicbotany.org/papers/hanan.dis1992.pdf
http://portal.acm.org/citation.cfm?id=808571&coll=portal&dl=ACM&CFID=4442978&CFTOKEN=82757875
http://www.cg.tuwien.ac.at/courses/Fraktale/PDF/fractals8.pdf
http://algorithmicbotany.org/lstudio/CPFGman.pdf
http://en.wikipedia.org/wiki/L_Systems
http://coco.ccu.uniovi.es/malva/sketchbook/overview/parametric/parametric.htm
http://www.gamasutra.com/features/20000411/sharp_pfv.htm