Advanced Real-Time Rendering in 3D Graphics and Games Course SIGGRAPH 2007
Finding Next Gen CryEngine 2
Figure 1. A screenshot from the award-winning Far Cry game, which represented next gen at the time of its release
Figure 2. A screenshot from the upcoming game Crysis from Crytek
Chapter 8: Finding Next Gen CryEngine 2
In this chapter we do not present one specific algorithm; instead we describe the approaches Crytek took to find rendering algorithms that work well together. We believe this information is valuable for anyone who wants to implement similar rendering algorithms, because the implementation challenges often arise when combining them with other algorithms. We will also briefly describe the path that led us there, as it covers alternative approaches you might also want to consider. This is not a complete description of everything that was done on the rendering side; for this chapter we picked certain areas that are of particular interest to this audience and limited ourselves to a presentable extent. The work presented here takes significant advantage of research done by the graphics community in recent years and combines it with novel ideas developed within Crytek to realize implementations that map efficiently onto graphics hardware.
Crytek Studios developed the technically outstanding first-person shooter Far Cry, and it was an instant success upon its release, raising the bar for all games of its genre. After our company shipped Far Cry¹, one convenient possibility was to develop a sequel using the existing engine with few modifications - more or less the same engine we used for Far Cry. While this could have been an easy and lucrative decision, we believed it would prove limiting for our goals, both technically and artistically. We decided to develop a new next-generation engine, improving the design and architecture and adding many new features. The new game, named Crysis², would follow Far Cry in the same genre but tremendously increase its scope: everything had to be bigger and better. The new engine, the CryEngine 2, would make that possible. After reading the design document and an intense deliberation session amongst all designers, programmers and artists, we arrived at a set of goals for the new engine to solve:
¹ Shipped March 2004, Publisher: Ubisoft, Platform: PC
² Not released yet, Publisher: Electronic Arts, Platform: PC
The game would contain three different environments:

Jungle paradise: many objects, height map, ocean, big view distance, ambient lighting with one main directional light source
Figure 3. Jungle paradise
Alien indoor environment: many point lights, dark, huge room-like sections, geometry occlusion, fog volumes
Figure 4. Alien indoor environment
Ice environment: ice material layer, subsurface scattering
Figure 5. Ice environment
Achieving all three environments is a challenge, as it is hard to optimize for levels with completely different characteristics.

Cinematographic quality rendering without hitting the Uncanny Valley: the closer you get to movie quality, the less forgiving the audience will be.

Dynamic light and shadows: pre-computed lighting is crucial to many algorithms that improve performance and quality. Having dynamic light and shadows prevents us from using most of those algorithms because they often rely on static properties.

Support for multiple GPUs and multiple CPUs (MGPU & MCPU): development with multithreading and multiple graphics cards is much more complex, and it is often hard not to sacrifice other configurations.
Game design requested a 21 km × 21 km game play area: we considered doing this, but production, streaming and world persistence would not have been worth the effort. We ended up having multiple levels of up to 4 km × 4 km.

Target GPUs from Shader Model 2.0 to 4.0 (DirectX 10): starting with Shader Model 2.0 was quite convenient, but DirectX 10 development with early hardware and early drivers often slowed us down.

High dynamic range: we had good results with HDR in Far Cry, and for the realistic look we wanted to develop the game without the LDR limitations.

Dynamic environment (breakable): this turned out to be one of the coolest features, but it wasn't easy to achieve.

Developing game and engine together: that forced us to keep the code always in some usable state. That's simple for a small project but becomes a challenge when done at a large scale.
Our concept artists created many concept images to define the game's initial look, but to ultimately define the feel of the game we produced a video. The external company Blur studio produced, with our input, a few concept videos for us, and that helped us reach a consensus on the look and feel we wanted to achieve.
Figure 6. A frame from one of the concept videos from Blur (rendered off-line) for Crysis.
In the remainder of this chapter we will first discuss the shader framework used by the new CryEngine 2. This area turned out to be a significant challenge for our large-scale production. Then we will describe our solutions for direct and indirect lighting (including some of our design decisions). We can use specialized algorithms by isolating a particular lighting approach into a contained problem and solving it in the most efficient way. In that context, we approach direct lighting primarily from the point of view of shadowing (since shading can be done quite easily with shaders of varied sophistication). Indirect lighting can be approximated by ambient occlusion, a simple darkening of the ambient shading contribution. Finally, we cover various algorithms that solve the level-of-detail problem. Of course this chapter covers but a few rendering aspects of our engine and many topics will be left uncovered, but it should give a good taste of the complexity of our system and allow us to dig into a few select areas in sufficient detail.
Shaders and Shading
8.4.1 Historical Perspective on CryEngine 1
In Far Cry we supported graphics hardware down to the NVIDIA GeForce 2, which means we not only had pixel and vertex shaders but also fixed-function transform and lighting (T&L) and register combiner (the pre-pixel-shader solution to blend textures) support. Because of that, and to support complex materials for DirectX and OpenGL, our shader scripts had a complex syntax. After Far Cry we wanted to improve that and refactored the system. We removed fixed-function support and made the syntax more FX-like, as described in [Microsoft07]. Very late in the project our renderer programmer introduced a new render path based on an über-shader approach. That was basically one pixel shader and vertex shader written in Cg/HLSL with a lot of #ifdefs. It turned out to be much simpler and faster for development, as we completely avoided the hand-optimization step. The early shader compilers were not always able to create shaders as optimal as humans could, but it was a good solution for Shader Model 2.0 graphics cards. The über-shader had so many variations that compiling all of them was simply not possible. We accepted a noticeable stall due to compilation during development (when shader compilation was necessary), but we wanted to ship the game with a shader cache that had all shaders precompiled. We ended up playing the game on NVIDIA and on ATI until the cache wasn't getting new entries. We shipped Far Cry that way, but clearly it wasn't a good solution and we had to improve it. We describe many more details about our first engine in [Wenzel05].
extra memory and the early z pass can run faster (double speed without color write on some hardware). So we ended up using R16, R32 or even the native z buffer, depending on what was available. The depth value allows some tricks known from deferred shading. With one MAD operation and a 3-component interpolator it's possible to reconstruct the world-space position. However, for floating-point precision it's better to reconstruct positions relative to the camera position or some point near it. That is especially important when using 24-bit or 16-bit floats in the pixel shader. By offsetting all objects and lights it's possible to move the (0, 0, 0) origin near the viewer. Without doing this, decals and animations can flicker and jump. We used the scene depth for per-pixel atmospheric effects like the global fog, fog volumes and soft z-buffered particles. Shadow mask generation uses scene depth to reduce the draw call count. For the water we use the depth value to softly clip the water and fade in a procedural shore effect. Several post-processing effects like motion blur, depth of field and edge blurring (EdgeAA) make use of the per-pixel depth as well. We describe these effects in detail in [Wenzel07].
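The one-MAD reconstruction above can be sketched as follows. This is a minimal illustration, not the engine's shader code: the per-pixel view ray interpolator, its scaling convention and all names are assumptions.

```python
# Sketch: reconstructing a camera-relative position from per-pixel scene
# depth with a single multiply-add, as the text describes. The "ray" is
# assumed to be an interpolated view ray scaled so that ray * linear_depth
# reaches the surface; working camera-relative keeps the numbers small
# enough for 16/24-bit floats.

def reconstruct_position(ray, depth, cam_offset=(0.0, 0.0, 0.0)):
    # one MAD per component: pos = ray * depth + cam_offset
    return tuple(r * depth + c for r, c in zip(ray, cam_offset))

# A pixel whose view ray is (0.5, 0.25, 1.0) at linear depth 10:
p = reconstruct_position((0.5, 0.25, 1.0), 10.0)
# p == (5.0, 2.5, 10.0), relative to the camera at the origin
```

Moving the origin near the viewer, as the text recommends, simply means choosing `cam_offset` (and all object/light positions) relative to a point close to the camera.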
8.4.5 World Space Shading
In Far Cry we transformed the view and the light positions into tangent space (relative to the surface orientation). All data in the pixel shader was in tangent space, so shading computations were done in that space. With multiple lights we ran into problems passing the light parameters over the limited number of interpolators. To overcome this problem we switched to world-space shading for all computations in Crysis (in actuality we use world-space shading with an offset in order to reduce floating-point precision issues). The method was already needed for cube map reflections, so the code became more unified, and shading quality improved as this space is not distorted the way tangent space can be. Parameters like the light position can now be passed in pixel shader constants and don't need to be updated for each object. However, when using only one light and simple shading, the extra per-pixel cost is higher.
Shadows and Ambient Occlusion
8.5.1 Shadowing Approach in CryEngine 1
In our first title, Far Cry, we had shadow maps and projected shadows per object for the sun shadows. We suffered from typical shadow map aliasing issues, but it was a good choice at that time. For performance reasons we pre-computed vegetation shadows, but memory restrictions limited us to very blurry textures. For high-end hardware configurations we added shadow maps even to vegetation, but combining them with the pre-computed solution was flawed. We used stencil shadows for point lights, as that was an easier and more efficient solution. CPU skinning allowed shadow silhouette extraction on the CPU, and the GPU rendered the stencil shadows. It became obvious that this technique would become a problem the more detailed the objects we wanted to render. It relied on CPU skinning, required extra CPU computation, an upload to the GPU and extra memory for the edge data structures, and had hardly predictable performance characteristics. The missing support for alpha-blended or alpha-tested shadow casters made this technique not even usable for the palm trees, an asset that was crucial for the tropical island look (Figure 7).
Figure 7. Far Cry screenshot: note how the soft precomputed shadows combine with the real-time shadows
For some time during development we had hoped the stencil shadows could be used for all indoor shadows. However, the hard look of stencil shadows and performance issues with many lights made us search for other solutions as well. One such solution is to rely on light maps for shadowing. Light maps have the same performance no matter how many lights there are, and they allow a soft penumbra. Unfortunately, what is usually stored is the result of the shading, a simple RGB color. That doesn't allow normal mapping. We managed to solve this problem and named our solution Dot3Lightmaps [Mittring04]. In this approach the light map stores an average light direction in tangent space together with an average light color and a blend value to lerp between pure ambient and pure directional lighting. That allowed us to render the diffuse contribution of static lights with soft shadows quite efficiently. However, it was hard to combine with real-time shadows. After Far Cry we experimented with a simple modification that we named Occlusion maps. The main concept is to store the shadow mask value, a scalar from 0 to 1 that represents the percentage of geometry occlusion for a texel. We stored the shadow masks of multiple lights in the light map texture, and the usual four texture channels allowed four lights per texel. This way we rendered diffuse and specular contributions of static lights with high-quality soft shadows while the light color and strength remained adjustable. We kept lights separate, so combining with other shadow types was possible.
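The Dot3Lightmaps reconstruction described above can be sketched as a small function. This is only an illustration of the stored-data layout the text names (average direction, average color, ambient/directional blend); function and parameter names are ours, not the engine's.

```python
# Sketch of Dot3Lightmaps reconstruction: the light map texel stores an
# average light direction (tangent space), an average light color, and a
# blend value lerping between pure ambient and pure directional lighting.

def dot3_lightmap_shade(normal_ts, avg_dir_ts, light_color, blend):
    """normal_ts: per-pixel normal from the normal map (tangent space).
    blend = 0 -> pure ambient, blend = 1 -> pure directional."""
    n_dot_l = max(0.0, sum(n * d for n, d in zip(normal_ts, avg_dir_ts)))
    # lerp between ambient (no angle dependence) and directional (N.L)
    intensity = 1.0 * (1.0 - blend) + n_dot_l * blend
    return tuple(c * intensity for c in light_color)

# Flat normal, light straight on, half ambient / half directional:
c = dot3_lightmap_shade((0, 0, 1), (0, 0, 1), (1.0, 0.9, 0.8), 0.5)
# c == (1.0, 0.9, 0.8)
```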
8.5.2 The Plan for CryEngine 2
The time seemed right for a clean unified shadow system. Because of the problems mentioned we decided to drop stencil shadows. Shadow maps offer high quality soft shadows and can be adjusted for better performance or quality so that was our choice. However that only covers the direct lighting and without the indirect lighting component the image would not get the cinematographic realistic look we wanted to achieve. The plan was to have a specialized solution for the direct and another one for the indirect lighting component.
8.5.3 Direct Lighting
For direct lighting we decided to apply shadow maps (storing depth of objects seen from the light in a 2D texture) only and drop all stencil shadow code.
Dynamic Occlusion Maps
To efficiently handle static lighting situations we wanted to do something new. By using some kind of unique unwrapping of the indoor geometry, the shadow map lookup results could be stored in an occlusion map and dynamically updated. The dynamic occlusion map idea was good and it worked, but shadows often showed aliasing, as we now had not only shadow map aliasing but also unwrapping aliasing. Stretched textures introduced more artifacts, and it was hard to get rid of all the seams. Additionally, we still required shadow maps for dynamic objects, so we decided to get the maximum out of normal shadow maps and dropped the caching in occlusion maps.
Shadow Maps with Screen-Space Randomized Look-up
Plain shadow mapping suffers from aliasing and has hard, jagged edges (see the first image in Figure 8). The PCF extension (percentage-closer filtering) limits the problem (second image in Figure 8), but it requires many samples. Additionally, at the time, hardware support was only available on NVIDIA graphics cards of the GeForce 6 and 7 generation, and emulation was even slower. We could implement the same approach on newer ATI graphics cards by using the Fetch4 functionality (as described in [Isidoro06]). Instead of adding more samples to the PCF filter, we had the idea of randomizing the lookup per pixel so that fewer samples produce similar quality, accepting a bit of image noise. Noise (or grain) is part of any film image, and the sample count offers an ideal property to adjust between quality and performance. The idea was inspired by soft shadow algorithms for ray tracing and had already been applied to shadow maps on the GPU (see [Uralsky05] and [Isidoro06] for many details on shadow map quality improvement and optimization).
The randomized offsets, which form a disk shape, can be applied in 2D when doing the texture lookup. When using big offsets, the quality for flat surfaces can be improved by orienting the disk shape to the surface. Using a 3D shape like a sphere has a higher shading cost, but it might soften bias problems. To get acceptable results without too much noise, multiple samples are needed. The sample count and the randomization algorithm can be chosen depending on quality and performance needs. We tried two main approaches: a randomly rotated static kernel [Isidoro06] and another technique that allowed a simpler pixel shader.
Figure 8. Example of shadow mapping with varied resulting quality: from left to right: no PCF, PCF, 8 samples, 8 samples+blur, PCF+8 samples, PCF+8 samples+blur
The first technique requires a static table of random 2D points and a texture with random rotation matrices. Luckily the rotation matrices are small (2×2) and can be efficiently stored in a 4-component texture. As the matrices are orthogonal, further compression is possible but not required. Negative numbers can be represented by the usual scale-and-bias trick (multiply the value by 2 and subtract 1) or by using floating-point textures. We tried different sample tables, and in Figure 8 you can see an example of applying this approach with a soft disc that works quite well. For a disc-shaped caster you would expect a filled disk, but we haven't added the inner samples, as the random rotation makes them less useful for sampling. The effect is rarely visible, but to get more correct results we still consider changing it. The simpler technique finds its sample positions by transforming one or two random positive 2D positions from the texture with simple transformations. The first point can be placed in the middle (mx, my), and four other points can be placed around it using the random values (x, y): (mx, my), (mx+x, my+y), (mx-y, my+x), (mx-x, my-y), (mx+y, my-x).
More points can be constructed accordingly, but we found it only useful for materials rendered on low-end hardware configurations (where we would want to keep the sample count low for performance reasons).
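The "simpler" kernel construction above amounts to rotating one random offset by 90-degree steps around the kernel center. A minimal sketch (function name and argument order are ours):

```python
# Sketch of the simpler randomized kernel: one random positive 2D offset
# (x, y) is rotated in 90-degree steps around the center (mx, my), which
# yields the five tap positions listed in the text with no extra random
# data needed.

def build_taps(mx, my, x, y):
    return [
        (mx,     my),      # center tap
        (mx + x, my + y),  # random offset
        (mx - y, my + x),  # rotated 90 degrees
        (mx - x, my - y),  # rotated 180 degrees
        (mx + y, my - x),  # rotated 270 degrees
    ]

taps = build_taps(0.0, 0.0, 0.3, 0.1)
```

Because a 90-degree rotation is just a sign/component swap, the pixel shader needs no rotation matrix for this variant, which is what makes it cheaper than the rotated-kernel approach.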
Both techniques also allow adjusting the kernel size to simulate soft shadows. To get proper results, this kernel adjustment would have to depend on the caster distance and the light radius, but often it can be approximated much more easily. Initially we randomized by using a 64×64 texture tiled with a 1:1 pixel mapping over the screen (Figure 9).
Figure 9. An example of the randomized kernel adjustment texture
This texture (Figure 9) was carefully crafted to appear random, without recognizable features and with most details in the higher frequencies. Creating a random texture is fairly straightforward; we can manually reject textures with recognizable features, and we can maximize higher frequencies by applying a simple algorithm that finds a good pair of neighboring pixels to swap. A good swapping pair increases the high frequencies (computed by summing up the differences between neighbors). While there are certainly better methods to create a random texture with high frequencies, we describe only this simple technique, as it served our purposes. Film grain is not a static effect, so we could potentially animate the noise and expect it to hide the low sample count even more. Unfortunately, the result was perceived as a new type of artifact at low or varying frame rates. Noise without animation looked pleasing for static scenes; however, with a moving camera, some recognizable static features in the random noise remained on the screen.
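The swap-based frequency maximization can be sketched in a few lines. For brevity this operates on a 1D grayscale "texture" rather than the 64×64 2D texture the text describes; the energy measure and the greedy accept/undo loop are the point, all names are ours.

```python
import random

# Sketch of the high-frequency maximization described above: repeatedly
# pick a random adjacent pair and keep the swap only if it increases the
# sum of absolute neighbor differences (a crude high-frequency measure).

def high_freq_energy(tex):
    # wrap-around neighbors so the texture tiles seamlessly
    return sum(abs(tex[i] - tex[(i + 1) % len(tex)]) for i in range(len(tex)))

def sharpen_noise(tex, iterations=1000, seed=0):
    rng = random.Random(seed)
    tex = list(tex)
    for _ in range(iterations):
        i = rng.randrange(len(tex))
        j = (i + 1) % len(tex)
        before = high_freq_energy(tex)
        tex[i], tex[j] = tex[j], tex[i]      # try swapping a neighbor pair
        if high_freq_energy(tex) < before:
            tex[i], tex[j] = tex[j], tex[i]  # undo swaps that hurt
    return tex

smooth = list(range(8))
sharp = sharpen_noise(smooth)
# the energy never decreases, since harmful swaps are undone
```

Since swaps only permute values, the histogram of the noise is preserved while its spectrum shifts toward higher frequencies.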
Shadow Maps with Light-Space Randomized Look-up
Fortunately we found a good solution to that problem. Instead of projecting the noise to the screen, we projected a mip-mapped noise texture in world space along the light/sun direction. At medium and far distances the result was the same, but because of bilinear magnification the nearby shadow edges became distorted and no longer noisy. That looked significantly better, particularly for foliage and vegetation, where the exact shadow shape is hard to determine.
Shadow Mask Texture
We separated the shadow lookup from shading in our shaders in order to avoid the instruction count limitations of Shader Model 2.0, as well as to reduce the number of resulting shader combinations and to be able to combine multiple shadows. We stored the 8-bit result of the shadow map lookup in a screen-space texture we named the shadow mask. The 4-channel 32-bit texture format offers the required bit count, and it can be used as a render target. As we have 4 channels, we can combine up to 4 light contributions in a texel.
Figure 10. Example of shadow maps with randomized look-up. Left top row image: no jittering 1 sample, right top row image: screen space noise 8 samples, left bottom: world space noise 8 samples, right bottom: world space noise with tweaked settings 8 samples
Deferred Shadow Mask Generation
The initial shadow mask generation pass required rendering all receiving objects, and that resulted in many draw calls. We decoupled shadow mask generation from the receiver object count by using deferred techniques. We basically render a full-screen pass that binds the depth texture we created in the early z pass. Simple pixel shader computations give us the shadow map lookup position based on the depth value. The indirection over the world-space position is not needed. As mentioned before, we used multiple shadow maps, so the shadow mask generation pixel shader had to identify, for each pixel, which shadow map it falls into and index into the right texture. Indexing into a texture can be done with the DirectX 10 texture array feature or by offsetting the lookup within a combined texture. By using the stencil buffer we were able to separate the processing of the individual slices, and that simplified the pixel shader: indexing was not needed any more. The modified technique runs faster, as less complex pixel shader computations need to be done. It also carves away far distant areas that don't receive shadows.
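The per-pixel work of the deferred pass can be sketched as follows. This is an illustration only: the matrix layout, the toy integer "texture lookup" and all names are our assumptions, not the engine's shader code.

```python
# Sketch of deferred shadow mask generation: for each screen pixel, the
# position is reconstructed from the stored depth (one MAD per component),
# projected into light space, and compared against the shadow map depth.

IDENTITY = [[1.0, 0.0, 0.0, 0.0],
            [0.0, 1.0, 0.0, 0.0],
            [0.0, 0.0, 1.0, 0.0],
            [0.0, 0.0, 0.0, 1.0]]

def shadow_mask_value(ray, depth, light_matrix, shadow_map, bias=1e-3):
    # position from depth; no indirection over a world-space buffer needed
    px, py, pz = (r * depth for r in ray)
    m = light_matrix
    lx = m[0][0] * px + m[0][1] * py + m[0][2] * pz + m[0][3]
    ly = m[1][0] * px + m[1][1] * py + m[1][2] * pz + m[1][3]
    lz = m[2][0] * px + m[2][1] * py + m[2][2] * pz + m[2][3]
    u, v = int(lx), int(ly)                    # stand-in for texture lookup
    stored = shadow_map[v][u]
    return 1.0 if lz <= stored + bias else 0.0  # 1 = lit, 0 = shadowed

# Identity light matrix: a pixel at depth 5 in front of an occluder at 6 is lit
lit = shadow_mask_value((0.0, 0.0, 1.0), 5.0, IDENTITY, [[6.0]])
```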
Unwrapped Shadow Maps for Point Lights
The usual shadow map approach for point light sources requires a cube map texture lookup. But then hardware PCF cannot be used, and with cube maps there is much less control for managing the texture memory. We unwrapped the cube map into six shadow maps by separating the six cases with the stencil buffer, similar to what we did for CSM. This way we transformed the point light source problem into the projector light problem. That unified the code and resulted in less code to maintain and optimize, and fewer shader combinations.
Variance Shadow Maps
For terrain we initially wanted to pre-compute a texture with start and end angles. We also tried to update an occlusion map in real time with incremental updates. However, the problem has always been objects on the terrain. Big objects, partly on different terrain sectors, required proper shadows. We tried to use our normal shadow map approach, and it gave us a consistent look that wasn't soft enough. Simply making the randomized lookup with a bigger radius would be far too noisy. Here we tried variance shadow maps [DL06], and this approach worked out nicely. The usual drawback of variance shadow maps arises with multiple shadow casters behind each other, but that's a rare case with terrain shadows.
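The variance shadow map test from [DL06] can be sketched as follows: the map stores the first two depth moments per texel, and Chebyshev's inequality gives an upper bound on the lit fraction. The variance clamp value is an illustrative choice.

```python
# Sketch of the variance shadow map visibility test [DL06]: the shadow map
# stores E[z] and E[z^2] per texel; for a receiver at depth t, Chebyshev's
# inequality bounds the fraction of the filter region that is unoccluded.

def vsm_visibility(moment1, moment2, t):
    if t <= moment1:
        return 1.0                    # receiver in front of mean occluder
    variance = max(moment2 - moment1 * moment1, 1e-6)  # clamp for stability
    d = t - moment1
    return variance / (variance + d * d)   # Chebyshev upper bound

# Occluders tightly clustered at depth 2, receiver at depth 5:
v = vsm_visibility(2.0, 4.0, 5.0)   # nearly fully shadowed
```

The soft falloff of this bound is exactly what gave the terrain shadows the softness the plain randomized lookup could not provide; the "multiple casters behind each other" drawback mentioned above shows up as light leaking when the variance gets large.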
Figure 13. Example of applying variance shadow maps to a scene. Top image: variance shadow maps aren't used (note the hard normal shadows), bottom image: with variance shadow maps (note how the two shadow types combine)
8.5.4 Indirect Lighting
The indirect lighting solution can be split into two sub-problems: the processing-intensive part of computing the indirect lighting, and the reconstruction of the data in the pixel shader (to support per-pixel lighting).
3D Transport Sampler
For the first part we had planned to develop a tool called the 3D transport sampler. This tool would make it possible to compute the global illumination data distributed across multiple machines (for performance reasons). Photon mapping ([Jensen01]) is one of the most accepted methods for global illumination computation. We decided to use this method because it can be easily integrated and delivers good results quickly. The photon
Figure 14. Real-time ambient maps with one light source
mapper was first used to create a simple light map. The unwrapping technique in our old light mapper was simple and only combined triangles that were connected and had similar plane equations. That resulted in many small 2D blocks that we packed into multiple textures. When used for detailed models it became inefficient in texture usage, and it resulted in many small discontinuities at the unwrapping borders. We changed the unwrapping technique so that it uses the model's UV unwrapping as a base and modifies the unwrapping only where needed. This way the artist has more control over the process, and the technique is more suitable for detailed models. We considered storing Dot3Lightmaps (explained earlier), but what we tried instead was a method that should give better quality. The idea was to store light contributions for four directions oriented to the surface. This is similar to the technique used in Half-Life 2 ([McTaggart04]), but there only three directions were used. The additional data would allow better-quality shading: high-quality per-pixel lighting that, accepting some approximations, could be combined with real-time shadows. However, the storage cost was huge and the computation time was high, so we abandoned this approach. Actually, our original plan was to store some light map coefficients per texel and others per vertex. Together with a graph data structure connected to the vertices, it should be possible to get dynamic indirect lighting. Low-frequency components of the indirect lighting could be stored in the vertices, and high-frequency components like sharp corners could be stored per texel. Development time was critical, so this idea was dropped.
Real-Time Ambient Map (RAM)
As an alternative we chose a much simpler solution which only requires storing one scalar ambient occlusion value per texel. Ambient occlusion ([ZIK98, Landis02]) can be computed by shooting rays in all directions, something that was reusable from the photon mapper. The reconstruction in the shader used what was available: the texel with the occlusion value, the light position relative to the surface, the light color and the surface normal. The result was a crude approximation of indirect lighting, but the human eye is very forgiving with indirect lighting, so it worked out very well. To support normal maps, some average light direction is needed, and for lack of something better, the light direction blended with the surface normal was used. This way the normal maps can still be seen, and shading appears to have some light-angle dependency. Having the ambient brightness, color and attenuation curve adjustable allowed designers to tweak the final look. The technique was further extended to take portals into account, to combine multiple lights and to support the sun. For huge outdoor areas, computing the RAM data for every surface wouldn't be feasible, so we approached that differently.
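The RAM reconstruction above can be sketched as a small function. This is a minimal interpretation of what the text names (occlusion scalar, adjustable ambient color, light direction blended with the normal); the blend weight and all names are our assumptions.

```python
import math

# Sketch of the Real-Time Ambient Map (RAM) reconstruction: a per-texel
# ambient occlusion scalar modulates an adjustable ambient light, and, for
# lack of a stored direction, the light direction is blended with the
# surface normal so normal maps remain visible.

def normalize(v):
    l = math.sqrt(sum(c * c for c in v))
    return tuple(c / l for c in v)

def ram_shade(occlusion, ambient_color, normal, to_light, blend=0.5):
    # fake "average" light direction: lerp the light direction toward the normal
    d = normalize(tuple(t * (1.0 - blend) + n * blend
                        for t, n in zip(to_light, normal)))
    n_dot_l = max(0.0, sum(n * c for n, c in zip(normal, d)))
    return tuple(c * occlusion * n_dot_l for c in ambient_color)

# Half-occluded texel, light along the normal:
c = ram_shade(0.5, (1.0, 1.0, 1.0), (0.0, 0.0, 1.0), (0.0, 0.0, 1.0))
# c == (0.5, 0.5, 0.5)
```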
Screen-Space Ambient Occlusion
One of our creative programmers had the idea to use the z buffer data we already had in a texture to compute some kind of ambient occlusion. The idea was tempting because all opaque objects could be handled without special cases in constant time and constant memory, and we could remove a lot of complexity in many areas. Our existing solutions worked, but we had issues handling all kinds of dynamic situations. The approach was based on sampling the surroundings of a pixel; with some simple depth comparisons it was possible to compute a darkening factor to get silhouettes around objects. To get the ambient occlusion look, this effect was limited to nearby receivers only. After several iterations and optimizations we finally had an unexpected new feature, and we called it Screen-Space Ambient Occlusion (SSAO). We compute the screen-space ambient occlusion in a full-screen pass. We experimented with applying it to ambient, diffuse and specular shading, but we found it works best on ambient only; applying it to the others changed the look away from being realistic, and realism was one of our goals. To reduce the sample count we vary the sample positions for nearby pixels. The initial sample positions are distributed around the origin in a sphere, and the variation is achieved by reflecting the sample positions on a random 3D plane through the origin, where n is the normalized random per-pixel vector from the texture and i is one of the 3D sample positions in the sphere.
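The per-sample variation step described above is the standard plane-reflection formula, i - 2(n·i)n. A minimal sketch (function name is ours):

```python
# Sketch of the SSAO sample variation: each kernel sample i is reflected on
# a random plane through the origin with unit normal n. Because reflection
# preserves length, the kernel's radius distribution is unchanged while the
# per-pixel pattern varies.

def reflect_sample(i, n):
    d = sum(a * b for a, b in zip(n, i))            # n . i
    return tuple(a - 2.0 * d * b for a, b in zip(i, n))

# Reflecting (1, 0, 0) on the plane with normal (1, 0, 0) flips it:
s = reflect_sample((1.0, 0.0, 0.0), (1.0, 0.0, 0.0))
# s == (-1.0, 0.0, 0.0)
```

This is the same operation as the HLSL `reflect()` intrinsic, so in the shader it costs one dot product and one multiply-add per sample.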
8.6.3 Water Surface LOD
The ocean, or big water surfaces in general, has some unique properties that can be exploited by specialized rendering algorithms. Our initial implementation, used in Far Cry, was based on a simple disk mesh that moved around with the player. Pixel shading, reflections and transparency defined the look. However, we wanted real 3D waves, not based on physical simulation but on a cheap procedural solution. We experimented with FFT-based ocean wave simulation ([Jensen01a], [Tessendorf04]). To get 3D waves, vertex position manipulation was required, and the mesh we had used so far wasn't serving that purpose.
8.6.4 Square Water Sectors
The FFT mentioned earlier only outputs a small sector of the ocean, and to get a surface out to the horizon we rendered the mesh multiple times. Different LODs had different index buffers, but they all referenced one vertex buffer that held the FFT data. We shared the vertex buffer to save performance, but for better quality, downsampling would be needed. To reduce aliasing artifacts in the distance and to limit the low-polygon look up close, we faded out the perturbation for distant vertices and limited the perturbation nearby. The algorithm worked, but many artifacts made us search for a better solution.
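The distance-based perturbation control above can be sketched as a per-vertex scale factor. The thresholds and names are illustrative, not taken from the engine.

```python
# Sketch of the per-vertex wave-perturbation scale: clamp the FFT offset
# near the camera (limits the low-polygon look) and fade it to zero in the
# distance (limits aliasing). All distances are illustrative.

def wave_scale(dist, near_limit=5.0, fade_start=100.0, fade_end=200.0):
    if dist < near_limit:
        return dist / near_limit        # ramp up: limit perturbation up close
    if dist > fade_end:
        return 0.0                      # fully faded far away
    if dist > fade_start:
        return 1.0 - (dist - fade_start) / (fade_end - fade_start)
    return 1.0                          # full perturbation in between

# vertex_offset = fft_offset * wave_scale(distance_to_camera)
```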
8.6.5 Screen-Space Tessellation
We tried a brute-force approach that was surprisingly simple and worked out very well. We used a precomputed screen-space tessellated quad and projected the whole mesh onto the water surface. This ensures correct z buffer behavior and even clips away pixels above the horizon line. To modify the vertex positions with the FFT wave simulation data we require vertex texture lookups, so this feature cannot be used on all hardware.
Figure 18. Screen-space tessellation in wireframe
The visible vertical lines in the wireframe are due to the mesh stripification we do for better vertex cache performance. The results looked quite promising; however, vertices on the screen border often moved farther away from the border, and that was unacceptable. Adding more vertices even outside of the screen would solve the problem, but attenuating the perturbations at the screen border is hardly noticeable and has only minimal extra cost.
Figure 19. Left: screen space tessellation without edge attenuation (note the area on the left not covered by water), right: screen space tessellation with edge attenuation
For better performance we reduced the mesh tessellation. Artifacts remained acceptable even with far fewer vertices. Tilting the camera made it slightly worse, but not as much as we expected. The edge attenuation made the water surface camera-dependent, and that was bad for proper physics interaction. We had to reduce the wave amplitude a lot to limit the problem.
8.6.6 Camera Aligned
The remaining issues, aliasing artifacts and physics interaction, bothered our shader programmer, and he spent some extra hours finding a solution. The new method used a static mesh like the one before, but the mesh projection changed from a perspective projection to a simple top-down projection. The mesh is dragged around with the camera, and the offset is adjusted to keep most of the mesh in front of the camera. To render up to the horizon line, the mesh borders are expanded significantly. Tessellation in that area is not crucial, as the perturbation can be faded to 0 before that distance.
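The mesh placement above can be sketched as follows. The grid-snapping step (to keep vertices stable under camera motion) and all parameter names are our assumptions, added for illustration.

```python
import math

# Sketch of the camera-aligned water grid placement: the static mesh is
# dragged with the camera, pushed forward along the view direction so most
# of it lies in front of the camera, and snapped to its own vertex spacing
# so the vertices do not "swim" as the camera moves.

def water_mesh_offset(cam_x, cam_y, view_dir_x, view_dir_y,
                      grid_step=1.0, forward_bias=0.4, mesh_size=100.0):
    # center the mesh ahead of the camera along the (2D) view direction
    cx = cam_x + view_dir_x * mesh_size * forward_bias
    cy = cam_y + view_dir_y * mesh_size * forward_bias
    # snap to the vertex grid for a stable projection under motion
    return (math.floor(cx / grid_step) * grid_step,
            math.floor(cy / grid_step) * grid_step)

o = water_mesh_offset(10.3, 20.7, 1.0, 0.0)
# o == (50.0, 20.0)
```

Because the projection is top-down rather than perspective, the wave extent no longer depends on the viewer, which is what restores consistent physics interaction.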
Figure 20. Camera-aligned water mesh in wireframe. Left: camera aligned from top down, right: camera aligned from viewer perspective
The results of this method are superior to the screen-space ones, which becomes most visible in motion with subtle camera movement. Apart from the distance attenuation, the wave extent is now viewer-independent, and as the FFT data is CPU-accessible, physics interactions are now possible.
Figure 21. Left: Camera aligned, right: screen space tessellation as comparison
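Because the FFT height field lives in CPU memory, gameplay physics can query the water height at any world position by wrapping the position into the tiling FFT grid and interpolating. The sketch below assumes a square, tiling grid stored as a list of rows; the function name and bilinear filtering are illustrative choices, not the engine's actual interface.

```python
# Sketch: CPU-side water height query for physics. The FFT height field tiles
# across the world, so a world position is wrapped into one tile and the four
# surrounding samples are bilinearly interpolated.

def water_height(grid, tile_size, x, y):
    """Bilinearly sample a square, tiling height grid at world position
    (x, y). Assumes non-negative coordinates for simplicity."""
    n = len(grid)
    # Map world coordinates into continuous (tiling) grid coordinates.
    fx = (x / tile_size) * n
    fy = (y / tile_size) * n
    x0, y0 = int(fx) % n, int(fy) % n
    x1, y1 = (x0 + 1) % n, (y0 + 1) % n    # wrap for seamless tiling
    tx, ty = fx - int(fx), fy - int(fy)
    # Bilinear interpolation between the four surrounding samples.
    top = grid[y0][x0] * (1 - tx) + grid[y0][x1] * tx
    bottom = grid[y1][x0] * (1 - tx) + grid[y1][x1] * tx
    return top * (1 - ty) + bottom * ty
```

A buoyancy or splash system can call such a query per object without touching the GPU, which is what makes the camera-aligned approach physics friendly.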
Through a somewhat winding path we not only found our next generation engine but also learned a lot. That learning process was necessary to find, validate and compare different solutions, so in retrospect it can be classified as research. We chose certain solutions over others mostly because of quality, production time, performance and scalability. Crysis, our current game, is a large-scale production, and to handle this the production time is very important. The performance of a solution is hardware dependent (e.g. CPU, GPU, memory), so on a different platform we might have to reconsider. The current engine is streamlined for a fast DirectX9/DirectX10 card with one or more CPU cores. Having the depth from the early z pass turned out to be very useful; many features now rely on this functionality. Regular deferred shading additionally stores more information per pixel, such as the diffuse color, normal and other material properties. For the alien indoor environment that would probably be the best solution, but other environments would suffer from that decision. In a single-light-source situation deferred shading simply cannot play out its advantages.
This presentation is based on the passionate work of many programmers, artists and designers. Special thanks go to Vladimir Kajalin, Andrey Khonich, Tiago Sousa, Carsten Wenzel and Nick Kasyan. As launch partners with NVIDIA we had on-site help, not only for G80, DirectX9 and DirectX10 issues. Special thanks to the NVIDIA engineers Miguel Sainz, Yury Uralsky and Philip Gerasimov. Leading companies of the industry, Microsoft, AMD, Intel, NVIDIA and many others, have been very supportive. Additional thanks to Natalya Tatarchuk and Tim Parlett, who helped me get this done.