
1 Light Propagation Volumes in CryEngine® 3
Anton Kaplanyan, graphics researcher at Crytek. A new lighting technique. Advances in Real-Time Rendering in 3D Graphics and Games course

2 Agenda Introduction CryEngine® 3 lighting pipeline overview Core idea
Applications (with video) Improvements Combination with other technologies (with video) Optimizations for consoles Conclusion and future work Live demo You can take a look at the agenda. Meanwhile I’ll introduce Crytek as a company. We have five studios across Europe and more than 30 nationalities. We have already released multiple AAA games such as FarCry and Crysis. We have our own engine with many licensees across the world. There are three iterations of the engine: CryEngine (with FarCry), CryEngine 2 (with Crysis), and CryEngine 3, announced at GDC’09.

3 Introduction to real-time graphics
Strictly fixed budget per frame Many techniques are not physically-based Consistent performance Game production is complicated This talk is mostly about massive and indirect lighting This is a high level talk More implementation details in the paper Strictly fixed budget per frame – more than 30 frames per second, i.e. less than 33 milliseconds per frame. Many techniques are not physically based, which is caused by the hard constraint on the frame’s time budget. We need to provide consistent rendering performance which scales well with rendering complexity. Interdependencies with game production lead to production complications, so we try to avoid any possible interdependency and keep the production pipeline as simple as possible. That also means we avoid any precomputation approaches. This talk is mostly about massive and indirect lighting. Massive lighting is lighting of a limited area with a large number of light sources (e.g. an apartment interior). There are no implementation details in this presentation, it is only a high level talk. You can find all the details in the paper.

4 CryEngine® 3 renderer overview (1 / 5)
Xbox 360 / PlayStation 3 / DirectX 9.0c / 10 / (11 soon…) Cross-platform engine which supports multiple platforms and graphics APIs: Xbox 360, PlayStation 3, DirectX 9 and DirectX 10. The DirectX 11 API is coming soon… The engine is completely multithreaded. We have seamless world streaming technology for huge levels. You can see some completely different environments created with CryEngine recently: top left – jungle, top right – frozen area, bottom right – San Francisco GDC demo, bottom left – recently created forest area. We’re working on Crysis 2 for PC and consoles right now...

5 CryEngine® 3 renderer overview (2 / 5)
Unified shadow maps solution [Mittring07] We have a unified shadow map solution for different types of light sources, which decreases the number of shader combinations and decouples shadowing from scene rendering.

6 CryEngine® 3 renderer overview (3 / 5)
SSAO [Kajalin09], [Mittring09] Then we decided to improve the indirect lighting further. Kajalin introduced a Screen-Space Ambient Occlusion technique while working at Crytek. This technique is a real-time approximation of ambient occlusion using the screen-space scene depth. It was enhanced recently by taking normal-mapped surfaces into account.

7 CryEngine® 3 renderer overview (4 / 5)
Deferred lighting [Mittring09] Minimal G-Buffer Sun / Omni / Projectors / Caustics / Deferred light probes Then we decided to go a step further and implement deferred lighting in CryEngine 3. This approach decreases the number of shader combinations dramatically. Because of its minimal G-Buffer layout it is console friendly, especially for consoles with a limited amount of render target memory, like Xbox 360. You can see the layout of the G-Buffer on the left. The G-Buffer consists of depth, normal and specular power. These values are enough to support lighting with different light types, like sun, point lights, projectors, even deferred caustics and deferred light probes. Deferred light probes are image-based indirect lighting; you can see the example on the right. A deferred light probe consists of two cubemaps: a specular cubemap and a low-resolution diffuse-convolved cubemap. The diffuse-convolved cubemap approximates precomputed diffuse global illumination from distant objects. Deferred light probes are precomputed and placed by artists in appropriate dark places where necessary.

8 CryEngine® 3 renderer overview (5 / 5)
Lighting accumulation pipeline: Apply global / local hemispherical ambient Optionally: Replace it with Deferred Light Probes locally [Global illumination solution should take place here] Multiply indirect term by SSAO to apply ambient occlusion Apply Direct Lighting on top of Indirect Lighting The lighting pipeline in CryEngine 3 is layered scene lighting, which consists of several layers. First of all we apply global or local hemispherical ambient. Then it is optionally replaced by deferred light probes placed locally by artists. At this point we have the indirect lighting of the scene. Then the result of indirect lighting is multiplied by SSAO to apply occlusion information to the indirect lighting. Finally we add direct lighting on top of that. So the global illumination term should take place inside the indirect lighting term, as in the sketch below. Now I’d like to overview recent real-time rendering trends….
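
As a rough illustration of the layering order only (the names and signature are illustrative, not CryEngine API), a minimal per-pixel sketch might look like this:

    struct Vec3 { float r, g, b; };

    static Vec3 operator*(const Vec3& a, float s)       { return { a.r * s, a.g * s, a.b * s }; }
    static Vec3 operator+(const Vec3& a, const Vec3& b) { return { a.r + b.r, a.g + b.g, a.b + b.b }; }

    // Per-pixel lighting accumulation in the order described on this slide.
    // 'hasLightProbe' selects between the global hemispherical ambient and a
    // local deferred light probe; 'gi' is where a global illumination term goes.
    Vec3 AccumulateLighting(Vec3 hemisphericalAmbient,
                            bool hasLightProbe, Vec3 lightProbeDiffuse,
                            Vec3 gi, float ssao, Vec3 direct)
    {
        Vec3 indirect = hasLightProbe ? lightProbeDiffuse : hemisphericalAmbient;
        indirect = indirect + gi;    // the GI solution slots into the indirect term
        indirect = indirect * ssao;  // ambient occlusion is applied to indirect lighting only
        return indirect + direct;    // direct lighting is added on top
    }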

9 Real-time rendering development trends
Rendering is a multi-dimensional query [Mittring09] R = R(View, Geometry, Material, Lighting) Divide-and-conquer strategy, some examples: Shadow maps (decouple visibility queries) Deferred techniques (decouple lighting / shading) Screen-space techniques (SSAO, SSGI, etc.) Reprojection techniques (partially decouples view) Why? Less interdependencies => more consistent performance Future trends: parallel and distributed computations friendly Rendering is a multi-dimensional query. Rendering performance depends on many arguments, like viewer position, geometry complexity, material variety and lighting complexity. The “divide-and-conquer” strategy is an attempt to decompose this query into separate queries and is widely used in real-time graphics. Examples….

10 Paper reference icon This icon means that details are in the paper
Now I’d like to introduce this icon. When you see it on a slide, you can find more details about the slide content in the paper.

11 Light Propagation Volumes

12 Light Propagation Volumes: Goals
Decouples lighting complexity from screen coverage (resolution×overdraw) Radiance caching and storing technique Massive lighting with point light sources Global illumination Participating media rendering (still work in progress…) Consoles friendly (Xbox 360, PlayStation 3) What goals do we pursue by developing this technique? It decouples lighting complexity from screen coverage and thereby decreases computations and the memory footprint. The main idea behind it is to cache and store the radiance of the lighting. That allows us to do massive lighting (lighting with a very large number of light sources inside a limited area, like an apartment), global illumination and participating media rendering (which is still in progress). Another goal is to make it work on consoles.

13 Related work Irradiance Volumes [GSHG97], [Tatarchuk04], [Oat05]
+ Signed Distance Fields [Evans06] Lightcuts: A Scalable Approach to Illumination [WFABDG05] Multiresolution Splatting for Indirect Illumination [NW09] Hierarchical Image-Space Radiosity for Interactive Global Illumination [NSW09] Non-interleaved Deferred Shading of Interleaved Sample Patterns [SIMP06] Let’s start with related work. The irradiance volume technique was introduced by Greger et al. and then extended to real-time graphics by Tatarchuk and Oat. There are several other techniques that serve the same purpose: decoupling lighting complexity from screen coverage. You can find them in the references. Unfortunately, covering each paper would take too much time; you can find a description and overview of these techniques in the paper. Now let’s start with an overview of SH Irradiance Volumes…

14 SH Irradiance volumes A grid of irradiance samples is taken throughout the scene Each irradiance sample stored in SH form At render time, the volume is queried and nearby irradiance samples are interpolated to estimate the global illumination at a point in the scene SH Irradiance Volumes are used to cache irradiance samples in the cells of a volumetric grid to accelerate global illumination. It was later extended by Tatarchuk and Oat with an irradiance approximation in the SH basis. At render time, the volume is queried and nearby irradiance samples are interpolated to estimate the global illumination at a point in the scene. Now let’s consider the idea behind Light Propagation Volumes... From [GSHG97], [Tatarchuk04]

15 Low-frequency radiance volumes
Similar to SH Irradiance Volumes [Tatarchuk04] Stores radiance distribution instead Low resolution 3D texture on GPU (up to 32³ texels) SH approximation is low order (up to linear band) Radiance is not smooth [GSHG97] But what is the error introduced by approximating it? We use a caching approach very similar to SH Irradiance Volumes, but we store the radiance distribution instead of irradiance samples. The radiance distribution is the outgoing lighting over a sphere. The radiance volume is very low resolution, up to 32 by 32 by 32 texels. We use a low order of SH, up to the linear band (which amounts to 4 coefficients) and fits well into common texture formats; it is written out below. But as noted by Greger et al. the radiance is not smooth. As you can see in the illustration, the radiance at point x on the left is discontinuous because of the different energy emitted by the four walls. You can see the SH approximation on the right. Now let’s discuss the error introduced by that approximation… From [GSHG97]
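
Written out, the stored per-cell, per-channel radiance with only two SH bands is (basis normalisation shown for the common real-SH convention; the exact sign convention is an implementation choice, not taken from the slides):

    L(\omega) \;\approx\; c_0^0\, Y_0^0 \;+\; c_1^{-1}\, Y_1^{-1}(\omega) \;+\; c_1^{0}\, Y_1^{0}(\omega) \;+\; c_1^{1}\, Y_1^{1}(\omega),
    \qquad
    Y_0^0 = \frac{1}{2\sqrt{\pi}}, \quad
    \bigl(Y_1^{-1}, Y_1^{0}, Y_1^{1}\bigr)(\omega) = \sqrt{\tfrac{3}{4\pi}}\,\bigl(\omega_y,\ \omega_z,\ \omega_x\bigr)

The band-0 coefficient carries the mean energy of the cell and the three band-1 coefficients encode the major direction of radiance flow, which is exactly what the next slide argues is preserved.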

16 Radiance approximation
Error of the spatial approximation depends on density and size / radii of light sources Error of the angular approximation depends on Shape of light source Frequency of angular radiance distribution of light source Distance to the light source Compensated by the energy fall-off Preserves mean energy and major radiance flow direction Enough if we want to eventually get irradiance The resulting error of the radiance approximation by the radiance volume consists of spatial and angular errors. The spatial error is caused by the low resolution of the radiance volume and depends on the density of emitters and on attenuation factors such as radii. The angular approximation error, which is the SH approximation error, depends on the shape of the particular light source, the frequency of its angular distribution and the distance from the particular radiance volume cell to the light source. The last factor means that the farther the radiance volume cell is from the light source, the sharper the radiance distribution becomes (you can see it in the illustration). This factor is compensated by the light source’s energy fall-off with distance. This is a very rough approximation of the radiance distribution, however it still preserves the mean energy and the major direction of the radiance distribution. And this approximation serves the final purpose of extracting the irradiance from it. The research on this approximation is still in progress… But why do we need to store radiance instead of irradiance then?...

17 Light propagation in radiance volume
Start with given initial radiance distribution from emitters Iterative process of radiance propagation 6-point axial stencil for adjacent cells Gathering, more efficient for GPUs Energy conserving Each iteration adds to result, then propagates further Imagine we have an initial radiance distribution placed only into the cells where emitters are. This is a very convenient situation, because we need to render only one pixel for one light source. So we want to get a final radiance distribution over the grid. The proposed solution is to propagate the radiance iteratively. Each iteration applies a 6-point axial stencil to each cell. What does that mean? It means that for each cell we transmit the radiance from the adjacent axial cells with a gathering scheme; a sketch follows below. Gathering is GPU friendly. The result of each iteration is accumulated into the final radiance volume and the next iteration is applied to the result of the previous one. There is an illustration of a single iteration for a single cell in 2D at the bottom of the slide.
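
As a rough illustration of one gathering iteration, here is a minimal CPU sketch operating on 4-coefficient SH cells. It is not the engine’s implementation: it uses one common real-SH sign convention, propagates only along the six axial directions and omits the face solid-angle weighting and careful energy conservation described in the paper; all names are illustrative.

    #include <array>
    #include <vector>
    #include <algorithm>

    // One colour channel of a cell's radiance as 4 SH coefficients (bands 0 and 1).
    using SH4 = std::array<float, 4>;

    // SH projection of a clamped-cosine lobe oriented along a unit direction.
    static SH4 CosineLobeSH(float x, float y, float z)
    {
        return { 0.8862f, 1.0233f * y, 1.0233f * z, 1.0233f * x };
    }

    // SH basis evaluated in a unit direction (band 0 + band 1).
    static SH4 DirectionSH(float x, float y, float z)
    {
        return { 0.2821f, 0.4886f * y, 0.4886f * z, 0.4886f * x };
    }

    static float Dot(const SH4& a, const SH4& b)
    {
        return a[0]*b[0] + a[1]*b[1] + a[2]*b[2] + a[3]*b[3];
    }

    // One gathering iteration: every cell collects radiance from its 6 axial
    // neighbours, adds it into 'accum' (the final radiance volume) and writes it
    // to 'next' so the next iteration propagates only the newly arrived energy.
    void PropagateOnce(int n, const std::vector<SH4>& curr,
                       std::vector<SH4>& next, std::vector<SH4>& accum)
    {
        auto idx = [n](int x, int y, int z) { return (z * n + y) * n + x; };
        const int axial[6][3] = { {1,0,0},{-1,0,0},{0,1,0},{0,-1,0},{0,0,1},{0,0,-1} };

        for (int z = 0; z < n; ++z)
        for (int y = 0; y < n; ++y)
        for (int x = 0; x < n; ++x)
        {
            SH4 gathered = { 0.0f, 0.0f, 0.0f, 0.0f };
            for (const auto& d : axial)
            {
                int nx = x - d[0], ny = y - d[1], nz = z - d[2];
                if (nx < 0 || ny < 0 || nz < 0 || nx >= n || ny >= n || nz >= n) continue;

                // Flux leaving the neighbour toward this cell (direction d).
                const SH4& src = curr[idx(nx, ny, nz)];
                float flux = std::max(0.0f,
                    Dot(src, DirectionSH((float)d[0], (float)d[1], (float)d[2])));

                // Re-emit that flux as a lobe oriented along the propagation direction.
                SH4 lobe = CosineLobeSH((float)d[0], (float)d[1], (float)d[2]);
                for (int i = 0; i < 4; ++i) gathered[i] += flux * lobe[i];
            }
            next[idx(x, y, z)] = gathered;
            for (int i = 0; i < 4; ++i) accum[idx(x, y, z)][i] += gathered[i];
        }
    }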

18 Light propagation in radiance volume
Here is a result of several radiance propagation iterations. The top row is an initial radiance distribution; as you can see, there are a lot of light sources. The top left quad is a single slice of the radiance texture, so the 3D radiance texture is unwrapped in this picture. Notice that the process is strongly attenuated, which means we can limit it to just a few iterations (8 to 16 for a 32×32×32 radiance volume, depending on the initial intensity of the light sources). Note that the resulting radiance distribution is an accumulation of all these radiance iterations.

19 Rendering with Light Propagation Volume
Regular shading, similar to SH Irradiance Volumes Simple 3D texture look-up using world-space position Integrate with normal’s cosine lobe to get irradiance Simple computation in the shader for 2nd order SH Lighting for transparent objects and participating media Deferred shading / lighting Draw volume’s shape into accumulation buffer Supports almost all deferred optimizations Scene lighting with light propagation volumes is very similar to lighting with SH Irradiance Volumes, but we need to compute irradiance from radiance; this is a simple computation on the GPU (see the sketch below). Deferred lighting with optimizations is used in CryEngine 3: stencil prepass, scissor rectangle, depth bounds test, etc.
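
Extracting irradiance at a shaded point then amounts to fetching the four SH coefficients from the 3D texture and taking a dot product with the SH projection of the clamped cosine lobe around the surface normal. A minimal sketch of that dot product, using the same illustrative SH convention as above (one colour channel shown):

    #include <array>
    #include <algorithm>

    using SH4 = std::array<float, 4>;

    // Irradiance from a 2-band SH radiance sample and a unit surface normal:
    // dot the stored coefficients with the SH projection of the clamped
    // cosine lobe oriented along the normal.
    float IrradianceFromSH(const SH4& radiance, float nx, float ny, float nz)
    {
        const SH4 cosLobe = { 0.8862f, 1.0233f * ny, 1.0233f * nz, 1.0233f * nx };
        float e = radiance[0] * cosLobe[0] + radiance[1] * cosLobe[1]
                + radiance[2] * cosLobe[2] + radiance[3] * cosLobe[3];
        return std::max(0.0f, e);   // clamp negative ringing from the low-order SH
    }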

20 Massive Lighting with point light sources
This scene is “programmer art”; it has not been created by artists.

21 Massive lighting Option 1: Inject initial energy, then propagate radiance A bit faster for a very large number of lights Option 2: Add pre-propagated radiance into each cell Simple analytical equation in the shader for point lights Higher quality, no propagation error Error depends on the ratio (light source radius / cell size) Radius threshold for lighting with radiance volume There are two ways to get the final radiance distribution for point light sources inside the radiance volume. 1. The first way is to inject a point primitive with an initial radiance distribution into the appropriate cell of the radiance volume. This is a very good way to inject a lot of light sources and then do a single light propagation step for all injected light sources. 2. However, there is another way to do it. We can inject precomputed, pre-propagated radiance into all the cells covered by a particular light source (see the sketch below). This way gives better quality of the propagated radiance, because the radiance is propagated analytically during injection, so we avoid the error introduced by the propagation steps. The error of massive lighting depends on the ratio of the light sources’ radii to the size of a radiance volume cell. Thus we use a threshold on the light source radius to decide whether a light source should be rendered with the radiance volume or in the regular way.
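
A minimal sketch of option 2 for a single point light and a single cell, assuming a simple inverse-square fall-off and the same illustrative SH convention as the earlier sketches; the exact analytical expression used in the engine is in the paper.

    #include <array>
    #include <cmath>

    using SH4 = std::array<float, 4>;
    struct Vec3 { float x, y, z; };

    // Add the pre-propagated radiance of a point light directly into one cell:
    // a directional lobe pointing away from the light, attenuated by distance.
    void InjectPointLight(SH4& cell, Vec3 cellCenter, Vec3 lightPos,
                          float intensity, float radius)
    {
        Vec3 d = { cellCenter.x - lightPos.x,
                   cellCenter.y - lightPos.y,
                   cellCenter.z - lightPos.z };
        float dist = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
        if (dist > radius || dist < 1e-4f) return;   // outside the range / at the light itself

        float inv = 1.0f / dist;
        d = { d.x * inv, d.y * inv, d.z * inv };     // direction of radiance flow
        float att = intensity / (dist * dist);       // assumed inverse-square fall-off

        // Lobe oriented along the flow direction, scaled by the attenuated intensity.
        const SH4 lobe = { 0.8862f, 1.0233f * d.y, 1.0233f * d.z, 1.0233f * d.x };
        for (int i = 0; i < 4; ++i) cell[i] += att * lobe[i];
    }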

22 Glossy reflections with Light Propagation Volumes
Accumulative traversal (ignores reflection occlusion) Several look-ups along reflected ray from camera Collect incoming radiance from this direction Integrate over the cone of incoming direction Cone angle depends on: Glossiness of surface Distance from look-up to point p Approximates the integration with Phong BRDF Another topic is how to achieve reflections with the radiance volume. Let’s talk about glossy reflections. The problem is that we only have a low-frequency radiance distribution, so we can extract only very glossy reflections and the results are close to diffuse lighting. But we can increase the frequency of the radiance distribution by traversing through several cells along the reflected direction. So the technique is to traverse the radiance volume along the reflected eye direction and do several look-ups from the volume (see the illustration and the sketch below). For each look-up we compute the incoming lighting from this direction by integrating with an approximation of the view-dependent part of the Phong BRDF. We approximate the view-dependent part of the Phong BRDF by a cone, so the cone angle depends on the surface’s glossiness. But the cone angle should also depend on the distance from the look-up to the surface point p: the cone shrinks with distance to the surface point, to account for the decreasing solid angle subtended by the surfel at p.
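
A minimal sketch of the accumulative traversal, with the trilinear 3D texture fetch abstracted as a callback; the glossiness- and distance-dependent cone integration against the Phong lobe is only hinted at by a simple distance weight here and is described properly in the paper. All names and the SH convention are illustrative.

    #include <array>
    #include <algorithm>
    #include <functional>

    struct Vec3 { float x, y, z; };
    using SH4 = std::array<float, 4>;

    // March the volume along the reflected eye direction and accumulate radiance
    // arriving back toward the surface point p (reflection occlusion is ignored).
    // 'sampleRadiance' stands in for the trilinear 3D texture fetch of SH coefficients.
    float GlossyReflection(const std::function<SH4(const Vec3&)>& sampleRadiance,
                           Vec3 p, Vec3 reflectedDir, int steps, float stepSize)
    {
        float result = 0.0f, totalWeight = 0.0f;
        for (int i = 1; i <= steps; ++i)
        {
            float t = stepSize * i;
            Vec3 pos = { p.x + reflectedDir.x * t,
                         p.y + reflectedDir.y * t,
                         p.z + reflectedDir.z * t };

            // Radiance flowing from the sample back toward p, i.e. along -reflectedDir.
            SH4 L = sampleRadiance(pos);
            SH4 dirSH = { 0.2821f, -0.4886f * reflectedDir.y,
                          -0.4886f * reflectedDir.z, -0.4886f * reflectedDir.x };
            float incoming = std::max(0.0f, L[0]*dirSH[0] + L[1]*dirSH[1]
                                          + L[2]*dirSH[2] + L[3]*dirSH[3]);

            // Illustrative distance weight standing in for the shrinking cone;
            // the exact cone / Phong BRDF integration is in the paper.
            float weight = 1.0f / (1.0f + t);
            result += incoming * weight;
            totalWeight += weight;
        }
        return totalWeight > 0.0f ? result / totalWeight : 0.0f;
    }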

23 Glossy reflections example
Note the glossy reflections of the glowing red teapot on the metallic wall. The teapot is represented as a set of red emitters here.

24 Massive lighting: Results
NVIDIA GeForce GTX 280 GPU, Intel Core 2 Quad 2.66 GHz, DirectX 9.0c API, HDR 1280x720, no MSAA, Volume size: 32³ As you can see from these results, the plot for light propagation volumes still has a slope. Instancing is not used for this measurement, which means that we have one draw call per light source; that might explain the existence of the slope at all.

25 Massive lighting video
This is so-called “programmer art”; it has not been done by artists. Notice that this technique is highly useful for emulating sophisticated lighting, like the indirect lighting of complex architectural interiors.

26 Global Illumination with Light Propagation Volumes

27 Global Illumination with Light Propagation Volumes
Instant Radiosity [Keller97] The main idea is to represent light bouncing as a set of secondary light sources: Virtual Point Lights (VPL) Splatting Indirect Illumination [DS07] Based on Instant Radiosity Reflective Shadow Maps (RSM) are used to generate initial set of VPLs on GPU Importance sampling of VPLs from RSM First, I’d like to explain the instant radiosity and splatting indirect illumination solutions, because our approach is partially based on them. The idea behind instant radiosity is to represent indirect lighting as direct lighting from emitting surfaces. Each surfel of an emitting surface is represented as a secondary light source, which is called a “virtual point light” (VPL). This is an efficient approach for indirect lighting gathering on the GPU. The splatting indirect illumination technique extends this approach with reflective shadow maps and importance sampling from the reflective shadow map. What is a reflective shadow map?

28 Reflective Shadow Maps
Reflective Shadow Map – efficient VPL generator Shadow map with MRT layout: depth, normal and color The reflective shadow map approach is the most efficient way to generate a regular grid of VPLs from the lit surfels of the scene. It is a regular shadow map but with multiple render targets to store not only depth but also the normal and surface color, as you can see in the image and in the layout sketch below. This allows us to derive all the information for a single VPL from a single pixel of the RSM.
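
A minimal sketch of the per-texel data and of the VPL it defines (field names and formats are illustrative, not the engine’s actual layout):

    #include <cstdint>

    // One texel of a reflective shadow map: everything needed to build a VPL.
    struct RSMTexel
    {
        float   depth;       // as in a regular shadow map
        float   normal[3];   // surface normal of the lit surfel
        uint8_t color[4];    // surface albedo of the surfel
    };

    // A VPL derived from one RSM texel: position reconstructed from depth and the
    // light's view-projection, emitting the reflected flux over the normal hemisphere.
    struct VPL
    {
        float position[3];
        float normal[3];
        float flux[3];
    };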

29 Global Illumination with Light Propagation Volumes
Inject the initial radiance from VPLs into radiance volume Point rendering Place each point into appropriate cell Using vertex texture fetch / R2VB Approximate initial radiance of each VPL with SH Simple analytical expression in shader Propagate the radiance Render scene with propagated radiance The main performance issue of the splatting indirect illumination technique is the rendering of a huge number of VPLs. The idea behind the “global illumination with light propagation volumes” technique is to solve this issue. The algorithm consists of the following steps. Inject the initial radiance distribution of the VPLs from the RSM: the injection is implemented as rendering a large number of point primitives on the GPU, placing each primitive into the appropriate cell of the light propagation volume with a vertex shader transformation, using vertex texture fetch from the RSM (or render-to-vertex-buffer). In the pixel shader we convert the information from the RSM into an initial radiance distribution for each VPL; these computations are very simple for two bands of SH. Then propagate the radiance across the grid as described before, and light the scene with the final radiance distribution. A CPU-side sketch of the injection follows below.
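
A minimal CPU-side sketch of the injection step (the engine does this as GPU point rendering with vertex texture fetch, as described above); the structures, names and SH convention are illustrative.

    #include <array>
    #include <vector>

    using SH4 = std::array<float, 4>;

    struct Surfel { float pos[3]; float normal[3]; float flux; };   // one VPL from an RSM texel

    // Inject a set of VPLs into the radiance volume: each VPL lands in the cell
    // containing it and contributes a cosine lobe around its normal, scaled by flux.
    // gridMin/cellSize define the volume placement; n is the grid resolution.
    void InjectVPLs(const std::vector<Surfel>& vpls, std::vector<SH4>& grid,
                    int n, const float gridMin[3], float cellSize)
    {
        for (const Surfel& s : vpls)
        {
            int cx = (int)((s.pos[0] - gridMin[0]) / cellSize);
            int cy = (int)((s.pos[1] - gridMin[1]) / cellSize);
            int cz = (int)((s.pos[2] - gridMin[2]) / cellSize);
            if (cx < 0 || cy < 0 || cz < 0 || cx >= n || cy >= n || cz >= n) continue;

            // SH projection of a clamped-cosine lobe around the surfel normal.
            SH4 lobe = { 0.8862f, 1.0233f * s.normal[1],
                         1.0233f * s.normal[2], 1.0233f * s.normal[0] };

            SH4& cell = grid[(cz * n + cy) * n + cx];
            for (int i = 0; i < 4; ++i) cell[i] += s.flux * lobe[i];
        }
    }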

30 Implementation details
Light Propagation Volume moves with camera 3D cell-size snapping for volume movement 2D texel-size snapping for RSM movement RSM is higher in resolution than radiance volume Smart down-sampling of RSM Some implementation details for the global illumination technique. The light propagation volume moves with the camera. We do a lot of work to provide a stable and consistent solution regardless of camera, object and light movement. A discrete, integer-cell-size movement step is applied to the movement of the volume, which provides stable world-space positions for the cells of the volume (see the sketch below). The RSM moves with the volume to provide efficient coverage of the scene by VPLs. A discrete one-texel-size movement step is applied to the RSM movement as well. This provides consistent rasterization of small and highly discontinuous geometry during the RSM generation step. We use an RSM with much higher resolution than a slice of the light propagation volume to provide a smooth radiance distribution of the injected VPLs. We also do a smart down-sampling of the RSM with VPL clustering, which facilitates the injection step afterwards, especially on consoles. You can find details and implementation code for that in the paper.
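
A minimal sketch of the cell-size snapping, assuming the volume is centred on the camera; the same idea applies to the texel-size snapping of the RSM. Names are illustrative.

    #include <cmath>

    // Snap the volume origin to whole cells so that cell centres stay at fixed
    // world-space positions while the camera (and the volume with it) moves.
    void SnapVolumeOrigin(const float cameraPos[3], float halfExtent,
                          float cellSize, float outOrigin[3])
    {
        for (int i = 0; i < 3; ++i)
        {
            float unsnapped = cameraPos[i] - halfExtent;   // desired volume corner
            outOrigin[i] = std::floor(unsnapped / cellSize) * cellSize;
        }
    }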

31 Global Illumination with Light Propagation Volumes
Results Note that these images show different types of scenes: Outdoor Indoor “Cornell box”-like environment Foliage Let’s talk about some issues of this approach.

32 Issue: Cell-alignment of VPLs
Injection of VPLs involves position shifting Position of injected VPL becomes grid-aligned Consequence of spatial radiance approximation Unwanted radiance bleeding Lighting of double-sided and thin geometry During the injection step, the VPL is moved to the closest cell of the volume. That causes some unwanted bleeding and lighting of the other faces of thin and double-sided geometry.

33 Cell-alignment of VPLs: Bleeding example
Note the bleeding of radiance through the double-sided roof of the hut in the left image. The roof is a set of VPLs; they are injected into the volume and the scene is lit by that volume. You can see the ground truth on the right side. So, how can we solve this issue?

34 Cell-alignment of VPLs: Solution
VPL half-cell shifting towards normal towards light direction Coupled with anisotropic bilateral filtering During final rendering pass Sample radiance with offset by surface normal Compute radiance gradient Compare radiance with radiance gradient There are two steps to solve the issue. The first one is VPL shifting: the VPL is shifted towards the surface normal and the initial light direction by half of the cell size (see the sketch below). The shifting guarantees that the VPL is always injected into the cell in front of its actual position. But that’s not enough to solve the problem completely. The issue still exists because of the trilinear interpolation of the radiance distribution, and we cannot shift the VPL further because that would start introducing an injection error. The anisotropic bilateral filtering fixes the problem of the trilinear hardware interpolation of the radiance distribution. The idea behind the filter is to check whether the radiance direction and the radiance gradient direction match; this forms the filtering decision for the bilateral filter. You can find more details on these techniques in the paper.
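
A minimal sketch of the half-cell shift applied to each VPL before injection; the equal blending of the normal and the light direction here is an assumption, and the exact offset is described in the paper.

    #include <cmath>

    // Shift a VPL by half a cell along its surface normal and the direction toward
    // the light, so it is injected into the cell in front of the surface it lies on.
    void ShiftVPL(float pos[3], const float normal[3], const float toLightDir[3], float cellSize)
    {
        float dir[3];
        float len = 0.0f;
        for (int i = 0; i < 3; ++i)
        {
            dir[i] = 0.5f * (normal[i] + toLightDir[i]);   // assumed equal weighting of both directions
            len += dir[i] * dir[i];
        }
        len = std::sqrt(len);
        if (len < 1e-5f) return;                           // degenerate case: leave the VPL in place

        for (int i = 0; i < 3; ++i)
            pos[i] += (dir[i] / len) * 0.5f * cellSize;    // half-cell offset
    }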

35 Cascaded Light Propagation Volumes for GI
One grid is limited in dimensions and low resolution Multiresolution approach for radiance volumes Similar to Cascaded Shadow Maps technique [SD02] Preserves surrounding radiance outside of the view Each cascade is independent With separate RSM for each cascade Transmit radiance across adjacent edges Filter objects by size for particular RSM Efficient hierarchical representation of radiance emitters Since one grid is limited in world-space dimensions and very low in resolution, it is sometimes not sufficient to compute GI with it. So we propose a cascaded approach for light propagation volumes. This approach is very similar to the cascaded approach for shadow maps, however we still need to preserve some area around the camera for each cascade to keep bleeders that are not in the view, so the cascades are nested. Each cascade is processed completely independently and has its own RSM. We also transmit radiance across cascade boundaries to provide better propagation and a seamless connection. Since we completely control RSM generation for each cascade, we can filter objects of different sizes into different cascades. This solution leads to a very efficient hierarchical representation of the radiance emitters of the scene.

36 Global Illumination Video

37 Global Illumination: Combination with SSAO
No secondary occlusion for light propagation volumes Can be approximated by Ambient Occlusion term SSAO serves as a good approximation of secondary occlusion for GI. Note that SSAO could be extended to screen-space directional occlusion by using the main direction from the Light Propagation Volume cell for a particular screen pixel. SSAO on, GI off SSAO off, GI on GI + SSAO

38 Global Illumination: Combination with SSGI
Screen-Space Global Illumination [RGS09] Limitations of SSGI Only screen-space information Huge kernel radius for close objects Limitations of Light Propagation Volumes Local solution Low resolution spatial approximation Supplementing each other Custom blending Screen-space global illumination is a screen-space technique similar to SSAO, but it takes color into account, thus providing screen-space color bleeding. Now I’ll talk about the disadvantages of SSGI and of the current GI technique. SSGI has some disadvantages: it has only screen-space information for color bleeding, which means it misses objects that are not in the view or are hidden by other objects; additionally we need to increase the screen-space kernel radius of SSGI sampling for objects that are close to the camera, which might lead to inconsistent performance. On the other hand, the GI with LPV technique has its own disadvantages: it is always a local technique, so it is not possible to cover the whole field of view with light propagation volumes; secondly, it still has a low-resolution spatial approximation, which could lead to missing bleeding from small objects. The constant screen-space radius of the SSGI kernel provides consistent bleeding for distant objects and supplements bleeding details for close, small objects. More details about the custom blending and combining these techniques are in the paper.

39 Global Illumination: Combination with SSGI
Notice the lack of GI in the far panorama in the left screenshot SSGI off SSGI on

40 Optimizations for consoles: Xbox 360 / PS3
3D texture look-up with trilinear filtering Radiance volume is 32 bpp for all three SH textures Xbox 360, ~3.5 ms per frame Vertex texture fetching for RSM injection Work-around to resolve into particular slice of 3D texture PlayStation 3, ~3.4 ms per frame Emulate signed blending in the shader R2VB for RSM injection (using memory remapping) Render to unwrapped 2D RT then remap as 3D texture Moreover, this technique works well on consoles. We need six 32×32×32 volumes for GI and one RSM; with reuse of RT memory this amounts to less than 1 MB of video memory. On Xbox 360 it is possible to resolve part of the EDRAM surface into a particular slice of a tiled 3D texture, but there is a bug in the API; you can find the work-around for this bug in the paper. On PS3 it is possible to remap the same RT as an MSAA downscaled RT, which facilitates the rendering pass a lot. It takes less than 3.5 ms per frame on both consoles for in-game scenes, including RSM generation, which makes this technology useful for game production on consoles. You can find more details about the console optimizations in the paper.

41 Future work Better radiance approximation…
Participating media rendering Occlusion for indirect lighting Multiple bounces Improve quality Improved propagation scheme Better angular approximation Adaptive grids Support for arbitrary types of light sources

42 References
[DS07] Dachsbacher, C., Stamminger, M. Splatting Indirect Illumination. 2007
[Evans06] Evans, A. Fast Approximations for Global Illumination on Dynamic Scenes. 2006
[GSHG97] Greger, G., Shirley, P., Hubbard, P., Greenberg, D. The Irradiance Volume. 1997
[Isidoro05] Isidoro, J. Filtering Cubemaps: Angular Extent Filtering and Edge Seam Fixup Methods. 2005
[Kajalin09] Kajalin, V. Screen-Space Ambient Occlusion. ShaderX7, 2009
[Keller97] Keller, A. Instant Radiosity. 1997
[Mittring07] Mittring, M. Finding Next Gen – CryEngine 2. 2007
[Mittring09] Mittring, M. A bit more Deferred – CryEngine 3. 2009
[NSW09] Nichols, G., Shopf, J., Wyman, C. Hierarchical Image-Space Radiosity for Interactive Global Illumination. 2009
[NW09] Nichols, G., Wyman, C. Multiresolution Splatting for Indirect Illumination. 2009
[Oat05] Oat, C. Irradiance Volumes for Real-Time Rendering. ShaderX5, 2006
[RGS09] Ritschel, T., Grosch, T., Seidel, H.-P. Approximating Dynamic Global Illumination in Image Space. 2009
[SD02] Stamminger, M., Drettakis, G. Perspective Shadow Maps. 2002
[SIMP06] Segovia, B., Iehl, J. C., Mitanchey, R., Peroche, B. Non-interleaved Deferred Shading of Interleaved Sample Patterns. 2006
[Tatarchuk04] Tatarchuk, N. Irradiance Volumes for Games. 2004
[WFABDG05] Walter, B., Fernandez, S., Arbree, A., Bala, K., Donikian, M., Greenberg, D. Lightcuts: A Scalable Approach to Illumination. 2005
More details are in the paper. You can find the paper at the AMD portal in the developers section.

43 Acknowledgment Michael Endres, Felix Dodd, Marco Siegel, Frank Meinl, Alexandra Cicorschi, Helder Pinto, Efgeni Bischoff and other artists and designers at Crytek for the scenes they created. Martin Mittring, Vladimir Kajalin, Tiago Sousa, Ury Zhilinsky, Mark Atkinson, Evgeny Adamenkov and the whole Crytek R&D team. Special thanks to Carsten Dachsbacher and Natalia Tatarchuk.

44 Real-time rendering has come to a new era – the era of programmable rendering
We’re looking for more professionals Live demo

45 Thank you for your attention! Questions?
Live demo

