Pixar Animation Studios Announces Monumental Innovations In Film Rendering
Representing years of research and development, Pixar Animation Studios today announced a series of important innovations in the latest version of its forthcoming Academy Award®-winning RenderMan software that will radically impact the way film imagery is rendered and accessed by everyone. This generational shift in RenderMan establishes an entirely new modular rendering architecture called RIS that provides highly optimized methods for simulating the transport of light through multiple state-of-the-art algorithms, including an advanced Unidirectional Path Tracer and a Bidirectional Path Tracer with Progressive Photon Mapping (also known as VCM). Along with major feature and performance enhancements, physically-based, artist-friendly workflows, progressive re-rendering, and the established advantages of RenderMan’s traditional REYES architecture, RenderMan now offers two rendering modes within one unified environment, providing the most advanced, versatile, and flexible rendering system available.
With rendering technology constantly evolving, RIS represents a forward-looking framework through which Pixar can deploy additional rendering methodologies as they become available. RenderMan is the conduit through which applicable advanced research from within the Walt Disney organization will be channeled into the production industry, including in the forthcoming release, Disney’s Principled BRDF shader and supporting materials.
“This truly brings the future of fully photo-realistic ray-traced rendering to RenderMan,” said David Hirst, Global Head of Lighting at MPC. “We did tests with the production assets from one of our latest movies and were completely blown away by the speed and how interactively we could preview and render these assets. The RIS-based integrator is going to change the way we work, with more scalable rendering and faster results.”
As a further commitment to the advancement of open standards and practices, Pixar is announcing that, in conjunction with the upcoming release, free non-commercial licenses of RenderMan will be made available without any functional limitations, watermarking, or time restrictions. Non-commercial RenderMan will be freely available for students, institutions, researchers, developers, and for personal use. Those interested in exploring RenderMan’s new capabilities are invited to register in advance on the RenderMan website to access a free license for download upon release.
Effective immediately, Pixar is also announcing that the price of the current version of RenderMan is $495 per license for commercial use, with customized peak render packages offering built-in “burst render” capability. The upcoming RenderMan release will combine the functionality of the previously separate RenderMan Pro Server and RenderMan Studio through a flexible license, providing unmatched versatility in allocating artist or batch render assets at different stages of production.
The new RenderMan is being released in the timeframe of SIGGRAPH 2014 and will be compatible with the following 64-bit operating systems: Mac OS 10.8 and 10.7; Windows 8, 7, and Vista; and Linux. RenderMan is compatible with Autodesk Maya versions 2013, 2013.5, 2014, and 2015. Pixar’s annual maintenance program benefits customers with access to ongoing support and free upgrades.
Major Price Restructuring
New Pricing for RenderMan
RenderMan now costs $495 per license, providing access to either the artist interface
or the batch renderer. For requirements larger than 25 licenses, attractive custom
studio packages are available, providing additional Peak Render Capacity and built-in “burst render” capability.
Given the continually falling price of computing, trends point to studios and individual
artists needing more and more rendering capacity. Reducing the cost of RenderMan
makes it more cost effective to expand capacity and generate higher quality pixels.
Pixar has established a new price point to specifically encourage accessibility and
remove barriers to growth.
Free Non-Commercial Use
With the upcoming release, RenderMan will be free for non-commercial usage.
Examples of non-commercial use-cases include evaluations, personal learning,
experimentation, research, and the development of tools and plug-ins for RenderMan.
Free non-commercial RenderMan will be available with the upcoming release of RenderMan
scheduled in the timeframe of SIGGRAPH 2014.
Coming Soon … a new version of RenderMan!
The latest release of RenderMan raises the bar once again with a number of game-changing innovations. These include a radical new rendering paradigm called RIS, a highly optimized mode for rendering global illumination, specifically for ray tracing scenes with heavy geometry, hair, volumes, and irradiance, with world-class efficiency and in a single pass. This leap in technology offers best-in-class rendering for both VFX and feature film animation. Along with key enhancements to the highly efficient REYES mode, today RenderMan is the most flexible, powerful, and reliable tool for rendering pixels.
For visual effects artists, RenderMan has never been more efficient and accessible. With solutions for both Autodesk’s Maya® and The Foundry’s KATANA®, there are excellent options for integrating RenderMan into any VFX pipeline. For lighting and look development, RenderMan offers a state-of-the-art system for interactive re-rendering which is capable of dramatically increasing throughput of production scenes while allowing artists to focus on the art of lighting. With world-class tools for physically based rendering, visual effects artists can create the most photorealistic imagery conceivable.
RenderMan’s RIS is a new rendering mode that is designed to be fast and easy to use while generating production-quality renders. Global illumination works out of the box and interactive rerendering provides rapid iteration for artists. The new mode supports many of the same features as traditional RenderMan but introduces a wholly new shading pipeline. Understanding what’s new as well as what old techniques still apply is key to getting the most out of RIS. The following is a high level overview of how it works.
The renderer drives the entire process by sampling points on the screen and constructing an image from the results. It selects the pixels to sample and generates subpixel samples for these pixels. It might make many passes over the pixels taking one sample at a time to give rapid interactive results that it refines incrementally, or it might do all the samples in a pixel at once for higher throughput for batch renders.
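The two sampling strategies described above can be sketched roughly as follows. This is an illustrative sketch only; the `shade` placeholder and function names are assumptions, not RenderMan's internals.

```python
import random

def render(width, height, spp, progressive=True):
    """Accumulate subpixel samples into a running per-pixel average.

    shade(x, y) stands in for the rest of the pipeline (projection,
    integration, shading); here it is a hypothetical placeholder."""
    def shade(x, y):
        return 1.0  # placeholder for the full shading pipeline

    accum = [[0.0] * width for _ in range(height)]
    counts = [[0] * width for _ in range(height)]

    if progressive:
        # One sample per pixel per pass: the image refines incrementally,
        # giving rapid interactive feedback.
        for _ in range(spp):
            for py in range(height):
                for px in range(width):
                    sx, sy = px + random.random(), py + random.random()
                    accum[py][px] += shade(sx, sy)
                    counts[py][px] += 1
    else:
        # All samples for a pixel at once: higher throughput for batch renders.
        for py in range(height):
            for px in range(width):
                for _ in range(spp):
                    sx, sy = px + random.random(), py + random.random()
                    accum[py][px] += shade(sx, sy)
                    counts[py][px] += 1

    return [[accum[y][x] / counts[y][x] for x in range(width)]
            for y in range(height)]
```

Either way, the same total number of samples is taken; only the order of traversal (and therefore how soon a coarse full image appears) differs.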
Regardless of how they’re selected, each screen sample will be processed by the rest of the pipeline and shaded results will be returned to the renderer. From these, the renderer can eliminate spurious samples and pixel filter the rest to construct the images and deliver them to the various display drivers. These might show the image on screen for interactive rendering or save it to a file for final rendering. The renderer may generate multiple images at the same time to show various geometric quantities or particular types of light paths.
As this proceeds, the renderer’s adaptive sampler can save time by looking at the images to estimate which pixels have converged and therefore are finished. For batch renders, the renderer can also periodically write checkpoint images that can be viewed to see how the image is converging. If the renderer gets interrupted, these can be used to recover the render near where it left off. On completion, an XML file can be written with statistics to help diagnose the render performance. Or, if rerendering is being used, quick updates to the scene and settings can be made and the rendering immediately started again.
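One common way an adaptive sampler decides a pixel is "finished" is by tracking the variance of its samples; the sketch below uses the standard error of the sample mean as the convergence test. This is a generic illustration of the idea, not Pixar's actual convergence criterion.

```python
def pixel_converged(samples, tol=0.01):
    """Treat a pixel as converged when the standard error of its
    sample mean falls below a tolerance (a common heuristic; the
    threshold and statistic here are illustrative assumptions)."""
    n = len(samples)
    if n < 2:
        return False  # not enough samples to estimate variance
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    stderr = (var / n) ** 0.5
    return stderr <= tol
```

Pixels that pass the test stop receiving new samples, so the remaining budget concentrates on the noisy regions of the image.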
Once the 2D positions on the screen have been chosen to sample, the camera projection turns them into 3D rays in camera space. Traditional RenderMan has supported two different projections: perspective and orthographic. With the former, one could specify depth of field settings for defocus, aperture settings for bokeh, and shutter times for advanced motion blur effects. These are still fully supported in RIS.
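As a concrete illustration of a perspective projection, the sketch below maps a screen sample to a normalized camera-space ray. The conventions (camera at the origin looking down -z, y up) are assumptions for illustration, not RenderMan's internals.

```python
import math

def perspective_ray(sx, sy, width, height, fov_deg=60.0):
    """Map a screen sample (sx, sy) to a camera-space ray origin and
    unit direction, using a pinhole perspective model."""
    aspect = width / height
    tan_half = math.tan(math.radians(fov_deg) / 2.0)
    # Normalized device coordinates in [-1, 1]; y flipped so +y is up.
    ndc_x = (2.0 * (sx + 0.5) / width - 1.0) * aspect * tan_half
    ndc_y = (1.0 - 2.0 * (sy + 0.5) / height) * tan_half
    d = (ndc_x, ndc_y, -1.0)
    length = math.sqrt(sum(c * c for c in d))
    origin = (0.0, 0.0, 0.0)
    direction = tuple(c / length for c in d)
    return origin, direction
```

Depth of field would perturb the origin across the aperture, and motion blur would attach a shutter time to each ray; both are omitted here for brevity.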
With the move to fully ray traced rendering, however, new projections become possible. Sphere, cylinder, and torus projections can be used to render environment maps and panoramic images. User-created projections can also be defined using plugins; rolling shutters, lens aberrations, and theater dome projections are just some of the possibilities this enables.
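A sphere projection of the latitude-longitude kind used for environment maps can be sketched as follows; this shows the idea only and is not Pixar's projection plugin API.

```python
import math

def spherical_ray(sx, sy, width, height):
    """Latlong environment projection: the full screen maps to every
    direction on the unit sphere, so rendering one frame produces a
    complete environment map."""
    theta = math.pi * (sy + 0.5) / height      # polar angle in (0, pi)
    phi = 2.0 * math.pi * (sx + 0.5) / width   # azimuth in (0, 2*pi)
    direction = (math.sin(theta) * math.cos(phi),
                 math.cos(theta),
                 math.sin(theta) * math.sin(phi))
    return (0.0, 0.0, 0.0), direction
```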
Integrators take the camera rays from the projection and return shaded results to the renderer. For the main integrators these are estimates of the radiance from the surfaces seen from the outside along the rays. They are responsible for computing the overall light transport. Interior integrators assist in specialized cases by handling the light within surfaces or volumes. We provide two main production quality integrators, though users can substitute their own.
PxrPathTracer implements a unidirectional path tracer. This combines information from the materials at the hit points with light samples to estimate direct lighting and shadowing, then spawns additional rays to handle indirect lighting. This works well with environment lights and large direct light sources.
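The unidirectional strategy can be sketched as a loop: at each hit, sample the lights for direct illumination (with a shadow test), then let the material propose one indirect ray. The `scene` object and its methods (`intersect`, `sample_light`, `visible`, `environment`) are hypothetical stand-ins, not RenderMan's actual interfaces.

```python
def path_trace(ray, scene, max_bounces=4):
    """Skeleton of a unidirectional path tracer. Radiance accumulates
    along a single camera path; throughput tracks attenuation from the
    materials encountered so far."""
    radiance, throughput = 0.0, 1.0
    for _ in range(max_bounces):
        hit = scene.intersect(ray)
        if hit is None:
            # Ray escaped: pick up the environment light and stop.
            radiance += throughput * scene.environment(ray)
            break
        # Direct lighting: combine a light sample with the material,
        # checking visibility (shadowing) with a shadow ray.
        light_radiance, light_dir = scene.sample_light(hit)
        if scene.visible(hit.position, light_dir):
            radiance += throughput * hit.bxdf.evaluate(ray, light_dir) * light_radiance
        # Indirect lighting: the Bxdf proposes the next ray direction
        # and a weight for the chosen sample.
        next_dir, weight = hit.bxdf.sample(ray)
        throughput *= weight
        ray = (hit.position, next_dir)
    return radiance
```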
PxrVCM extends this with bidirectional path tracing. In addition to the paths from the camera, it traces paths from the light sources and tries to connect them. It can resolve complicated indirect paths that may be slow to converge with PxrPathTracer.
Each traced ray is checked for intersection against the geometry in the scene. All of the classic primitives from REYES are supported: subdivision surfaces, patches, curves, volumetrics, quadrics, points, and blobbies.
Procedurals can generate geometry on the fly or be used to reduce memory by deferring the loading of parts of the scene until they’re actually used. Instancing also helps reduce memory by allowing cheap clones of complex objects.
Detail can be added to any surface using displacements or blurred away again by motion. Complex motion blur is fully supported, whether it comes from camera motion, object motion, or object deformations.
Each piece of geometry has a single attached Bxdf. Roughly speaking this determines its gross material type by computing which directions it most strongly reflects or refracts light in. The Bxdfs also provide the integrator with proposed directions for new rays to sample indirect illumination.
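Both Bxdf roles described above, evaluating reflectance for a known direction and proposing new directions, can be seen in a minimal diffuse example. The class and method names are illustrative assumptions, not RenderMan's plugin API; the surface normal is assumed to be +z in local shading space.

```python
import math
import random

class LambertBxdf:
    """A diffuse Bxdf sketch: evaluates reflectance toward a given
    direction and proposes cosine-weighted directions for indirect rays."""
    def __init__(self, albedo):
        self.albedo = albedo

    def evaluate(self, wi, wo):
        # A diffuse surface reflects equally in all directions above
        # the surface; below the surface it reflects nothing.
        return self.albedo / math.pi if wo[2] > 0.0 else 0.0

    def sample(self, wi):
        # Cosine-weighted hemisphere sample: favors directions near the
        # normal, where a diffuse surface transports the most light.
        u1, u2 = random.random(), random.random()
        r, phi = math.sqrt(u1), 2.0 * math.pi * u2
        wo = (r * math.cos(phi), r * math.sin(phi),
              math.sqrt(max(0.0, 1.0 - u1)))
        # With cosine-weighted sampling the sample weight reduces to
        # the albedo (the cosine and pdf cancel).
        return wo, self.albedo
```

Specular, glass, or hair Bxdfs would implement the same two operations with very different distributions of directions.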
We include a general purpose production-ready Bxdf as well as Bxdfs for skin, glass, hair, and diffuse emissive surfaces.
While material Bxdfs control the gross appearance of an object, patterns control the detail by varying the parameters of the Bxdfs across a surface. Patterns plug into Bxdfs and to each other to build up complex shading networks.
Patterns can produce their outputs by nearly any means, from texture maps (including atlases and Ptextures), to simple expressions, to a complete shading language like OSL.
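A pattern node is, in essence, a function of surface parameters whose output feeds a Bxdf parameter or another pattern. The sketch below wires a simple procedural checker into an albedo value; the names and the chaining style are illustrative assumptions, not RenderMan's node API.

```python
def checker(u, v, scale=8):
    """A minimal pattern node: produces a value from surface (u, v)
    coordinates, the kind of output wired into a Bxdf parameter."""
    return 1.0 if (int(u * scale) + int(v * scale)) % 2 == 0 else 0.1

def shade_albedo(u, v):
    # Patterns chain: one pattern's output feeds another operation,
    # and the result drives a Bxdf parameter such as albedo.
    base = checker(u, v)
    return min(1.0, base * 0.8)
```

Texture maps, expressions, or full OSL shaders slot into the same role: each is just another producer of values across the surface.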
The geometric area light system provides the integrator with relevant direct light samples to evaluate against the Bxdfs in order to shade the rays. These samples come from one of two types of sources, either emissive surfaces (with arbitrary geometry) or environment lights surrounding the scene. Whatever the sources, the system automatically balances a sample budget across all of them.
It also supports a wide variety of sophisticated lighting shaping effects such as gobos, blockers, and IES profiles. Lights can be turned off per-object, or just turned off for certain types of shading like specular highlights.
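One plausible way to balance a fixed sample budget across many lights is to allocate in proportion to each source's emitted power; the sketch below shows that heuristic. This is an assumption for illustration only; the actual balancing strategy used by the light system is not described in this document.

```python
def allocate_samples(light_powers, budget):
    """Split a per-shade sample budget across light sources in
    proportion to their power, giving every light at least one sample."""
    total = sum(light_powers)
    return [max(1, round(budget * p / total)) for p in light_powers]
```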