I thought I'd be posting the 'mega documentation update' - a mix of pipeline docs, script docs and tutorials for the timelapse sequence - but I ended up reshuffling priorities and working on our lighting pipeline instead. So this update is going to be partly about that, and partly a grab bag of miscellaneous work from Tim, Henri, Chris, Luciano and others.
Most of the lighting work I've been doing is really pipeline focused; in other words, just as with the timelapse cameras, I'm thinking 'how can we standardize and make this repeatable', not 'how can I make this one shot'. With lighting this is really important: it keeps things manageable, helps us hit the look we want (so this is mixed lighting and look development), and keeps the shots consistent.
Using High Dynamic Range Views
I got some great advice from Troy Sobotka, a Blender user, cameraman, and color maven. He's been helping me conceptualize how to split the job into specific steps, and how to think about color. Some of this has been nothing short of a revelation - even in areas I thought I knew.
First, a little progression of the WIP shot to show the progress:
So the big difference has been using a bunch of OpenColorIO presets made by Troy. OCIO is an open source library designed by industry eggheads that makes working with color more solid.
If you imagine Cycles (Blender's renderer) as a set of algorithms that produces the scene colors in front of the camera, then OCIO handles the chain of transforms between those scene colors and the final color you see on your screen. There's a huge difference between the two! Scene colors really don't 'look' like any one thing; they're better thought of as data representing light. So there is no 'white': you can just have less or more light... a lot more than 'white', really. In your final image, however, there's going to be a black and a white - the maximum amount of light your display can show - and the interesting thing is that choosing how much scene light you map into that range gives you a lot more room to work with in lighting.
Blender typically uses an sRGB view, meaning you just grab scene light from 0 to 1 and curve it a bit. In practice that means lighters end up tweaking their lights so that all the color data is concentrated in that small range - which is not true of reality. The view that Troy provided is based on the ACES standard and grabs a lot more of the scene, giving me way more dynamic range to work with. The result is stunning! Sergey had to fix a small bug in one of the compositor nodes that was clipping values to 1 (and thus only worked for an sRGB view), and now I have a massive amount of range to work with.
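To make the difference concrete, here's a toy sketch in Python - not Troy's actual transforms, the curves and numbers are made up purely for illustration. An sRGB-style view throws away everything above 1.0, while a log-style view can fold several extra stops of scene light into the displayable range:

```python
import numpy as np

def srgb_view(x):
    # Display-referred sRGB: everything above 1.0 is simply clipped away.
    x = np.clip(x, 0.0, 1.0)
    return np.where(x <= 0.0031308, 12.92 * x, 1.055 * x ** (1.0 / 2.4) - 0.055)

def log_view(x, stops_above_white=6.0):
    # Crude log curve that folds ~6 extra stops of scene light into 0-1.
    return np.clip(np.log2(np.maximum(x, 1e-6) + 1.0) / (stops_above_white + 1.0), 0.0, 1.0)

scene = np.array([0.18, 1.0, 4.0, 16.0])  # scene-linear light values
print(srgb_view(scene))  # 4.0 and 16.0 both flatten to 1.0: highlight detail is gone
print(log_view(scene))   # the bright values remain distinguishable
```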
Some other nice things about OCIO: it has tools to convert color grades into nodes; you can then reuse previous grades as 'looks' for similar shots, which will have a unifying effect on our entire pipeline.
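As a rough sketch of how a saved grade gets reused (assuming the OCIO v1 Python bindings; the config path and the 'shot_grade' look name are placeholders, not our actual setup):

```python
import PyOpenColorIO as OCIO

config = OCIO.Config.CreateFromFile("config.ocio")

# A LookTransform wraps a saved grade so it can be applied across shots.
look = OCIO.LookTransform()
look.setSrc("linear")
look.setDst("linear")
look.setLooks("shot_grade")  # hypothetical look name defined in the config

processor = config.getProcessor(look)
graded = processor.applyRGB([0.18, 0.18, 0.18])  # one scene-linear pixel
```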
I'm still experimenting here, so expect some refinement in this regard. Eventually this has to be converted into tutorial/documentation form (much like the timelapse stuff) so that other lighters can use it.
Here's a recent render in video with some temp sound:
Grain and Light Leaks
This is currently in the R&D phase, but there are some tests to see.
A final touch on rendered frames is adding some subtle defects that dirty up the clean CG look (once enough samples have been rendered). There are quite a few, some easily provided via compositor nodes or Cycles itself, such as depth of field, motion blur, lens distortion, bloom, etc. A couple of 'extra' ones that involve another step of compositing or rendering are grain and light leaks.
Grain typically simulates the look of film (or video), and is a pattern of characteristic noise. Light leaks (and lens flares) come from light bouncing around inside the lens of a camera.
For Wires for Empathy I want to use grain almost as a double exposure effect, grabbing grain from media other than film/video. Our grain will be 'book grain' from printed pages. I'll be doing an old-book photography run, but in the meantime I've been using some images from the New York Public Library maps project.
Light leaks can also be handled as images, but we could theoretically render them; however, it's unlikely that I'll do this, as they take too long to converge. Basically you have to model a lens (really!), shine some light into it sideways, and wait for the caustic bounces to resolve into something. I might try LuxRender, a free/open source renderer, since I believe it has a bidirectional mode that renders caustics faster than Cycles, and it might even do spectral rendering.
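For a sense of what that extra compositing step might look like, here's a minimal numpy sketch - the plate names and blend strengths are placeholders, not our final recipe:

```python
import numpy as np
import imageio

# Hypothetical plates (assumed to share the same resolution) - the real
# grain will come from the old-book photography run / NYPL map images.
render = imageio.imread("render.png").astype(np.float32) / 255.0
leak   = imageio.imread("leak_plate.png").astype(np.float32) / 255.0
grain  = imageio.imread("book_grain.png").astype(np.float32) / 255.0

# Screen blend for the leak: it only ever adds light, like a double exposure.
leaked = 1.0 - (1.0 - render) * (1.0 - leak)

# Grain as a faint deviation around mid-grey layered on top.
grained = leaked + (grain - 0.5) * 0.15

imageio.imwrite("final.png", (np.clip(grained, 0.0, 1.0) * 255).astype(np.uint8))
```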
In other news, Tim's been making really good headway on the design of the sky/clouds based on his scripts and Blender smoke sims. Here's a current version:
What's really nice here (for me) is the circularly smeared star field at night, resembling long-exposure night-sky photography.
Using the same (if you can believe it!) method I used to make the light shafts behind Gilga in the lighting example, Henri has been making some crowd timelapse experiments. His latest uses both positive and negative emission to simulate shadows:
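For the curious, here's a minimal bpy sketch of the negative-emission trick - the material name and strength value are arbitrary; the point is just that Cycles lets an emission shader's strength go negative, subtracting light instead of adding it:

```python
import bpy

# Build a material that removes light: an emission shader with a
# negative strength wired straight to the material output.
mat = bpy.data.materials.new("negative_shaft")
mat.use_nodes = True
nodes = mat.node_tree.nodes
links = mat.node_tree.links
nodes.clear()

emit = nodes.new("ShaderNodeEmission")
emit.inputs["Color"].default_value = (1.0, 1.0, 1.0, 1.0)
emit.inputs["Strength"].default_value = -2.0  # negative = subtract light

out = nodes.new("ShaderNodeOutputMaterial")
links.new(emit.outputs["Emission"], out.inputs["Surface"])
```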
Chris and Luciano have been working on 'gore' animations and secondary animation in some scenes (actually, the gore animation in Luciano's shot has so far been made by Karen Webb, a local animator). The animations here could be considered 'spoilers', so... should I reveal them in the next post, or just provide links?