So as promised, here's a more in-depth update about the poster timelapse. In our (hopefully just in the movie, but you never know...) dystopian future, the subway walls need to get peppered with images of successive ruthless dictators, bent on making us love them through propaganda. This effect is sometimes very close to the camera and sometimes in the background, throughout many shots.
Posters need to be added to the walls over time, then removed, or simply have other posters pasted on top of them - with attention to which image in the succession overlaps which (later posters need to sit on top, rather than intersecting each other) and to the order of removal (posters under other posters can't go first). In addition, the poster materials need to age: posters under other posters can't, for example, accumulate dirt, and could get ripped when the top poster is removed... and... and... and...
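In plain Python (nothing Blender-specific - the function and names here are mine for illustration, not from the actual addon), the removal constraint can be sketched by treating each patch of wall as a stack: to pull out a poster, everything pasted over it has to come off first.

```python
# Sketch of the stacking constraint for poster removal: a poster can
# only come off the wall once every poster pasted on top of it is gone.
# Modeling a wall patch as a stack (bottom -> top) gives that for free.

def plan_removals(paste_order, removal_requests):
    """Return a removal order that honors the stacking constraint.

    paste_order: posters in the order they were pasted (bottom first).
    removal_requests: posters we actually want gone; anything pasted
    over a requested poster is peeled off too (and may get ripped in
    the process, as described above).
    """
    stack = list(paste_order)        # bottom -> top
    requested = set(removal_requests)
    removals = []
    while requested and stack:
        top = stack.pop()            # only the topmost poster is free
        removals.append(top)
        requested.discard(top)
    return removals
```

So removing a poster at the bottom of the pile drags every poster above it into the removal plan, top-down.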
So I'm building a poster control 'machine' using a brilliant Blender addon called Animation Nodes - which also allows mixing nodes and Python via script nodes. This is what my code and nodes look like right now:
And this is what they do:
There's a lot of hidden stuff there too: they make occlusion masks using vertex colors and vertex groups, so the posters "know" when they are under or over each other. This will allow me to combine it with.....
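The bookkeeping behind those masks can be sketched in plain Python (the real system writes vertex colors and vertex groups in Blender; here posters are just axis-aligned rectangles and the "mask" is the overlapped region - everything below is illustrative, not the actual node setup):

```python
# Sketch of the occlusion bookkeeping: for each poster, find the
# regions covered by posters pasted later (i.e. on top of it).
# Rectangles are (x0, y0, x1, y1); later list entries are on top.

def overlap(a, b):
    """Overlapping rectangle of a and b, or None if they don't touch."""
    x0, y0 = max(a[0], b[0]), max(a[1], b[1])
    x1, y1 = min(a[2], b[2]), min(a[3], b[3])
    if x0 < x1 and y0 < y1:
        return (x0, y0, x1, y1)
    return None

def occluded_regions(posters):
    """For each poster, list the regions hidden by posters above it."""
    masks = []
    for i, rect in enumerate(posters):
        above = posters[i + 1:]      # pasted later = on top
        masks.append([r for r in (overlap(rect, b) for b in above) if r])
    return masks
```

A poster with a non-empty mask knows it is (partly) underneath another one, so it can skip accumulating dirt there, or rip along that region when the top poster comes off.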
....my poster material nodes! :
Let's see what those look like in animation:
Phew! Pretty cool - still missing a few details and tweaks, but that's the basic idea. The strange purple rectangle represents an occluding poster. The image is tweaked from a beautiful poster made by Michael Kalinin for the movie, and is just a 'test image'. The text uses our custom-made font "Soomerian Modern", which all the text in the movie is written in.
So what's left? Well, combining the animation nodes for the posters with the poster materials.
In addition, I have similar systems (not shown here) for the wall itself, which have to interact with the posters: for instance, the posters change the dirtiness levels of the walls, and falling tiles rip out the posters.
I've fallen into the trap of wanting to post a big, intense update, and then putting it off to squeeze one more thing in, so I'm making myself post this quick mini update to push myself not to do that :)
We had some really nice results from last summer's internships, I'm especially happy with some amazing texture work and 2D design work for signage from Alice, and some progress on a complicated timelapse shot (not finished yet) which is getting close. I'll be posting these soon.
Since then I've discovered an amazing Blender addon called 'Animation Nodes', which is allowing me to do timelapse shots at an amazing level of detail. Sadly it's quite 'technical' in nature and very few people know how to use it safely, so for now I haven't found a way to delegate work in this area - meaning it's just been me cranking on these nodes.
So consider this a teaser for the next update, during next week I'll post images and gifs or mini movies, and talk about the neat things I've accomplished and expect to accomplish with these amazing nodes.
First of all, Lukas Tönne, one of the amazing Blender developers, is working on Blender 2.8 - which will be a milestone release of Blender, with new functionality and workflows.
In the process of designing, he's interested in seeing current and future workflows of Blender artists and projects, which led me to create a document of the Wires for Empathy workflow. I thought it might be interesting to share here too:
The document is in ODF and should open in LibreOffice, OpenOffice or Google Docs.
The second bit of news is that the summer internships are now open for students wanting to work on the project - the following is copied from the urchin blog:
We're happy to announce a new round of summer internships through bitfilms on Wires for Empathy aka the tube open movie project.
Read the details in this document - it should contain everything you need to know, deadlines, how to apply, etc.
In brief we're going to be working on two exciting main areas, timelapse animation and lighting. In the run up to the internship period I'm working on documentation for our lighting pipeline and timelapse animation workflow and tools - so if you're into lighting with cycles, would like the chance to work on our color-managed lighting pipeline, or if you like the idea of animating things changing over time, or modeling snapshots of aging objects, this could be a good fit for you.
Other than that, we're still working - I personally had a small hiatus due to a bad cold, but I'll have a production update soon, including a new animator who's joined our team to do fix and timelapse animation, some of the teased animation previews (I haven't forgotten) and more about other parts of the project. Stay tuned!
I thought I'd be posting the 'mega documentation update' - a mix of pipeline docs, script docs and tutorials for the timelapse sequence, but I ended up reshuffling priorities and working on our lighting pipeline instead. So this update is going to be partially about that, and also a grab bag of miscellaneous work from Tim, Henri, Chris, Luciano and others.
Most of the lighting work I've been doing is really pipeline focused; in other words, just as in the timelapse cameras, I'm thinking 'how can we standardize and make this repeatable' not 'how can I make this one shot'. With lighting that is really important to keep things manageable, to hit the look we want (so this is mixed lighting and look development) and to keep the shots consistent.
Using High Dynamic Range Views
I got some great advice from Troy Sobotka, a Blender user, cameraman, and color maven. He's been helping me conceptualize how to split the job into specific steps, and how to think about color. Some of this has been nothing short of a revelation - even in areas I thought I knew.
First a little progression of the wip shot to see the progress:
So the big difference has been using a bunch of OpenColorIO (OCIO) presets made by Troy. OCIO is an open source color management library designed by industry eggheads that makes working with color more solid.
If you imagine Cycles (Blender's renderer) as a set of algorithms that produces the scene colors in front of the camera, then OCIO handles the transforms from those scene colors to the final color that you see on your screen. There's a huge difference between the two! Scene colors really don't 'look' like any one thing; they're better thought of as data representing light. So there is no 'white': you can just have less or more light... a lot more than 'white', really. In your final image, however, there's going to be a black and a white - the maximum amount of light you can have - and the interesting thing is that how much light you grab from your scene can give you a lot more range to work with in lighting.
Blender typically uses an sRGB view, meaning you just grab light from 0 to 1 and curve it a bit. In practice that means lighters will tweak their lights so that all the color data is concentrated in that small range - which is not true of reality. The view that Troy provided is based on the ACES standard and grabs a lot more of the scene, giving me way more dynamic range to work with. The result is stunning! Sergey had to fix a small bug in one of the compositor nodes that was clipping to 1 (and thus only worked for sRGB), and now I have a massive amount of range to work with.
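The difference is easy to see numerically. Here's a small sketch (not Troy's actual transforms - just a textbook sRGB curve next to a generic log encoding): the sRGB view collapses everything above scene value 1.0 to white, while a log-style view keeps telling bright values apart.

```python
# Why the view transform matters: an sRGB view clips scene-referred
# values at 1.0, while a log-style view keeps differentiating values
# several stops above 1.0. Illustrative only, not the real OCIO config.

import math

def srgb_view(x):
    """Plain sRGB transfer curve, with the hard clip at 1.0."""
    x = min(max(x, 0.0), 1.0)
    if x <= 0.0031308:
        return 12.92 * x
    return 1.055 * x ** (1 / 2.4) - 0.055

def log_view(x, stops_above_one=6.0):
    """Generic log encoding mapping [0, 2**stops] down into [0, 1]."""
    x = max(x, 0.0)
    return math.log2(1.0 + x) / math.log2(1.0 + 2.0 ** stops_above_one)

# Scene values 2.0 and 8.0 - two vs eight times "white" - come out
# identical through the sRGB view but clearly separated in the log view.
```

That separation is exactly the extra dynamic range the lighter gets to play with.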
Some other nice things about OCIO: it has tools to convert color grades into nodes; you can then reuse previous grades as 'looks' for similar shots, which will have a unifying effect on our entire pipeline.
I'm still experimenting here, so expect some refinement in this regard. Eventually this has to be converted into tutorial/documentation form (much like the timelapse stuff) so that other lighters can use it.
Here's a recent render in video with some temp sound:
Grain and Light Leaks
This is currently in R+D phase, but there are some tests to see.
A final touch to rendered frames is the subtle defects that dirty up the clean CG look (once enough samples have been rendered). There are quite a few, some easily provided via compositor nodes or Cycles itself, such as depth of field, motion blur, lens distortion, bloom, etc. A couple of 'extra' ones that involve another step of compositing or rendering are grain and light leaks.
Grain typically simulates the look of film (or video) and is a pattern of characteristic noise. Light leaks (and lens flares) are light bouncing around inside the lens of a camera.
For Wires for Empathy I want to use grain almost as a double-exposure effect, grabbing media grains other than film/video. Our grain will be 'book grain' from printed pages. I'll be doing an old-book photography run, but in the meantime I've been using some images from the New York Public Library maps project.
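The double-exposure idea itself is simple compositing math. A minimal sketch (plain Python over 0..1 float values - a real version would run per pixel in the compositor, and the blend mode and strength here are my assumptions, not the final recipe):

```python
# Sketch of "grain as double exposure": screen-blend a grain plate
# (e.g. scanned book paper) over the rendered frame, so the paper
# texture lifts into the image without crushing the render.
# Pixel values are floats in [0, 1].

def screen(base, grain, strength=0.3):
    """Screen-blend `grain` over `base`, scaled down by `strength`."""
    g = grain * strength
    return 1.0 - (1.0 - base) * (1.0 - g)

def double_expose(frame, plate, strength=0.3):
    """Apply the blend across two equally sized rows of pixels."""
    return [screen(b, g, strength) for b, g in zip(frame, plate)]
```

Screen blending only ever brightens, which is roughly what exposing the same frame twice does - dark paper fibers show up in the shadows while highlights stay put.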
Light leaks can also be handled as images, but we could theoretically render them; however, it's unlikely that I'll do this, as they take too long to converge. Basically you have to model a lens (really!), shine some light into it sideways, and wait for the caustic bounces to resolve into something. I might try LuxRender, a free/open source renderer, since I believe it has a bidirectional mode that renders caustics faster than Cycles, and it might even do spectral rendering.
In other news, Tim's been making really good headway on the design of the sky/clouds based on his scripts and Blender smoke sims. Here's a current version:
What's really nice here (for me) is the circular smeared star-field at night, resembling long-exposure photography of the night sky.
Using the same (if you can believe it!) method I used to make the light shafts behind Gilga in the lighting example, Henri has been making some crowd timelapse experiments. His latest uses both positive and negative emission to simulate shadows:
Chris and Luciano have been working on animations of 'gore' and secondary animations in some scenes (actually, the gore animation in Luciano's shot has so far been made by Karen Webb, a local animator). The animations here could be considered 'spoilers', so... should I reveal them in the next post, or just provide links?
A quick non-final test of the surveillance camera assets for the time-lapse sequence. Not much is left to do on them:
Fix a few small glitches
Test Linking them into shots (I'm sure some small bugs will need to be fixed)
Test caching their animations (I'm sure some small bugs in the meshcacher will need to be fixed)
After that they're done; testing will mean populating their main shot, but they'll then be usable in the handful of other shots they're visible in. Only the first camera in the sequence is seen close up; the rest will be at longer distances in the frame.
Don't be surprised if they look a bit different in the final film, as grading/color/lighting will differ between shots - even the rendering style will have subtle blurs, glitches, and composited grain and effects.
I'm evaluating whether to have the paint flakes 'pop off' in the time-lapse (shown) or peel off (not shown) - I may go for a combination of both, at least for close objects.
Now I need to document how to use our tools to get results like this, so expect (once the dust has settled) documentation of the timelapse toolbox and meshcacher - two Python addons; the first was heavily used in making those cameras, and the second will be used for placing them in actual shots.