First of all, Lukas Tönne, one of the amazing Blender developers, is working on the Blender 2.8 project - a milestone release of Blender, with new functionality and workflows.
In the process of designing it, he's interested in seeing current and future workflows of Blender artists and projects, which led me to create a document of the Wires for Empathy workflow. I thought it might be interesting to share it here too:
The document is in ODF and should open in LibreOffice, OpenOffice, or Google Docs.
The second bit of news is that the summer internships are now open for students wanting to work on the project - the following is copied from the Urchin blog:
We're happy to announce a new round of summer internships through bitfilms on Wires for Empathy, aka the Tube open movie project.
Read the details in this document - it should contain everything you need to know, deadlines, how to apply, etc.
In brief, we're going to be working on two exciting main areas: timelapse animation and lighting. In the run-up to the internship period, I'm working on documentation for our lighting pipeline and our timelapse animation workflow and tools. So if you're into lighting with Cycles and would like the chance to work on our color-managed lighting pipeline, or if you like the idea of animating things changing over time or modeling snapshots of aging objects, this could be a good fit for you.
Other than that, we're still working - I personally had a small hiatus due to a bad cold, but I'll have a production update soon, including a new animator who's joined our team to do fix and timelapse animation, some of the teased animation previews (I haven't forgotten) and more about other parts of the project. Stay tuned!
I thought I'd be posting the 'mega documentation update' - a mix of pipeline docs, script docs and tutorials for the timelapse sequence, but I ended up reshuffling priorities and working on our lighting pipeline instead. So this update is going to be partially about that, and also a grab bag of miscellaneous work from Tim, Henri, Chris, Luciano and others.
Most of the lighting work I've been doing is really pipeline focused; in other words, just as with the timelapse cameras, I'm thinking 'how can we standardize and make this repeatable', not 'how can I make this one shot'. With lighting that is really important in order to keep things manageable, to hit the look we want (so this is mixed lighting and look development), and to keep the shots consistent.
Using High Dynamic Range Views
I got some great advice from Troy Sobotka, a Blender user, cameraman, and color maven. He's been helping me conceptualize how to split the job into specific steps, and how to think about color. Some of this has been nothing short of a revelation - even in areas I thought I knew.
First a little progression of the wip shot to see the progress:
So the big difference has been using a bunch of OpenColorIO (OCIO) presets made by Troy. OCIO is an open source library designed by industry eggheads that makes working with color more solid.
If you imagine Cycles (Blender's renderer) as a set of algorithms that produces the scene colors in front of the camera, then OCIO handles the chain of transforms from those colors to the final color that you see on your screen. There's a huge difference between the two! Scene colors really don't 'look' like any one thing; they're better thought of as data representing light. So there is no 'white'; you can just have less or more light... a lot more than 'white', really. On your final image, however, there's going to be a black and a white - the maximum amount of light you can have - and the interesting thing is that how much light you grab from your scene can give you a lot more range to work with in lighting.
Blender typically uses an sRGB view, meaning you just grab light from 0 to 1 and curve it a bit. In practice that means lighters will tweak their lights so that all the color data is concentrated in that small range - which is not true of reality. The view that Troy provided is based on the ACES standard and grabs a lot more of the scene, giving me way more dynamic range to work with. The result is stunning! Sergey had to fix a small bug in one of the compositor nodes that was clipping to 1 (and thus only worked for sRGB), and now I have a massive amount of range to work with.
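To make the clipping problem concrete, here's a minimal sketch contrasting a clipped sRGB-style view with a naive log view. The transfer functions are simplified stand-ins for illustration only - not the actual ACES/OCIO transforms in Troy's config:

```python
import math

def srgb_view(x):
    """Clip scene light to [0, 1], then apply the sRGB transfer curve."""
    x = min(max(x, 0.0), 1.0)
    if x <= 0.0031308:
        return 12.92 * x
    return 1.055 * x ** (1 / 2.4) - 0.055

def log_view(x, stops=10.0, mid_grey=0.18):
    """Map a wide range of scene light into [0, 1] logarithmically."""
    lo = mid_grey * 2 ** (-stops / 2)   # darkest representable scene value
    hi = mid_grey * 2 ** (stops / 2)    # brightest representable scene value
    x = min(max(x, lo), hi)
    return math.log2(x / lo) / math.log2(hi / lo)

# A bright highlight at 4.0 "scene units": the sRGB view clips it to white,
# while the log view still distinguishes it from an even brighter 8.0.
print(srgb_view(4.0) == srgb_view(8.0))  # True: both clip to 1.0
print(log_view(4.0) < log_view(8.0))     # True: the range is preserved
```

The point is the same one made above: a display-referred 0-1 view throws away everything above 1, while a wider view transform keeps those scene values distinct for grading.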
Another nice thing about OCIO is that it has tools to convert color grades into nodes; you can then use previous grades as 'looks' for similar shots, which will have a unifying effect on our entire pipeline.
I'm still experimenting here, so expect some refinement in this regard. Eventually this has to be converted into tutorial/documentation form (much like the timelapse stuff) so that other lighters can use it.
Here's a recent render in video with some temp sound:
Grain and Light Leaks
This is currently in R+D phase, but there are some tests to see.
A final touch to rendered frames is some subtle defects that dirty up the clean CG look (once enough samples have been rendered). There are quite a few, some easily provided via compositor nodes or Cycles itself, such as depth of field, motion blur, lens distortion, bloom, etc. A couple of 'extra' ones that involve another step of compositing or rendering are grain and light leaks.
Grain typically simulates the look of film (or video), and is a pattern of characteristic noise. Light leaks (and lens flares) are light bouncing around inside the lens of a camera.
For Wires for Empathy I want to use grain almost as a double exposure effect, grabbing media grains other than film/video. Our grain will be 'book grain' from printed pages. I'll be doing an old book photography run, but in the meantime I've been using some images from the New York Public Library maps project.
Light leaks can also be handled as images, but we could theoretically render them; however, it's unlikely that I'll do this, as they take too long to converge. Basically you have to model a lens (really!), shine some light into it sideways, and wait for the caustic bounces to resolve into something. I might try LuxRender, a free/open source renderer, since I believe it has a bidirectional mode that renders caustics faster than Cycles, and it might even do spectral rendering.
In other news Tim's been making really good headway on the design of the sky/clouds based on his scripts and blender smoke sims. Here's a current version:
What's really nice here (for me) is the circular smeared star-field at night, resembling long-exposure night sky photography.
Using the same (if you can believe it!) method I used to make the light shafts behind Gilga in the lighting example, Henri has been making some crowd timelapse experiments. His latest uses both positive and negative emission to simulate shadows:
Chris and Luciano have been working on animations of 'gore' and secondary animation in some scenes (actually, the gore animation in Luciano's shot has so far been made by Karen Webb, a local animator). The animations here could be considered 'spoilers', so... should I reveal them in the next post, or just provide links?
A quick non-final test of the surveillance camera assets for the time-lapse sequence. Not much is left to do on them:
Fix a few small glitches
Test Linking them into shots (I'm sure some small bugs will need to be fixed)
Test caching their animations (I'm sure some small bugs in the meshcacher will need to be fixed)
After that they are done; testing will mean populating their main shot, but they will then be usable in the handful of other shots they are visible in. Only the first camera in the sequence is seen close up; the rest will be at longer distances in the frame.
Don't be surprised if they look a bit different in the final film, as grading/color/lighting will be different in the shots - even the rendering style will have subtle blurs, glitches, and composited grain and effects.
I'm evaluating whether to have the paint flakes 'pop off' in the time-lapse (shown) or peel off (not shown) - I may go for a combination of both, at least for close objects.
Now I need to document how to use our tools to get results like this, so expect (once the dust has settled) documentation of timelapse toolbox and meshcacher - two python addons, the first of which was heavily used in making those cameras, and the second will be used for placing them in actual shots.
Early last summer I was faced with a problem: As we completed a set of pre-planned and conceptually regular animation shots, our original approach to handing out tasks to artists started to resemble a research project: Delve into the files and preview, identify a 'high priority' item, then break it down to bite-sized tasks that could be reasonably executed by one person.
But each of these bite-sized tasks depends on the others, so assignments often ran into order-of-operations problems - before you can rig something you need to model it, before you can light something you need to have it textured, etc. Since multiple assets link into multiple shots, and oftentimes you need to do the same 'type' of task on one shot, this gets really complex to figure out - and once you've done it, it's good to be able to store those relationships in a logical way for future reference.
Helga, our web-based production tool, makes a good attempt at fixing this. But it is hardcoded to a specific workflow, and tends to isolate individual shots and assets so they don't reflect their interdependence. Each shot and asset has a task tree that looks like this:
So the next step is using spreadsheets. This is what Caldera - the previous drome project - used to do, essentially supplementing Helga with Google Docs. We did quite a bit of that too, often using LibreOffice to make spreadsheets and sharing them using Google Docs - in the future we'd like to use an open source document collaboration platform, the likes of which Collabora and LibreOffice are implementing as we speak!
The problem with spreadsheets: everything is on a neat grid layout. That makes it easy to enter and read information, but it actually hides the structure of the data underneath. Our data consists of tasks that depend on each other in a specific order, in a kind of network where the links have a direction. In computer science, there's an obvious data structure to use for this: it's called a DAG, short for Directed Acyclic Graph. Basically, it's a network of nodes (a graph) where each link has a direction (from node, to node) and you cannot have a cycle - either directly or indirectly, you can't have an infinite loop of nodes (imagine if rendering depended on animation, but animation depended on rendering - you'd be stuck in a loop and could never finish the project!). Any proper representation of our task list should reside in such a graph, fitting the data to the data representation:
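To illustrate, here's a minimal sketch of task dependencies as a directed graph with cycle detection. The task names and structure are made up for illustration - this isn't our actual task list or addon code:

```python
def has_cycle(edges):
    """Detect a cycle in a directed graph given as {node: [dependencies]}."""
    WHITE, GREY, BLACK = 0, 1, 2          # unvisited / in progress / done
    state = {n: WHITE for n in edges}

    def visit(node):
        state[node] = GREY
        for dep in edges.get(node, []):
            if state.get(dep, WHITE) == GREY:
                return True               # back edge: we found a loop
            if state.get(dep, WHITE) == WHITE and dep in edges and visit(dep):
                return True
        state[node] = BLACK
        return False

    return any(state[n] == WHITE and visit(n) for n in edges)

# A valid chain of production tasks: render <- light <- texture <- model
tasks = {"render": ["light"], "light": ["texture"],
         "texture": ["model"], "model": []}
print(has_cycle(tasks))                                      # False: a proper DAG
print(has_cycle({"render": ["anim"], "anim": ["render"]}))   # True: stuck in a loop
```

The second call is exactly the render-depends-on-animation-depends-on-render trap described above.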
So, as luck would have it, Blender has a programmable DAG editor - the node editor. You've probably seen it in screenshots, or used it yourself, to make shaders in Cycles, materials and textures for the Blender Internal renderer, or to composite images and renders. In addition to these 'normal' uses of the node editor, there is a hidden feature: Blender allows you to create entirely new node network types and node types in Python. This has been used to make excellent addons, such as Sverchok for procedural modelling and Animation Nodes for procedural animation; it might even be the basis of all rigging, modelling and animation in future versions of Blender.
But for our needs, it's a convenient way of organizing the project! By creating a new node tree type in Python - dependency nodes - and a new node type - a task node - we can give each task some properties and some dependencies, such as:
owner - the name of the artist or coder working on the task
time - estimated time needed to complete the task in person-days
type - type of task: is it animation? rendering? texturing? etc.
reference - what shot or asset is the primary reference for the task (refers to shots and assets in the helga asset list)
Completion - Is it done or not?
Dependencies - These are links to other, similar tasks that must be completed before this one; other tasks might have this one as a dependency in similar fashion.
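As a rough sketch of that per-task data, here it is as a plain Python dataclass. The real addon stores these as node properties in Blender; the field names and example tasks below are illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    owner: str = ""             # artist or coder assigned to the task
    time: float = 1.0           # estimated time in person-days
    task_type: str = "generic"  # animation, rendering, texturing, ...
    reference: str = ""         # shot or asset in the helga asset list
    completed: bool = False
    dependencies: list = field(default_factory=list)  # upstream Task objects

# A tiny chain: the rig task depends on the modeling task.
model = Task("model camera", owner="Chris", time=2.0, task_type="modeling")
rig = Task("rig camera", time=1.5, task_type="rigging", dependencies=[model])
print(rig.dependencies[0].name)  # "model camera"
```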
Getting Data In and Out
The primary way of data entry is right in the node editor: Use Shift-A, or the handy panel on the left to create new nodes. Copy, paste, and duplication all work, as does the usual way of connecting nodes.
However, we recognized early on that we'd probably need some other connectivity. I created a JSON file format for tasks, and some simple operators to export tasks or import them from JSON files. This helped automate data entry from sources we already had available.
We also know that many people find spreadsheets far more user-friendly than nodes - and not everybody has to deal with the dependencies. So we made spreadsheet import and export - currently using .csv files (this could be improved a lot - we aren't even using the csv libraries in Python) - but it works fine for our current needs. You can export all or part of the graph to spreadsheets, edit those spreadsheets (or create new ones), and then import the changes back into the graph. This makes communicating with the rest of the team fairly simple.
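Here's a sketch of that spreadsheet round trip using Python's standard csv module (which, as noted, the addon doesn't actually use yet); the columns and task names are made up for illustration:

```python
import csv
import io

tasks = [
    {"name": "animate walk", "owner": "", "time": "3", "completed": "no"},
    {"name": "render shot 4", "owner": "Henri", "time": "1", "completed": "no"},
]

# Export: write the tasks to a csv-formatted string (a file on disk in practice).
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["name", "owner", "time", "completed"])
writer.writeheader()
writer.writerows(tasks)

# An animator edits the sheet (assigning themselves to the walk task),
# then we import the changes back.
edited = buf.getvalue().replace("animate walk,,", "animate walk,Karen,")
reimported = list(csv.DictReader(io.StringIO(edited)))
print(reimported[0]["owner"])  # "Karen"
```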
In order to make import and export easy, we have a 'Search and Select' function that lets you search for specific things; for instance, you can search for all character animation tasks and then export a spreadsheet just for those. This is handy for communicating with animators and animation supervisors. We can even modify the spreadsheet - assigning animators, or adjusting estimated time or completion - and then re-import it back into the graph.
Search is of course, also useful when working directly in the graph, without needing to import or export anything.
Lies, Damn Lies, And Statistics
Finally, we have stats and reports.
If nothing is selected, it adds up all the uncompleted task times, giving a total project estimate (in person/time units). Note that our current total is a bit inflated, as I tended to pad tasks out - especially tiny ones - so things that might take an hour or two get a whole day. This number also assumes only one person working, and no corners being cut.
If you have a selection it displays the time for the selected task/s and all its/their dependencies. Thus you only have to select the final render for a specific shot, and see how much time it takes to complete it.
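That dependency-aware estimate can be sketched as a simple graph traversal; the data and field names below are illustrative, not the addon's actual internals:

```python
def estimate(task, tasks):
    """Total remaining person-days for `task` and everything it depends on."""
    seen, total, stack = set(), 0.0, [task]
    while stack:
        name = stack.pop()
        if name in seen:
            continue                     # shared dependencies counted once
        seen.add(name)
        info = tasks[name]
        if not info["done"]:
            total += info["time"]        # completed work costs nothing
        stack.extend(info["deps"])
    return total

tasks = {
    "model":   {"time": 2.0, "done": True,  "deps": []},
    "texture": {"time": 1.0, "done": False, "deps": ["model"]},
    "light":   {"time": 2.0, "done": False, "deps": ["texture"]},
    "render":  {"time": 0.5, "done": False, "deps": ["light"]},
}
# Selecting only the final render still pulls in the whole upstream chain.
print(estimate("render", tasks))  # 3.5: model is already done, the rest remains
```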
While writing this code I wanted to get something 'up and running' really fast. At the same time, I feel like this could be more useful in a bigger system. So, to describe the data structure of an individual task, I decided to keep all of this in one really simple class/module, that could then be modified to hook into another library, or to change class types for a given project, etc. without having to touch the rest of the code. While the name is inspired by blender's sDNA and sRNA systems for data storage and access, this is in no way as elaborate or cool. But it still allows for really quick and nice additions. If you're looking to take this and integrate into your own system, look at the file taskdna.py first!
In the future I'd like to tie this in as a small piece of larger asset/task management systems. That means there needs to be an API to connect to various project databases, and the taskDNA also needs to be part of that API, allowing the system to define not just the tasks but the actual data structures.
A small part of this that might be cool is enabling image previews in the nodes, reflecting the current status of those tasks.
The current version is zipped and installable via blender's user preferences->addons panel, get it here: tasker version 0.2
You can also download this project along with a lot of other addons for tube from my gitlab: tube addons project
The files are located in the folder tasker/
Once downloaded either:
make a zip file of the entire tasker folder then install the zip as an addon in Blender
if you're technically inclined, make a symbolic link within the Blender addons directory directly to the tasker/ folder - that way you can git pull it and see the changes directly in blender without having to re-install.
From Politicians to Tasks!
In the import function I wanted to sort the nodes based on the dependencies, so they could be displayed in a nice layout in the editor. DAGs have well-known sorting functions, but I didn't want to implement my own. So I borrowed code originally written by Paul Tagliamonte for the Sunlight Foundation - it turns out that following the trail of money and influence on politicians is also a DAG, and Paul wrote some beautiful BSD-licensed Python for sorting and cycle detection. Thanks Paul!
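For reference, a dependency sort like that can be sketched with Kahn's algorithm; this is a stand-in for illustration, not Paul's actual BSD-licensed code:

```python
from collections import deque

def topo_sort(deps):
    """Order tasks so each appears after all of its dependencies.
    `deps` maps a task to the list of tasks it depends on."""
    indegree = {n: len(ds) for n, ds in deps.items()}
    dependents = {n: [] for n in deps}          # reverse edges
    for n, ds in deps.items():
        for d in ds:
            dependents[d].append(n)
    queue = deque(n for n, deg in indegree.items() if deg == 0)
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for m in dependents[n]:                 # releasing n unblocks its dependents
            indegree[m] -= 1
            if indegree[m] == 0:
                queue.append(m)
    if len(order) != len(deps):
        raise ValueError("cycle detected: not a DAG")
    return order

deps = {"model": [], "rig": ["model"], "animate": ["rig"],
        "light": ["model"], "render": ["animate", "light"]}
order = topo_sort(deps)
print(order.index("model") < order.index("rig") < order.index("animate"))  # True
```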
As a bonus image, here's the rendered frame produced via the screenshots above:
Finally thanks to everybody here - I hope this post satisfies those of you who, like me, are geeky about this stuff. To them, and everyone else I promise more new cool artwork in the next update!
* The current name is 'tasker' but I'm switching to orgnodes as a pun on emacs org mode.
Last summer, while visiting the Urchin crew during my summer trip, I spent a few days working on Tube in the nerdodrome. Bassam and I had a discussion about crack generation for the timelapse sequence. The environment for Wires for Empathy is mainly composed of concrete, and we thought it would be kind of cool to have some nice cracks growing on the walls. Generating cracks is not that hard; we can do it procedurally, with some great results. This tutorial explains how to do it fairly simply.
However, animating the cracks is a little bit harder, and we cannot rely on the procedural method any more. We searched for examples and papers from people who had already worked on this problem and found some great material. Unfortunately, most of the papers we found were based on heavy research and would require some serious C/C++ coding to get these tools inside Blender.
The python approach
Our first idea was to write an OSL shader that could generate cracks, with some growing parameters that artists could use to control the speed and shapes of the cracks. However, our rendering pipeline is entirely GPU based, so using OSL (which Cycles only supports on the CPU) would be a problem. I'm sure it is doable in OSL, but I don't know the language at all, and even though I'm willing to learn this shading language, I would have spent a lot of time learning it without being sure that I could achieve the result I wanted.
So I chose to use Python and create a script that manipulates Bezier curves to generate cracks in a procedural fashion. The generation algorithm is fairly simple and is based on a "branch" approach: complex cracks can be split into small, simple "branches" that can be easily generated and manipulated. To do so, we have a very basic recursive algorithm that creates a branch and determines the positions of its children on it. We then repeat the branch generation on each child, determine its own children, and so on. The following image shows how a complex crack can be separated into those branches.
The following image explains our approach to approximating the shape of a crack. We can see that a crack can be split into big segments (in blue) that can be split again into smaller segments (in red).
In Blender, we define a general direction for the branch and generate points along it, with small angle variations between each point. By default, every 5 points generated, we create a bigger angle variation (corresponding to the blue one in the image). We then convert these points to a Curve object.
When each point is generated, we generate a random value between 0 and 1 and check whether this number is smaller than the Children Probability defined by the user. If it is, we create a child branch at the point's position and store its relative position on the master branch. For example, if the tenth point of a branch composed of forty points has a child branch, we create this new branch with a 'relative position in the master branch' of 25%. This will be very useful when dealing with the animation.
As the algorithm for branch generation is recursive, we need a way to stop it. To do so, we simply decrease the child probability at every generation, so each generation of children is ten times less likely to have children than its parents.
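The branch generation described above can be sketched roughly like this. It's a simplified 2D stand-in for the actual script; the parameter names and values are illustrative:

```python
import math
import random

def make_branch(origin, angle, length, child_prob, depth=0, max_depth=4):
    """Return a branch as a dict of points plus recursively generated children."""
    points, children = [origin], []
    x, y = origin
    for i in range(1, length):
        # every 5th point gets a bigger kink (the "blue" segments),
        # the rest get small angle variations (the "red" segments)
        kink = 0.6 if i % 5 == 0 else 0.1
        angle += random.uniform(-kink, kink)
        x, y = x + math.cos(angle), y + math.sin(angle)
        points.append((x, y))
        # roll the dice for a child branch at this point
        if depth < max_depth and random.random() < child_prob:
            child = make_branch((x, y), angle + random.choice((-1, 1)) * 0.8,
                                length // 2, child_prob / 10,  # 10x less likely
                                depth + 1, max_depth)
            # the child stores its relative position along the parent branch
            child["relative_position"] = i / (length - 1)
            children.append(child)
    return {"points": points, "children": children}

random.seed(4)
crack = make_branch((0.0, 0.0), 0.0, 40, 0.3)
print(len(crack["points"]))  # 40 points on the master branch
```

Note how the recursion terminates exactly as described: each generation divides the child probability by ten, so grandchildren are rare and the tree stays finite.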
More displacement on cracks
The previous image shows how, even if we have generated cracks with big angle values, they still appear too smooth. We need to add more displacement along the cracks. To do so, we subdivide each curve several times, randomly select some points on it, and move them using the proportional editing tool with a random falloff. After some tests, we found that we get a much better result by repeating this operation several times with small values instead of doing it in one go.
The result is far more convincing with a little displacement along the cracks. However, this step currently produces a small but annoying bug where the roots of some child branches get disconnected from their parent, because they are displaced under the effect of the proportional editing tool. This bug is currently being fixed.
The animation system is fairly simple: we key the End Bevel Factor parameter to animate the growth of a branch. However, done this way, the growth is very linear and robotic. So we added a parameter to control the speed of growth and make it randomly go faster and slower during generation. We simply subdivide the F-Curve for the animation and change the positions of the keys on the Y axis, as illustrated in the following image:
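The idea of roughening a linear growth ramp can be sketched like this; it's a simplified stand-in for the actual F-Curve manipulation, and the key values are kept monotonic so the crack never shrinks:

```python
import random

def rough_growth(n_keys, jitter=0.1, seed=0):
    """Subdivide a linear 0..1 growth ramp and jitter the key values."""
    random.seed(seed)
    keys = [i / (n_keys - 1) for i in range(n_keys)]           # linear ramp
    noisy = [v + random.uniform(-jitter, jitter) for v in keys]
    noisy[0], noisy[-1] = 0.0, 1.0                             # pin the endpoints
    # clamp so growth never goes backwards
    for i in range(1, len(noisy)):
        noisy[i] = max(noisy[i], noisy[i - 1])
    return noisy

curve = rough_growth(8)
print(curve[0], curve[-1])                            # 0.0 1.0
print(all(b >= a for a, b in zip(curve, curve[1:])))  # True: monotonic growth
```

The uneven spacing between successive values is what produces the faster/slower growth that breaks the robotic feel.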
The result is visible in the following video:
The script is used to generate the cracks and their animation as you can see in the following video. We use custom attenuation and displacement on cracks to get a more believable result.
In order to make cracks interact with other objects and surfaces, we need to export them as a sequence of images with a transparent background. We can then plug this sequence into a material or a modifier in the scene and generate cracks easily. Below is an example of one of these images. To render it, we simply do an OpenGL render from the top view with an orthographic camera. We also applied a simple material with a black-and-white ColorRamp, so we can erode the edges of the cracks later on.
The image sequence is then used in the scene file, as a factor for a displacement modifier on a highly displaced plane and as a mix factor on the concrete material. As we have loaded an image sequence, we only have to set the right number of frames to use, and Blender will automatically refresh the image number to match the current frame - and we have our animated crack!
Todos and limitations
Currently, the tool is very limited and could be largely improved by adding new features, like crack generation snapped directly onto a 3D surface. This is doable: since we first generate a point cloud, we could snap each point onto the target surface as we generate it.
One of the biggest limitations is the fact that we don't have access to all the modifiers we want, as we manipulate curves. For example, we don't have access to the dynamic paint or boolean modifiers. Keep in mind that this is a tool to generate background and secondary animation in some timelapse sequences. Our needs are quite limited, and our plan is to generate a few different cracks that artists can then plug easily into their scenes, adding detail without spending days painting cracks manually.
The script can be downloaded here. For now, it is only a script, so you'll have to load the file in the Blender text editor and run it from there. Cracks Generator is added as a new tab in the tool shelf of the 3D View.