How I did “COSA RESTA” pt 2
This part 2 will concentrate more on the post-production side of things. As usual there were bucketloads of it, and it took a very long time to get everything done.
Rendering times were bloody huge. This is down to the output resolution I chose and the processing power of my computer, a 3 GHz quad core. When you’re rendering stuff out, unless you’re into 3D, what matters at the end of the day is the engine running under the bonnet. I wish I had an 8- or 16-core Xeon based system, but that wasn’t an option for now, so I had to stick with my limited setup.
Things started to get really heavy as soon as I chose to output footage in the theatrical 2.39 anamorphic format, meaning 2554×1080. Why this choice? Well, I had to make one in the first place. I had 1.78:1 footage from the shots using spherical lenses and 3.55:1 footage from the Kowa. Cropping the anamorphic footage horizontally all the way down to 1.78 made no sense at all; I knew I had to crop, period. The problem was how much cropping was involved, and how much I would have to scale up the 1.78 footage (losing resolution). In this scenario 2.39 seemed like the best choice, even if I had a little horizontal cropping to do as well. I know it’s a bugger, but that’s the way it is when you have two different sources in terms of size.
In my case the anamorphic-to-spherical ratio was not really helping with the decision either: it was pretty much 50-50 between the one and the other!
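The crop-and-scale arithmetic behind that choice can be sketched like this. Assumptions for illustration: the Kowa is a 2x anamorphic mounted on a 1920×1080 camera, so the desqueezed image works out to 3840×1080.

```python
# Rough crop/scale arithmetic for mixing 16:9 spherical and 2x-anamorphic
# sources into a 2554x1080 (~2.37:1) timeline. The 2x desqueeze factor
# is an assumption for illustration, not a statement about the actual setup.

TARGET_W, TARGET_H = 2554, 1080

# Spherical 1.78:1 footage: scale up to fill the target width,
# then crop the excess height (the upscale is where resolution is lost).
sph_w, sph_h = 1920, 1080
scale = TARGET_W / sph_w                 # ~1.33x upscale
scaled_h = round(sph_h * scale)          # ~1437 px tall after scaling
v_crop = scaled_h - TARGET_H             # ~357 px to trim vertically

# 2x anamorphic footage: desqueeze 1920x1080 to 3840x1080,
# then crop horizontally to the target width (no scaling needed).
ana_w = 1920 * 2                         # 3840 px wide once desqueezed
h_crop = ana_w - TARGET_W                # ~1286 px to trim horizontally

print(f"spherical: scale {scale:.2f}x, crop {v_crop} px vertically")
print(f"anamorphic: no scaling, crop {h_crop} px horizontally")
```

The point the numbers make: at 2.39 the anamorphic footage needs no scaling at all, and the spherical footage only a moderate upscale, which is why it beat both 1.78 and 3.55 as a compromise.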
One thing you have to bear in mind is that going from 1920×1080 to 2554×1080 will inevitably slow your computer down, eating into processing power: more pixels involved, plus crappy, heavily compressed AVCHD to decode, even if you go the hack route on the GH2 and get somewhat less compressed footage. One way to work around this would have been to transcode all your clips into uncompressed .mov (QuickTime) files, .avi files or .DPX sequences. It all translates into less strain on your CPU.
In this scenario, though, you have to have tons of free hard disk space. In my case I had a fresh 2 TB drive, but since another drive went bust (!) I had to quickly relocate 1 TB worth of precious files... and I can tell you that 1 TB goes in a breeze once you do all the conversion, especially if the song is 5:30 long and you shot a lot. So, in my case, it wasn’t going to be an option, and I had to put up with slow rendering times and no fluid playback unless I had pre-rendered the sequence. Really not funny.
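To give an idea of why 1 TB disappears so fast, here is a back-of-the-envelope estimate. The frame size and song length come from the article; the frame rate, bit depth and chroma subsampling are assumptions for the sake of the sum.

```python
# Back-of-the-envelope storage estimate for transcoding compressed AVCHD
# to uncompressed 8-bit 4:2:2 video (which averages 2 bytes per pixel).
# 24 fps and 8-bit 4:2:2 are assumed values, chosen only to illustrate.

width, height = 1920, 1080
bytes_per_pixel = 2            # 8-bit 4:2:2 averages 2 bytes/pixel
fps = 24
song_seconds = 5 * 60 + 30     # the track runs 5:30

bytes_per_frame = width * height * bytes_per_pixel   # ~4.1 MB per frame
bytes_per_second = bytes_per_frame * fps             # ~100 MB every second
one_full_take = bytes_per_second * song_seconds      # ~33 GB per full take

print(f"{bytes_per_second / 1e6:.0f} MB/s of footage")
print(f"{one_full_take / 1e9:.1f} GB per song-length take")
```

Shoot a few dozen takes’ worth of material at roughly 100 MB per second and a fresh terabyte really does go in a breeze.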
A quick word about why I did not go 1920×816. The straight answer is: no way, unless you render for the web. In that case, even though you lose vertical resolution (from 1080 to 816 pixels), you are squeezing your anamorphic footage down so it gains sharpness... and you will be cropping the footage from spherical lenses vertically with no resolution loss (re-framing, basically). But if you’re aiming at a projection, TV or cinema, then you don’t want to go this route.
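For the record, the numbers behind the 1920×816 route look roughly like this (again assuming a 2x anamorphic squeeze on a 1920×1080 camera; treat it as a sketch):

```python
# Why 1920x816 (~2.35:1) is kind to both sources -- a sketch,
# assuming a 2x anamorphic squeeze on a 1920x1080 sensor.

TARGET_W, TARGET_H = 1920, 816

# Spherical 16:9 footage: simply crop 1080 -> 816 vertically.
# No scaling at all, so no resolution loss -- pure re-framing,
# and you can slide the crop window up or down per shot.
v_crop = 1080 - TARGET_H                     # 264 px trimmed

# Anamorphic footage: desqueeze to 3840x1080, crop to ~2.35:1,
# then DOWNscale to 1920x816 -- downscaling is what "gains sharpness".
desq_w = 1920 * 2                            # 3840 px desqueezed
crop_w = round(1080 * TARGET_W / TARGET_H)   # ~2541 px kept after the crop
downscale = TARGET_W / crop_w                # ~0.76x: a downscale, not an upscale

print(f"spherical: crop {v_crop} px, scale 1.00x")
print(f"anamorphic: crop to {crop_w} px wide, scale {downscale:.2f}x")
```

Both sources end up at 1.0x or below, which is exactly why this framing only makes sense for web delivery, where the reduced 816-pixel height does not matter.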
All the VFX were done in AE. The initial sequence was shot during the day and then color corrected to fake night shots, adding some virtual rain, lightning and thunder. That’s an easy job; you can find many tutorials online without even looking that hard. It was much more complicated to make a convincing morph of the actor’s face into a skull. Aside from motion tracking, it was a matter of finding the right skull (the scarier the better) and carefully CC’ing it to match the rest of the scene. It took quite some time.
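Day-for-night grading itself boils down to a handful of simple operations: darken, cool the balance, crush the shadows. A minimal numpy sketch of the idea on a raw RGB array; the exact gains are made-up values for illustration, not the grade used in the video.

```python
import numpy as np

def day_for_night(rgb):
    """Crude day-for-night grade on a float RGB array in [0, 1]:
    darken overall, pull red down to shift toward blue, and raise
    gamma to crush the shadows. All gains are illustrative only."""
    out = rgb.astype(np.float64) * 0.35           # heavy overall darkening
    out *= np.array([0.7, 0.85, 1.1])             # cool the color balance
    out = out ** 1.2                              # gamma: sink the shadows
    return np.clip(out, 0.0, 1.0)

frame = np.full((4, 4, 3), 0.8)                   # stand-in "daylight" frame
night = day_for_night(frame)
```

A real grade in AE layers masks, glows and rain elements on top, but the color math underneath is this simple.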
Some fog was added as well, in all the shots, to give it more of a mysterious and dramatic look.
All the close-up shots involving actors required the skin to be smoothed out. Once upon a time there was the “pantyhose trick”, some Tiffen filters helped achieve this effect, or you could use soft-look lenses (Lomo makes some as well), but these days you can remove blemishes and imperfections digitally, working in AE or using some plug-ins. Whatever works best for you; both ways are all right.
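The digital version of the pantyhose trick is essentially “blend a blurred copy back over the original”. Here is a bare-bones numpy sketch of that idea; real skin plug-ins add edge preservation, masks and frequency separation on top, so take this only as the core principle.

```python
import numpy as np

def soften(channel, radius=2, amount=0.5):
    """Blend a box-blurred copy of a 2-D channel over the original.
    amount=0 returns the original, amount=1 the fully blurred copy.
    A stand-in for what a dedicated skin-smoothing plug-in does."""
    k = 2 * radius + 1
    padded = np.pad(channel, radius, mode="edge")
    h, w = channel.shape
    # crude box blur: average all shifted copies within the radius
    blurred = np.zeros((h, w), dtype=np.float64)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            blurred += padded[radius + dy : radius + dy + h,
                              radius + dx : radius + dx + w]
    blurred /= k * k
    return (1 - amount) * channel + amount * blurred
```

Run per channel with a small radius, confined by a mask to skin areas, and it takes the edge off blemishes without the whole frame going soft.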
I also made extensive use of digital set extension techniques to give the video a more dreamy feel. Extending the set helped me remove parts of the scene I did not like very much or absolutely had to get rid of (like a motorway running all the way behind the dancers in the wide valley). Tracking here had to be spot on, and it helps enormously if you plan set extensions in advance, so you can stick some tracking markers in the scene. In my case it was quite a nightmare to pull off, because there were no high-contrast areas, and where there were, the talent would cover them at times.
These days you can do pretty much anything in post. It would be dead difficult, if not impossible, to recover white pixels from over-exposed footage, but apart from that you can really do lots of stuff. What matters is how your final result looks and how much time you put into it. I can tell you that, as a ballpark, the time proportion is 10:1: for every extra minute you spend on set planning your post work ahead, you save 10 minutes of post-processing time. Possibly even more than that once you add render time on top. This is to say: make up your mind as soon as possible and have a clear idea of what your goal is.
I am not going into much detail about the glowing laser ball you can see in one shot of the video. I guess it would be more interesting to know how I got the suicide shot done. Well, there were three ways to simulate a suicide involving a man jumping off a dam. The first would have been green screening and compositing the footage with a real-world tripod shot: extra-strong fans needed, plus carefully crafted lighting to emulate daytime color temperature. The second would have been to take a mannequin, dress it like the protagonist, put a wig on its head and film it falling. Apart from making a convincing “double”, it wasn’t going to be easy to make the movement look realistic once it was thrown all the way down, unless the camera was a long way out. The third way is the one I opted for, and I guess it was the most economical as well: take HD footage of a man bungee jumping, make sure it’s the right perspective and that he’s dressed in a pretty similar fashion to your actor, and rotoscope it to death. Obviously, I had to digitally re-create the background.
I know the end result might look a bit fake, but I guess it would have done anyway, one way or the other... unless you have a hefty budget and a skilled team of VFX artists.
Another interesting effect was adding a teardrop to the model’s face. I did not think of it when shooting; I guess I was more concerned with getting usable low-light footage out of the cropped sensor of my GH2. But once in the editing room I thought to myself, “it would be really good to have a tear falling from those sad-looking eyes!”. So I made one in AE using CC Mr. Mercury. It did not take that long, perhaps about 4 hours of work to get the right speed, motion and reflections, but I was quite pleased with the result.
Again, if I had thought about the teardrop from the very beginning, it would have taken less than a second to shoot it for real!
Overall, I added some extra stabilization to a few shots in AE, just to smooth things out a little more. The new CS5.5 Warp Stabilizer plug-in is quite good and helps avoid the manual motion tracking I used to do in the past to fix shaky hand-held footage.
CC was a nightmare for all the scenes involving exteriors, in particular the ones on the dam, mostly because the weather was so unstable and the light would just change all the time. Color matching a scene shot in sunlight with another where the sun had gone was tricky, especially when the scenes had to be consecutive in the edit.
The way I do color matching is to split the AE window in two and bring both shots up on screen at the same time... then start tweaking RGB curves, eyeing one against the other.
I am well aware there are plug-ins that are supposed to color match automatically, but I tend to do it manually, as to my eye the results are far better.
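For anyone who wants a numeric sanity check alongside the eyeball method, comparing per-channel statistics of the two shots tells you which curve to nudge and in which direction. A small sketch; the two frames here are synthetic stand-ins, not frames from the video.

```python
import numpy as np

def channel_report(frame_a, frame_b):
    """Compare per-channel means of two RGB frames (floats in [0, 1]).
    A positive delta means frame_b needs that curve pulled up to match
    frame_a -- a rough aid, not a replacement for grading by eye."""
    deltas = {}
    for i, name in enumerate("RGB"):
        deltas[name] = float(frame_a[..., i].mean() - frame_b[..., i].mean())
    return deltas

sunny = np.full((4, 4, 3), [0.6, 0.5, 0.4])       # warm stand-in shot
overcast = np.full((4, 4, 3), [0.45, 0.5, 0.55])  # cool stand-in shot
print(channel_report(sunny, overcast))
```

Here the report says the overcast shot needs red lifted and blue pulled down to sit next to the sunny one, which is exactly the kind of nudge you would then dial in on the RGB curves.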