HEAR IT: Behind the Scenes

This article dives behind the scenes of HEAR IT: I Am for an Art – if you haven’t seen it, click here to watch it.

A few years ago we started SEE IT, a series of visual interviews with photographers. The concept is simple – you get a few questions from us and have to respond with your photographs and nothing else. It has been an incredibly insightful experience: connecting with visual storytellers from all around the world has not only been inspiring but has also allowed us to dive deeper behind the scenes of the creative process.

The last year has been a transformative time for our collective, as we crystallized our vision and goals for the future: building a bridge of communication and understanding between artists and programmers, visual storytellers and musicians, filmmakers and illustrators. One of the ideas we hold dear to our hearts is HEAR IT, the next entry in our series of series – a format that allows for a creative approach to interviewing bands and for establishing a deeper bond with artists who work primarily with sound.

The trick at the beginning is to figure out the most efficient way to convey your idea to others. One can carry around an idea that seems clearly formulated, but more often than not, attempts at transferring that idea from one mind to another are just that – a sea of attempts. Fortunately, the collective process demands involving others. One of my childhood friends happens to be a musician, currently involved in a project with his father, a well-known Polish saxophonist and jazz virtuoso. When I heard they were making an appearance at the Scottish-Polish mini festival in Scotland, it became clear this was a perfect opportunity for us to collaborate on a project.

And so, along with Finbarr, we grabbed a few Peli cases filled with equipment and flew to Aberdeen. It would be a lie to say we had a concrete plan formed in our heads. Naturally, we had some references in mind – be it Tiny Desk sessions or Vulfpeck – but we had little information about the venue itself or how much control we would have over lighting or the look of the stage. We were prepared for everything and nothing. Fortunately, in the weeks leading up to the event we had some discussions with the band, and they reassured us that the sound engineering would be on point. When we got there, the musicians were getting ready. As predicted, the amount of flexibility we were left with required us to feel our way into the performance, capturing it impromptu.

As guests started pouring in, the room filled with an atmosphere of sheer curiosity. There are no better conditions for a jazz performance than a room full of curious folk. Throughout the concert, one could sense being on the verge of the unknown, as it should be. But at one point we stepped over that edge. Mikołaj, the vocalist, began to revitalize Claes Oldenburg’s 1961 statement titled “I Am For An Art”. The words gently glided over the stage, carried by passion and deep devotion, blurring the outlines of the self, the moment and the meaning of it all.

As each moment pushed us deeper into the night, by 02:00 we found ourselves at Tangleha Artists Collective, 40 miles south of Aberdeen, in the very heart of a hippie community. While only a few of them had attended the gig, everyone was in a similar trance-like state. There was an open mic setup that welcomed anyone to join on stage and pour out whatever depths of their soul they wished. Naturally, our Polish musicians took this opportunity to venture even further into the darkness of the unknown.

Such a brilliant night, such wonderful souls.

It was at one point during the night that I realized the potential of the initial idea. The fantastically expressive hippies of the Tangleha community allowed me to distance myself from corporate pursuits. Being away from London’s rat race brought a transient moment of clarity. The carousel of life will never stop spinning; if one has the privilege to take a step back and enjoy the sight, that privilege quickly turns into a responsibility. What fun! And so we shall continue building bridges of self-expression with those outside of our comfort zone, creating meaningful connections and lasting friendships.
Friends at Tangleha, thank you. You revitalized my spirit with the wonderful chaos of that night.

We left Scotland the next day, but it won’t be long before we are back. There is something to be said for the warmth and hospitality we received – something one can easily forget living in London. My experiences of Wales and Scotland made me realize how much more there is to these islands. I urge you to visit places other than the capitals, as that way you will undoubtedly discover the raw spirit of the people. Aberdeen was nothing short of fascinating, and we left regretful that we couldn’t appreciate it in full.

For those interested in places that cultivate permaculture, do check out the Tangleha Artists Collective website.

Breakdown

A few days later I was reviewing the footage on the train from London to Cardiff, where I was stationed at the time, working as a digital imaging technician on some TV shows. Years of being a DIT naturally oriented me towards understanding the relationship between data signals and their translation into the imagery we all relate to. As that understanding morphed into a profession, the creative spirit within guided me towards less conventional ways of manipulating the image. With recent developments in machine learning bringing us tools such as DAIN or waifu2x, it would be a shame not to at least experiment with them. The following is a simple recollection of that process (beware: this might be incredibly boring to anyone who doesn’t care about upscaling, or who sees vectorisation as not much more than a fancy Instagram filter).


1. Preparation

The moment I described earlier as blurring the lines of meaning was the one we decided to polish and release. We shot it on two cameras, a BMPCC4K and a BMPCC6K, in BRAW with 8:1 compression at ISO 3200, using a Sigma 18-35mm f/1.8 for the wide and a Sigma 50-100mm f/1.8 for the close-up. Footage was brought into Resolve and synced with audio (we had to solve a 44.1 kHz vs. 48 kHz sample-rate mismatch first), then quickly edited in a multicam setup and refined a few times across a few days. Once the assembly was locked and approved, I did a primary grade and made sure no information was lost (yet) – once accepted, I revisited the grade, crushed the blacks a bit and added noise reduction before the grade was applied. This was then exported as a QuickTime DNxHD RGB 444 safety copy and as a separate PNG sequence that would serve as the main source for manipulation.
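If you run into the same sample-rate mismatch, the simplest route is to conform the audio to 48 kHz before syncing. A minimal sketch of that step using ffmpeg from Python – the file names are hypothetical, and it assumes ffmpeg is on your PATH:

import subprocess

# Resample the 44.1 kHz recording to 48 kHz so the NLE can sync it
# against the camera files without drift. File names are placeholders.
subprocess.run(
    [
        "ffmpeg",
        "-i", "performance_44k1.wav",  # source audio from the recorder
        "-ar", "48000",                # target sample rate
        "performance_48k.wav",
    ],
    check=True,
)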


2. SDfx App

This may be a personal fixation of mine, but ever since I saw this post on Reddit a few years ago I have been hooked on the idea of somewhat replicating the process behind the making of Linklater’s A Scanner Darkly (2006) or Waking Life (2001), but without the crazy amounts of rotoscope animators. I’ve experimented with Ricardo Corin’s software many times, and while it has a long way to go, I was incredibly excited to see he released a new Mac version of the software (you can check it out here).

Now, I’m not a huge fan of Apple products, but I recently happened to be in possession of a 2014 Mac Mini – which was enough to process twelve thousand frames within 17 hours or so. Unfortunately the software is still in its early stages, and it kept crashing whenever a frame had too little detail in the shadow parts of the image. This forced me to lower the detail threshold, resulting in a lowered resolution (around 720p instead of 1080p). For independent frames the vectorisation preset allows for SVG export; sadly this is not yet an option for batch processing, and while I entertained the idea of using some sort of macro keystroke emulator to achieve it, in the end I figured it would probably be a bit of an overkill. Instead I decided to look into another app I’d been meaning to experiment with.
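For the curious, the overkill I talked myself out of would have looked something like the sketch below – a GUI macro that triggers an SVG export for each frame by replaying keystrokes. The shortcut, the “next frame” key and the timing are pure assumptions; SDfx’s actual bindings may well differ:

import time
import pyautogui  # pip install pyautogui

FRAME_COUNT = 12000   # number of frames loaded in SDfx
EXPORT_DELAY = 2.0    # seconds to let each SVG finish writing (a guess)

for frame in range(FRAME_COUNT):
    pyautogui.hotkey("command", "e")  # hypothetical "Export SVG" shortcut
    time.sleep(EXPORT_DELAY)          # give the app time to write the file
    pyautogui.press("right")          # hypothetical "next frame" key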


3. waifu2x-caffe

Having spent some time in the wonderfully helpful communities of DAIN on Discord and Video2x on Telegram, I grew confident about using these tools. Being a digital imaging technician means one has to continually learn and expand one’s knowledge of video signals, light and optics, as well as file formats, encoding and colour science – all of it mixed with the practical knowledge of being on set and solidified through a hands-on approach. But let me tell you, there are people out there who are not DITs and yet whose knowledge transcends everything you thought you knew. These people are now part of numerous communities spread across Telegram, Reddit and Discord – and it turns out they are also some of the nicest people on the planet.

waifu2x-caffe improves the resolution of images using deep convolutional neural networks. It was put together by lltcggie on GitHub a few years ago and has since been used primarily by anime lovers to upscale their favourite rips from 720p to full HD or even UHD resolution – this, combined with DAIN’s frame interpolation, is what’s behind those strange high-quality videos on YouTube that turn the normal look of anime into eerily smooth motion at ridiculously high resolutions. It comes with a few pre-trained models; the one that worked best for me ended up being the Y model.

The final settings – the ones whose result you can see in the video – were:
Y Model / 3.0x upscale (1280×720 to 3840×2160) / De-noise Level 2 / 256 Batch Size with 2 Split / no TTA
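If you would rather script it than click through the GUI, waifu2x-caffe also ships a command-line build (waifu2x-caffe-cui) that can batch a whole folder of PNGs. A rough sketch of how those settings might map onto it – the flag names here are from my memory of the CLI help, so treat them as assumptions and verify against --help:

import subprocess

# Batch-upscale the PNG sequence with waifu2x-caffe's CLI build.
# Flag names below are assumptions from memory of waifu2x-caffe-cui's
# help text; confirm them against your build before running.
subprocess.run(
    [
        "waifu2x-caffe-cui",
        "-i", "frames_720p",    # input folder of PNG frames (hypothetical)
        "-o", "frames_2160p",   # output folder (hypothetical)
        "-m", "noise_scale",    # de-noise and upscale in one pass
        "-s", "3.0",            # 3.0x: 1280x720 -> 3840x2160
        "-n", "2",              # de-noise level 2
        "-b", "256",            # batch size
    ],
    check=True,
)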


4. EbSynth

While SDfx has presets that allow for smooth motion across nearby frames, the vector preset certainly doesn’t achieve that. This is where the difference from Linklater’s animations is clearly visible: the flicker is algorithmic, not man-made. But it has some beauty to it, especially in conjunction with the music itself. As we were simultaneously working on another project – Henge’s Mushroom One music video, which uses EbSynth as its core process – I thought it might be a good idea to use some frames processed with SDfx as keyframe input for EbSynth, at least at the beginning, before the music starts. EbSynth is a piece of software developed by Secret Weapons, a team of lovely people from Prague. We have been hooked on it since our first encounter with it in July 2019.
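For context, EbSynth wants a folder of video frames plus a sparse set of styled keyframes whose file names line up with the frames they stylise. A minimal sketch of how the SDfx output could be thinned into such a keys folder – the every-Nth spacing and the folder names are illustrative assumptions, not our exact setup:

import shutil
from pathlib import Path

SRC = Path("sdfx_frames")   # SDfx-processed frames (hypothetical folder)
KEYS = Path("keys")         # styled keyframes handed to EbSynth
KEYS.mkdir(exist_ok=True)

EVERY_NTH = 24  # keyframe density is a per-shot judgment call

# Copy every Nth stylised frame, keeping the original numbering so
# EbSynth can match each key against the corresponding video frame.
for i, frame in enumerate(sorted(SRC.glob("*.png"))):
    if i % EVERY_NTH == 0:
        shutil.copy(frame, KEYS / frame.name)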

And so 300 frames or so were processed to get that smooth beginning. There are still issues with it, and one can see artifacts produced by the software, but luckily this project is so abstract anyway that I decided to let it go. If any DITs are reading this – please do bear in mind that I am fully aware this is a destructive process, especially when working with PNGs. While PNG compression is lossless, each stage introduces artifacts unique to each piece of software, and we are limited to 8 bits of colour per channel – but then, this piece is all about degradation. Which is nice for a change; there is a sinister sense of satisfaction in breaking the rules.


5. Lyrics

A friend of mine and a member of the collective, Dora Schluttenhofer-Lees, has a brilliant body of work that features a lot of scanning. She has worked on numerous music videos in which her handwriting drives the aesthetics and overall tone. I reached out to her asking if she would be interested in collaborating on this piece, and without a moment’s thought she offered to help. She handwrote every word of the lyrics, scanned them and arranged them with timestamps on a timeline in Premiere Pro. Her compositions were similar to her previous work, such as BE WHAT I WANNA BE or I FIND IT HARD TO WAKE UP FROM THE GRAVE. Upon review I worried there was a bit too much focus on the words, which took away from the performance and from being in the moment – something that felt important for what the music speaks of. Recognizing this was my own fault, I decided to animate the lyrics in After Effects in a manner that could suggest the words were part of the stage. Upon importing the Premiere Pro project file, the prospect became terrifying: hundreds of keyframes across hundreds of properties, with no practical way to make changes once they were all in place. It was clear this was the perfect time to revisit expressions. I wrote a set of simple expressions to animate all of the independent layers, which you can find below.

Adobe After Effects 8.0 Keyframe Data

Transform	Scale
Expression Data
wordSize = thisLayer.name.split("/")[0]; // layer names follow a "size/word" convention
sizeCtrl = thisComp.layer("Controller").effect(wordSize)("Slider"); // target scale from the matching slider
scaleDur = (outPoint - inPoint) / 3;
// Grow from nothing to full size over the first third of the clip...
scaleIn = ease(time, inPoint, inPoint + scaleDur, [0, 0], [sizeCtrl, sizeCtrl]);
// ...then shrink back towards half size over the remaining two thirds.
// (Zero end points written as [0, 0] so both ease() value arguments
// share the same dimension as this 2-D Scale property.)
scaleOut = ease(time, outPoint - scaleDur*2, outPoint, [0, 0], [sizeCtrl/2, sizeCtrl/2]);
scaleIn - scaleOut;

End of Expression Data

Transform	Y Position
Expression Data
wordSize = thisLayer.name.split("/")[0];
seedRandom(index, true); // per-layer seed so every word gets its own random offsets
// Smaller size sliders allow a larger vertical drift.
maxMove = 5000 / (thisComp.layer("Controller").effect(wordSize)("Slider"));
startPos = thisComp.layer("Controller").transform.yPosition + random(-20, 20);
endPos = thisComp.layer("Controller").transform.yPosition + random(-maxMove, maxMove);
easeOut(time, inPoint, outPoint, startPos, endPos);

End of Expression Data

Transform	Rotation
Expression Data
wiggle(6, 3); // gentle jitter: 6 wiggles per second, +/-3 degrees

End of Expression Data

Transform	Opacity
Expression Data
fadeDur = (outPoint - inPoint) / 2;
fadeIn = easeIn(time, inPoint, inPoint + 0.1, 0, 100); // pop in almost instantly
fadeOut = ease(time, outPoint - fadeDur, outPoint, 0, 100); // dissolve over the second half
fadeIn - fadeOut;

End of Expression Data

Transform	X Position
Expression Data
seedRandom(index, true); // per-layer seed, same idea as in Y Position
wordSize = thisLayer.name.split("/")[0];
clipDur = timeToFrames(outPoint - inPoint, 12); // clip length in 12 fps frames
startPos = thisComp.layer("Controller").transform.xPosition + random(-30, 30);
// Longer clips and larger size sliders push the word further from the mic.
endPos = thisComp.layer("Controller").transform.xPosition - ((clipDur/2) * (thisComp.layer("Controller").effect(wordSize)("Slider") * 4) + random(400, 600));
easeOut(time, inPoint, outPoint, startPos, endPos);

End of Expression Data

End of Keyframe Data

Let’s briefly review the code. Each layer’s name begins with its size group followed by a slash (e.g. “big/…”), which the expressions read via split("/") to pick the matching Slider on the Controller null. There were three sizes: big, medium and small. clipDur is based on the duration of each layer; together with the size slider it determines how far a word travels away from the singer (though most layers were of similar length, it introduced some variance). Scale and opacity drive the initial moment of each word appearing near the mic. X position dictates how far a word travels, while Y position adds a bit of variance and randomness so that it doesn’t look like every word appears in line, one after another. Finally, a simple wiggle was added to the rotation property to make it less stiff and add a bit of vibrance.

Screenshot of the AEP file that shows the naming convention of layers that drive size of words through expressions.

6. Compositing

The last stage was to bring it all together, which also allowed for a secondary colour grade. I isolated skin tones to keep them more or less consistent across the piece, and shifted hue and saturation across the whole video to make it look like the lights were changing during the performance. A little bit of glow here and there; I added an intro and end credits, worked with the musicians on mastering the audio a little, and trimmed the end to get rid of unnecessary fluff. The last thing left to do was to posterize time: what makes the animation feel less artificial is playback at 12 frames per second. This kicks in after the introductory EbSynth sequence (a few keyframes, each output fading into the next), as the SDfx flickering takes over.
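The posterizing happened inside the comp, but the idea itself reduces to holding frames: at 24 fps playback, show each 12 fps frame twice. A toy sketch of doing the same to a rendered frame sequence, with hypothetical folder names:

import shutil
from pathlib import Path

SRC = Path("frames_24fps")       # hypothetical input sequence
DST = Path("frames_12fps_look")  # output: 24 fps files, 12 fps motion
DST.mkdir(exist_ok=True)

frames = sorted(SRC.glob("*.png"))

# Posterize time to 12 fps while keeping 24 fps playback:
# each pair of output frames repeats the same (even) source frame.
for i in range(len(frames)):
    held = frames[(i // 2) * 2]  # hold frames 0,0,2,2,4,4,...
    shutil.copy(held, DST / f"{i:06d}.png")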


Conclusion

Overall, working on this project felt intuitive and natural in terms of its development. It took around two days to fly to Aberdeen, shoot and fly back; roughly 48 hours of editing, reviewing and figuring out the pipeline, stretched across four months; and finally 24 hours of SDfx processing, 8 hours of EbSynth processing and 27 hours of upscaling with waifu2x-caffe. In the future we hope to collaborate with all kinds of bands, in a process somewhat analogous to how we feature photographers and their work in the SEE IT series. For now, if you’re interested in chatting more about the process behind this video, do drop us a line.
