Updated: Sep 8, 2022
It's been a while since my last blog post, but I hope it will be worth the wait.
In this fourth installment of my Pro Producer's Tips series, I will be covering what morphing is and how you can employ it to create unique sounds using a variety of different techniques, from the most basic to the more esoteric and unconventional, with a bit of lateral thinking.
As always, this isn't DAW-specific: I'm an Ableton user, so I'll explain the concepts in that workstation while trying to stay as generic as possible, because my goal is to give you inspiration and ideas beyond the specific software used. There will also be some freebies accompanying this topic for you to download at the end of the article, which includes plenty of audio examples, images and video tutorials. I try to schedule these blog posts once per month so that I can prepare the content properly and hopefully give back a rich amount of quality information.
Now I'm excited for this one so here we go!
1. What Is Morphing?
Generally speaking, morphing is a fluid, gradual and seamless transformation between two different shapes; in our specific context, at its simplest, it's about transforming and blending one sound into another.
And here comes the second part of this blog's title: sonic fingerprints and signatures. It's a metaphor to help you imagine the process: think of every sound as a complex and unique conglomerate of various pieces of information, and by retrieving them you can give that 'sonic imprint' to a totally different sound.
How can you create this? Well, here comes the fun and creative part: once you've grasped the idea, you can experiment and come up with lots and lots of solutions. I've prepared a good dozen examples, not exhaustive of course, but enough to get your head around this beautiful technique.
2. Starting From The Basics
I'm assuming you're well aware of what the dry/wet control is and does: it blends, by percentage, a dry, unprocessed version of a signal with its processed version (think of a reverbed signal, for example), usually in a linear fashion. If you think about it, this is one of the simplest forms of morphing.
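If you like thinking in code, the math behind a linear dry/wet control fits in a couple of lines. Here's a minimal Python sketch, working on single sample values just to illustrate the idea:

```python
# Linear dry/wet: a weighted sum of the unprocessed and processed signals.
# 'wet' goes from 0.0 (fully dry) to 1.0 (fully wet).
def dry_wet(dry_sample, wet_sample, wet):
    return (1.0 - wet) * dry_sample + wet * wet_sample

# At 50% both versions contribute equally.
halfway = dry_wet(1.0, 0.0, 0.5)
```

A real plugin applies this to every sample of the buffer, but the blend itself is exactly this weighted sum.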
When DJing, the crossfader, as the name suggests, blends one track into another, and sometimes the software lets you select different crossfading curves, not just linear ones.
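To give you an idea of what those curves change, here's a small Python sketch comparing the two most common crossfade laws; the exact curves in any given DJ software will differ, but the principle is the same:

```python
import math

# 'x' is the crossfader position: 0.0 = only track A, 1.0 = only track B.
def linear_gains(x):
    # Simple linear law: the amplitudes sum to 1, but perceived loudness
    # can dip in the middle of the fade.
    return 1.0 - x, x

def equal_power_gains(x):
    # Equal-power law: the squared (power) sum of the gains stays at 1,
    # so the blend feels more consistent in loudness across the fade.
    return math.cos(x * math.pi / 2.0), math.sin(x * math.pi / 2.0)
```

This is why an equal-power curve usually feels smoother on sustained material, while a sharp cut curve suits scratching.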
Inside a DAW, the crossfades that you apply at the start and end points of an audio clip let you create, over a specific amount of time, a transition from complete silence to full volume or vice versa. This is also a form of morphing, even if you've never thought of it that way (I like thinking of silence as a form of 'sound').
Clip Crossfade In & Out
3. Moving Forward
Now, imagine playing a synthesizer patch that gradually morphs and evolves into something else as you play it. That would be cool, right? When designing presets for my soundbanks I like using macros, as controlling various parameters of a patch all at the same time works wonders for transforming sounds and making them more compelling, fluid and less static. The beauty of sound and music, after all, is arguably that it evolves through time.
Let's take this concept further and think of using multiple instances of a synthesizer, each playing its own sound, and gradually blending between them. In this case you may want to try similar kinds of sounds for a more cohesive result, as you would get from using macros in a single synth, or totally different ones, perhaps processed together afterwards on a bus for sonic glue, if that's what you are after (sometimes contrast and drastic changes work great too).
For this scenario, instrument racks in Ableton Live are fantastic tools because you can create multiple parallel chains (as we saw in The Art Of Generative Music topic) and map them to different 'zones', addressed by the chain selector:
Simple Instruments Crossfade
In the image above you can see each chain occupying the entire range of 'zones' from 0 to 127, but the interesting part is that they have opposite crossfade curves: when the selector points to 0 only the first chain plays, when it points to 127 only the second plays, and every value in between is a mix of the two.
In this way you can populate each chain with a different synth sound and merge them together by automating the chain selector.
Note that we can also map 'key' ranges, much like in a sampler, to create a multitimbral patch that plays differently based on the register you are playing in, and 'velocity' ranges, to alter the sound in response to velocity changes.
I will provide deeper explanations and examples for these, but for now start thinking of how many ways you can turn one sound into another simply by linking some parameters together (in this case key range, velocity or chain values).
Try experimenting with delaying the start of one of the two instances to create an offset between them, so that you can get interesting interactions and mutual counterpoint.
There is an empty 'Instrument Morphing' rack for you to download at the end of the article, so you can try out this technique with your synths of choice, and an audio 'FX Morphing' rack where I had fun mapping multiple effects. In the latter, the chain selector, called 'Morph' in the rack, acts as a complex dry/wet knob, or rather as a sort of multieffect, since each value dials in a different combination of blended effects:
HydraTek - FX Morphing Rack
4. More Applications
- Sample Morphing
In the audio example above you can hear a robotic 16th-note snare pattern turned into a more varied and interesting timbral variation, for a more organic and 'live' feel. This is usually known as the 'round-robin' technique; some advanced samplers such as NI Battery 4 have it built in, and it's a great way to generate variety, usually in a programmed drum part, by triggering slightly different versions of the same sound.
In this case I've applied a process similar to what I explained earlier to generate the same effect with 'Sample Morphing'. How was it done?
After mapping three snare samples in the zone window inside Sampler (the same idea as the instrument rack seen before) with different crossfading curves, I used an LFO with a sample-and-hold curve to modulate the sample selector, with its rate at 1/16 to match the pattern and retrigger mode on. This way every single hit sounds slightly different, as the LFO picks a value from 0 to 127 (or a narrower range if you don't set the modulation amount to full) in the zone window.
LFO Modulating Sample Selector
Samples Mapping & Crossfade Curves in Blue
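In code terms, a sample-and-hold LFO in retrigger mode is just 'pick a new random value on every trigger'. Here's a toy Python sketch of the idea, with hard zone boundaries instead of Sampler's crossfaded ones, and made-up sample names:

```python
import random

SNARES = ["snare_A", "snare_B", "snare_C"]  # the three mapped samples

def pick_sample(rng):
    value = rng.randint(0, 127)         # new S&H value on each 16th-note trigger
    zone_width = 128 / len(SNARES)      # each sample owns a slice of 0-127
    return SNARES[min(int(value / zone_width), len(SNARES) - 1)]

rng = random.Random(7)                  # seeded only to keep the example repeatable
one_bar = [pick_sample(rng) for _ in range(16)]  # sixteen 16th-note hits
```

With crossfaded zones, values near a boundary would blend two samples instead of switching abruptly, which is what smooths out the round-robin effect in the rack.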
- Velocity Morphing
In the audio example above I'm running three instances of Ableton's Wavetable synth in parallel and mapping them to different velocity ranges, so that I can create an evolving soundscape based on how hard or soft I play.
The interesting fact here is that I can create a different timbre for each note of a chord by giving each of them a separate velocity value; at 0:12 note how the chord stays the same but the timbre changes.
Velocity Mapping Of Three Synth Instances
As you can see from the mapping above there are two main 'zones', below the value of 64 and above it, each blending two different patches; the only exception is the middle, where all three instances play at the same time.
- Vector Synthesis
Korg Wavestation VST
Vector synthesis is no novelty, as it came out in the late '80s with brands such as Sequential Circuits, Yamaha and Korg.
This is a wonderful application of the morphing concept when designing sounds: blend between four different sources, conceptually arranged as the extreme points of the X and Y axes, by dynamically crossfading with the famous joystick of the Korg Wavestation, as I did in the example, and you can instantly get evolving pads and atmospheres perfect for ambient pieces. You can move the joystick manually or with envelope generators and LFOs.
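If it helps to see the arithmetic, the joystick blend is essentially bilinear interpolation between four corner sources. Here's a rough Python sketch (the Wavestation's actual mixing law may differ):

```python
# (x, y) is the joystick position in [0, 1] x [0, 1]; the four sources
# sit at the corners of the square. The four gains always sum to 1.
def vector_gains(x, y):
    a = (1.0 - x) * (1.0 - y)  # source A, bottom-left
    b = x * (1.0 - y)          # source B, bottom-right
    c = (1.0 - x) * y          # source C, top-left
    d = x * y                  # source D, top-right
    return a, b, c, d
```

Pushing the stick fully into a corner isolates one source, parking it in the center blends all four equally, and an LFO wiggling x and y gives you exactly those evolving pads.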
- Spectral Morphing
Until now we have seen morphing applied mainly as a form of volume crossfading, but it can go much deeper than that. With the advance of modern technology and machine learning, research has been done on training algorithms and building models of various instruments to analyze what is perhaps the most complex of all musical parameters: timbre.
Google's Magenta team recently released an experiment called 'Tone Transfer' that lets you apply the analyzed model of a source signal to a destination signal. You can try it for yourself and have fun at this link: https://sites.research.google/tonetransfer.
Right now there are just a few models available and this is what I came up with:
I used some of my own samples and transformed them into their trumpet, flute and saxophone versions. Layering them together showed that the pitch and dynamics analysis was pretty accurate! It can be a nice way to enrich your tracks with the help of some artificial intelligence 😉.
I've found another pretty interesting free tool you can try, called SpectMorph (visit https://spectmorph.org/ ). Like Tone Transfer, it mainly offers models of acoustic instruments, but while Tone Transfer has just a few controls to customize the end result, this one can be pretty deep and much more customizable.
Yes, the GUI is not the most inviting and the raw sound is pretty dry and unflattering, but with a touch of nice reverb some great pads can be crafted.
Here I've used an LFO module to morph between the pan flute and bassoon models, with additional vibrato and unison to make them more exciting:
SpectMorph Software
Speaking of paid software, I suggest 'Morph' by Zynaptiq, which lets you morph between two sources in a matrix, much like using the joystick we saw in the vector synthesis tip. Under the hood, though, the processing applied is quite complex, and you can really hybridize sounds rather than just crossfade them, with the additional help of a formant shifter module, a reverb and five algorithms to choose from.
Zynaptiq - Morph
In the following example, in order to get a more cohesive and harmonically compelling result, I chose two loops in the same key, a lead line and a pad in C minor, then added the reverb inside the plugin to merge them further.
Different algorithms yield different results, and I suggest experimenting with very different sounds, like a voice with a percussion loop, to find the 'sweet spot' in the matrix where the two sonic worlds blend.
It can be a really powerful tool for creating unique sounds out of your samples and standing apart from others; personally, I also like using this technique for really weird bass design.
- Wavetable Synthesis
Where you have most probably already used a form of morphing during the synthesis process is inside wavetable synths: by automating the wavetable position you are actually 'morphing' from one state of oscillation into another, generating evolving sets of harmonics and tones.
In the following video, I've used the wavetable editor inside Serum to create four single wavetables, each different from the others, and then used the 'Morph - Spectral' function to generate a total of 256 single wavetables that are transitional states between them, so that by automating the wavetable position you get a seamless result. The preset is available to download at the end of the article ;)
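Under the hood, generating those in-between frames is a morph too. Here's a simplified Python sketch that interpolates sample by sample between two keyframe tables; Serum's 'Morph - Spectral' works on the harmonic spectra rather than the raw waveforms, but the idea of filling in transitional states is the same:

```python
import math

def morph_frames(table_a, table_b, n_frames):
    # Linearly interpolate between two single-cycle wavetables,
    # producing n_frames tables from pure A to pure B.
    frames = []
    for i in range(n_frames):
        t = i / (n_frames - 1)
        frames.append([(1.0 - t) * a + t * b for a, b in zip(table_a, table_b)])
    return frames

N = 64  # samples per cycle (kept small here; real tables are often 2048)
sine = [math.sin(2.0 * math.pi * i / N) for i in range(N)]
saw = [2.0 * i / N - 1.0 for i in range(N)]
frames = morph_frames(sine, saw, 256)  # 256 frames, as in the Serum preset
```

Sweeping the wavetable position then just means stepping through `frames` in order.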
- Morphing Filters
Filters are the bread and butter of sound sculpting, and what is most often overlooked is that you don't have to stick to just one filter type at a time: you can actually create parallel filter chains or select a morphing filter. If you use Ableton, the Auto Filter device has one available, as does pretty much any instrument with a filter module, such as Sampler. Modern synths also feature SVFs, or state variable filters, where you can change the shape and type of the filter dynamically to morph it into another, Serum being a great example with lots of filter options.
A cool free filter with morph function and other interesting features is Rift Filter Lite by Minimal Audio:
Minimal Audio - Rift Filter Lite
This kind of processing is particularly useful for creating weird, moving tones, and when using it for neuro bass design I suggest blending in a dry version to keep the low end intact, applying the filter movement to the upper harmonics for interest.
- Hybridizing Sounds
Another fundamental parameter that describes a sound is its main envelope, which, with a set of usually four main controls - Attack, Decay, Sustain, Release (ADSR) - tells us how the amplitude evolves over time.
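As a quick refresher on what those four controls compute, here's a minimal linear ADSR in Python, where `note_off` marks the moment the key is released (real envelopes usually use exponential segments rather than straight lines):

```python
def adsr(t, attack=0.01, decay=0.1, sustain=0.6, release=0.3, note_off=0.5):
    # Amplitude (0..1) of a linear ADSR envelope at time t, in seconds.
    # Assumes the key is held until note_off, which comes after attack + decay.
    if t < attack:
        return t / attack                                    # ramp up
    if t < attack + decay:
        return 1.0 - (1.0 - sustain) * (t - attack) / decay  # fall to sustain
    if t < note_off:
        return sustain                                       # hold
    if t < note_off + release:
        return sustain * (1.0 - (t - note_off) / release)    # fade out
    return 0.0
```

Sampling this function once per audio sample and multiplying it into the signal is all an amplitude envelope does.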
Early electroacoustic experiments of the '50s with so-called 'musique concrète' in France, with Pierre Schaeffer, were made using tape machines: cutting and slicing the attack of a sound, reversing it, or applying it to the sustain portion of another to create hybrid results, or what was called the 'objet sonore'.
That was painstaking craftwork and required a lot of time, but with today's digital workflows doing it is a breeze, and it can still surprise us because the technique is incredibly powerful.
In particular, removing the attack portion of a sound and listening directly to the remaining truncated part sounds very weird, because we can't easily tell what instrument is playing. That's because the attack portion of a sound is the most crucial for conveying lots of aural information, so applying a different attack to a sound is key to achieving hybrid, original, unheard sounds. Definitely worth a try.
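In its crudest digital form, that tape splice is a couple of list slices. A rough Python sketch, assuming both sounds are plain sample lists at the same sample rate (a real splice would also crossfade the seam to avoid clicks):

```python
SR = 44100  # assumed sample rate

def hybridize(attack_source, body_source, attack_ms=10.0):
    # Take the first attack_ms of one sound and the rest of the other,
    # e.g. the snap of a snare glued onto the tail of a piano note.
    n = int(attack_ms / 1000.0 * SR)
    return attack_source[:n] + body_source[n:]
```

The 10 ms default is only a starting guess; how much of the transient to steal is very much a tune-by-ear decision.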
In the example below I've layered some one-shots of mine to create some cinematic effects, great for crafting interesting and rich textures.
- Image Synthesis
There are fantastic programs that let you 'paint' sounds with the visual help of a spectrogram, like Iris 2 from iZotope, Photosounder and the free Coagula synth.
It can be incredibly fascinating, both for the interconnection of sound with painting and, mainly, for the results you can get: imagine isolating the upper harmonics of a guitar part and adding reverb and modulation to create lush pads and atmospheres.
As a sort of easter egg, to some of you this may look similar to sampling: finding sounds, or groups of frequencies in this case, and extracting them from a full sound or recording. It's indeed an art form in itself.
Melodyne is known mainly for vocal correction, but it's so deep that it's a great sound design tool in its own right for manipulating a sound's timing, pitch, formants and more.
Other programs perform a form of resynthesis by analyzing a sound's partials and reconstructing them with individual sinusoids that can be freely adjusted afterwards, like the free research program SPEAR (Sinusoidal Partial Editing Analysis and Resynthesis, as the acronym goes).
I suggest doing some investigation yourself, as lesser-known tools can set you apart from the competition and let you shine in your own way.
iZotope - Iris 2
- Groove Extraction
Continuing along the red thread of this topic, another form of morphing, in terms of extracting data from one sound and applying it to another, is groove extraction. Ableton Live has this feature built in, and I find it incredibly useful, particularly when using different samples that you want to merge so they sound tighter.
Groove is the combined information of timing and velocity: how 'off' the grid a sound may be, especially if syncopated or played live, and the dynamic interplay of the individual hits. Applying this information to another sound immediately makes it sound more like the first, creating similarities between the two.
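Conceptually, a groove file is just a list of per-step timing offsets and velocities. Here's a toy Python sketch of extraction and re-application on a 16th-note grid, with made-up hit times in beats:

```python
GRID = 0.25  # 16th notes, measured in beats

def extract_groove(hits):
    # Each hit is (time_in_beats, velocity); the groove stores how far
    # each hit sits from its nearest grid step, plus its velocity.
    groove = []
    for time, velocity in hits:
        step = round(time / GRID)
        groove.append((time - step * GRID, velocity))
    return groove

def apply_groove(quantized_times, groove):
    # Stamp the extracted offsets and velocities onto a rigid pattern.
    return [(t + off, vel) for t, (off, vel) in zip(quantized_times, groove)]

swung = [(0.00, 100), (0.30, 64), (0.50, 110), (0.80, 70)]  # loose, played hits
stiff = [0.00, 0.25, 0.50, 0.75]                            # robotic grid target
humanized = apply_groove(stiff, extract_groove(swung))
```

Live's groove engine adds amount controls, randomization and per-groove timing bases on top, but this push-and-pull of offsets and velocities is the core of it.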
- Vocoders
It may be obvious to many of you, but vocoders might be the first thing you think of when trying to transform one sound into another, with their ability to make your synth patches 'sing' with your voice, or to make a pad sound rhythmic by triggering it via a percussion loop, for example. I talked a bit about them in the last blog post, Getting Creative With Noise, and will certainly investigate them more in the future.
- Matching EQs
Matching EQ was born mainly as a referencing tool for finding and seeing the differences between the sonic balance of a mix and a reference track, but as always, using tools in the 'wrong' way can yield inspiring results, particularly in the creative sound design stage.
Fabfilter Pro Q3 and other modern equalizers have this cool function integrated:
Fabfilter Pro Q3 - EQ Match Function
In the following example you will hear two bass loops matched to each other, respectively: first A, then B with its EQ matched to A; then B, and finally A with its EQ matched to B.
Listen to how much closer they sound to each other after the process: B, when matched to A, sounds buzzier in the high end, while A, on the other hand, sounds muffled and filtered. This is another option when trying to 'imprint' the sonic characteristics of one sound onto another.
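For the curious, the core of a matching EQ boils down to 'measure both spectra, apply the ratio'. Here's a bare-bones Python sketch using a naive DFT; real matchers smooth the curve heavily and limit the gains, which I'm skipping here:

```python
import cmath, math

def magnitude_spectrum(signal):
    # Naive DFT magnitudes for the first half of the bins (slow but simple).
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) for k in range(n // 2)]

def match_gains(target_spec, reference_spec, eps=1e-9):
    # Per-bin gain that pushes the target's balance toward the reference.
    return [ref / max(tgt, eps) for tgt, ref in zip(target_spec, reference_spec)]

# A quiet sine matched to a loud one at the same frequency: the gain curve
# should ask for roughly a doubling at that bin.
n = 64
quiet = [0.5 * math.sin(2.0 * math.pi * t / n) for t in range(n)]
loud = [1.0 * math.sin(2.0 * math.pi * t / n) for t in range(n)]
gains = match_gains(magnitude_spectrum(quiet), magnitude_spectrum(loud))
```

A plugin like Pro-Q 3 then turns that gain curve into an actual EQ shape you can scale to taste.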
- Convolution Reverbs
This is the last technique I wanted to talk about today in this wide and fascinating topic that is sound morphing: convolution reverbs.
Originally born as a more natural-sounding alternative to algorithmic reverbs, which are pure code, convolution reverbs employ impulse responses recorded in real-world spaces to generate the profile that will be applied for reverberation.
But actually this impulse response can be literally anything, even a bass sample.
Since convolution is a fixed mathematical operation, the only variable you have to worry about when using it 'wrongly' is the source audio file you want to use as the profile, and that makes for crazy experiments.
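The operation itself is simple enough to write out. Here's a naive Python version of convolution, just to show that nothing in the math cares whether the impulse response is a cathedral or a bass sample (real plugins use FFT-based fast convolution for speed):

```python
def convolve(signal, ir):
    # Direct convolution: every input sample triggers a scaled,
    # delayed copy of the entire impulse response.
    out = [0.0] * (len(signal) + len(ir) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(ir):
            out[i + j] += s * h
    return out

# A two-sample 'melody' through a toy IR: a dry copy plus a quieter
# echo two samples later.
echoes = convolve([1.0, 0.5], [1.0, 0.0, 0.25])
```

Swap the toy IR for a bass loop and every transient of the input smears that whole loop across time, which is exactly where the drone-like results come from.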
In the following video showcase I've used a bass loop as the impulse response for another bass loop to turn it into a cinematic drone, and messed around with Live's Hybrid Reverb device to customize the result.
A little piece of advice: always put a limiter at the end of the chain when you are doing sound design, as crazy feedback and jumps in volume may damage your precious ears!
It was a long post this time, but if you have arrived here without skipping, I hope you enjoyed it as much as I did writing it. As always, my goal is to give you inspiration for your own projects, so if anything covered is not clear, feel free to comment down below or contact me, and do the same to suggest future blog posts.
Enough talking (or writing, I should say!), now go and have fun!
Happy Music Making! 😉
P.S.: Click the artwork below to download the free resources included in this blog post! ;)