Wednesday 7 October 2015

Jacob on Acoustic Guitar Miking Techniques

When it comes to recording any instrument, the choice of microphone, its placement and the recording environment will dramatically affect the final sound. Experimenting with these parameters opens up a range of pleasing tonal possibilities. Whether you are recording in a professional studio or at home in your bedroom, experimentation will let you explore sound manipulation, giving you more paths to the desired result. This blog entry will outline some simple microphone techniques for recording acoustic guitar. So, what are some of these techniques and how can we use them to create a well-balanced sound?

For beginner producers it is important to understand the different microphone types and where they are typically used. The two main types are dynamic and condenser mics, each with properties that make it more suitable in different situations. Condensers are the most common microphones found in studios as they have a greater frequency and transient response (the ability to reproduce the "speed" of an instrument or voice). They also require their own power source (phantom power) and generally have a louder output, but they are more easily overloaded by loud sounds. Dynamic microphones don't have a frequency and transient response as accurate as condensers; however, they are a lot less fragile and can handle louder sounds, making them better suited to live recordings.


When recording acoustic guitar in a studio environment we would typically record using two microphones positioned at different parts of the instrument (a stereo configuration). However, as many first-time bedroom producers will only have one microphone at their disposal, we will start by exploring a single microphone position. Using a condenser microphone with the polar pattern (which gives directional sensitivity information) set to cardioid, position the microphone around six inches from the guitar, between the sound hole and the top of the fretboard. The cardioid pattern concentrates the sensitivity towards the front of the mic, letting in only a small amount of room ambience from behind. Positioning the microphone closer to the sound hole makes it more sensitive to the bass frequencies, giving a warmer sound. Placing it more towards the top of the fretboard will give brighter tones and string noise. By moving it between these two positions, you can determine the best position for producing an all-round balanced tone. This will vary between guitars and microphones, so experimentation is crucial, but as a general rule the “sweet spot” is around the 12th fret.

When deciding how far from the guitar to place the mic it is important to consider the proximity effect. This rule describes an increase in bass frequencies as the microphone moves closer to the sound source, and is covered in another of our blogs by Ryan Tynman under the title “The Proximity Effect.” You don't want the sound to become boomy and boxy, which will lessen the high frequency detail, but at the same time you don't want to lose the warmth. The single condenser technique can also be used for double-tracking, where the performance is recorded twice and each take is panned hard left and right, giving a wider stereo image, as sketched below.
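
To make the double-tracking idea concrete, here is a minimal Python sketch (using numpy and the soundfile library) that combines two mono takes into one hard-panned stereo file. The file names are hypothetical placeholders; in a DAW you would do the same thing with the pan pots.

```python
# Double-tracking: two mono takes of the same part, panned hard left and
# right to widen the stereo image.
import numpy as np
import soundfile as sf

take1, sr = sf.read("take1.wav")  # first pass (mono), hypothetical file
take2, _ = sf.read("take2.wav")   # second pass (mono), hypothetical file

n = min(len(take1), len(take2))                   # trim to the shorter take
stereo = np.column_stack((take1[:n], take2[:n]))  # left = take 1, right = take 2

sf.write("double_tracked.wav", stereo, sr)
```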




Once you have mastered the single miking techniques you can move on to stereo miking, of which there are many variations. By using a pair of microphones we can more accurately re-create the stereo characteristics of a performance. For the purposes of this blog we will concentrate on three of the main ones: Blumlein, Spaced Pair and XY.


The Blumlein configuration was developed by Alan Blumlein in the early 1930s and requires two bi-directional (figure-of-8) microphones placed at a 90° angle. Bi-directional mics are equally sensitive at the front and the back, so with this technique a decent room ambience is important. The microphones are placed one on top of the other, as close as possible without touching. This technique produces a wide and clear sound that suits most acoustic guitar tracking. However, if the room acoustics are not desirable, it is probably best to go with a more directional technique. For the curious, a rough sketch of how the crossed pair encodes direction follows below.
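
A figure-8 mic's sensitivity follows the cosine of the angle to the source, and rotating the pair ±45° turns a source's position into a level difference between the two channels. This is an illustration of the principle only, not a simulation of any particular microphone:

```python
import numpy as np

def blumlein_gains(source_angle_deg: float):
    """Gain of the left and right figure-8 mics for a source at the given
    angle (0 deg = straight ahead). A negative gain means the source is
    picked up on the mic's rear lobe, out of phase."""
    theta = np.radians(source_angle_deg)
    left = np.cos(theta - np.radians(45))   # left mic aimed 45 deg to the left
    right = np.cos(theta + np.radians(45))  # right mic aimed 45 deg to the right
    return left, right

for angle in (0, 45, -45, 90):
    print(angle, blumlein_gains(angle))
# 0 deg -> equal gains (centre image); 45 deg -> fully left;
# 90 deg -> side pickup, with the two channels out of phase
```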


The Spaced-Pair technique, which involves two parallel directional mics with one pointing at the sound hole and the other at the fretboard (around 2.5 ft from each other), is a good example of a more directional configuration. When deciding how far from the guitar to place the mics it is useful to consider the 3:1 rule, which states that the microphones should be placed at least three times farther apart than they are from the sound source. So if the mics are 2.5 ft from each other, they should be placed no more than around 0.83 ft (about 10 inches) from the guitar; the calculation is sketched below. This guideline helps to minimise phasing issues. Just like with the single condenser technique, you can play around with these distances to create the desired tone. This technique is good for capturing fret detail like pull-offs and hammer-ons. Again, see Ryan's blog entry “The Three to One Rule” for more detail.
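
As a worked example of the 3:1 rule with the numbers above (the function name is just for illustration):

```python
def max_source_distance(mic_spacing_ft: float) -> float:
    """Maximum mic-to-source distance allowed by the 3:1 rule:
    spacing between the mics >= 3 x distance to the source."""
    return mic_spacing_ft / 3.0

print(max_source_distance(2.5))   # 0.8333... ft, i.e. roughly 10 inches
```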




The next technique is the XY configuration, which is similar to the Blumlein technique in that it involves two mics placed close to each other at a 90° angle. However, this technique uses microphones with a cardioid pattern, which reduces the amount of room ambience. The XY configuration is more convenient to set up than the Spaced-Pair technique, as less time is spent deciding the best positioning for the mics, although it creates a somewhat narrower stereo image. Like all stereo techniques, phasing can become an issue: if one microphone is slightly closer to the sound source than the other, some frequencies can cancel each other out, as the short sketch below demonstrates. This can easily be solved with the use of a stereo bar (shown in the photo below), which ensures the mics are exactly the same distance from the source.
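
To see why a small path-length difference matters, the following sketch sums a 1 kHz tone with a copy delayed by half its period (roughly 17 cm of extra travel at the speed of sound) and shows that the tone almost vanishes. Real signals are broadband, so in practice you get comb filtering rather than total silence, but the mechanism is the same.

```python
import numpy as np

sr = 48_000                        # sample rate in Hz
t = np.arange(sr) / sr             # one second of time
f = 1_000.0                        # test frequency in Hz

direct = np.sin(2 * np.pi * f * t)
delay = 1.0 / (2 * f)              # half-period delay (~0.5 ms, ~17 cm of air)
delayed = np.sin(2 * np.pi * f * (t - delay))

summed = direct + delayed
print(np.max(np.abs(summed)))      # near zero: the two copies cancel out
```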



When deciding which technique is for you, it is important to consider a few things: the tone of the guitar, the frequency response of the microphone(s), the environment in which you are recording and how much time you have available. Only by experimenting with the various techniques can you learn about the tonal properties of your instrument, microphone and recording environment. These microphone placements should be treated as guidelines; some will work better on different guitars, so sit down and play around with them. Your only limit is time and dedication!

Click Here to find out more about our Recording services!

Friday 18 September 2015

Jacob on Mastering Audio Techniques!

Mastering is the final process in post-production before the audio is sent off for duplication, and it is considered by many to be the most important step. Mastering engineer Howie Weinberg describes it as “Photoshop for audio”, an apt description, as the process uses a variety of tools to enhance the recording so it can be as good as it can be. It needs patience and a meticulous ear, as the tiniest of adjustments can greatly impact the final sound of the master. Moderation is key! Each engineer will have their own way of going about mastering; however, they will all use the same kinds of dynamic processing tools: EQ, compression, limiting, noise reduction and dithering, to name a few. Unlike mixing, which involves processing each individual track or instrument so that they fit together neatly in the stereo field, processing in the mastering stage is applied to the whole mix at once.

The first step involves transferring the final mix into the preferred DAW (Digital Audio Workstation). The industry standard, as always, is Pro Tools; however, there is a handful of software packages, such as SADiE, Pyramix and Sequoia, that have been written with mastering in mind. There aren't great differences between the DAWs; the choice mostly comes down to the engineer's preferences. After that, the “silence” between each track is edited. As CD players take a small amount of time to unmute after skipping to a track, it is vital that there is a gap (around 300 ms) of silence at the beginning so that the first transient of the song is not cut off. A quick sketch of this padding step follows below.
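
A minimal sketch of padding 300 ms of silence onto the front of a mix, assuming a hypothetical WAV file and using numpy with the soundfile library:

```python
import numpy as np
import soundfile as sf

audio, sr = sf.read("final_mix.wav")                 # hypothetical final mix
gap = np.zeros((int(0.3 * sr),) + audio.shape[1:])   # 300 ms of silence
sf.write("final_mix_padded.wav", np.concatenate([gap, audio]), sr)
```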


The next stage is the most important and time-consuming: the dynamic processing or “sweetening” of the audio to maximise the sound quality. This is done using the processing tools mentioned earlier. Equalization is applied in small amounts to balance the track. It is important that each frequency band is balanced with the rest so that they complement each other rather than fighting for space. Compression is used to add punch, warmth and loudness to the mix. In mastering, a multiband compressor is typically used; it does the same job as an ordinary compressor but allows the engineer to compress individual sections of the frequency spectrum, which makes it all the more accurate and efficient. Limiting allows the loudness to be pushed further without peaking or clipping. Compression and limiting must be used in moderation: an overly compressed track that has been pushed too far will be left with a small dynamic range, making it sound flat and dull. The final stage in the “sweetening” chain is dithering, which is simply the application of low-level random noise before the audio is truncated. Truncation describes the reduction in resolution of audio, e.g. from 24-bit to 16-bit. When this happens the sound quality is diminished, as the extra 8 bits are lost. Adding random noise helps mask the distortion produced by truncation, making the many short-term errors much less noticeable to the listener. This is a very powerful tool and should always be applied before truncation! A minimal example is sketched below.
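
As an illustration of the dithering step, here is a minimal sketch of TPDF (triangular) dither applied before truncating floating-point audio down to 16-bit. The choice of TPDF noise at ±1 LSB is a common convention, not something mandated by any particular mastering suite:

```python
import numpy as np

def dither_to_16bit(audio: np.ndarray) -> np.ndarray:
    """audio: float samples in [-1, 1]. Returns dithered int16 samples."""
    lsb = 1.0 / 32768.0                     # one 16-bit least significant bit
    rng = np.random.default_rng()
    # TPDF noise = sum of two uniform distributions, spanning +/- one LSB
    noise = (rng.uniform(-lsb / 2, lsb / 2, audio.shape)
             + rng.uniform(-lsb / 2, lsb / 2, audio.shape))
    dithered = np.clip(audio + noise, -1.0, 1.0)
    return np.round(dithered * 32767).astype(np.int16)
```

The random noise decorrelates the rounding error from the signal, which is exactly why the truncation distortion becomes less audible.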


Once all of these steps are complete and both parties are happy, the final master can be transferred to the delivery format (CD-ROM, half-inch reel tape, PCM 1630 U-matic tape, etc.). From this, the song or songs can then be duplicated.


The success of mastering relies heavily on the monitoring and listening environment in which it is carried out. The better the speakers, the more detail is heard and therefore the greater the accuracy. The same goes for the processing tools used: professional-standard EQ, compression and limiting help greatly in achieving a finer sound. This is why a professional recording studio set-up is desirable. Mastering can be thought of as the final push in highlighting what's great about a track or album. By carrying out each step correctly, it can turn a good piece of audio into something polished, professional-sounding and marketable!

Wednesday 16 September 2015

Audio Restoration and Enhancement

Audio restoration is a process used to reduce noise such as hiss, crackle and hum in audio recordings. Most modern audio restoration is performed in the digital realm on audio from an analogue source. The removal of unwanted sounds is usually performed in DAWs (Digital Audio Workstations); however, there are also standalone digital de-clickers, de-noisers and dialogue noise suppressors. Although there are automated solutions, audio restoration is still a time-consuming and complex process that often requires experienced audio engineers with a background in audio post-production.


Record restoration is a particular kind of audio restoration where the audio is converted from the analogue signal on a gramophone record (78, 45 or 33⅓ rpm). First the record is cleaned to remove surface noise that can be caused by dirt, then it is transcribed into the digital realm, where software is used to adjust equalization and volume. This form of audio restoration is, however, rarely used, and it gets harder the older the record is, due to the nature of the medium (playback causes gradual degradation of the recording).


Above is a before-and-after shot of an audio recording that has been processed with the popular de-noising/de-hissing tool iZotope RX. As you can see, the waveform has changed fairly drastically, which reflects the improvement in audio quality. iZotope RX is an extremely effective tool for removing static noise or hum. It works by analysing a sample of the noise selected by the user; it then 'learns' the noise and subtracts it from the entire clip, often reducing it to inaudible or negligible levels. The sketch below illustrates the basic idea.
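
For the curious, the following is a rough sketch of the classic spectral-subtraction idea: average the spectrum of the user-selected noise slice, then subtract that profile from every frame of the clip. iZotope's actual algorithm is proprietary and far more sophisticated; this is only the textbook version of the concept.

```python
import numpy as np
from scipy.signal import stft, istft

def spectral_subtract(audio, noise_sample, sr, nperseg=1024):
    """Subtract the average noise spectrum (learned from noise_sample)
    from every short-time frame of audio."""
    _, _, noise_spec = stft(noise_sample, fs=sr, nperseg=nperseg)
    noise_mag = np.abs(noise_spec).mean(axis=1, keepdims=True)  # noise profile

    _, _, spec = stft(audio, fs=sr, nperseg=nperseg)
    mag, phase = np.abs(spec), np.angle(spec)
    cleaned = np.maximum(mag - noise_mag, 0.0)   # subtract, floor at zero
    _, out = istft(cleaned * np.exp(1j * phase), fs=sr, nperseg=nperseg)
    return out
```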

Audio restoration is more than just adjusting volumes and de-noising files; it can have a huge impact, depending on what the audio is used for. We have worked on a number of projects where clients had potentially incriminating audio recordings but the audio quality was so poor that nothing could be understood. Audio restoration brings out the voices in spoken-word recordings so that they can be transcribed and used as evidence in court. Clean audio can make a real difference, whether as damning evidence or as proof of your innocence.

Automated Dialogue Replacement (ADR)

Automated dialogue replacement is a technique employed to enhance or alter an actor's dialogue after a scene has been shot. It involves the actor watching the original video and re-performing each line in time with their lip movements. This may be needed because of a lacklustre vocal performance, to bring about a subtle change in the actor's lines (such as a slight change in inflection), or simply because dialogue is needed for an animation or video game. One of the most common reasons for ADR is unwanted background noise, such as police sirens behind a medieval set. This can be reduced with shotgun microphones – highly directional microphones that reject off-axis sound – but even then some background noise can bleed through. “Apocalypse Now” supposedly had 90% of its audio replaced with ADR, due to very noisy sets.

Whilst some smaller, independent movies avoid ADR due to budgetary constraints, it can be indispensable for them, since they may not be able to lock down and control the sound on certain sets in the same manner as a big-budget blockbuster. The origins of ADR can be seen in the black-and-white days, when the pretty faces would perform in front of the camera while their less attractive counterparts were hidden behind a microphone.



Considered by many as a necessary evil of the movie industry, it is a widely held view that the dialogue produced in ADR will rarely come close to the performance in the original shot. The actor's voice is often lower pitched and quieter (since they tend not to be as hyped up as they are on set). This can be countered by digitally shifting the pitch of the voice, recording in a more authentic environment, or bringing the director into the studio to encourage the actor. There is also the challenge for the actor of getting their lines perfectly in sync with their lips. The typical method involves a line running across the screen, with three beeps (followed by a fourth, silent beep) counting down to the cue point; a toy sketch of generating such a cue follows below.
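
Generating that beep countdown is trivial with digital tools. In the sketch below, the beep length and pitch are arbitrary choices for illustration, not an industry standard:

```python
import numpy as np

sr = 48_000                                  # sample rate in Hz
beat = 1.0                                   # one second between beeps
beep_len, freq = 0.1, 880.0                  # 100 ms beep at 880 Hz (arbitrary)

t = np.arange(int(beep_len * sr)) / sr
beep = 0.5 * np.sin(2 * np.pi * freq * t)

cue = np.zeros(int(4 * beat * sr))           # three beeps plus one silent beat
for i in range(3):
    start = int(i * beat * sr)
    cue[start:start + len(beep)] = beep
# the actor's line begins where the fourth beep would fall: sample 3 * beat * sr
```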


The whole ADR process is nowhere near as mind-numbing as it was in the pre-digital era, when each individual segment of film that required ADR was put on a projector and cycled again and again until the dialogue was suitable. Thanks to digital editing, the recorded dialogue can be manipulated to better synchronise with the video, and some software can match the peaks of the newly recorded dialogue to the original, messy dialogue so that it fits perfectly with the video. A simple version of that alignment idea is sketched below.
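
A very simplified take on automatic alignment is to cross-correlate the studio take against the noisy on-set guide track and shift it by the best-matching lag. (Professional tools warp the timing continuously; this sketch only finds a single global offset.)

```python
import numpy as np

def align_adr(guide: np.ndarray, adr: np.ndarray) -> np.ndarray:
    """Shift the ADR take so it best lines up with the on-set guide track."""
    corr = np.correlate(guide, adr, mode="full")   # score every possible lag
    lag = int(np.argmax(corr)) - (len(adr) - 1)    # lag with the best match
    if lag >= 0:
        return np.concatenate([np.zeros(lag), adr])  # delay the ADR take
    return adr[-lag:]                                # or trim its start
```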


Since a large proportion of many films' soundscapes is built in post-production, is it even worth recording sound on set anymore? Absolutely: much of the original audio can easily be salvaged and spliced into the final mix, especially the background noise and general ambience, to help everything sound more natural. It also saves a huge amount of time and money, and most actors would rather keep their original performance than go into an ADR studio and re-record dialogue. All things considered, ADR is an invaluable tool that helps film makers mitigate audio issues which could otherwise ruin a movie.

For more about our ADR facilities as well as other voice recording services at OAPP click here.