In this tutorial, I’ll show you how to arrange and create a better transition from one song to the next by looking at:
- placing the songs in an appropriate order
- the use of silence between each song
- applying fades to the head or tail of a song
The fades, sequencing and spacing of an album are often carried out at the final stage of the creative process. An understanding of this process can push a good album into a great one.
It’s better to decide on the song order before the songs are mastered. This can help with consistency, ensuring the songs sit side by side sonically and dynamically.
In addition, a fresh perspective on the sequence could prove helpful.
The opening track sets the tone for the album. It doesn’t have to be a single or song predicted to be the album’s biggest hit, but it should be a song that clearly indicates the genre.
Seven Nation Army and Smells Like Teen Spirit were both hits and perfect tone setters.
The Body of the Album
The body of the album should have peaks and valleys, taking the listener to different places.
Some artists arrange the songs into sets of three or four, the end of each set often being a quieter, slower song. Working in sets can make sequencing less overwhelming, resulting in a better album sequence.
Ending an album doesn’t have to be like ending a live set, with the big hit often at the end. It should, however, like a concert, leave the listener wanting more.
The last song is often slower, longer and, for the most part, texturally thinner. The Tourist by Radiohead fits these criteria perfectly.
There are occasions where these factors are met, and in addition, the album’s biggest hit is placed at the end of the record. Hurt by Nine Inch Nails is an excellent example of this.
There are times where lyrics dictate an album sequence. The Rise and Fall of Ziggy Stardust and the Spiders from Mars was a concept album with the character committing suicide in the song Rock ‘n’ Roll Suicide.
Wisely, it was placed at the end of the album.
Odd One Out
There are occasions where the difficult decision of leaving a song off an album has to be made for the album to work stylistically.
The song, however, doesn’t have to go to waste. It could be used as a B-side to a single, saved for the next album, or added as a bonus track for a special release.
Trial and Error
Sometimes it is better not to over-intellectualise the process and simply use trial and error by shuffling the songs around.
Try playing the last 30 seconds of a song and the first few seconds of the next, taking time to critically listen to how they sit side by side with each other in terms of tempo, key, theme, and sonic character.
The silence between each song should be approached artistically. A set time for every transition should not be used.
Some Digital Audio Workstations (DAWs) have a default gap of two seconds. It is best to deselect this.
The length of the silence depends on what has come before and what comes next. Use the 30-second rule discussed above to help with the decision.
Fast Followed by Slow
A fast song followed by a slow song often requires a longer length of silence between the songs.
This helps to distance the two songs, making the slower song sound deliberate rather than dragging.
Slow Followed by Fast
A slow song followed by a fast song often requires a short length of silence. This often helps create a greater impact.
The genre may be a factor in the choice of length of silence. A classical album often works best with longer lengths of silence than, for example, a punk album.
If you’re spacing for vinyl or tape, artefacts like tape hiss or vinyl crackle can make the space feel shorter than listening digitally.
One listener may hear a song finish earlier than another depending on the environment in which they are listening. With that in mind, silence after a slow fade-out is often best kept short, because a person listening in a car, for example, will often hear the song finish sooner than somebody listening attentively in a quiet room.
Fades can be described as an increase or decrease in volume.
Fades are used for one of two reasons:
- to smooth the audio
- to avoid audio glitches
They are often created at the very start or end of the audio.
Two elements make a fade:
- Duration—the duration of the increase or decrease in volume
- Curve type—the variability of the rate that the volume increases or decreases over the course of a fade
Below are the four common curve types displayed as fade-ins.
Advice for each curve type:
- Linear is often the default curve in a DAW. Some DAWs automatically add an extremely short linear curve to the beginning of the audio. This is to prevent a click at the very start of the audio, which is a result of no silence at the very start of the track.
- Logarithmic is the most natural sounding of the four types.
- Exponential is good for suppressing reverb during fade-outs and helpful in creating a sudden impact during fade-ins.
- S-curves are a combination of two types of curve—exponential to the midpoint and logarithmic thereafter.
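As a rough sketch (these are my own illustrative formulas, not any particular DAW’s implementation), the four curve types can be written as gain functions of normalised time, where t runs from 0 to 1 over a fade-in:

```python
import math

def fade_gain(t, curve="linear"):
    """Gain (0..1) at normalised time t (0..1) for a fade-in.

    Illustrative approximations of the four common curve shapes.
    """
    if curve == "linear":
        return t
    if curve == "logarithmic":        # rises quickly, then levels off
        return math.log10(1 + 9 * t)  # 0 at t=0, 1 at t=1
    if curve == "exponential":        # quiet start, sudden rise at the end
        return t ** 3
    if curve == "s-curve":            # slow-fast-slow, symmetrical about t=0.5
        return 0.5 * (1 - math.cos(math.pi * t))
    raise ValueError(curve)

# A fade-out is simply the mirror image: 1 - fade_gain(t, curve)
```

Halving or doubling t in these functions makes it easy to hear why, for example, the exponential curve suits a sudden impact on a fade-in: it stays near silence for most of the fade.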
The duration of the fade is more subjective. A typical duration is two to five seconds. Alternatively, try being experimental: The Low Spark of High-Heeled Boys by Traffic has a fade-in lasting 1m 22s.
Fades can be applied during the mixing or mastering stage. Advantages during the mixing stage include:
- Reducing the time needed during mastering, which could lower the cost.
- Allowing for more creative use of fades. For example, you can fade in/out individual instruments at their own rate. This can make for a more interesting fade as opposed to all instruments boosting or attenuating in sync.
Advantages during the mastering stage:
- Helping to create a more suitable transition from one song to the next as you can hear how songs sit side by side.
- Allowing for more creative use of sequencing. For example, cross-fades between two songs.
The purpose of a cross-fade is to make a seamless transition.
You may use them for one of two reasons:
- transitioning from one song to another
- making two timbres sound like one
Cross-fades can use any of the four curve types.
An example using two linear curves:
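In code, that idea looks like this (a minimal sketch): the outgoing song’s gain ramps down while the incoming song’s ramps up, and with two linear curves the gains always sum to 1.

```python
def linear_crossfade(out_sample, in_sample, t):
    """Mix one sample from the outgoing song and one from the incoming
    song at normalised position t (0 at the start of the crossfade,
    1 at the end). With two linear curves the gains always sum to 1."""
    return out_sample * (1 - t) + in_sample * t

# Halfway through, each song contributes half of its level:
print(linear_crossfade(1.0, 1.0, 0.5))  # 1.0
```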
For song transitions, cross-fades work best when the end of the music is texturally thin: for example, a drone or sustained note bleeding into the following song.
Listen to the album Frances the Mute by The Mars Volta, which has a cross-fade on every song.
Share your ideas on the sequencing, fades and silence in your album. Involve the mastering engineer in the decisions you make. Get thoughts from family and friends.
A lot of hard work, time and money are often invested into producing an album. Putting the album together is the final creative step. Time and thought should be allocated to it. After all, it would be such a shame to slip up so close to the finishing line.
A session musician is someone who’s paid to add their instrumentation to a recording or live event on a casual basis. In terms of recording you’d previously go to a studio, often in a major city.
Whilst this still happens with the advent of the internet, plus affordable recording equipment, session musicians can now work from home and with anyone anywhere in the world.
This has opened up the industry but increased competition. In simple terms, there’s more work to go around, but more people than ever with whom to compete.
In this tutorial I’ll outline the practicalities you’ll need to consider to become an online session musician.
This almost goes without saying, but let’s be clear: this isn’t a job for beginners.
You either need to be completely amazing at one style or facet of playing, or a musical chameleon, the sonic equivalent of a Swiss Army knife.
Unless you’re a specialist, someone who can build a reputation as the go-to person for a particular kind of playing, you’ll have to be prepared to turn your hand to whatever’s thrown at you.
It’s tempting to take the scattergun approach, and offer or agree to everything that’s thrown your way, especially in the early days. However, you run the risk of either declining a lot of enquiries, or worse, delivering sub-standard work because a job didn’t really fit your skill set. Neither will enhance your reputation.
It’s better to be very clear from the start as to what you’re offering. If you’re not sure how to describe your skills, take a look at existing ads from other session players. If nothing else, you may spot a gap in the market!
If you’re working with clients all over the world, you’ll be keeping some strange hours. If your home or working life is flexible enough to accommodate this, that’s fine.
If, however, your family already don’t see enough of you, doing this won’t improve matters.
If you need to keep strict hours make that clear to clients at the outset. You could even set up a Google calendar, or similar, so that potential clients can see when you’re available.
It’s Not All About You
When working on any project you’re likely to form an idea of what’s best in terms of what you can provide. Happily some clients will want to be guided by you, especially if they’ve no experience of your instrument or are new to arranging music.
Other clients, however, will have a strong opinion as to what they want, which may well differ from your own. You’ll need to be comfortable with playing what they’re asking for instead of what you want.
Respectfully offer your opinion, by all means, as the client might not have considered what you’re proposing. If they’re adamant, play what’s required of you.
Always remember: the man who pays the piper calls the tune.
Speaking of money…
Earning a Living
If you’re exceptionally talented, hard-working, network like crazy and get some lucky breaks, you might find yourself earning comfortably, thanks to some high-profile bookings.
This is exceptional.
Assuming that you’re not a high-profile professional, you can typically earn a few hundred a month.
Supply and Demand
It’s also worth knowing that, like a lot of jobs in the music world, there are good times and bad. Demand can be somewhat seasonal. For example, you might be rushed off your feet in the run-up to Christmas but find January extremely quiet.
Bearing all of this in mind it’s better to view this kind of work as an additional revenue stream rather than your sole income.
When it comes to income, don’t forget to declare any such earnings on a tax return, even if you think you’re not earning much. If you’re new to this, go online, find the local tax office and contact them for advice.
Everybody Wants Some
A good way of getting work and building a reputation is through a web-based agency or marketplace. This allows you to set up a shop front to advertise your services, as well as bringing you enquiries. A typical example, and one that I use, is fiverr.com.
If you go down this route, however, you need to factor a few things into your costs.
Such sites charge for helping you find work; fees are typically 10-20% per transaction.
Unsurprisingly, some of the biggest sites are US-based and work in dollars. As exchange rates will fluctuate, earnings won’t be fixed or guaranteed.
Some sites will pay you via companies such as PayPal, so you’ll need to be registered accordingly. They in turn will take their own commission for handling your money.
Factoring this in, plus commission fees, means that you’ll receive around 60% of what the buyer originally paid. You’ll need to set your prices to offset at least some of this whilst still remaining competitive.
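As a rough worked example (every rate below is an illustrative assumption, not any site’s published fee schedule), here’s how a sale can shrink on its way to you:

```python
def net_earnings(price, marketplace_fee=0.20, processor_fee=0.029,
                 processor_fixed=0.30, fx_fee=0.03):
    """Estimate what reaches your bank from a single sale.

    All fee rates here are made-up, illustrative numbers; check the
    actual rates for whichever marketplace and processor you use.
    """
    after_marketplace = price * (1 - marketplace_fee)
    after_processor = after_marketplace * (1 - processor_fee) - processor_fixed
    return after_processor * (1 - fx_fee)

print(round(net_earnings(100), 2))  # about 75 of an original 100
```

With a bigger marketplace cut or heavier currency-conversion costs at each step, the figure can easily fall towards the 60% mentioned above.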
Waiting to Get Paid
As with a lot of financial transactions, buyers are allowed a cooling off period after a purchase, in case they change their minds. This means that you’ll probably have to wait 7-14 days before payment is released to you.
Online session work can become a way of earning some extra money, provided you have the time and the skills to pursue it. In summation, bear in mind the following:
- Be good at what you do
- Be clear as to what you’re offering
- Regular hours can be a struggle
- Work isn’t guaranteed
- Play what’s required
- Declare earnings, whatever they are
- Factor fees into your prices
In the next tutorial, I’ll show you what you’ll need to get started, such as equipment.
In this tutorial you’ll learn how to synchronize interview audio and video within Final Cut Pro X.
Usually when you record dual-system sound you end up with a situation like this:
- Many little video files: you have several, possibly many, video clips. These videos have the audio recorded from an on-camera shotgun mic or the camera’s built-in mics. The levels are low and so is the sound quality.
- One (or a few) big audio files: you have a lavalier or a boom mic that goes directly to an external recorder. This recording sounds a little bit better.
Even on a single shot, you might have two (or more) video clips if your camera has a clip limit. Some DSLRs have a 10-12 minute clip length limit (a 30 minute clip limit in Europe). Or maybe you stopped and started a recording during the interview just to help yourself organize. So it’s very common that you’ll end up with multiple video clips and one long external audio recording.
PluralEyes is a little single-purpose program that is really good at
syncing audio. It’s especially good for complicated shoots where you
have multiple cameras and audio sources. Maybe you’re recording a live
performance with three or four cameras and a line from the sound
board, and then everybody has their own microphones on top of their cameras.
PluralEyes can synchronize between all that video and audio very quickly.
First I’m going to show you how to use PluralEyes. Then I’ll recommend a few ways to do this within Final Cut that make it easier and simpler, without having to use PluralEyes.
How to Synchronize Audio and Video with PluralEyes
The first way we’re going to synchronize audio is to send the project from Final Cut to PluralEyes, and back again. This is called a “round trip” between the applications.
Round Tripping Between Final Cut and PluralEyes
1. Create the Timeline in Final Cut Pro
Select your video
clips and place them on the timeline. Then place your audio on the timeline underneath the video. Choose File > Export XML and save the file.
2. Import to PluralEyes and Process
In PluralEyes, choose File > New Project from
Final Cut Pro, and select the file you just saved. The timeline you created in Final Cut will load into PluralEyes. Go to Sync > Synchronize to start the process. For short videos with clear audio, it sometimes takes less than a minute.
3. Export and Return to Final Cut Pro
Now export the corrected timeline back to Final Cut Pro: File > Export.
Select Final Cut Pro X XML, Create multicam clips, and Open Event/project automatically in Final Cut Pro. You can deselect Create an Event with audio content replaced in video clips.
We selected our media assets and processed them in PluralEyes. It’s a great start: you now have the video and audio synced within a multi-cam clip. PluralEyes also produces a new project where your video and audio are synced. You’re ahead of the game!
How to Synchronize Audio and Video in Final Cut Pro
PluralEyes makes short work of interviews like this. However, going to another program can be overkill on smaller, simpler projects. It’s also another expense, on top of what you’ve already paid for your Final Cut license. Luckily, you can sync audio and video in Final Cut without using PluralEyes.
The Easy Way
The first way to sync is simple: select your video clips and your audio clip in your Library, then right-click and choose Synchronize Clips. In the modal box that pops up, give the new clip a name, tick Use audio for synchronization, and click OK.
One thing to keep in mind with this method is that it retains the original on-camera audio. If you go to the Inspector and select the new synchronized interview clip, you’ll see that Final Cut keeps the old audio from the camera, in addition to the good audio from the external recorder. Uncheck the camera audio to make sure you don’t hear the bad audio anymore.
The Multicam Clip Way
The second way that we can synchronize interview audio
and video within Final Cut is to create a new multicam clip. Final Cut
will actually synchronize the video and audio within the multi-cam clip automatically.
As before, select your audio and video files. Right-click, and select New multicam clip. Call this something like
“synchronized interview multi-cam clip,” and make sure that Use
audio for synchronization is selected. Leave everything else at its default settings.
Now if you go to the resulting multi-cam clip you’ll see that in Angle 1 you
have your original two video clips, and then your one long external audio
clip, and they’re synced perfectly.
The Manual Way
Those are the two automated methods of using Final Cut to
sync interviews, and they work 90 percent of the time. When they don’t it can be really frustrating. You can end up spending a lot of time trying to force the
automation to work when it would be really much easier to just do it manually.
Locate the audio and video clips to sync. Right-click your first video clip and select Open in Timeline. The video and audio within the clip are added to the timeline as separate elements.
Now drag the external audio track below them. Move the audio track as close as you can to what looks like sync visually; you’ll then fine-tune the sync manually, frame by frame, left and right.
In our example the hand clap is the sync point. You’ll notice there are other peaks that look pretty similar to each other, so visually find that peak and move the clip as close as you can to where you think it’s synced. You don’t have to get it super close, just close enough.
Start playback (L), and use your comma (,) and period (.) keys to nudge the external audio clip left or right a frame at a time until it’s in sync. You’ll know you’re close when the audio sounds like it has a phase effect on it: a little tinny, like a space alien. You’ll know it when you hear it.
Now that you have everything in sync, don’t delete the original audio. You may need to reference it, and in any case you don’t want to delete data while editing. You can, however, disable it with the V key. If you go back and drag this clip into your project, you’ll have the good audio married to the video.
Sync Multiple Clips Manually
If you have multiple video clips, one really easy way to sync them manually, step by step, is to go back into Open in Timeline. When you reach the end of the first video clip, just Blade (B) the leftover segment of the interview audio from the external recorder. Cut it, delete the gap, and then go to the second clip. Open that in the timeline, paste, and start moving the audio until the peaks are in sync.
And now you are all set: your original video clips
now have the good audio synced to them. In all of the methods here you didn’t generate any new clips, which is really great for organization. Anywhere that these clips go now, the synced
audio will follow them.
I hope you’ve learned how to synchronize interview audio
in Final Cut Pro X. See you next time!
A couple of years ago I started to create a framework for music creation. I was doing it unconsciously, as part of my practice, whenever I started new tracks. I’ve compiled these tips and strategies here. You can use them for inspiration, for imitating other tracks, and for learning new elements from music.
Here’s the complete workflow that I work through.
During this process there are three stages:
- Breaking down
- Putting them together
There are nine steps for reverse engineering:
- Tempo and time signature
- Musical scale
- Chord progression
- Sound design
Listen to music with active thinking. If you don’t hear the details then train your ears.
With self-development you’ll come to hear all the details, but you need time for this. One or two days is not enough; think long term.
I started with basic musical hearing and developed it over a number of years. If you practise consciously, you can shorten this time dramatically.
Usually I search for a high quality music video on YouTube, then set it to repeat for analysing and MIDI programming. You can also download these videos with some websites.
On Beatport you can usually find the tempo and musical scale of the music.
Mixed in Key and Rapid Evolution
These are programs which show tempo and scale.
For a long time I used Rapid Evolution, then last December I bought the latest Mixed in Key. I highly recommend it: it’s fast, precise, and easy to use.
Piano Type Instrument
For MIDI programming I use a piano sound from any plugin or sampler from my DAW (Ableton Sampler or Simpler).
You don’t need to play the piano. It’s enough to record a four-note melody on a MIDI keyboard.
This can take five to 50 minutes depending on the given music, your skills and musical intelligence.
If you don’t have a MIDI keyboard, buy at least a two-octave version. This is enough for melodies, basses and short chord progressions.
Complete MIDI files
Search the Internet for free or paid MIDI files. You can quickly make an inspired track, bootleg or remix with these.
1. Tempo and Time Signature
You can find the tempos in this table:

| BPM | Tempo marking | Meaning |
| --- | --- | --- |
| 76–108 BPM | Andante | Walking speed |
| 168–176 BPM | Vivace | Very fast |
The most common time signature is 4/4. Most music uses this.
2. Musical Scale
These scales are the most popular in western music:
- Major (7 notes), eg. C-major
- Minor (7 notes), eg. A-minor
- Major pentatonic (5 notes)
- Minor pentatonic (5 notes)
C major and A minor are the simplest scales: you only need the white keys on a keyboard.
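As a rough sketch of why, you can build any scale by stacking whole and half steps (semitones) from the root; C major and A natural minor come out as the seven natural, white-key notes:

```python
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

# Whole/half-step patterns (in semitones) for the two scale types
MAJOR = [2, 2, 1, 2, 2, 2, 1]
NATURAL_MINOR = [2, 1, 2, 2, 1, 2, 2]

def scale(root, intervals):
    """Build a scale by stacking semitone steps from the root note."""
    idx = NOTES.index(root)
    degrees = [root]
    for step in intervals:
        idx = (idx + step) % 12
        degrees.append(NOTES[idx])
    return degrees[:-1]  # drop the final step, which lands back on the root

print(scale("C", MAJOR))          # ['C', 'D', 'E', 'F', 'G', 'A', 'B']
print(scale("A", NATURAL_MINOR))  # ['A', 'B', 'C', 'D', 'E', 'F', 'G']
```

The pentatonic scales mentioned above can be built the same way, using five-step interval patterns.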
3. Chord Sequence
Often chords are written for these bar-lengths:
- 4 Bars
- 8 Bars
- 16 Bars
In a lot of music there are only 4-8 chords, so you don’t need to work too hard to figure them out.
Types of popular harmonies and chords:
- Dichord (2 notes)
- Trichord (3 notes)
- Tetrachord (4 notes)
Examples of most used chord sequences:
| Chord sequence | Scale degrees |
| --- | --- |
| C – G – C – G | I – V – I – V |
| C – G – F – G | I – V – IV – V |
| D – C – G | V – IV – I |
| G – C – D – C | I – IV – V – IV |
| G – Em – C – D | I – vi – IV – V |
| Em – D – C – B | vi – V – IV – III |
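The roman numerals refer to chords built on scale degrees. As a minimal sketch in C major, a triad stacks every other note of the scale on top of a degree:

```python
C_MAJOR = ["C", "D", "E", "F", "G", "A", "B"]

def triad(degree):
    """Triad on a 1-based scale degree of C major: root, third, fifth."""
    i = degree - 1
    return [C_MAJOR[i % 7], C_MAJOR[(i + 2) % 7], C_MAJOR[(i + 4) % 7]]

# The I - V - IV - V progression, spelled out in C major:
print([triad(d) for d in (1, 5, 4, 5)])
# [['C', 'E', 'G'], ['G', 'B', 'D'], ['F', 'A', 'C'], ['G', 'B', 'D']]
```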
Usually the bass plays the root note of each chord, sometimes with a little variation.
The first and fifth degree (I and V) are the most important notes when talking about stability in a scale.
- C-major: C and G
- A-minor: A and E
- Mood: aggressive, passive, neutral, light, playful, slow, fast
- Instrument selection
- Keeping interest
- Question-Answer formula
7. Sound Design
- Sound type
I recommend listening for these elements during mixing:
- Volume balance
- Using a reference track
Answer your own questions regarding mastering:
- Reference track
How to Do It
To analyse, I set the track to repeat. I actively listen to instruments to identify them. The technical part is the sound design and mixing.
If I don’t understand, or don’t hear, something then I’ll repeat the listening process. And listen to it again.
With each active listening I find more and more detail.
You should always start with these things to set the base of the music and mood.
- Tempo and time signature
- Musical scale
- Chord progression
C) Quick Changes
The goal is to get results quickly, without obstacles. The solution is to loop the music and, while it plays, record the chords and bass. Then refine these step by step, changing one or two semitones at a time. By repeating these steps you can reach the foundation of the track.
At first it may seem difficult but it will come with practice. Just start it and do the work. You’re allowed to make mistakes. Often, the best music is based on small errors; happy accidents.
Fine Tuning the Composition
Five to ten minutes is enough for this, and it’s useful to tune up or down a couple of semitones if needed. You can also change the rhythm.
Inspiration, Copying or Theft
“A good composer does not imitate; he steals.” – Igor Stravinsky
Ideas can’t be copyrighted, and this applies to musical ideas as much as anything else.
“Immature artists copy, great artists steal.” – William Faulkner
The ideal solution is a middle way: you can take ideas, but you should work with them a little, changing and modifying them. If you take ideas from a lot of different music, people will think of you as a genius.
In this tutorial I’ve shown you the steps to break down and analyse music.
The skill for this doesn’t come in one or two days. Plan for the long term.
Picture it now: graceful and exciting footage captured by a drone as it sweeps over a landscape. Now imagine the theme to Sesame Street playing over the top of it. That’s an entirely different vibe, and it’s why your choice of track matters.
Undeniable Awesomeness Vehicle
We’ve put together our 15 favourites from Envato’s Audio Jungle to help you find the sound that’s right for your UAV video.
Timelapse Background is a modern, dynamic electronic track featuring cinematic sounds, breakbeat drums and deep bass; perfect for any creative video project.
A great inspirational composition for background music on your drone video. Instruments featured include: cello, violins, and brass.
A hopeful, motivating piece that begins softly and builds into cinematic, positive and inspirational sounds. Piano and a full orchestra create a powerful simplicity that is meant to inspire, move and uplift. Includes pre-cut edits with no bleeds.
Representing the path of a firefly, this track makes use of bells and synth sounds to create something simple but ethereal.
Heaven on Earth is an emotional track, designed to complement a dramatic production. This track builds to provide power and drama.
This is an emotional and contemplative piano-based track, filled with uplifting sections, making it perfect as background for any inspirational project. The piece works well with sentimental, nostalgic or romantic projects as well as motivational or call to action video.
Dubai begins calmly, before transitioning into something more complex. The piece inspires imagery of Dubai: skyscrapers, heat and affluence.
Chillout is a clean and inspiring, ambient track with a strong and steady rhythm. Close your eyes and imagine flying over mountains to this piece. Then pop it over your drone video and you won’t have to pretend.
Suitable for a wide range of projects, “Uplifting Emotional Piano Pop” is an inspiring, powerful piano pop track that climbs steadily with uplifting piano, echoey vocals, and atmospheric guitars.
This piece starts off slow and steady before building to an uplifting crescendo. Slow-mo your drone flying heart out to this one, before upping the pace and hitting them with the big finish.
Relax, as you might have gleaned from the name, is a calm, smooth and slow track, with some mid-tempo rhythm so your audience won’t drop off mid-film!
12. Abstract Motion
Abstract Motion is a soft, electronic track featuring organic sounds and electronic textures.
13. The Epic
Stirring and emotional, think Hobbits finally reaching Mount Doom. If there are no hobbits (or wizards, or elves) in your production, then this would sound great over some stunning scenery.
This download includes four versions to suit your needs. Gentle piano music will provoke feelings of peace and nostalgia, so wipe the tear from your eye and add this to your emotional video for some heart-wrenching reactions.
Electronic is a modern, upbeat and abstract piece and could easily be edited to loop if required.
Would you like some more help and inspiration with your drone projects? We thought you might, so here are some other tutorials we’ve picked to help you out.
- Right Drone for the Job: How to Pick Cameras and Gear, by Charles Yeager
- Are Drones Right for Your Project? Here’s How to Decide, by Charles Yeager
- How to Plan a Drone Video Flight, by Charles Yeager
- Fly That Drone Inside the Lines: Technical Limits, by Charles Yeager
- 15 Tips for Cinematic Drone Video, by Charles Yeager
The modern world is full of three-letter acronyms, or TLAs, and other jargon. The world of music and audio is no different.
Every given topic area has its own specific abbreviations and terminology that can, at first, be impenetrable to some. In this tutorial I attempt to demystify a few of the terms that you’re likely to read in our tutorials.
In digital audio recording, thousands of individual “samples” are recorded every second. Added together these make up the digital audio signal.
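For example, CD audio uses 44,100 samples per second per channel, each stored as a 16-bit number; a quick back-of-the-envelope sketch of the resulting data rate:

```python
SAMPLE_RATE = 44_100  # samples per second, per channel (CD standard)
BIT_DEPTH = 16        # bits per sample
CHANNELS = 2          # stereo

bits_per_second = SAMPLE_RATE * BIT_DEPTH * CHANNELS
bytes_per_second = bits_per_second // 8

print(bits_per_second)   # 1411200 bits of audio data every second
print(bytes_per_second)  # 176400 bytes, roughly 172 KiB per second
```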
A type of output from a DAW or console that allows signal to be routed to external devices.
Sends usually have returns which accept signal coming back from the external device, the external device typically being processors like reverbs etc.
In live sound, sends can be used for monitor mixes, alternative board mixes for other devices, and cue mixes in theatre sound.
Sibilance is a hissing sound produced when pronouncing S and Z.
Sibilance is undesirable in professional sound reinforcement and can be controlled through the use of a de-esser like Valley Audio’s 401 Microphone Processor, 815 Dynamic Sibilance Processor, or 730 DynaMap Digital Dynamics Processor.
Sidechaining means feeding a compressor a control signal other than the audio it processes: for example, to duck music out of the way for speech.
You send music through a compressor, but send the vocal mic into the sidechain. When the announcer speaks, the compressor pushes the music out of the way.
Audio that is made up of two channels—left and right.
An audio test signal used to adjust levels, test signal quality, identify signal pathways and so forth.
Any device that converts energy from one form into another. Microphones and loudspeakers are both transducers.
Audio frequencies which are too high to be heard by humans (above approximately 20,000 Hz, or 20 kHz).
Pertaining to the human voice.
Virtual Studio Technology, or VST is a software interface that integrates software audio synthesiser and effect plugins with audio editors and recording systems.
VST, created by Steinberg, and similar technologies use digital signal processing to simulate traditional recording studio hardware in software.
A Volume Unit (VU) is a unit used to measure the average level of an audio signal.
The length of a wave, measured from any point on a wave to the corresponding point on the next phase of the wave.
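For sound in air, wavelength is easy to calculate: it is the speed of sound divided by the frequency. A quick sketch, assuming roughly 343 m/s at room temperature:

```python
SPEED_OF_SOUND = 343.0  # metres per second in air, at roughly 20 degrees C

def wavelength(frequency_hz):
    """Wavelength in metres: speed of sound divided by frequency."""
    return SPEED_OF_SOUND / frequency_hz

print(round(wavelength(1000), 3))  # a 1 kHz tone is about 0.343 m long
print(round(wavelength(20), 2))    # a 20 Hz bass note is about 17.15 m long
```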
A lockable connector, available with various numbers of pins. The most common XLR in audio work is the 3-pin XLR.