Mastering is usually used in sound production to make a track sound good across different platforms; for a podcast, the main thing to check is that it is loud enough for streaming services. So how do you mix a podcast with background music that has vocals? It comes down to EQ, compression, and mastering. When editing the podcast, focus on how it sounds overall rather than on whether you used more EQ or compression in some parts.
The things to think about when editing a podcast are clipping, background noise, EQ, compression, and mastering. Clipping: when you talk too loudly into the microphone and the audio sounds distorted, that is clipping.
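If you want to check for clipping in the recorded file rather than by ear, here is a minimal sketch in Python (assuming the audio has been loaded as floating-point samples normalized to the range -1.0 to 1.0; the 0.999 threshold is an illustrative choice):

```python
import numpy as np

def find_clipping(samples: np.ndarray, threshold: float = 0.999) -> np.ndarray:
    """Return the indices of samples at or beyond full scale.

    Assumes float samples normalized to [-1.0, 1.0]; anything whose
    magnitude reaches `threshold` is treated as clipped.
    """
    return np.flatnonzero(np.abs(samples) >= threshold)

# Example: a sine wave driven past full scale and hard-clipped
t = np.linspace(0, 1, 44100)
too_hot = np.clip(1.5 * np.sin(2 * np.pi * 440 * t), -1.0, 1.0)
print(f"{len(find_clipping(too_hot))} clipped samples")
```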
EQ: an EQ plugin is a simple tool that lets you raise or lower the volume of certain frequencies. Commit to good sounds early and avoid endless tweaking later in the mixing stage. Picture a yellow school bus. Now picture it with a bunch of sounds riding in it. That is what a bus is in a mix.
By sending multiple sounds to one track, the bus, you can apply the same processors to them all at once. Try it out on a drum bus: it lets you process all your drum sounds as one unit. Or set up a delay or compression bus. Experiment with which sounds you send to which bus.
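Outside of a DAW, the idea is easy to sketch in Python with NumPy: a bus is just the sum of several tracks, with one processor then acting on the sum. The tracks and the soft-saturation "processor" below are made up for the example:

```python
import numpy as np

def drum_bus(tracks: list[np.ndarray]) -> np.ndarray:
    """Sum several drum tracks into one bus, then process the sum once."""
    bus = np.sum(tracks, axis=0)   # "send" every track to the same bus
    return np.tanh(bus)            # one shared processor (soft saturation)

# Hypothetical kick, snare, and hat tracks of equal length
sr = 44100
t = np.linspace(0, 1, sr)
kick  = 0.5 * np.sin(2 * np.pi * 60 * t)
snare = 0.3 * np.random.default_rng(0).standard_normal(sr)
hats  = 0.1 * np.random.default_rng(1).standard_normal(sr)

drums = drum_bus([kick, snare, hats])  # the whole kit now moves as one unit
```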
Next, give your mix a little haircut. A little snip here, a little trim there: drop the drums for a bar, crank up that vocal for a verse. Get loose. Get a basic balance of your levels before you go crazy with effects processing, and think about headroom early. Keep a final goal in mind as you balance all of your tracks; this will give you a rough idea of how each track will eventually fit together, and processing will smooth out the rough edges.
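One way to make headroom concrete is to measure how far the mix's loudest peak sits below 0 dBFS. A minimal sketch, again assuming float samples in the -1.0 to 1.0 range (the 6 dB figure is just a common rule of thumb):

```python
import numpy as np

def headroom_db(mix: np.ndarray) -> float:
    """Headroom in dB between the loudest peak and 0 dBFS (full scale)."""
    peak = np.max(np.abs(mix))
    return -20 * np.log10(peak) if peak > 0 else np.inf

mix = 0.4 * np.sin(2 * np.pi * 220 * np.linspace(0, 1, 44100))
print(f"headroom: {headroom_db(mix):.1f} dB")  # leaving ~6 dB is a common habit
```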
So what is panning? Panning helps you control the width of a mix by placing sounds either to the left or to the right of the stereo centre. Keep your heavier, lower sounds, meaning the bass and the kick, near the centre, and use them as a centring force that you can work around.
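To make the idea concrete, here is a minimal constant-power panning sketch in Python (one common pan law; DAWs offer several):

```python
import numpy as np

def pan(mono: np.ndarray, position: float) -> np.ndarray:
    """Place a mono signal in the stereo field with a constant-power pan law.

    position: -1.0 = hard left, 0.0 = centre, +1.0 = hard right.
    """
    angle = (position + 1.0) * np.pi / 4  # map [-1, 1] onto [0, pi/2]
    return np.stack([np.cos(angle) * mono, np.sin(angle) * mono], axis=1)

# Keep the kick centred; push a shaker off to the right
t = np.linspace(0, 1, 44100)
kick   = pan(np.sin(2 * np.pi * 60 * t), 0.0)
shaker = pan(0.1 * np.random.default_rng(2).standard_normal(44100), 0.7)
```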
If everything is panned centrally, your mix will sound flat or crowded. The meat of your mix can be broken down into three basic areas: EQ, compression, and reverb. They do the donkey work of mixing; perfect these three areas and everything else will come naturally. Every sound is made of frequencies, and frequency is measured in hertz (Hz). Equalizing is the art of boosting, cutting, and balancing all the frequencies in a mix to get the sound you want.
Bass instruments have a low-heavy, boomy sound; their output sits mostly low in the frequency spectrum. A snare or a hi-hat, by contrast, often sounds a lot more tinny, so it will typically appear in the mid or high frequencies. Even though we can place these sounds in general high and low categories, every sound carries important information in both the highs and the lows. Use filters: they clean up your frequencies with surgical precision.
The best corrective EQ tools to start with are high-pass and low-pass filters. A high-pass filter lets everything above its cutoff frequency through, and a low-pass filter does the opposite; the rest is left behind. Remember that every track will need special attention.
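For example, here is a minimal high-pass filter sketch with SciPy; the 80 Hz cutoff is just an illustrative value for cleaning rumble out of a vocal track:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def high_pass(samples: np.ndarray, cutoff_hz: float, sample_rate: int) -> np.ndarray:
    """Attenuate everything below cutoff_hz; the rest passes through."""
    sos = butter(4, cutoff_hz, btype="highpass", fs=sample_rate, output="sos")
    return sosfilt(sos, samples)

sr = 44100
t = np.linspace(0, 1, sr)
vocal = np.sin(2 * np.pi * 300 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)  # tone + rumble
cleaned = high_pass(vocal, cutoff_hz=80, sample_rate=sr)                # rumble attenuated
```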
For instance, a tom drum is going to need an entirely different EQ treatment than a Rhodes piano. Listen to learn: figure out which adjustments you need to make with your ears. Carving EQ may seem similar to corrective EQ, except that in this step you are correcting your frequencies with the other tracks in mind. Everything will start to fit together better here.
The pieces start to interact. This might sound crazy, but good carving EQ sometimes means taking good parts of a frequency range out so that all your tracks mesh better. A carved track may sound thin on its own; no worries. That is because you carved it down with the other tracks in mind.
Think of your song like a novel: there have to be some other characters to fill out the story, and carving puts your characters in order. You might have two elements battling each other at the same frequency, like vocals and a synth.
Carve a space for each by cutting those frequencies on one while boosting the same range on the other.
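Sketching that vocal-versus-synth example in Python with SciPy: notch the contested range out of the synth, and boost the same range on the vocal by adding back a band-passed copy. The 2 kHz centre and the Q of 2 are made-up values:

```python
import numpy as np
from scipy.signal import iirnotch, iirpeak, lfilter

sr = 44100
t = np.linspace(0, 1, sr)
vocal = np.sin(2 * np.pi * 2000 * t)        # crude stand-ins for the two tracks
synth = 0.8 * np.sin(2 * np.pi * 2000 * t)  # fighting in the same range

# Cut around 2 kHz on the synth...
b_cut, a_cut = iirnotch(w0=2000, Q=2.0, fs=sr)
synth_carved = lfilter(b_cut, a_cut, synth)

# ...and boost the same range on the vocal (parallel boost: add a filtered copy)
b_peak, a_peak = iirpeak(w0=2000, Q=2.0, fs=sr)
vocal_carved = vocal + 0.5 * lfilter(b_peak, a_peak, vocal)
```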
This is the final, and most creative, stage of your equalizer journey: dress your sounds up. There's an equalizer for just about everything. Now is the time to make your vocals jump out of the speaker, make your kick bash and your snare explode, or make those synth lines extra heartbreaking. The same logic applies when mixing music under dialogue: some instruments don't need to be ducked at all if their frequency range does not overlap with the dialogue.
I would arrange the routing so that the instruments that need to be ducked pass through a single bus, and then apply ducking to that bus with a fast attack and release. If some instruments are particularly distracting, I would duck them separately, less subtly, and with a slightly longer release time.
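To sketch what such a ducker does (a minimal Python illustration, not any particular plugin): follow the dialogue level, and when it rises, pull the music bus down, with one-pole attack/release smoothing doing the fast-in, slower-out shaping. All thresholds and times here are illustrative guesses:

```python
import numpy as np

def duck(music: np.ndarray, dialog: np.ndarray, sr: int,
         depth_db: float = -12.0, attack_ms: float = 20.0,
         release_ms: float = 250.0, threshold: float = 0.02) -> np.ndarray:
    """Lower `music` while `dialog` is above `threshold`.

    Assumes mono float arrays of equal length. Fast attack so the music
    gets out of the way quickly; slower release so it eases back up.
    """
    target = np.where(np.abs(dialog) > threshold, 10 ** (depth_db / 20), 1.0)
    atk = np.exp(-1.0 / (sr * attack_ms / 1000))
    rel = np.exp(-1.0 / (sr * release_ms / 1000))
    gain = np.empty(len(music))
    g = 1.0
    for i, tgt in enumerate(target):
        coeff = atk if tgt < g else rel  # ducking down = attack, recovering = release
        g = coeff * g + (1 - coeff) * tgt
        gain[i] = g
    return music * gain
```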
Kev, certainly ducking is the way to do it, but it's not always necessary to duck everything. It is very rare to have access to the music in stem form so that you can process only certain parts of it. It works a little more like that on major films, but rarely on a TV series; maybe on some of the bigger series, but often not.
The fact is that if the composer has done their job well, there should be nothing distracting under any of the dialogue to start with. You often get not only an edit of the scene, but the dialogue track as well, to alert you to where it comes in and goes out.
From experience, ducking seems to be more of a radio thing, and on many of the docos I was involved with they never ducked the music either; they automated it. Sometimes a music cue can lower its level quite slowly but then come back up a little faster. If you are ducking, you really have to set those attack and release parameters well.
Often the attack is not slow enough to create the effect of the music easing down, as opposed to jumping down. Kev's approach is good, though, if you are creating the music in your DAW with the video playing, and you may even be lucky enough to have the dialogue track playing at the same time. But instead of ducking elements down to stay out of the way of the dialogue, why not remove them altogether?
Rearrange the music so the melody lines come to rest just prior to the dialogue and pick up again after it stops; dialogue will often come in large blocks. This is still way better than ducking, because the offending sounds are not even there to start with. Also, if you are working with production library music, the better libraries often offer an underscore version, and the tracks will be identical time-wise,
except that all the melodic information has been removed. You can keep both on your timeline and crossfade from the melodic tracks to the underscore tracks quite nicely, and all seamlessly too. I have done that before.
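Sketching that move in Python, assuming the melodic track and its underscore version are sample-aligned (as the better libraries deliver them), with an equal-power fade so the level stays steady through the transition:

```python
import numpy as np

def equal_power_crossfade(melodic: np.ndarray, underscore: np.ndarray,
                          start: int, length: int) -> np.ndarray:
    """Fade from the melodic mix to the underscore over `length` samples.

    Assumes both tracks are the same length, time-aligned, and that
    start + length is within range.
    """
    out = melodic.copy()
    fade = np.linspace(0, np.pi / 2, length)
    seg = slice(start, start + length)
    out[seg] = np.cos(fade) * melodic[seg] + np.sin(fade) * underscore[seg]
    out[start + length:] = underscore[start + length:]
    return out
```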
Another thing to watch is sound effects. If there are huge effects moments going on, you may want to keep the music very simple at those points too. I think Jeff said everything that can be said on this topic,
so I will do nothing more than recap. Short of rearranging the music out of the dialogue's way, or possibly in conjunction with it, you can do selective frequency ducking on the source material to allow sonic space for the dialogue. This ducking can happen either via an actual frequency-dependent ducker or via automation envelopes on a good EQ plugin.
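A rough sketch of that frequency-dependent idea in Python with SciPy: split out the band where speech intelligibility lives (the 300 Hz to 3 kHz range is an assumption), duck only that band while dialogue is present, and leave the rest of the music untouched:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def duck_speech_band(music: np.ndarray, dialog: np.ndarray, sr: int,
                     lo_hz: float = 300.0, hi_hz: float = 3000.0,
                     depth: float = 0.3, threshold: float = 0.02) -> np.ndarray:
    """Duck only the speech band of the music while dialogue is active."""
    sos = butter(4, [lo_hz, hi_hz], btype="bandpass", fs=sr, output="sos")
    band = sosfilt(sos, music)
    rest = music - band  # approximate split; a real crossover uses matched filters
    active = np.abs(dialog) > threshold  # crude dialogue detector
    gain = np.where(active, depth, 1.0)  # a real ducker would smooth this gain
    return rest + band * gain
```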
A combination of the two (rearranging the music and frequency ducking) is probably your best bet. I thought someone would have mentioned Vocal Rider from Waves as well. Is it better, or worse? Vocal Rider is about keeping a constant vocal level, not ducking.
Anderton: "Vocal Rider is about keeping a constant vocal level, not ducking." Would that not be the same as compression? Brian Walton.
Not exactly. It is supposed to be more like riding the volume control to maintain the same level, but doing the work for you, and more efficiently. Compression is a different effect. I tried it a while ago and that is what I recall.
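To illustrate that difference: a compressor reshapes the signal moment to moment, while a rider slowly nudges the overall gain toward a target level, like a hand resting on the fader. A crude Python sketch (this is not Waves' algorithm; the target level and window are arbitrary):

```python
import numpy as np

def ride_vocal(vocal: np.ndarray, sr: int, target_rms: float = 0.1,
               window_ms: float = 400.0) -> np.ndarray:
    """Ease the gain block by block so short-term RMS stays near target_rms."""
    win = int(sr * window_ms / 1000)
    out = vocal.astype(float).copy()
    g = 1.0
    for start in range(0, len(out), win):
        block = out[start:start + win]
        rms = np.sqrt(np.mean(block ** 2))
        if rms > 1e-6:
            g = 0.7 * g + 0.3 * (target_rms / rms)  # ease toward the needed gain
        out[start:start + win] = block * g
    return out
```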
Max Output Level: I'm a little confused. Unless you can have the music written in a way that naturally leaves room for the narration vocals, there really isn't any practical way to do what you're asking without lowering the music volume via whatever technique works. I'd reconsider your premise that the music always needs to be out front in relation to your narration, unless you have a wacky plan for one or the other, such as an experimental treatment on the narration vocals. The OP is really just asking how to automate the process without having to sit through some long track pushing the volume up and down every time a clip of narration starts and stops.
The volume change is the effect he wants; it is the time-consuming way of doing it that he is trying to avoid. That is why you see responses about processes that make such changes automatically based on certain variables.
It's something to think about. The music plays practically all the time, and eventually there is some narration. You mean in this case that music should always be limited to, e.