How to ACTUALLY Set up Your DAW (Sample Rate and Buffer Size)


I can remember how much sample rate, bit depth and buffer size confused me when I was younger.

I knew most of the settings worked from trial and error. And back then that was good enough for me.

But I didn’t understand the benefits of a higher bit depth or sample rate. I didn’t understand what buffer size meant, or what I should set it to. Sometimes there would be a delay when I was monitoring, and other times there wouldn’t. And I didn’t know why.

Then I decided to sit down and really get to grips with these parameters and what they meant. That’s when I realised that it really isn’t that complex.

I could spend all day arguing about the pointlessness of super high sample rates, and the theoretical versus practical dynamic range of 32-bit floating point.

But you don’t need to know all of that!

All you need is a basic understanding of what these terms mean, and when to use higher settings.

So let’s keep it simple:

 

Bit Depth

Bit depth (not to be confused with bit rate) is how many ‘bits’ the computer uses to record the audio.

A higher bit depth means a larger dynamic range. A larger dynamic range means a bigger difference in volume between your recorded audio and the noise floor.

What’s the noise floor? A very small amount of noise generated by all of the electronic components of your recording gear.

To put it simply – a lower bit depth means more noise.

If you were to record at a low bit depth and turn the track up, you would hear more noise than if you recorded at a high bit depth and turned the track up.

This means that at a higher bit depth you can record at lower levels and not have to worry about noise when increasing the volume of your recording.
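The rule of thumb behind this is that each extra bit adds roughly 6dB of dynamic range. Here's a quick sketch of that arithmetic in Python (the function name is just for illustration):

```python
import math

def dynamic_range_db(bit_depth):
    """Theoretical dynamic range of linear PCM audio: 20 * log10(2^bits),
    which works out to roughly 6.02 dB per bit."""
    return 20 * math.log10(2 ** bit_depth)

print(round(dynamic_range_db(16), 1))  # ~96.3 dB (CD quality)
print(round(dynamic_range_db(24), 1))  # ~144.5 dB (typical recording setting)
```

So moving from 16-bit to 24-bit pushes the noise floor roughly 48dB further below your signal, which is why you can record at conservative levels and still turn things up later.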

Still with me?

 

Sample Rate

Sample rate is how many times your computer takes a snapshot of the audio every second.

It’s kind of like a video. A moving image consists of lots of still photos shown very quickly in succession (frames per second).

A higher sample rate means more ‘frames’ in your audio. This is great if you want to stretch the audio out and slow it down in your DAW.

If you stretched audio with a low sample rate, you would hear the gaps between the ‘frames’.

A higher sample rate can also capture ultrasonic frequencies. Some people argue that losing these frequencies affects the sound you can hear, though this is hotly debated.
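The key fact here is the Nyquist limit: a given sample rate can only represent frequencies up to half that rate. A tiny sketch (function name is illustrative):

```python
def nyquist_limit(sample_rate_hz):
    """Highest frequency a given sample rate can faithfully represent:
    half the sample rate (the Nyquist-Shannon sampling theorem)."""
    return sample_rate_hz / 2

print(nyquist_limit(44100))  # 22050.0 Hz, just above the limit of human hearing
print(nyquist_limit(96000))  # 48000.0 Hz, well into the ultrasonic range
```

This is why 44.1kHz was chosen for CDs: it comfortably covers the roughly 20Hz to 20kHz range of human hearing.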

 

Buffer Size

And finally – buffer size.

The buffer size is the number of samples your computer processes in one chunk. In practice, it’s the amount of time you allocate to your DAW for processing audio.

There is no ‘good’ or ‘bad’ setting for buffer size. It doesn’t affect the quality of your audio.

It all depends on what you’re doing at the time.

When you’re recording and monitoring, you want to hear the audio back as quickly as possible.

Here’s an example. You’re recording a bass guitar by plugging it straight into your audio interface. The bass isn’t amplified, so you need to monitor it via your DAW. This way you can hear what you’re playing through your speakers or headphones.

Imagine if there was a delay between when you played your instrument and when you heard the audio. How annoying would that be!

For this reason, you would want to allocate the computer a very small amount of time to process everything. So, you set the buffer size as low as it will go.

Now imagine you’re mixing. You’ve finished recording and you start loading up plugins and effects. You want the computer to have as much processing power as possible.

In this situation, you would set the buffer size as high as it will go.

Some people like to set the buffer size somewhere in the middle and forget about it. I like to adjust the buffer size depending on the situation.
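The trade-off above comes down to simple arithmetic: the delay (latency) the buffer adds is just the buffer size divided by the sample rate. A rough sketch (function name is illustrative):

```python
def monitoring_latency_ms(buffer_size_samples, sample_rate_hz):
    """Latency added by one audio buffer, in milliseconds:
    the time it takes to fill the buffer at the given sample rate."""
    return buffer_size_samples / sample_rate_hz * 1000

print(round(monitoring_latency_ms(64, 44100), 1))    # ~1.5 ms, good for tracking
print(round(monitoring_latency_ms(1024, 44100), 1))  # ~23.2 ms, fine for mixing
```

Note that real-world round-trip latency is a little higher, since the interface adds its own input and output buffering on top.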

 

File Size

A higher bit depth and sample rate can be very beneficial, but there are downsides.

By turning these settings up you’re capturing more digital information. This means your files will be a lot larger.

It’s important to find a balance between file size, sample rate and bit depth. If you can afford the space on your hard drive, record with higher settings.

In most cases, though, there is little need to go above a 48kHz sample rate at 24 bits.
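You can estimate the cost yourself: uncompressed audio takes sample rate times bit depth times channel count bits per second. A quick sketch (function name is illustrative):

```python
def wav_size_mb(sample_rate_hz, bit_depth, channels, seconds):
    """Approximate size of uncompressed PCM audio:
    rate * depth * channels * duration bits, divided by 8 for bytes."""
    return sample_rate_hz * bit_depth * channels * seconds / 8 / 1_000_000

# One minute of stereo audio at two common settings:
print(round(wav_size_mb(44100, 16, 2, 60), 1))  # ~10.6 MB
print(round(wav_size_mb(96000, 24, 2, 60), 1))  # ~34.6 MB
```

So stepping up from CD-quality settings to 24-bit/96kHz roughly triples your storage use, which adds up fast across a multitrack session.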

 

Infographic

Too much to take in?

Here’s all the information you’ll ever need in a pleasant visual format. 

You don’t need to fully understand these terms to know which settings to use. Just take a look at the infographic below every time you open a session.

Download it, reference it, share it.

Heck, print it and stick it on your wall!

 

How to set up your DAW the right way (bit depth, sample rate and buffer size)

I’d love to hear from you. Tell me below what settings you’re going to use in your next session. Now is the time to take action!

Don’t let your friends waste time setting up their DAW incorrectly. Share this post using the buttons below.


 

Don’t panic! There’s more to read…

Get my list of the best audio editors (and they’re all free).
 
Alternatively, you could have a browse through recent articles in the blog section.

10 Responses to “How to ACTUALLY Set up Your DAW (Sample Rate and Buffer Size)”

  • Thank You! I’ve always had a few fundamental questions. Your article is excellent! I’m the lead singer and handle everything from recording to mastering. I find recording in my studio at 24-bit @ 96kHz with a 1024 buffer works perfectly! Might depend a bit on the room with vocals. Awesome. Thanks again.

  • In response to Aaron – The human ear is generally capable of hearing frequencies up to about 22kHz, and thanks to the Nyquist-Shannon sampling theorem, we know that we only need to sample at twice the maximum frequency to correctly represent the frequencies in the range. The only reason to use frequencies above 44.1kHz is if you intend to make inaudible (ultrasound) frequencies audible by pitch shifting downward or slowing the audio.

    You are correct about the human hearing range, which is around 22kHz. But in response, and with respect: the reason a higher sampling rate is beneficial is that even though we can only hear up to 22kHz, it doesn’t mean higher frequencies don’t exist, and these frequencies make life a complete misery by causing aliasing. They get shifted down the frequency spectrum and create havoc with the sounds you can hear. Recording at a higher sample rate such as 192kHz allows us to hi-pass them out before they hit the recording and stop them from causing so much annoyance and destruction, not just to me but to my poor dogs who can actually hear them, and even worse for the fecking bats lol.

  • This post is a year old, but from my own recent research and testing makes perfect sense. I recently switched from 24bit x 96khz to working with 32bit x 96khz settings throughout my track/mix/master process, exporting and doing any/all conversions and dithering offline with my Weiss Saracon. However, concerning Ableton Live 9… I have run into a “bit” of a roadblock with trying to get a straight answer from anyone about using samples.

    Most people I’ve discussed this with give an answer such as 24bit x 44khz is “just fine”. The thing is, I don’t want “just fine”, I want the very best quality possible! In the Ableton manual it states using 32bit samples is advised for obtaining optimal sound quality (reason being, Ableton’s internal processing is 32bit). It would seem to be the obvious right answer then, use 32bit samples.

    The issue I have is, as a strictly ITB producer using only samples (and VST plugins, which create or have the option to create audio/samples at 32bit making those ready to go), there are VERY few sample sets available at 32bit x 96khz – these days almost all are 24bit x 44khz (and older samples at 16bit x 44khz). Mostly I am referring to one shot individual drum samples which get further editing, heavy processing and “resampling” accordingly in a typical House music project mix in Ableton Live.. I make everything else in a project from scratch with VST plugins.

    Maybe I just need confirmation and clarity on this because nobody seems to be doing it this way and searching the net yields answers scattered across the board, with lots of misinformation about. According to the Ableton manual, I should first UP-sample all individual sample files to 32bit x 96khz offline using a 3rd party program, save and then import them into my project – reason being, “ALL samples imported to a project should be one/the same bit rate as well as match my internal project settings”, which is 32×96 (for both preferences and export settings). Otherwise, importing lower sample rates and applying any editing, even a +/- .1 gain change, fade, pan etc., will be a “non-neutral” operation resulting in altered/degraded sound quality (leaving SRC and dithering to Ableton – which I want to avoid at all costs!).

    Anyway, so my question to you is this, I have compiled thousands, maybe tens of thousands of samples over the years, to achieve this “optimal” sound quality should I upsample all of them to 32×96?

    Your thoughts on this would be much appreciated!

    Cheers

    • As for sample rate: unless you’re pitch shifting or time stretching significantly, 44.1kHz is as good as anyone’s ears can hear. The human ear is generally capable of hearing frequencies up to about 22kHz, and thanks to the Nyquist-Shannon sampling theorem, we know that we only need to sample at twice the maximum frequency to correctly represent the frequencies in the range. The only reason to use frequencies above 44.1kHz is if you intend to make inaudible (ultrasound) frequencies audible by pitch shifting downward or slowing the audio.

      The reason PC sound cards and integrated audio support playback frequencies above 44.1kHz is that it is easier to make a good high cut filter with a higher cutoff frequency; high cut filters are required to prevent aliasing while sampling audio.

      As for bit depth, 32-bit floating point numbers actually have 24 bits of precision! The rest of the bits are used for an exponent. Floating point numbers are useful because they allow you to fearlessly mix, amplify, and process tracks without clipping; the actual precision of the number is determined by the “mantissa” (a.k.a. significand or coefficient). Using 24-bit (signed integer) samples in your 32-bit floating point processing pipeline makes perfect sense.

      To understand 32-bit floating point numbers, Wikipedia has a helpful article. https://en.wikipedia.org/wiki/Single-precision_floating-point_format#IEEE_754_single-precision_binary_floating-point_format:_binary32

      And again, for playback, humans are not generally capable of distinguishing more than 16 bits of dynamic range, so for playback of a finished program, 16 bits is perfectly fine. 24-bit sampling is popular for recording because it gives you more headroom while retaining good precision.

  • If my sound card’s highest bit depth is 16-bit and I set my DAW to 24-bit, will the recorded audio have a lower noise floor than if I had set the DAW to 16-bit? Does it make a difference?

    • You need an interface that will record at 24bit I’m afraid!

      In the scenario that you have outlined, your recording would only be at 16bit. You might also have compatibility issues between your DAW and interface.

      • Huh, thanks for the help. But I’d like to know: what performs the conversion, a device in the sound card or software (a sound driver) on the computer?

        • The DAW will do the converting, and your project settings will dictate the bit depth that you record at. If you record at 24-bit, you will also need to dither your audio down to 16-bit when exporting to avoid quantisation distortion.

Leave A Comment

Your email address will not be published.