Last updated on June 20, 2020 by

I can remember how much sample rate, bit depth and buffer size confused me when I was younger.

I knew most of the settings worked from trial and error. And back then that was good enough for me.

But I didn’t understand the benefits of a higher bit depth or sample rate. I didn’t understand what buffer size meant, or what I should set it to. Sometimes there would be a delay when I was monitoring, and other times there wouldn’t. And I didn’t know why.

Then I decided to sit down and really get to grips with these parameters and what they meant. That’s when I realized that it really isn’t that complex.

Get industry-quality mixes every time (steal this framework)

I’m guessing you’re here because you want to make your mixes sound professional. Well, you don’t need expensive gear or software to do that – you just need the right knowledge.

We put together a brief training that covers a totally new approach to music production. Until now, everyone has been teaching production totally backward.

Just click below to watch.


But if you just want to learn about DAW Setup specifically, keep reading.

Alright. Let’s keep this simple…

How to set up your DAW the right way (bit depth, sample rate and buffer size)


Bit Depth

Bit depth (not to be confused with bit rate) is how many ‘bits’ of data the computer uses to store each sample of audio.

A higher bit depth means a larger dynamic range. A larger dynamic range means a bigger difference in volume between your recorded audio and the noise floor.

What’s the noise floor? A very small amount of noise generated by all of the electronic components of your recording gear.


To put it simply – a lower bit depth means more noise.

If you were to record at a low bit depth and turn the track up, you would hear more noise than if you recorded at a high bit depth and turned the track up.

This means that at a higher bit depth you can record at lower levels and not have to worry about noise when increasing the volume of your recording.
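To put rough numbers on it: each bit of depth adds about 6 dB of dynamic range, so 16-bit audio gives you roughly 96 dB between full scale and the noise floor, while 24-bit gives roughly 144 dB. Here is a quick sketch of the arithmetic (the 6.02 dB-per-bit figure is a standard rule of thumb, not the spec of any particular converter):

```python
# Rule of thumb: linear PCM gains ~6.02 dB of dynamic range per bit.
def dynamic_range_db(bit_depth: int) -> float:
    return 6.02 * bit_depth

for bits in (16, 24):
    print(f"{bits}-bit: ~{dynamic_range_db(bits):.0f} dB of dynamic range")
# 16-bit: ~96 dB, 24-bit: ~144 dB
```

That extra ~48 dB is why you can track conservatively at 24-bit and still boost later without dragging the noise floor up with you.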

Still with me?



Sample Rate

Sample rate is how many times your computer takes a snapshot of the audio every second.

It’s kind of like a video. A moving image consists of lots of still photos shown very quickly in succession (frames per second).

A higher sample rate means more ‘frames’ in your audio. This is great if you want to stretch the audio out and slow it down in your DAW.

If you stretched audio with a low sample rate, you would hear the gaps between the ‘frames’.

A higher sample rate can also capture ultrasonic frequencies. Some people argue that losing these frequencies affects the sound, even though they sit above the range of human hearing.
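The highest frequency a digital recording can represent is half the sample rate (the Nyquist frequency). A minimal sketch of what each common rate can capture:

```python
# Nyquist: a given sample rate can only represent frequencies up to half that rate.
def nyquist_khz(sample_rate_hz: int) -> float:
    return sample_rate_hz / 2 / 1000

for rate in (44_100, 48_000, 96_000):
    print(f"{rate} Hz sample rate captures up to {nyquist_khz(rate):.2f} kHz")
```

Since human hearing tops out around 20 kHz, even 44.1kHz already covers the audible range; the higher rates buy headroom for stretching and processing rather than extra audible detail.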



Buffer Size

And finally – buffer size.

The buffer size is the number of samples (and therefore the amount of time) your computer is given to process each chunk of audio.

There is no ‘good’ or ‘bad’ setting for buffer size. It doesn’t affect the quality of your audio.

It all depends on what you’re doing at the time.

When you’re recording and monitoring, you want to hear the audio back as quickly as possible.

Here’s an example. You’re recording a bass guitar by plugging it straight into your audio interface. The bass isn’t amplified, so you need to monitor it via your DAW. This way you can hear what you’re playing through your speakers or headphones.

Imagine if there were a delay between when you played your instrument and when you heard the audio.


How annoying would that be!

For this reason, you would want to allocate the computer a very small amount of time to process everything. So, you set the buffer size as low as it will go.

Now imagine you’re mixing. You’ve finished recording and you start loading up plugins and effects. You want the computer to have as much processing power as possible.

In this situation, you would set the buffer size as high as it will go.

Some people like to set the buffer size somewhere in the middle and forget about it. I like to adjust the buffer size depending on the situation.
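You can estimate the delay a given buffer setting adds by dividing the buffer size (in samples) by the sample rate. This is a simplified sketch that only models the buffer itself; real interfaces add driver and converter latency on top:

```python
# Latency contributed by one audio buffer: samples / sample rate.
def buffer_latency_ms(buffer_samples: int, sample_rate_hz: int) -> float:
    return buffer_samples / sample_rate_hz * 1000

for buf in (64, 256, 1024):
    print(f"{buf} samples at 44.1 kHz is about "
          f"{buffer_latency_ms(buf, 44_100):.1f} ms")
# 64 samples -> ~1.5 ms (good for tracking); 1024 -> ~23 ms (fine for mixing)
```

Anything under roughly 10 ms tends to feel immediate when monitoring, which is why small buffers are the usual choice for tracking.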


File Size


A higher bit depth and sample rate can be very beneficial, but there are downsides.

By turning these settings up you’re capturing more digital information. This means your files will be a lot larger.

It’s important to find a balance between file size, sample rate and bit depth. If you can afford the space on your hard drive, record with higher settings.

In most cases, though, there is little need to go above a 48kHz sample rate at 24 bits.
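Uncompressed audio grows linearly with both settings: bytes per second equals sample rate times bit depth times channel count, divided by 8. A quick comparison for one minute of stereo audio:

```python
# Uncompressed PCM size: sample_rate * bit_depth * channels * seconds / 8 bytes.
def pcm_size_mb(sample_rate, bit_depth, channels, seconds):
    return sample_rate * bit_depth * channels * seconds / 8 / 1_000_000

print(f"44.1 kHz / 16-bit stereo, 1 min: {pcm_size_mb(44_100, 16, 2, 60):.1f} MB")
print(f"96 kHz / 24-bit stereo, 1 min:  {pcm_size_mb(96_000, 24, 2, 60):.1f} MB")
# roughly 10.6 MB vs 34.6 MB
```

Bumping from CD-quality settings to 96kHz/24-bit more than triples the storage per track, which adds up fast on a multitrack session.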


Conclusion

Hopefully you’ve got a better understanding of how to set up your DAW.

Next Steps

If you want to dig deeper into music production and learn what it actually takes to make mixes that sound pro…

And you’re an intermediate or advanced producer…

Be sure to check out the free masterclass:


UPDATE: Learn about 7 DAW mistakes that can really hold you back in our recent video:



22 comments on this article

  •

    I have a question about songs that I recorded a long time ago in Cakewalk using an Echo Gina 20-bit audio interface, and Cakewalk was 32-bit back then. The settings were sample rate = 96000 and bit depth = 20-bit (of the audio interface, as that was as high as it would go). I am not sure what bit depth the DAW was set to record at, and the driver I believe was not ASIO but something else like WDM. The problem I have is that all the songs, when loaded into my Cakewalk by BandLab DAW (which is a 64-bit DAW), play back way too fast, and it does not seem to matter if I load the wave file or the Cakewalk bundle of the files: they all play back too fast. I have tried to set both the DAW sample rate and the current audio interface (a Roland OctaCapture) to 96000, but the songs still play too fast. I have also tried changing the sample rate in both the DAW and the audio interface to 44100, but still the problem persists. What am I not understanding that would allow me to load these songs into my 64-bit DAW, hear them at the correct speed, and then master them and export them as waves for CD recording or as MP3 for streaming?

    • Shane Suenderhaft says:
      August 23, 2021 at 04:04:41 pm

      Have you found a solution to this problem yet? I'm having exactly the same issue as you, only I'm using a Focusrite Scarlett 6i6 interface and Mixcraft Pro 9 DAW.

  •

    I found this trying to figure out what the heck buffer size is because latency has been killing me. I had an epiphany reading this and I’m itching to start tracking with my new-found knowledge, thank you!

  •

    Great article, very clear and easy to follow!

  •

    Hello. I have a question. When I export my mix I have two buffer sizes to set. One of them is the ASIO buffer size. Do they have to match?

  • Mikael Stromkraft says:
    May 11, 2019 at 10:51:43 am

    Great article! A good source for many newcomers to digital audio, for sure.

  • Mikael Stromkraft says:
    May 11, 2019 at 10:47:34 am

    It’s not at all correct that “the only reason to use frequencies above 44.1kHz is if you intend to make inaudible (ultrasound) frequencies audible by pitch shifting downward or slowing the audio”.

    This notion ignores intentional saturation and distortion or any other processing that adds harmonics. When these harmonics hit the Nyquist frequency they bounce back down into the fully audible range, causing aliasing, which is really bad in many instances. When you double the sample rate you also move the Nyquist frequency, typically preventing this, provided the saturation/distortion processor moves its processing with the sample rate.

    While recordings in double sample rate also can cause hearable Intermodulation Distortion depending on material it does give us the opportunity to gently filter off above the hearable range while also avoiding aliasing when using Saturation and Distortion.

    Basically this means that if you have any material at any production stage that risks hitting the Nyquist Frequency, you’re better off using the double sample rate if your gear can tolerate the extra work that comes with this.

  •

    Excellent information

  •

    I’m running an i7, a GTX 1060 and 24GB of RAM, which I believe should be plenty of power, and I still get popping and clicks (software lag) when recording. I use a lot of VSTs in my chain, so I know that has something to do with it, but I’ve also heard of and seen situations where people run even more than I am, running Pro Tools with live Auto-Tune or editing far more tracks than me, with no problems. I try to run 48kHz at 24 bits and between 64 and 256 buffer.

  •

    For fast guitar playing, how can I get video at 60fps without losing image quality?

  •

    Thank you! I’ve always had a few fundamental questions. Your article is excellent! I’m the lead singer and run everything from recording to mastering. I find recording in my studio at 24-bit/96kHz with a 1024 buffer works perfectly! Might depend a bit on the room with vocals. Awesome. Thanks again.

  •

    In response to Aaron – The human ear is generally capable of hearing frequencies up to about 22kHz, and thanks to the Nyquist-Shannon sampling theorem, we know that we only need to sample at twice the maximum frequency to correctly represent the frequencies in the range. The only reason to use frequencies above 44.1kHz is if you intend to make inaudible (ultrasound) frequencies audible by pitch shifting downward or slowing the audio.

    You are correct about the human hearing range, which is around 22k, but in response and with respect, the reason a higher sampling rate is beneficial is that even though we can only hear up to 22k, it doesn’t mean higher frequencies don’t exist, and these frequencies make life a complete misery by causing aliasing. They get shifted down the frequency spectrum and create havoc with the sounds you can hear. Recording at a higher sample rate such as 192kHz allows us to filter them out before they hit the recording and stop them from causing so much annoyance and destruction, not just to me but to my poor dogs who can actually hear them, and even worse for fecking bats lol.

  •

    This post is a year old, but from my own recent research and testing makes perfect sense. I recently switched from 24bit x 96khz to working with 32bit x 96khz settings throughout my track/mix/master process, exporting and doing any/all conversions and dithering offline with my Weiss Saracon. However, concerning Ableton Live 9… I have run into a “bit” of a roadblock with trying to get a straight answer from anyone about using samples.

    Most people I’ve discussed this with give an answer such as 24bit x 44khz is “just fine”. The thing is, I don’t want “just fine”, I want the very best quality possible! In the Ableton manual it states using 32bit samples is advised for obtaining optimal sound quality (reason being, Ableton’s internal processing is 32bit). It would seem to be the obvious right answer then, use 32bit samples.

    The issue I have is, as a strictly ITB producer using only samples (and VST plugins, which create or have the option to create audio/samples at 32bit making those ready to go), there are VERY few sample sets available at 32bit x 96khz – these days almost all are 24bit x 44khz (and older samples at 16bit x 44khz). Mostly I am referring to one shot individual drum samples which get further editing, heavy processing and “resampling” accordingly in a typical House music project mix in Ableton Live.. I make everything else in a project from scratch with VST plugins.

    Maybe I just need confirmation and clarity on this because nobody seems to be doing it this way and searching the net yields answers scattered across the board, with lots of misinformation about. According to the Ableton manual, I should first UP-sample all individual sample files to 32bit x 96khz offline using a 3rd party program, save and then import them into my project – reason being, “ALL samples imported to a project should be one/the same bit rate as well as match my internal project settings”, which is 32×96 (for both preferences and export settings). Otherwise, importing lower sample rates and applying any editing, even a +/- .1 gain change, fade, pan etc., will be a “non-neutral” operation resulting in altered/degraded sound quality (leaving SRC and dithering to Ableton – which I want to avoid at all costs!).

    Anyway, so my question to you is this, I have compiled thousands, maybe tens of thousands of samples over the years, to achieve this “optimal” sound quality should I upsample all of them to 32×96?

    Your thoughts on this would be much appreciated!


    •

      As for sample rate. Unless you pitch shifting or time stretching significantly, 44.1kHz is as good as anyone’s ears can hear. The human ear is generally capable of hearing frequencies up to about 22kHz, and thanks to the Nyquist-Shannon sampling theorem, we know that we only need to sample at twice the maximum frequency to correctly represent the frequencies in the range. The only reason to use frequencies above 44.1kHz is if you intend to make inaudible (ultrasound) frequencies audible by pitch shifting downward or slowing the audio.

      The reason PC sound cards and integrated audio support playback frequencies above 44.1kHz is because it is easier to make a good high cut filter with a higher cutoff frequency, high cut filters are required to prevent aliasing while sampling audio.

      As for bit depth, 32-bit floating point numbers actually have 24 bits of precision! The rest of the bits are used for an exponent. Floating point numbers are useful because they allow you to fearlessly mix, amplify, and process tracks without clipping, the actual precision of the number is determined by the “mantissa” (a.k.a. significand or coefficient). Using 24-bit (signed integer) samples in your 32-bit floating point processing pipeline makes perfect sense.

      To understand 32-bit floating point numbers, Wikipedia has a helpful article.

      And again, for playback, humans are not generally capable of distinguishing more than 16 bits of dynamic range, so for playback of a finished program, 16 bits is perfectly fine. 24-bit sampling is popular for recording because it gives you more headroom while retaining good precision.

  • Jéllodz Ferdinand says:
    January 1, 2016 at 11:18:00 pm

    If my sound card has a highest bit depth of 16-bit and I set my DAW to 24-bit, will the recorded audio have a lower noise floor than if I had set the DAW to 16-bit? Does it make a difference?

    •

      You need an interface that will record at 24bit I’m afraid!

      In the scenario that you have outlined, your recording would only be at 16bit. You might also have compatibility issues between your DAW and interface.

      • Jéllodz Ferdinand says:
        January 4, 2016 at 07:34:00 pm

        Huh, thanks for the help. But I would like to know: what does the converting, a device in the sound card or software (the sound driver) in the computer?

        •

          The DAW will do the converting, and your project settings will dictate the bit depth that you record at. If you record at 24bit, you will also need to dither your audio down to 16bit when exporting to avoid quantization distortion.

  •

    Straight to the point and on point.