Server 2012, 2012 R2 & 2016 Disable Or Remove Deduplication On A Volume

Update – 11/11/2020: I’ve added a link here to Microsoft’s updated Server 2016 documentation that details deduplication – Understanding Data Deduplication

I just thought I’d post about this, as it’s something I’ve come up against recently: how to disable deduplication on a volume on Server 2012, 2012 R2 or 2016 and inflate the data back to its original form. In this example, the volume in question is E:

So let’s start with step one;
If you disable dedup on the volume first, you simply stop new data being processed, rather than rehydrating your already deduplicated data.

So with that in mind then, step two would be to run the following command in PowerShell;
Start-DedupJob -Type Unoptimization -Volume E: -Full

When that job has completed, which you can check with the Get-DedupJob command, you’ll find that deduplication has been disabled on the disk. Since there’s still the garbage collection job to run, we need to, rather counter-intuitively, turn dedup back on for the volume with the following command;
Enable-DedupVolume -Volume E:

Once this is done, the next step is to run the following command to start your garbage collection on the volume;
Start-DedupJob -Type GarbageCollection -Volume E: -Full

Finally, after that, the final step is to turn off dedup on the volume with the following command;
Disable-DedupVolume -Volume E:
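To recap, the whole process can be sketched end-to-end in one PowerShell session. This is just a sketch of the steps above, assuming the volume is E: and adding -Wait so each job finishes before the next command runs; adjust the drive letter for your system.

```powershell
# Rehydrate the data first (do NOT disable dedup before this step)
Start-DedupJob -Type Unoptimization -Volume E: -Full -Wait

# Dedup is now disabled on the volume, but the chunk store still needs
# cleaning up, so counter-intuitively re-enable dedup for the volume
Enable-DedupVolume -Volume E:

# Reclaim the space held by the now-unreferenced chunks
Start-DedupJob -Type GarbageCollection -Volume E: -Full -Wait

# Finally, turn dedup off for good
Disable-DedupVolume -Volume E:
```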

And that should save you any unnecessary drama.

When all this is done, the volume will still show in some places, like Server Manager, as sitting at a 0% deduplication rate, which is fine, as we’ve turned it off. I would guess this is just a bug, but it seems once a volume has been touched by the deduplication process, it never goes back to a blank value for dedup rate.

45 thoughts on “Server 2012, 2012 R2 & 2016 Disable Or Remove Deduplication On A Volume”

  1. Hey Mark,


    What if this was already done? Should I turn it back on, or just run the unoptimization after the fact?

    Thank you,
    Spencer Lemm

    1. Hi Spencer, generally if you’ve turned off dedup already, you should be safe to turn it back on, making sure any optimisation schedules are stopped, then run the un-optimisation and garbage collection routines, then turn dedup off again. Hope that helps.

      I should have added, when you’re running the un-optimisation and garbage collection jobs, make sure the un-optimisation finishes, then run the garbage collection, then turn off dedup when the garbage collection job is done.

  2. Like you mentioned in your last reply, check the schedule for background optimization and turn it off first.
    After letting the unoptimization run and then re-enabling dedup before garbage collection, it started to optimize again for me right after enabling. I had to stop that job (Stop-DedupJob -Volume D:) and unoptimize again.

    Thanks for the post.

  3. I have to rebuild my home server (2012r2) with larger hard drives. I’ve made my backups by hand, copy to additional drives and stack in the corner. Now when I set up my RAID (I use a card, not a software RAID) and start to repopulate my array, will the deduped files pay a penalty for not rehydrating them first?

    1. Hi John, if you’ve copied the data off a deduped volume to disks that are not deduped, then in effect they’ve already been re-hydrated. When you then recreate the RAID array and enable dedup on the volume on that array, and copy the data back into them, it will first be written in full form, and then deduped in the background according to the usual schedule you set, either at specific times or background optimisation. Windows Server dedup is only per volume, so if you copy data between dedup volumes, or from a dedup volume to a non-deduped volume, the data will in effect be rehydrated. Hope that makes it a bit clearer.

  4. Hi, we have an S2D cluster and are experiencing performance issues, and we need to remove dedup as a test. I have already disabled dedup so no new tasks are running. I’m aware that we dedup the volume (CSV), but I was told that moving the VM storage to another volume would un-dedup this. Is this correct? If so, then it would make sense that some VHDX files would actually be deduped and some would not within the CSV. Do you know a way of running a report of which files/folders are actually in a deduped status? We have around 111 VMs across 6 CSVs.

    1. My understanding is, because dedup is done on a per-volume basis, if you migrated the VM storage to another volume that doesn’t have dedup enabled on it, that will effectively re-inflate the VHDX file, as it no longer lives on a dedup-enabled volume. In the same way, if you have two separate dedup-enabled volumes and you move the VHDX file for a VM between the two, it will be re-inflated in transit and then re-deduped on its target volume.

      You could also possibly set an exclusion, naming some specific VHDX files in the exclusions. Check the number of in-policy files with Get-DedupStatus in PowerShell, and then wait for the excluded files to fall out of policy and be re-inflated. If you want to know what is being worked on directly, I find that looking in Resource Monitor, under disk utilisation, and filtering down to the fsdmhost process will show which files are being touched by the dedup process. This does rely on dedup being enabled for it to work though.
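      As a rough illustration of checking those counts, something like this should work (a sketch, assuming the volume is E:; the property names are those exposed by Get-DedupStatus):

```powershell
# Watch InPolicyFilesCount fall as excluded files drop out of policy
# and are re-inflated; OptimizedFilesCount shows what is still deduped
Get-DedupStatus -Volume E: |
    Select-Object Volume, OptimizedFilesCount, InPolicyFilesCount, SavedSpace
```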

      As for a report to see which files are in a dedup state, I’m not aware of one.

  5. Hi there,

    First of all, thank you for all the useful information.

    I followed the steps (it took many hours), but Server Manager still shows a 4% deduplication rate and 305GB of deduplication savings. I need to get both parameters zeroed. I re-ran the commands, but the numbers are stuck.

    Any idea what I can do to definitely disable deduplication for all files?

    Thank you in advance.

    1. I would check the current dedup job schedule with;

      Get-DedupSchedule

      If any of them are enabled, run;

      Get-DedupSchedule | ForEach-Object { Set-DedupSchedule -Name $_.Name -Enabled $false }

      Then, when you run the Get-DedupSchedule command again, it should show all schedules as disabled. Then run through the unoptimisation commands again and see how it goes.

  6. Hey Mark! I hope this comment finds you well!
    So, I’m looking for help! My file server (WinSrv2012R2) is not in good shape, let me say. It’s running the dedup job, and I want to move the disk from this server to a new one. Can I detach the deduplicated disk and attach it to a new server (2016 or 2012R2)? Is it feasible?

    Thank you in advance,

    1. Hi Wagner, yes it’s possible to remove the deduplicated disk from one server and attach it to another. As long as it’s just a simple disk, it should be ok; just make sure the Deduplication role is installed on the new server before attaching the disk, and that there are no running deduplication jobs when you detach it from the old server. I’ve done this a few times without any problems in virtual environments; everything for deduplication is contained within the volume on the disk by design, so it can be moved between systems if needed. Obviously I’d recommend a full backup of the data before you do it, but I’d guess you’re doing those anyway, so maybe just check it’s all backed up correctly.

    1. Once you’ve gone through the process of disabling deduplication on the volume and re-inflating the data, you should just be able to remove the role, either from within Server Manager or via PowerShell with the Uninstall-WindowsFeature cmdlet.

  7. Thanks Mark, that worked wonders.

    I had a 14TB volume hosting backups with only 200GB free and 8TB of actual data. I disabled and stopped all the optimization tasks and ran the garbage collection first, and I got my 6TB of missing space back. After that I started with the second step to disable dedup entirely, and everything worked as intended. Rehydrating files with only 200GB left could have been…. interesting.

  8. Hi Mark,
    We need some quick help with our Windows 2012 R2 file server. We had a Windows 2012 R2 file server where, unfortunately, the Data Deduplication role was installed. We then did a migration to a Windows 2016 server, and all the file shares were restored from backup, but unfortunately the new server did not have Data Deduplication installed, and most of the Excel/PDF files were not accessible. We then installed the role, but we are still having issues accessing files. Please let us know what can be done to resolve this issue.

    1. All the information for dedup to work should be contained within the disk. Normally it’d just be a case of installing the Deduplication role, and then restarting the server to get it to pick up the disk with dedup enabled.
      How was the migration done, upgrade or fresh install? If you did it as an upgrade the backup restore probably wouldn’t have been needed. If you’ve installed to a new server, then restored from backups, my question would be what backup software was being used and was it compatible with Windows Server Deduplication? Most are, but I think I have seen some instances where they’re not.

      1. Hi Mark,
        I really appreciate your quick reply, but I think I was not able to state the question clearly. We had installed a new Windows 2016 server at a different site, and originally the dedup role was not installed on it. The old server had the dedup role installed, and we use NetBackup to back up and restore. The data was restored on the new server, but we were not able to open the Excel/PDF files even though the permissions were correct, the reason being that the dedup role was not installed. So we decided to install it and rebooted the server, but we are still not able to access some Excel/PDF files. So I wanted to know, would installing the dedup role later, after restoring the data, have helped us?

        Thanks once again

        1. My guess here is that when NetBackup has backed up the files, it’s not followed the file reparse points through.
          When you dedup a volume, reparse point stubs are left behind in the file locations; these point back through to the chunk store for dedup, which lives in “System Volume Information”. If NetBackup has not followed the reparse points, which not all software does, and therefore has not backed up the data from the chunk store too, then that would cause the problem you’re having. There’s a bit of info from Microsoft on how deduplication works, but the bottom line is reparse points can cause various problems if they’re not handled correctly;
          Understanding Data Deduplication
          If you’ve not got a backup of that data in the chunk store, then you might be out of luck here, and even if you did, I’m not sure how you’d go about restoring it.

          1. Thanks Mark !!!

            So I believe what you are saying is that we should uninstall the Data Deduplication role from the old Windows 2012 R2 server, then take a backup of the volume and restore it on the new Windows 2016 server, since I am not sure NetBackup follows the file reparse points through. Let us know if uninstalling the role will help or if we have to do something else too.


          2. If you don’t think NetBackup will follow the reparse points through, then yes, if you disabled dedup and went through the process to re-inflate the data, then took a backup of that and restored it to the new server, that would work. Another option is to use robocopy to copy the data from the old server to the new one, as long as you’ve got network connectivity between them. Obviously all this depends on the volume of data to be moved.
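            If you go the robocopy route, a starting point might look like the sketch below; the server and share names are placeholders, and the switches will need tailoring to your environment.

```powershell
# Mirror the share to the new server, keeping NTFS security and timestamps.
# Reading the files through the dedup filter re-inflates them in transit.
robocopy \\oldserver\share \\newserver\share /MIR /COPYALL /R:2 /W:5 /LOG:C:\Temp\migration.log
```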

            I should also caveat all this by saying, obviously I don’t know your environment, so these are only suggestions on approaches you could take.

          3. I should have also said, good luck, and I hope this all works out for you and you get the data recovered and successfully migrated.
            I’m sure everyone in tech has at some point, at the moment you realise something has gone terribly wrong, had that feeling when your stomach drops; at least I know I have.

  9. If we have a server with a 2TB volume with 1.5TB of used space, and the dedup savings are 1TB of space:
    When turning off dedup, do we have to rehydrate the files? The MS KB says the files and data are still accessible.
    Do we have to have more than 1TB of free space for the rehydration of the files, so the data is un-deduped again?

    1. You can just disable dedup and leave it like that, and yes you’ll still be able to access the files, but be aware that they are still in effect deduplicated. You’d still have to leave the Deduplication role installed on the server for them to be accessible. When you disable dedup you’re essentially just stopping any further deduplication, without affecting files that are already deduplicated. If however you removed the Deduplication role from the server, then you’d not be able to access the files.

      As for if you did want to re-hydrate the files, you’d need at least 2.5TB, more in all likelihood, as files will be written back in full when they are unoptimized, but some large parts might still reside in the dedup chunk store until the final garbage collection is done.
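      As a rough pre-flight check along those lines, you could compare the volume’s free space against its reported dedup savings (a sketch only, assuming the volume is E:; SavedSpace is a lower bound on what rehydration will consume, for the chunk store reasons above):

```powershell
# Compare free space with the minimum space rehydration will need
$status = Get-DedupStatus -Volume E:
$freeGB  = [math]::Round($status.FreeSpace  / 1GB, 1)
$savedGB = [math]::Round($status.SavedSpace / 1GB, 1)
"Free: $freeGB GB; rehydration needs at least: $savedGB GB"
if ($status.FreeSpace -lt $status.SavedSpace) {
    Write-Warning "Probably not enough free space to rehydrate safely"
}
```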

  10. So I disabled dedup, then started the unoptimize task. Five hours later it had filled up almost 1.5TB, but was only 2% through rehydrating a 6TB share that was saving 1.5TB of space through deduplication. I see in the error logs that it was having issues because deduplication was disabled. Do I just re-enable dedup, disable the schedule, then run garbage collection and the unoptimize? Or do I need to allocate more space and unoptimize THEN garbage collection?

    1. Also, seeing a lot of “Data Deduplication service could not unoptimize file “F\”. Error 0x80070002, “The system cannot find the file specified.” errors in the logs.

    2. Correct, re-enable dedup, and then disable the schedule and background optimisation. Assuming you still have enough free space to re-inflate the data, run garbage collection and unoptimise, then a final garbage collection run to clean up what’s left after the unoptimise process, then disable dedup. If you don’t have enough space to re-inflate, then allocate more space and go through the same steps.
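      That recovery sequence might look something like this sketch (E: is a placeholder for your volume; -Wait makes each job finish before the next starts):

```powershell
# Re-enable dedup so the jobs can run, then stop the schedules
Enable-DedupVolume -Volume E:
Get-DedupSchedule | ForEach-Object { Set-DedupSchedule -Name $_.Name -Enabled $false }

# Garbage collection first to free up space, then unoptimize
Start-DedupJob -Type GarbageCollection -Volume E: -Full -Wait
Start-DedupJob -Type Unoptimization -Volume E: -Wait

# Final clean-up pass, then switch dedup off
Start-DedupJob -Type GarbageCollection -Volume E: -Full -Wait
Disable-DedupVolume -Volume E:
```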

      1. Hi Mark
        I’m having to do the same thing, disable deduplication.
        I already disabled deduplication on volume D: and am now running Start-DedupJob -Type Unoptimization -Volume D: -Full.
        When I run Get-DedupJob, I get the results below.
        Type           ScheduleType StartTime Progress State   Volume
        ----           ------------ --------- -------- -----   ------
        Unoptimization Manual                 0 %      Queued  D:
        Unoptimization Manual                 0 %      Queued  D:
        Unoptimization Manual       12:12 PM  0 %      Running D:

        How can I tell when the process is finished, and what do I do next to ensure all my existing files are rehydrated?


        1. Get-DedupJob will show the current progress for each job. It’ll only run one at a time, which will show as Running; the others will just be queued.
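          If you want to wait on it, a simple poll loop works (a sketch; Get-DedupJob returns nothing once no jobs remain):

```powershell
# Poll every five minutes until no unoptimization jobs remain
while (Get-DedupJob -Type Unoptimization -ErrorAction SilentlyContinue) {
    Get-DedupJob | Format-Table Type, Progress, State, Volume
    Start-Sleep -Seconds 300
}
"All dedup jobs complete"
```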

  11. Hi Mark,
    I really need your help, otherwise I will get fired…
    I installed the dedup feature but didn’t configure it, and now when I delete anything from the volume the free space doesn’t increase.
    I can see the Chunk folder in System Volume Information, but what steps should I follow to reclaim the space?


    1. Was dedup enabled and running against the drive, but then disabled? I’d try running a garbage collection job against the affected drive, using the command Start-DedupJob -Type GarbageCollection -Volume E: -Full. Obviously change the drive letter to the correct one in your case.

  12. Because dedup jobs queue up, and unoptimization disables dedup, run;
    Enable-DedupVolume -Volume E:
    Start-DedupJob -Type GarbageCollection -Volume E: -Full
    Start-DedupJob -Type Unoptimization -Volume E:

    Then even if optimization jobs get scheduled later, they won’t run, because dedup will be disabled during the unoptimization run.

  13. I’ve got a 2016 server that was getting low on disk space; it had at the time 5TB total with 450GB free and dedup enabled. Dedup reported savings in the 400ish GB range; I forget the exact number but I think it was somewhere around 5%. I added 2+ TB to the disk, bringing it up to over 7TB. I turned off optimization, then kicked off the unoptimization process. I’m now below 900GB of free space; I had over 2TB before I kicked off the unoptimization process. This just doesn’t seem right.

    1. Make sure you run the garbage collection task, or let it run on its default schedule, which I think is the weekend. After that, I’d expect more space.

  14. Hello Mark,

    I want to start by saying that you are a saint for posting this walkthrough. I was hoping you could provide insight on my issue…

    I have a volume with deduplication enabled that I eventually want to disable. That being said, the volume is almost full simply from the deduped files. I have only 840GB available on an 8 TB volume. I’m afraid I won’t be able to rehydrate the data if I run Start-DedupJob -Type Unoptimization, which is fine. If I could delete all of it, I would do that. However, the data is in E:\System Volume Information\Dedup\ChunkStore\{Long-String}\Data. I can’t seem to gain access to the System Volume Information directory. How would I go about simply deleting this data?


    1. The data within E:\System Volume Information\Dedup\ChunkStore is the deduplicated data, essentially each block that is deduplicated is stored in there and files then reference these blocks, so if the same block of data is used by 10 files, the block is stored once in the chunkstore and then the 10 files all reference that for the block, rather than each file containing a copy of the block.

      So, long story short, do not delete what’s in that folder, that is your data.

      Your only options here sound like either adding more space if it’s a virtual disk, or removing some data if it’s not, and running a garbage collection job with Start-DedupJob -Volume "E:" -Type GarbageCollection -Full to see where that gets you in terms of space.

  15. Hello, I ran the command Start-DedupJob -Type Unoptimization -Volume E: but the progress does not leave 0%. Is there a way to speed up this process?

  16. We upgraded our file server from Server 2012 R2 to Server 2019 and then on to Server 2022, with Data Deduplication enabled throughout all server versions. We’re now experiencing sporadic disk access slowdowns on just this server (it’s a Hyper-V VM on a cluster shared volume, with no other servers exhibiting symptoms of access issues). I’m thinking something didn’t go quite right with the upgrade, and perhaps data deduplication is the cause of the access issues (PCs with a mapped drive to this server freezing, documents saying “not responding” for moments at a time fairly regularly throughout the day).

    Has anyone possibly come across this sort of issue? I’m thinking about removing the Data Dedup role and reinstalling, but any advice is greatly received!

    1. Hi Colin,

      Did you manage to fix this issue? I’m planning to do a Windows upgrade on the same sort of VM from 2012 R2 to 2019 with data deduplication enabled.

  17. Hi Mark
    We are running a ConfigMgr distribution point, and I forgot to exclude one folder from deduplication. Is there a way to do this without copying all the content to a folder that’s not deduped and then moving it back after I’ve excluded that folder?
    Regards, Ricky

  18. Hi Colin,

    did you manage to fix your problem?
    I’m planning to upgrade the server OS from 2012 R2 to 2019, and I have a large volume with deduplication enabled on it. What is the best practice for doing this upgrade?

    thank you

  19. Hi Mark,

    I have a 7TB StoreEasy server running the Server 2019 OS, with about 4TB of data. Since this server is live 24×7, what will happen if I turn deduplication off?

