Pro tip: Defragmenting only works on spinning drives because it puts the data nearer to the spindle so seek times are shorter. Solid-state drives wear out faster if you defragment them, since every write involves a little bit of damage.
I was about to throw hands, but then my pre-argument research taught me something new about how SSDs store data. My poor SSDs. I’ve been killing them.
No you didn’t. No reasonably current operating system defragments SSDs; they just run TRIM, and that doesn’t kill them.
Most modern OSes handle defragmentation on the fly, so you don’t really need to do it manually anymore.
Which makes me sad because I have so many memories of watching a disk defragmenter do its thing from my childhood.
Here’s a little game I made because I missed it too. https://dbeta.com/games/webdefragger/
That was super cool.
Thanks. It was a silly toy, but it scratched an itch, and was good for at least one chuckle.
It’s just Paint behind it, isn’t it?
I’m guessing you were making a joke, but the real answer is it is a Godot tile map.
I loved watching the disk defragmenter doing its job as a kid. I miss it too!
real actually. definitely one of the most memorable progress bars. well, that and the bios update progress bar
Random reads are still slower than sequential on an SSD. Try torrenting for a year on an SSD, then benchmark, defragment, and benchmark again; the difference will be very measurable. You may need a Linux filesystem like XFS, as I’m not sure there’s a way to defrag SSDs in Windows.
That’s because the drive was written to its limits; the defrag runs a TRIM command that safely releases and resets empty sectors. Random reads and sequential reads /on clean drives that are regularly TRIMmed/ are within random variance of each other.
Source: ran large scale data collection for a data centre when SSDs were relatively new to the company so focused a lot on it, plus lots of data from various sectors since.
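For anyone who wants to try that before/after comparison themselves, here’s a rough Python sketch of the kind of benchmark being described. It’s Linux-oriented (os.pread isn’t available on Windows), the test-file path and read counts are placeholders, and buffered reads mean the page cache can skew results unless the file is much larger than RAM:

```python
# Rough before/after benchmark sketch: time N_READS block-sized reads issued
# sequentially vs. at random offsets within the same file.
import os
import random
import time

TEST_FILE = "testfile.bin"   # placeholder: a multi-GB file on the drive under test
BLOCK = 4096                 # read size in bytes (4 KiB)
N_READS = 50_000

def bench(offsets):
    """Read BLOCK bytes at each offset and return the elapsed wall time."""
    fd = os.open(TEST_FILE, os.O_RDONLY)
    try:
        start = time.perf_counter()
        for off in offsets:
            os.pread(fd, BLOCK, off)
        return time.perf_counter() - start
    finally:
        os.close(fd)

blocks_in_file = os.path.getsize(TEST_FILE) // BLOCK
sequential = [i * BLOCK for i in range(N_READS)]
scattered = [random.randrange(blocks_in_file) * BLOCK for _ in range(N_READS)]

print(f"sequential: {bench(sequential):.2f}s")
print(f"random:     {bench(scattered):.2f}s")
```

Run it before and after a TRIM/defrag pass on the same file and compare the two numbers.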
Pro tip: That tip has been obsolete for a long time now. Running the defragmentation tool on an SSD in Windows optimizes the drive (pretty much just running TRIM). It’s not possible to defragment an SSD in Windows (maybe there is a way using some registry hack, but that’s out of scope).
Defragging is about… defragging: making the data contiguous (a continuous run of sectors on the platter) so the read head doesn’t have to jump around.
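If you want to kick off that TRIM pass by hand (which, per the above, is essentially what the Windows optimize step does on an SSD), a minimal sketch along these lines should do it. It just shells out to the stock OS tools (defrag’s /L retrim switch on Windows, fstrim on Linux), needs admin/root rights, and the drive letter and mount point are placeholders:

```python
# Sketch: trigger a manual TRIM/retrim using the operating system's own tools.
import platform
import subprocess

def retrim(target):
    if platform.system() == "Windows":
        # defrag's /L switch performs a retrim rather than an actual defrag
        subprocess.run(["defrag", target, "/L"], check=True)
    else:
        # fstrim discards unused blocks on a mounted filesystem (Linux)
        subprocess.run(["fstrim", "-v", target], check=True)

if __name__ == "__main__":
    retrim("C:" if platform.system() == "Windows" else "/")
```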
Well, defragging my SSD was the only thing that let me shrink the Windows partition safely when I dual-booted… though maybe that’s just Windows being funky.
You just don’t want to do it regularly. It was an issue for a brief time when SSDs were new, but modern operating systems are smart enough to exclude SSDs from scheduled defrags.
Defragging an SSD on a modern OS just runs a TRIM command. So probably when you wanted to shrink the windows partition, there was still a bunch of garbage data on the SSD that was “marked for deletion” but didn’t fully go through the entire delete cycle of the SSD.
So “Windows being funky” was just it making you do a “defragmentation” for the purpose of trimming, to prepare for partitioning. But I don’t really see why they don’t just do a TRIM inside the partition process instead of making you do it manually through defrag.
I used Defraggler after nothing else worked to allow diskmgmt to shrink it, including all the normal stuff like disabling page files, snapshots, etc. It showed me how it was reordering parts of the SSD.
That kinda makes sense. Putting all the partition sectors together would probably make it easier to resize. But as standard maintenance it’s like changing the oil on an electric car.
I see.