Advantage #1: A hard drive containing multiple partitions allows you to *lower* your drive's effective access time, providing you with a more responsive system.
Advantage #2: The main reason people prefer a drive with multiple partitions over one large partition is that a separate system partition, containing only your operating system (Windows) and programs, lets you reformat that partition (should something go horribly wrong with Windows) and reinstall Windows without losing the rest of the data on the drive.
Advantage #3: A drive with multiple partitions allows you to defrag only those partitions that actually need defragging. This saves wear and tear on your drive, and may even help keep it from failing prematurely.
Advantage #4: [my favorite] Imaging: A drive with multiple partitions allows you to easily create & restore images using programs such as Norton Ghost or PowerQuest's Drive Image.
Advantage #5: Multi-boot. As mentioned earlier, if you want to dual- or multi-boot different operating systems, you *must* create separate partitions for each OS.
Advantage #6: Security. The vast majority of people install Windows to their C drive. Hackers know this and target the C drive. You are less likely to be hacked if Windows resides on a drive other than C. And you will need more than one partition to get a drive letter other than C.
Though, as implied earlier, some malware can and does write itself to every drive. However, that isn't all that common, and if you employ a dual boot, the second OS is likely to be uncompromised, so the two installs can cross-scan each other. I keep a secondary install, either on the same drive or another, for security and repair/rescue.
And now my Optimizing Tutorial, Cut and Paste 101:
To realize the full potential of your data storage, you need to understand not only the strengths and weaknesses of your own components, but also how your applications employ them. Where one application requires a certain kind of access to the HDD for best performance, another may do better with the opposite.
"The opposite of a correct statement is a false statement. But the opposite of a profound truth
may well be another profound truth." -- Niels Bohr
There are few "single" attributes that don't have trade-offs elsewhere; spindle speed (RPM) is one of the few that doesn't.
Optimizing through Partitioning
1st, Click Here. That is a representation of Zoned Bit Recording. Your partition order starts at the outer edge and works inward; for the purposes of this tutorial, we will say each color represents a partition (unless specifically described differently). The concentric circles represent tracks, and the sections within a track would be sectors.
Since the drive spins at a constant speed, there are some basic access attributes that apply as long as the head/arm doesn't jump partitions to access a different part of the disk.
Dark Blue: this area has the highest density of data passing under the head for the given speed, and thus the best sustained transfer rate (STR), so the largest files benefit from this placement.
Whereas the inner zone (Red section) has the fewest sectors, and since the rotation speed is fixed, less data passes under the head, so it has the lowest STR. So smaller files will do relatively well here, or files where the reduced transfer rate doesn't impact the application, like music or media that is just being read, not written in real time.
Also, file "density" itself can be an advantage with small files when you consider seeks and latency, in comparison to larger files in the outside zones. If they are truly smaller files, the number of files passing under the head can be comparable to the tracks further out: in the representation, the outer tracks have 16 sectors (in reality many, many more; it varies with the capacity of the HDD), while the Red zone is reduced to nine sectors. This placement can offset the actual seek time for a given file, since that many more files pass under the head in the same rotation.
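To put a rough number on the zone difference: STR scales with sectors per track when rotational speed is fixed. A minimal sketch, using the 16-sector outer zone and 9-sector inner zone from the illustration; the 7200 RPM spindle speed and 512-byte sectors are assumed example values, not from the diagram.

```python
# Sketch: sustained transfer rate (STR) scales with sectors per track
# at a fixed spindle speed. Sector counts (16 outer, 9 inner) match
# the illustration; 7200 RPM and 512-byte sectors are assumed examples.

RPM = 7200
SECTOR_BYTES = 512
revs_per_sec = RPM / 60  # 120 revolutions per second

def str_bytes_per_sec(sectors_per_track):
    """Bytes passing under the head per second on this track."""
    return sectors_per_track * SECTOR_BYTES * revs_per_sec

outer = str_bytes_per_sec(16)
inner = str_bytes_per_sec(9)

print(f"outer zone STR: {outer / 1024:.0f} KB/s")   # 960 KB/s
print(f"inner zone STR: {inner / 1024:.0f} KB/s")   # 540 KB/s
print(f"outer/inner ratio: {outer / inner:.2f}x")   # 16/9 = 1.78x
```

The absolute numbers are toy figures (real drives have far more sectors per track), but the 16/9 ratio between the zones is the point: same rotation, almost twice the data per second at the outer edge.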
Now, if each color band were a partition, any data on that partition would be confined to a much smaller area. So regardless of how fragmented it was, the arm and head only have to move through a few degrees of arc to seek it (the access latency). A larger partition (say, dark blue through green) might have part of a single file on the outside track (dark blue) and more of it located in toward the green: several degrees more the arm has to move to seek the track(s), and of course any miss adds the latency of waiting for the data to come around again.
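A toy model of that point: if average seek distance grows with the span of tracks the data occupies, confining the data to a narrow partition shrinks the average seek. The track counts below are invented for illustration, not real drive geometry.

```python
# Toy model: average seek distance between two random tracks in a span
# works out to roughly span/3, so confining data to a narrow partition
# shortens the average seek. Track counts are made-up example figures.

import random

def avg_seek_tracks(span_tracks, trials=100_000, seed=1):
    """Average |a - b| for two random track positions within the span."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        a = rng.randrange(span_tracks)
        b = rng.randrange(span_tracks)
        total += abs(a - b)
    return total / trials

narrow = avg_seek_tracks(2_000)    # data confined to one small partition
wide = avg_seek_tracks(10_000)     # same data spread over a wide partition

print(f"avg seek, narrow partition: {narrow:.0f} tracks")
print(f"avg seek, wide partition:   {wide:.0f} tracks")
```

This ignores that real seek time isn't linear in track distance (short seeks are dominated by settle time), but the direction of the effect is what the paragraph above describes.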
With a good defragmentation application like O&O Defrag Pro, this can be extended further. I have the option to defrag by NAME, so by placing applications or data in folders in alphabetical order, I can determine their placement within the partition.
If you picture those basic factors (seek, latency, sustained transfer rate) and match the type of file being accessed to them, you have optimized your disk. That also goes a long way toward explaining why keeping a HDD defragmented helps so much. And of course, partitions specialized to contain different types of data fragment differently: some barely at all, others (like P2P) a lot. But if they are contained, defragmenting them goes faster.
Fragmentation and Defragmentation
Big contiguous files that just need to be accessed once and transferred will do best on the outside edge (or as close to that as possible), which has the highest sustained transfer rate.
Swapfiles truly come into their own in a workstation where typically large files are being manipulated in real time (graphics), and they are generally located at the outside edge of the disk.
review Virtual Memory in XP
So far all my links have been to the PCGuide Hard Drive Section (it's the one used as StorageReview's reference section). But I'd also highly recommend you read As the Disk Spins @ LostCircuits, as it covers all of this much more comprehensively, with additional nuances (like queuing, command overhead, etc.).
Also review these basic terms of performance metrics:
Access Time = Command Overhead Time
+ Seek Time
+ Settle Time
+ Latency
@ the Storagereview FAQ
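Putting rough numbers on access time helps show where the milliseconds go. Rotational latency (on average half a revolution) is the component spindle speed controls; all the values below are illustrative examples for a desktop drive of this era, not measurements from the FAQ.

```python
# Access time = command overhead + seek + settle + rotational latency.
# Component values are illustrative examples for desktop drives of
# this era, not measured figures.

def rotational_latency_ms(rpm):
    """Average rotational latency: half a revolution, in milliseconds."""
    return 0.5 * 60_000 / rpm

command_overhead_ms = 0.5   # example value
seek_ms = 8.5               # example average seek
settle_ms = 0.1             # example settle time

for rpm in (5400, 7200, 10_000):
    access = (command_overhead_ms + seek_ms + settle_ms
              + rotational_latency_ms(rpm))
    print(f"{rpm:>6} RPM: latency {rotational_latency_ms(rpm):.2f} ms, "
          f"access {access:.2f} ms")
```

Note how going from 5400 to 10,000 RPM cuts latency nearly in half by itself, which is why spindle speed is one of the few attributes with no real trade-off on the performance side.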
All in all, the mantra "more RAM" still applies.
Optimizing Physical Configuration
Being on the same channel, there are a few considerations. IDE/ATA/ATAPI is sequential, meaning first one HDD reads a part of the file until its cache is full, then writes it to the second HDD, and that repeats, each device taking its own turn. And it's unlikely it is reading the file from a single location; it's probably fragmented. And when it is writing it, it is also writing to multiple locations, which introduces the latency and access times of both drives.
If you're going to be transferring a lot of data between two HDDs on a regular basis, it's best if they are on their own channels. Writing from a HDD to an optical drive is a lot better: the optical can only deal with a maximum of 33 MB/s burst (UDMA mode 2), whereas the HDD is probably at UDMA mode 5 with a 100 MB/s burst (50-30 MB/s sustained). In short, the sequential issues aren't enough to affect the burn speed with modern software (and reads aren't really an issue either); both together can't saturate the bus.
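A back-of-the-envelope check on why a burner on a shared channel usually keeps up. The interface figures are the ones quoted above; the 52x burn speed and the HDD's 40 MB/s sustained rate are assumed example values.

```python
# Rough check: even sharing a channel, a CD burner's data needs sit far
# below what the devices supply. 52x and the 40 MB/s sustained HDD rate
# are assumed examples; interface figures are the ones quoted above.

CD_1X_KBS = 150          # 1x CD speed in KB/s
burn_speed_x = 52
burn_rate_mbs = burn_speed_x * CD_1X_KBS / 1024   # MB/s the burner needs

optical_iface_mbs = 33   # UDMA mode 2 burst
hdd_iface_mbs = 100      # UDMA mode 5 burst
hdd_sustained_mbs = 40   # example sustained read rate

print(f"52x burn needs ~{burn_rate_mbs:.1f} MB/s")
print(f"optical interface supplies up to {optical_iface_mbs} MB/s")
print(f"HDD can sustain ~{hdd_sustained_mbs} MB/s")
print("burner starved?", burn_rate_mbs > min(optical_iface_mbs,
                                             hdd_sustained_mbs))
```

Even at 52x the burner only needs about 7.6 MB/s, a fraction of either device's rate, so taking turns on the channel leaves plenty of headroom.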
of course those are just interface speeds and are not the sole consideration of HDD performance
There is a myth about putting optical drives on the same channel as HDDs; it is just that, a myth, but it keeps getting reinforced by the way Windows deals with ATA/ATAPI issues. Basically, with Independent Device Timing, two devices (master/slave) both transfer their data at their own highest speed, but they both either have to be PIO (which is glacially slow) or UDMA; if one defaults to PIO because of some issue, Windows will default the other as well. There was a time when CD-ROMs were only PIO and HDDs were DMA; for that period of history you didn't want to share a channel, but modern opticals are UDMA mode 2, so there is rarely any issue.
Some of the reasons a device might default to PIO are covered in DMA Mode for ATA/ATAPI Devices in Windows XP.
However, if possible, it is ideal (for data integrity if nothing else) to have each device as a master on its own channel.
Whenever possible, consider from what source to what target large files are being transferred on a regular basis, and try to adapt your physical configuration to accommodate that.