Thread: Partition Fun (dd & cp)

  1. #1
    Very good friend of the forum Virchanza
    Join Date
    Jan 2010
    Posts
    863

    Partition Fun (dd & cp)

    Defragmenting a partition

    I've been thinking about using the following method for defragmenting a partition:

    First let's say that the partition you wanna defragment is /dev/sda1.

    Step 1: Boot up a LiveCD of Linux.
    Step 2: Get an external USB drive.
    Step 3: Make a partition on the external USB drive, and give it the same filesystem as the partition you're gonna defragment (let's call the USB partition "/dev/sdb1"). This partition needs to be at least as big as the data that's stored on /dev/sda1.
    Step 4: Copy all the data across (with both partitions mounted): cp -a /mnt/sda1/. /mnt/sdb1
    Step 5: Delete everything off /dev/sda1: rm -r /mnt/sda1/* (or just use mke2fs, which also catches the dot-files that the * glob misses)
    Step 6: Copy all the stuff back: cp -a /mnt/sdb1/. /mnt/sda1

    (Note the trailing "/." on the cp source: without it, cp copies the directory itself into the target, so you'd end up with /mnt/sdb1/sda1 rather than the files at the top level. And -a, unlike -pr, also preserves symlinks and special files.)
    All done. I'm pretty sure this would work fine if you're defragmenting a Linux partition.
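
    Putting it all together, here's a minimal sketch of the whole procedure (the device names, mount points and the ext3 assumption are just the examples from above; double-check yours before running anything destructive):

    Code:
    mkdir -p /mnt/sda1 /mnt/sdb1
    mount /dev/sda1 /mnt/sda1        # the partition to defragment
    mount /dev/sdb1 /mnt/sdb1        # scratch partition on the USB drive
    cp -a /mnt/sda1/. /mnt/sdb1      # copy everything out, preserving attributes
    umount /mnt/sda1
    mke2fs -j /dev/sda1              # wipe and re-create the filesystem (assumes ext3)
    mount /dev/sda1 /mnt/sda1
    cp -a /mnt/sdb1/. /mnt/sda1      # copy everything back in one pass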

    I wonder however if you'd have trouble doing this with an NTFS partition containing MS-Windows? I think there's something special about MS-Windows partitions whereby the "io.sys" file has to be located at a certain sector or it won't boot up. Anyone know?

    Copying a partition

    I know "dd" is the handiest way of making a copy of a partition:

    Code:
    dd if=/dev/sda1 of=/dev/sda2
    The only downside of this is that dd doesn't understand filesystems. Say for instance a partition is 100 gigs in size but only contains 800 MB of data: dd will copy the entire 100 gigs instead of just the 800 MB of data.
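
    Since dd is going to read every block regardless, the one knob worth turning is the block size; the default is 512 bytes, which means a lot of tiny reads and writes. Something like this (4M is just a common choice, nothing special about it) usually speeds the raw copy up considerably:

    Code:
    dd if=/dev/sda1 of=/dev/sda2 bs=4M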

    Anyway, I've been looking at another way of copying a partition.

    First use df -h to find out how much data is actually stored on the partition you're gonna copy:

    Code:
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/sda1              52G   12G   40G  23% /mnt/sda1
    Create a new partition, /dev/sda2, make it about 13 gigs in size, and format it with the same filesystem type as the source partition.

    Next, copy the boot sector from /dev/sda1 to /dev/sda2:

    Code:
    dd if=/dev/sda1 of=/dev/sda2 bs=512 count=1
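
    To sanity-check that the boot sector landed, you can compare the first 512 bytes of the two partitions (this is just an extra check I'm adding here, not a required step):

    Code:
    cmp -n 512 /dev/sda1 /dev/sda2 && echo "first sector matches"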
    Next just copy all the data from the source to the target:

    Code:
    cp -a /mnt/sda1/. /mnt/sda2
    That should be it. I'm pretty sure this will work for a Linux partition, but again I'm not sure about an MS-Windows partition. Any input?
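
    Incidentally, for the NTFS case there's a filesystem-aware alternative worth a mention: ntfsclone from the ntfsprogs package copies only the blocks that are actually in use, which sidesteps both the dd-copies-everything problem and the boot-sector fiddling. I haven't verified that the result boots, so treat it as a pointer rather than a recipe:

    Code:
    ntfsclone --overwrite /dev/sda2 /dev/sda1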
    Ask questions on the open forums, that way everybody benefits from the solution, and everybody can be corrected when they make mistakes. Don't send me private messages asking questions that should be asked on the open forums, I won't respond. I decline all "Friend Requests".

  2. #2
    Super Moderator Archangel-Amael
    Join Date
    Jan 2010
    Location
    Somewhere
    Posts
    8,012

    My only question or response would be: why would you defrag a Linux partition, since it really isn't necessary? Furthermore, one could use fsck -a at a reboot to look at the filesystem and make any repairs that may need to be made.
    But rarely, if ever, would one need to defrag a Linux partition.
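
    For what it's worth, on sysvinit-style systems you can queue that check up for the next boot; most distros' init scripts honour a /forcefsck flag file, though check your own distro's scripts before relying on it:

    Code:
    touch /forcefsck   # the init scripts spot this file and force a fsck on the next boot
    reboot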
    To be successful here you should read all of the following.
    ForumRules
    ForumFAQ
    If you are new to Back|Track
    Back|Track Wiki
    Failure to do so will probably get your threads deleted or worse.

  3. #3
    Very good friend of the forum Virchanza
    Join Date
    Jan 2010
    Posts
    863

    I reckon it's possible for a Linux partition to become fairly fragmented if you're always copying around big files.

    For instance let's say you've got a movie collection or a massive music collection. Some of your movie files might be 2 gigs in size. If you're constantly sharing movies with friends, copying them back and forth, then it's possible that such big files could become fragmented because there isn't 2 gigs of contiguous space available on the partition.
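
    If you want to see whether a particular big file really has been split up, the filefrag tool from e2fsprogs reports how many extents it occupies (the path here is obviously just a made-up example):

    Code:
    filefrag -v /mnt/sda1/movies/some_movie.avi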

    I have to say it's been maybe a decade since I defragmented a drive, but it definitely made a noticeable performance difference back in the days of MS-Windows 95.
    Ask questions on the open forums, that way everybody benefits from the solution, and everybody can be corrected when they make mistakes. Don't send me private messages asking questions that should be asked on the open forums, I won't respond. I decline all "Friend Requests".

  4. #4
    Very good friend of the forum hhmatt
    Join Date
    Jan 2010
    Posts
    660

    I would try this out on a Windows partition first. Then reboot the machine into Windows and view the defragmenter to see if this in fact defragments the drive. It's very possible that even though you're rewriting the data back, the filesystem will fragment the files anyway.

    Wouldn't this also cause unnecessary stress on both drives?

  5. #5
    Super Moderator Archangel-Amael
    Join Date
    Jan 2010
    Location
    Somewhere
    Posts
    8,012

    Quote Originally Posted by Virchanza
    I reckon it's possible for a Linux partition to become fairly fragmented if you're always copying around big files.
    Anything is possible, however that in itself is subjective.
    The lack of fragmentation in Linux partitions is one of its strong suits.
    Take a look at any distro and you will notice there really is no "command" for defragging a drive; in addition you will find few if any tools available for this.
    geekblog has an excellent (although a little long) write-up on the subject.
    However, for the short and sweet version, see here. And like it says, hire the woman.

    I have to say it's been maybe a decade since I defragmented a drive, but it definitely made a noticeable performance difference back in the days of MS-Windows 95.
    We won't go into the fact that the OS was garbage but stick to the fact that it was notorious for not being able to write files to the disk for crap.
    But if you look at windows today you will notice that this has changed for the better in the newer OSes.
    To be successful here you should read all of the following.
    ForumRules
    ForumFAQ
    If you are new to Back|Track
    Back|Track Wiki
    Failure to do so will probably get your threads deleted or worse.

  6. #6
    Very good friend of the forum Gitsnik
    Join Date
    Jan 2010
    Location
    The Crystal Wind
    Posts
    851

    Note: I am aware of the threadmancy involved here.
    Quote Originally Posted by archangel.amael
    The lack of fragmentation in Linux partitions is one of its strong suits.
    The lack of it causing a problem is by far the stronger suit. Linux fragments just as easily as Windows (indeed it's easier, but it's relative to the hard drive and a whole bunch of other factors). The key difference is* that where MS made the decision to write data sequentially: (word)(word)(word)(excel)(excel), Linus and co made the smarter choice and wrote in blocks: (vi)(vi)(blank spaces)(emacs)(emacs).

    When you write more data to the word file than it was created for: (word)(word)(excel)(excel)(word), you can see the fragmentation. This is all obvious to anyone who knows their stuff, and I am being careful here not to call anyone in particular down - I'm just relating information (if you want a more detailed look at it, follow amael's links).

    The point to note here is that, if you think about it, it is entirely possible for the same fragmentation to occur on a Linux partition - eventually you are going to start creating fragmentation** - and slow your system down. It's harder to do, it takes a little longer. But it is still possible, and should not be touted as a "never need to do it"*** - Virchanza has gotten some good info going here. Want some proof perhaps? Take a look at the fsck output (probably without the -a for automatic repair) and it will show you fragmentation.

    My Red Hat 8.0 server was at 76% fragmentation the last time I checked. It's taken years to get to that point (as opposed to Windows' months or weeks), but you can still get there.
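
    (If anyone wants to check their own numbers, e2fsck can be run read-only so you're not risking a repair pass; the "non-contiguous" percentage in the summary line is the figure to look at. The device name is just an example:)

    Code:
    e2fsck -fn /dev/sda1   # -f forces a full check, -n answers "no" to every repair prompt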

    Ok, now that's out of the way:

    NTFS expects a lot of things to be in a lot of places, so trying to defragment it is dangerous and problematic, especially if you try it when the system is not booted. I have, on two occasions, even suffered a system failure because I defrag'd system drives for the "other" system in a dual-boot scenario.

    The two ways I defrag it out-of-system are either through Ghost or the free Mac software WinClone. Anything else I've come across - including every other third-party defragger - has killed the system.

    *I'm going from memory of writing FS drivers a long time ago, nothing recent
    **Point to note: This makes Linux's way of doing things a far better option for SS drives - no continual overwriting of the first n sectors should help reduce the burn out.
    ***Never need to do it should apply to Solid State and Flash drives. The access times are quick enough to be almost unnoticeable on anything outside of a SAN, and defragging the system is just one more write to the sectors it touches.
    Still not underestimating the power...

    There is no such thing as bad information - There is truth in the data, so you sift it all, even the crap stuff.

  7. #7
    Very good friend of the forum hhmatt
    Join Date
    Jan 2010
    Posts
    660

    I've read both links posted by archangel.amael now and have a very good understanding of the differences in how both OSes write files to the hard disk.
    In theory Linux should never need defragmenting because it doesn't separate files into parts. Hence the term file fragments, and of course defragmenting is the process of putting those fragments back into a whole.
    There are rare situations where file fragmentation can occur in Linux:

    1. The disk is near full and has insufficient free space to place the file in one piece.
    2. Sector/cluster/block corruption. In theory you've just lost your file(s), or at least part of them. This may or may not be pertinent; I don't completely understand how Linux, or even Windows for that matter, handles this situation. I will have to read more about this later.
    3. Files are spanned across multiple platters. This situation occurs when there IS sufficient free space for the file, although it will "run off" the end of the platter and continue on the next.

    With this in mind I don't see how you can even reach a 76% fragmentation rate unless your hard drive is severely corrupt.

    I'm posting the output I'm getting from fsck here. This is an ext3 Linux partition on a hard disk (not flash memory).

    Code:
    bt ~ # fsck /dev/hda1
    fsck 1.39 (29-May-2006)
    e2fsck 1.39 (29-May-2006)
    /dev/hda1 is mounted.
    WARNING!!! Running e2fsck on a mounted filesystem may cause SEVERE filesystem damage.
    Do you really want to continue (y/n)? yes
    /dev/hda1 was not cleanly unmounted, check forced.
    Pass 1: Checking inodes, blocks, and sizes
    Inode 4228798, i_blocks is 64, should be 8. Fix<y>? no
    Pass 3: Checking directory connectivity
    Pass 3A: Optimizing directories
    Pass 4: Checking reference counts
    Pass 5: Checking group summary information
    /dev/hda1: ***** FILE SYSTEM WAS MODIFIED *****
    /dev/hda1: ***** REBOOT LINUX *****
    /dev/hda1: *********** WARNING: Filesystem still has errors ***********
    /dev/hda1: 186674/9388032 files (0.1% non-contiguous), 1768113/18751863 blocks
    Are you referring to the non-contiguous statistics?

    I rebooted to ensure my system didn't suffer any of the severe filesystem damage fsck warned me about. Everything still seems to be running properly.

    I strongly believe that Linux does not need to be defragmented.
    I can see how it's possible that Windows can have "issues" being defragmented while it is not up and running. But these files should still be intact and properly indexed.

    That being said, I'm open-minded and would like to see some more drive statistics showing us some real numbers. I'll also be running some tests of my own.

  8. #8
    Very good friend of the forum Gitsnik
    Join Date
    Jan 2010
    Location
    The Crystal Wind
    Posts
    851

    Quote Originally Posted by hhmatt81
    Are you referring to the non-contiguous statistics?
    Code:
    8827 files, 92844 used, 160971 free (17995 frags, 23702 blocks, 75.9% fragmentation)
    This from a system that gets approximately 500,000 writes a day and is constantly filling up and running out of inodes.

    Never said it was easy, just said it was possible.
    Still not underestimating the power...

    There is no such thing as bad information - There is truth in the data, so you sift it all, even the crap stuff.

  9. #9
    Super Moderator Archangel-Amael
    Join Date
    Jan 2010
    Location
    Somewhere
    Posts
    8,012

    Quote Originally Posted by Gitsnik View Post
    Never said it was easy, just said it was possible.
    Yes, I would agree with this as well. But at the same time, the likelihood of that happening on a given server is not as high as one might be led to believe, especially if that server is running a *nix variant as opposed to MS, as you stated above. While I am all for the creative use of commands to do "something" in *nix, gratuitous read and write cycles to a HDD of either variety is obviously something one would want to minimize.
    As for a performance "gain" in the *nix world, that would be highly subjective. As anyone who has spent some time overclocking their computers knows, the bottleneck or slowest part of a given computer is generally those same read/write cycles of the HDD. So whatever gain might come from cleaning up some or moderate fragmentation is probably going to be swallowed by the speed of the drive itself. With SSDs the picture changes again: access times are fast enough that fragmentation barely matters, while defragmenting the drive may act against (if anything) the wear-leveling technology built into such drives. As you know, wear leveling is designed to "spread" data across the entire drive in an even distribution so that one part does not wear out faster than another, extending the whole drive's life.
    So I again am not sure that there would be a beneficial gain to actually doing such a thing as defragmenting a Linux drive.
    To be successful here you should read all of the following.
    ForumRules
    ForumFAQ
    If you are new to Back|Track
    Back|Track Wiki
    Failure to do so will probably get your threads deleted or worse.

  10. #10
    Member PeppersGhost
    Join Date
    Jan 2008
    Posts
    204

    Quote Originally Posted by Virchanza

    I wonder however if you'd have trouble doing this with an NTFS partition containing MS-Windows? I think there's something special about MS-Windows partitions whereby the "io.sys" file has to be located at a certain sector or it won't boot up. Anyone know?
    The OS boot record program searches for the boot loader first, and then Io.sys for Win 9x/Me and MS-DOS. For WinNT/2000/XP it is Ntldr, which can be found in the \i386 folder on the CD. Ntldr will then check the boot.ini file for multi-boot instructions. Of course the MBR is on its own, whereas the other system files are always on the root partition, which is bootable via the +s system attribute. I could not tell you what uncle Bill is using these days, but it is something to think about. Good work Virchanza, I like people who think outside of the box, that is how things get done.
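
    For reference, the multi-boot instructions Ntldr reads live in a plain-text boot.ini in the root of the system partition; a typical single-install XP entry looks something like this (the exact ARC path and description vary per machine):

    Code:
    [boot loader]
    timeout=30
    default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
    [operating systems]
    multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Professional" /fastdetect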
    <EeePc 1000HA BT4/W7 USB boot Alfa500 GPS BlueTooth>
