@the-mikedavis
Last active November 6, 2024 03:15
`ddrescue` a directory

ddrfind - ddrescue on a directory

A non-recursive approach to using ddrescue on a directory.

Chances are, if your storage is corrupted, it's not just a few files; it's probably a whole disk. Want to save it? You can use ddrescue, but it only works on single files, not directories. If you want to rescue a whole big directory, you'll need to run it on each file one by one. That's exactly what this script does.
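
For a single file, a plain ddrescue call looks roughly like this (the paths are just placeholders; an optional third argument would be a mapfile):

  ddrescue /mnt/failing/photo.jpg rescued/photo.jpg

The script below just runs ddrescue like that for every file it finds under a directory.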

Installation

$ fish ddrfind.fish
function ddrfind --description 'ddrescue on a directory (using find)'
  if not test -d $argv[1]; or set -q $argv[2]
    echo "Usage: ddrfind <input-dir> <output-dir>"
    exit 1
  end

  set -l dest (pwd)"/"$argv[2]

  mkdir -p $dest

  cd $argv[1]
  for file in (find . -name "*")
    if test -d $file
      # make a new sub-directory
      mkdir -p $dest"/"$file
    else
      # rescue the file
      echo $dest"/"$file
      ddrescue $file $dest"/"$file > /dev/null
    end
  end
  cd -
end
funcsave ddrfind
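
Once the function is saved, call it with the directory to rescue and a name for the output directory, for example (placeholder paths):

$ ddrfind /mnt/failing-disk/photos rescued-photos

The output path is joined onto the current working directory ((pwd)"/"$argv[2]), so it is easiest to give it relative to wherever you run the command from; see the comments below for a related gotcha.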
@mik13ST commented Apr 29, 2023

I used this a bunch and I have some improvements:

  • Stops on error instead of ignoring it (e.g. file not seekable, mapfile not writable).
  • Shows progress (I like watching ddrescue at work).
  • Saves a mapfile for every file. If a file has many bad sectors and you decide you don't need it rescued, you can stop, delete that file from the input directory, and run the command again; files that were already rescued are skipped because their mapfiles exist.
    • Don't forget to delete the mapfiles when the rescue is done (see the one-liner after the function below).
  • Retries indefinitely. I am watching the process closely, so I will intervene if it gets stuck.
  • Preserves the last-modified date (useful for photos). For keeping owner and permissions see https://askubuntu.com/a/56810/543926 (a rough sketch follows this list).
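
A rough sketch of what carrying owner and permissions over could look like, placed next to the touch call (untested; assumes GNU coreutils and enough privileges for chown; not part of the diff below):

      # copy owner and mode from the source file (sketch only)
      chown --reference=$file $dest"/"$file; or return
      chmod --reference=$file $dest"/"$file; or return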

The changes made:

     else
       # rescue the file
       echo $dest"/"$file
-      ddrescue $file $dest"/"$file > /dev/null
+      ddrescue $file $dest"/"$file $dest"/"$file.mapfile -r -1; or return
+      # preserve last modified date
+      touch --reference=$file $dest"/"$file; or return
     end
   end
   cd -

The altered function:

function ddrfind --description 'ddrescue on a directory (using find)'
  if not test -d $argv[1]; or set -q $argv[2]
    echo "Usage: ddrfind <input-dir> <output-dir>"
    exit 1
  end

  set -l dest (pwd)"/"$argv[2]

  mkdir -p $dest

  cd $argv[1]
  for file in (find . -name "*")
    if test -d $file
      # make a new sub-directory
      mkdir -p $dest"/"$file
    else
      # rescue the file
      echo $dest"/"$file
      ddrescue $file $dest"/"$file $dest"/"$file.mapfile -r -1; or return
      # preserve last modified date
      touch --reference=$file $dest"/"$file; or return
    end
  end
  cd -
end

funcsave ddrfind
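
Once everything is rescued and checked, the leftover mapfiles can be removed with something like this (adjust the output directory to yours):

  find rescued-photos -name "*.mapfile" -delete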

@helli-42

Hey. This is exactly what I was looking for, but I ran into a problem. I've never used fish or ddrescue before...
I try to copy a folder from the failing drive to another disk using:
ddrfind '/media/[user]/xyz/! DL Save/2022-12/' /media/[user]/Backups/WD-Updates/

But every time I try it, the script keeps creating "media/[user]/Backups/WD-Updates/" as new folders inside the source folder (2022-12) instead of on the backup drive, and starts copying to the already damaged drive...

I know, I must be missing something pretty simple here, but I just can't figure out how to route the output to the new drive...

@mik13ST commented May 31, 2023

I also had these issues. There is something wrong with the paths in this script. I think you have to give the target path relative to the source directory, like ddrfind /media/failingdrive/important ../../gooddrive/rescued.
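
For what it's worth, one untested way around this would be to resolve the output directory to an absolute path before the function cds into the source, e.g. with realpath:

  # instead of: set -l dest (pwd)"/"$argv[2]
  mkdir -p $argv[2]; or return
  set -l dest (realpath $argv[2])

With that, both relative and absolute output paths should end up in the same place.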

@helli-42

Thank you so much, it works! I was trying to get this working for hours now before bothering someone else.
