btrfs fi show -d
(/dev/mapper/vg1000-lv)
syno_poweroff_task -d
(or: umount /volume1)
(or2: umount /volume1 -f -k)
Check to see if all is unmounted:
df -h
mdadm --stop /dev/vg1000/lv
btrfsck /dev/vg1000/lv
btrfs check --repair /dev/vg1000/lv
btrfs rescue super-recover -v /dev/vg1000/lv
vgchange -ay
e2fsck -nvf -C 0 /dev/vg1000/lv
fsck.ext4 -pvf -C 0 /dev/vg1000/lv
(or: e2fsck -pvf -C 0 /dev/vg1000/lv -C O)
(do not do this: -C fd)
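Once the checks/repairs are done, a minimal sketch for bringing the volume back online (assuming the default vg1000/lv layout used above; a reboot achieves the same result):
vgchange -ay                    # reactivate the LVM logical volume
mount /dev/vg1000/lv /volume1   # remount the data volume
btrfs scrub start /volume1      # optional: kick off a scrub in the background
btrfs scrub status /volume1     # check scrub progress/result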
Hi. Delete the 2 metadata files in the Plex directory and try again. It should work 👍. When it's finished, update the Plex library.
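A minimal sketch of that, assuming the Plex library lives under a Docker bind mount at /volume1/docker/plex and the container is named plex (both names are guesses; adjust to your setup):
docker stop plex                                              # stop Plex so its database files are not in use
dmesg | grep -i csum                                          # btrfs logs the paths of files with checksum errors
rm "/volume1/docker/plex/Library/<reported metadata file>"    # delete the two reported files (placeholder path)
docker start plex                                             # start Plex again and rescan the library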
Jeez, replying to my message via email made your response pretty hard to read in the thread. Please read and reply in the gist instead; replying from email does not work properly.
Coming back to your response, I think you haven’t read my post properly. See what I wrote:
Then I removed the 2 files that were corrupted (I don't care about them), just in case it was aborting the scrubbing because of them, as a kind reddit user told me it could be the case.
As you can see, I already did that, and it didn't work: when I repeated the scrubbing it failed the same way…
@eduarcor do you have snapshots enabled on those two files? Might it be that the snapshots are interfering?
I don't have any snapshots. I am not using snapshots.
With DSM 7.2.2-72806 Update 3:
# to unmount
synostgvolume --unmount -p /volume1
# to activate the logical volume (I read it somewhere...)
vgchange -ay
# to just check
btrfs check /dev/mapper/cachedev_0
# to clear the (v2) free-space cache before repairing
btrfs check --clear-space-cache v2 /dev/mapper/cachedev_0
# to repair and pray
btrfs check --repair /dev/mapper/cachedev_0
P.S.: some problems can't be repaired. In that case you should back up (before trying to repair), remove the volume, and start again.
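A minimal read-only follow-up, assuming /dev/mapper/cachedev_0 is the data device as above (device and mount point names may differ per system):
btrfs check --check-data-csum /dev/mapper/cachedev_0   # read-only pass that also verifies data checksums (slow, but changes nothing)
mount /dev/mapper/cachedev_0 /volume1                  # remount the volume afterwards, or simply reboot and let DSM do it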
Hello guys,
I hope you can help with a problem I am having with my NAS. I have the feeling it is related to this gist's topic.
TL;DR: When I try to do a manual data scrubbing, after several hours it aborts. I don't know the reason or how to solve it!
First, a little bit of context. I am running XPEnology with DSM 7.2.2 (latest version) on the SA6400 platform (i5-12400, ASRock H610M-ITX/ac board, 32GB DDR4) with Arc Loader 2.4.2. I have a RAID 6 with 8 x 8TB at 62% of capacity. I have been running XPEnology for many years with no problems, starting from a RAID 5 with 5 x 8TB; over the years I have had to replace faulty drives with new ones several times and rebuild the RAID, etc., always successfully.
Now, when I try to do a manual data scrubbing, after several hours it aborts.
The message in Notifications is:
But the volume health status is healthy!! No errors whatsoever... I ran SMART tests (quick): healthy status. Since I have 3 IronWolf disks, I also ran the IronWolf tests, with no errors either, all of them showing a healthy condition.
In Notifications, the system even indicated:
This happened while performing the data scrubbing: 2 files had errors, one a metadata database file inside a Plex Docker container, and the other an old video file.
As there was no other indication of why the data scrubbing aborted, I typed these commands over SSH:
It looks like it aborted after almost 4 hours and 13.32TiB of scrubbing (of a total of 25.8TiB used in the Volume).
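One way to pull the same scrub information per drive, which can help narrow an abort down to a single device (assuming /volume1 is the affected volume; paths are an example only):
btrfs scrub status -d /volume1   # per-device scrub statistics, so errors are attributed to individual drives
btrfs dev stats /volume1         # cumulative read/write/checksum error counters per device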
Because of the checksum errors, I ran a memtest. I have 2 x 16GB DDR4 memory. It found errors. I removed one of the sticks, kept the other, and ran memtest again. It didn't error out, so I now have just 16GB of RAM, but allegedly with no errors.
Then I removed the 2 files that were corrupted (I don't care about them), just in case it was aborting the scrubbing because of them, as a kind reddit user told me it could be the case.
And I ran data scrubbing again, getting exactly the same message in Notifications (DSM is so bad at this, not showing the cause). This time there are no messages at all about any checksum mismatch.
The results of the commands are pretty similar:
Before it ran for 3:50:45, and now 3:50:40, which is almost identical, nearly 4 hours.
Now it says 1 error, even though I deleted the 2 files, and it is not reporting any file checksum error in Notifications or the Log Center now.
I have no clue why it is aborting. I would expect the data scrubbing process to finish the whole volume and report any files with problems, if there are any.
I am very concerned because, in the case of a hard drive failure, the process of rebuilding the RAID 6 (I have 2-drive tolerance) performs a data scrub, and if I am not able to run the scrubbing, then I will lose the data.
It is curious, but the system is working flawlessly otherwise. I am not having any problems, except this data scrubbing not working right now.
I will be away from home until next week and will not be able to perform more tests for a week. But I just wanted to share this ASAP and try to get this working again, as I am freaking out, to be honest.
Thanks guys in advance.