dar-support Mailing List for DAR - Disk ARchive
For full, incremental, compressed and encrypted backups or archives
Brought to you by: edrusb
Messages per month:

| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|------|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
| 2003 |     |     |     |     |     |     |     |     |     | 1   |     |     |
| 2004 |     | 21  | 37  | 8   | 23  | 13  | 41  | 12  | 58  | 13  | 34  | 17  |
| 2005 | 49  | 98  | 33  | 41  | 48  | 24  | 45  | 25  | 22  | 26  | 60  | 28  |
| 2006 | 63  | 45  | 29  | 44  | 19  | 8   | 32  | 36  | 24  | 61  | 84  | 93  |
| 2007 | 77  | 41  | 24  | 32  | 25  | 36  | 70  | 21  | 37  | 18  | 23  | 6   |
| 2008 | 9   | 13  | 8   | 4   |     | 4   | 21  | 4   | 8   | 29  | 24  | 16  |
| 2009 | 13  | 33  | 20  | 21  | 22  | 5   | 40  | 2   | 2   | 10  | 22  | 13  |
| 2010 | 2   | 9   | 13  | 15  | 26  | 3   | 10  | 7   | 5   | 21  | 4   | 17  |
| 2011 | 22  | 23  | 22  | 12  |     | 39  | 16  | 7   | 4   |     | 19  | 11  |
| 2012 | 101 | 5   | 18  | 9   | 3   | 27  | 17  | 19  | 4   | 30  | 12  | 23  |
| 2013 | 14  | 5   | 26  | 17  | 18  | 28  | 12  | 11  | 5   | 24  | 9   | 1   |
| 2014 | 29  | 19  | 4   | 9   | 2   | 1   |     | 11  | 10  | 3   | 25  | 6   |
| 2015 | 5   | 15  | 5   | 15  | 9   | 15  | 13  | 3   | 33  | 32  | 10  |     |
| 2016 | 11  | 24  | 4   | 41  | 7   | 28  | 17  | 4   | 4   | 3   | 9   | 24  |
| 2017 | 27  | 20  | 19  | 24  | 9   | 5   | 16  | 5   | 28  |     | 7   | 12  |
| 2018 | 4   | 10  | 11  | 2   |     |     | 25  | 5   | 29  | 11  | 6   | 16  |
| 2019 | 12  | 35  | 1   | 2   | 31  | 12  | 14  | 40  |     | 20  |     | 8   |
| 2020 | 37  | 34  |     | 6   | 24  | 7   | 13  |     |     |     | 6   |     |
| 2021 | 17  | 22  | 10  | 54  | 40  |     | 20  | 10  | 7   | 10  | 11  | 30  |
| 2022 | 11  | 9   |     | 7   | 22  | 19  | 8   | 6   | 7   | 5   | 11  |     |
| 2023 | 1   | 2   | 13  |     | 3   | 42  | 19  | 15  | 21  |     | 12  | 33  |
| 2024 | 4   | 4   |     | 20  | 4   | 2   |     |     | 3   |     |     | 3   |
| 2025 | 3   | 8   | 26  | 10  | 5   | 7   |     | 2   |     | 2   | 3   | 16  |
From: aardric <aa...@aa...> - 2025-12-24 23:38:31

Hail,

Is there a downloadable archive of all forum messages in some common format (mbox for example)? The forum contains much useful information but searching is so much easier within a local mail reader.

Rick
From: <og...@gm...> - 2025-12-24 20:53:49

I still have the problem, even with your suggestion to darrc and reading up on the Conditional Syntax. If I specify -J on the isolation command line the problem disappears. See the small test case below.

Merry Christmas and regards,
Ole G

sh-5.2$ cat minimal-darrc
all: -an -R /fs/ -K aes:secret
reference: -J aes:secret

sh-5.2$ dar -N -B minimal-darrc -g f/test -c test
Error reading EA for /fs : Error retrieving EA list for /fs : No such file or directory
Error reading EA for /fs/f/System Volume Information : Error retrieving EA list for /fs/f/System Volume Information : Permission denied
Cannot read directory contents: /fs/f/System Volume Information : Error opening directory: /fs/f/System Volume Information : Permission denied
--------------------------------------------
36 inode(s) saved
including 0 hard link(s) treated
0 inode(s) changed at the moment of the backup and could not be saved properly
0 byte(s) have been wasted in the archive to resave changing files
0 inode(s) with only metadata changed
0 inode(s) not saved (no inode/file change)
0 inode(s) failed to be saved (filesystem error)
22 inode(s) ignored (excluded by filters)
0 inode(s) recorded as deleted from reference backup
--------------------------------------------
Total number of inode(s) considered: 58
--------------------------------------------
EA saved for 29 inode(s)
FSA saved for 36 inode(s)
--------------------------------------------
# The above messages are normal for my backups

sh-5.2$ dar -N -B minimal-darrc -A test -C test-cat
Archive test requires a password:
Received signal: Interrupt
# I pressed ^C

-------- Forwarded Message --------
Subject: Re: [Dar-support] Unexpected password prompt when isolating an encrypted archive
Date: Tue, 23 Dec 2025 08:36:56 +0100
From: og...@gm...
To: Denis Corbin via Dar-support <dar...@li...>

Thanks, Denis. I see the point, especially the case where the catalog being isolated could need 0, 1 or 2 passwords...

Regards,
Ole G

On 22/12/2025 21.29, Denis Corbin via Dar-support wrote:
> I guess you make a confusion between -K and -J options
>
> -K is to be used for the object of the operation (the archive you
> create, the isolated catalog you create, the archive you test, list,
> extract data from...)
>
> -J option apply to the archive of reference (the one given to -A
> option), here the archive you take as reference to isolate a catalog.
>
> the same way '-$' option (pay attention to the quote if given on shell
> prompt), applies to the auxiliary archive of reference (see -@ option)
>
> note that -K is only mandatory if you want to cipher the archive (or
> isolated catalog, which is just a particular type of archive) about to
> be created.
>
> When reading an archive, if -K -J or '-$' is not specified and the
> corresponding archive is ciphered, dar will issue a prompt for you
> provide the password (this is what you got here without -J option).
>
> I would suggest setting the lennz-darrc file more or less that way:
>
> #-------
> all:
> -K aes:secret
>
> reference:
> -J aes:secret
> #-------
>
> I let you read the conditional syntax paragraph in dar man page for
> more information on that syntax and if you want to do more funny things:
> http://dar.linux.free.fr/doc/man/dar.html#CONDITIONAL%20SYNTAX
>
> Cheers,
> Denis
>
> On 22/12/2025 at 20:30, ogreg--- via Dar-support wrote:
>> Isolating the catalog from an encrypted archive:
>>
>> /dar -zzstd -N -B lennz-darrc -R /proc/cygdrive -C /proc/cygdrive/F/
>> darback/lennz-1-C-0 -A /proc/cygdrive/F/darback/lennz-1-F-0/
>> Archive lennz-1-F-0 requires a password:
>>
>> lennz-darrc contains the line:
>> -K aes:secret
>>
>> The previous steps dar -c, dar -t, and dar -d run with a similar
>> command line, but are able to used the needed information from
>> lennz-darrc.
From: <og...@gm...> - 2025-12-23 07:37:12

Thanks, Denis. I see the point, especially the case where the catalog being isolated could need 0, 1 or 2 passwords...

Regards,
Ole G

On 22/12/2025 21.29, Denis Corbin via Dar-support wrote:
> I guess you make a confusion between -K and -J options
>
> -K is to be used for the object of the operation (the archive you
> create, the isolated catalog you create, the archive you test, list,
> extract data from...)
>
> -J option apply to the archive of reference (the one given to -A
> option), here the archive you take as reference to isolate a catalog.
>
> the same way '-$' option (pay attention to the quote if given on shell
> prompt), applies to the auxiliary archive of reference (see -@ option)
>
> note that -K is only mandatory if you want to cipher the archive (or
> isolated catalog, which is just a particular type of archive) about to
> be created.
>
> When reading an archive, if -K -J or '-$' is not specified and the
> corresponding archive is ciphered, dar will issue a prompt for you
> provide the password (this is what you got here without -J option).
>
> I would suggest setting the lennz-darrc file more or less that way:
>
> #-------
> all:
> -K aes:secret
>
> reference:
> -J aes:secret
> #-------
>
> I let you read the conditional syntax paragraph in dar man page for
> more information on that syntax and if you want to do more funny things:
> http://dar.linux.free.fr/doc/man/dar.html#CONDITIONAL%20SYNTAX
>
> Cheers,
> Denis
>
> On 22/12/2025 at 20:30, ogreg--- via Dar-support wrote:
>> Isolating the catalog from an encrypted archive:
>>
>> /dar -zzstd -N -B lennz-darrc -R /proc/cygdrive -C /proc/cygdrive/F/
>> darback/lennz-1-C-0 -A /proc/cygdrive/F/darback/lennz-1-F-0/
>> Archive lennz-1-F-0 requires a password:
>>
>> lennz-darrc contains the line:
>> -K aes:secret
>>
>> The previous steps dar -c, dar -t, and dar -d run with a similar
>> command line, but are able to used the needed information from
>> lennz-darrc.
From: Denis C. <dar...@fr...> - 2025-12-22 20:30:06

I guess you are confusing the -K and -J options.

-K is to be used for the object of the operation (the archive you create, the isolated catalog you create, the archive you test, list, extract data from...).

The -J option applies to the archive of reference (the one given to the -A option), here the archive you take as reference to isolate a catalog.

The same way, the '-$' option (pay attention to the quotes if given at a shell prompt) applies to the auxiliary archive of reference (see the -@ option).

Note that -K is only mandatory if you want to cipher the archive (or isolated catalog, which is just a particular type of archive) about to be created.

When reading an archive, if -K, -J or '-$' is not specified and the corresponding archive is ciphered, dar will issue a prompt for you to provide the password (this is what you got here without the -J option).

I would suggest setting the lennz-darrc file more or less that way:

#-------
all:
-K aes:secret

reference:
-J aes:secret
#-------

I let you read the conditional syntax paragraph in the dar man page for more information on that syntax and if you want to do more funny things:
http://dar.linux.free.fr/doc/man/dar.html#CONDITIONAL%20SYNTAX

Cheers,
Denis

On 22/12/2025 at 20:30, ogreg--- via Dar-support wrote:
> Isolating the catalog from an encrypted archive:
>
> /dar -zzstd -N -B lennz-darrc -R /proc/cygdrive -C /proc/cygdrive/F/
> darback/lennz-1-C-0 -A /proc/cygdrive/F/darback/lennz-1-F-0/
> Archive lennz-1-F-0 requires a password:
>
> lennz-darrc contains the line:
> -K aes:secret
>
> The previous steps dar -c, dar -t, and dar -d run with a similar command
> line, but are able to used the needed information from lennz-darrc.
From: <og...@gm...> - 2025-12-22 19:44:03

Isolating the catalog from an encrypted archive:

/dar -zzstd -N -B lennz-darrc -R /proc/cygdrive -C /proc/cygdrive/F/darback/lennz-1-C-0 -A /proc/cygdrive/F/darback/lennz-1-F-0/
Archive lennz-1-F-0 requires a password:

lennz-darrc contains the line:
-K aes:secret

The previous steps dar -c, dar -t, and dar -d run with a similar command line, but are able to use the needed information from lennz-darrc.
From: Paul N. <ma...@pa...> - 2025-12-18 12:07:43

On Wed, 17 Dec 2025 17:42:54 +0100, Denis Corbin via Dar-support <dar...@li...> wrote:
> On 17/12/2025 at 13:43, Paul Neuwirth via Dar-support wrote:
>> On Tue, 16 Dec 2025 16:15:00 +0100
>> Denis Corbin via Dar-support <dar...@li...> wrote:
> [...]
>>> Can you try reading/testing the corrupted backup in sequential read
>>> mode (add --sequential-read option when testing the archive)?
>>
>> /backup # dar --sequential-read -t backup-roothomeFull
>> Final memory cleanup...
>> FATAL error, aborting operation: Error while reading in-place path
>> using tape marks//.localbackup-snapshots/localbackup-20251215-100708
>> is an not a valid path: Empty string as subdirectory does not make a
>> valid path
>
> OK, this is a known bug in 2.7.18 (you were using 2.7.12 here), the
> archive can be read normally in sequential-read mode only, using dar
> version >= 2.7.18.
>
> This bug was triggered by the double slash in the path.

OK thanks. I solved it by just normalizing paths before calling dar in my backup scripts. Good to know it was due to an upstream bug; no need to dig into the openSUSE patches.

> With a fixed release, you can repair the archive/backup to also be
> able to read in direct access mode (if only using sequential read
> mode, no need to repair):
>
> dar -y fixed_backup -A broken_backup [... other options if needed]
>
> refer to documentation for the available options in repair mode.

No need to repair, but good to know.

> for the dar side of the problem, no need to go further: this is a
> known bug fixed in release 2.7.18 (May 2025). You should upgrade your
> version to 2.7.19 or 2.8.2.

The latest packaged version for openSUSE and also SUSE SLE is 2.7.16; I would need to create a new dar project to fix/upgrade that. For now I know what the problem is/was and can work around it.

Thank you so much
Paul
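Paul's workaround above, normalizing paths before handing them to dar so that a double slash never reaches -R, can be sketched as a small POSIX shell helper. This is an illustrative sketch, not code from the thread; the function name and sed program are assumptions:

```shell
# Illustrative path normalizer (not from the thread): squeeze runs of
# slashes and drop a trailing slash, so "//foo/bar/" becomes "/foo/bar".
normalize_path() {
    printf '%s\n' "$1" | sed -e 's#//*#/#g' -e 's#\(.\)/$#\1#'
}

normalize_path "//.localbackup-snapshots/localbackup-20251215-100708"
# prints: /.localbackup-snapshots/localbackup-20251215-100708
```

A helper like this sidesteps the "Empty string as subdirectory" parsing bug on dar releases older than 2.7.18 and is harmless on fixed releases.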
From: Denis C. <dar...@fr...> - 2025-12-17 16:45:07

On 17/12/2025 at 15:28, Paul Neuwirth via Dar-support wrote:
> On Wed, 17 Dec 2025 13:43:25 +0100
> Paul Neuwirth via Dar-support <dar...@li...> wrote:
>
>> Thank you for the effort. based on your suggestions and the changes I
>> did I'll try these things:
>> -installed dar + backup to a local file system (btrfs)
> result unchanged
>> -installed dar + gzip instead of bzip2
> result unchanged
>> -dar_static with existing options and maybe the other, if those also
>> fail.
> copied
> https://dar.edrusb.org/dar.linux.free.fr/Releases/Dar_static/x86_64_GNU_Linux/dar_static_2.7.19.libssh1_x86_64_GNU_Linux
> to /usr/local/bin/dar_static
> backup (bzip9, archive location on NFS mount) ran fine. these "Failed
> resaving uncompressed the inode data" errors are also gone.
>> and I might take a look in the patches in those -bp... rpms if error
>> doesn't occur with the dar_static.
> here I ran into an issue on openSUSE's website software.opensuse.org,
> while those packages exist in official repositories (e.g.
> https://download.opensuse.org/distribution/leap/16.0/repo/oss/x86_64/dar-2.7.15-bp160.1.3.x86_64.rpm

But this would not solve the issue, you need release 2.7.18 at least for the bug fix.

> ), they aren't listed on https://software.opensuse.org/package/dar
> so I will need to download the src.rpm and examine the contained
> patches directly. Will do that tomorrow.
>
> Thank you
> Paul

Cheers,
Denis
From: Denis C. <dar...@fr...> - 2025-12-17 16:43:03

On 17/12/2025 at 13:43, Paul Neuwirth via Dar-support wrote:
> On Tue, 16 Dec 2025 16:15:00 +0100
> Denis Corbin via Dar-support <dar...@li...> wrote:
[...]
>>
>> This is more the filesystem where dar backups are stored that are of
>> interest here, which one it is?
>
> the destination is a NFSv4 mount, btrfs on the nfs server.

OK, no problem writing backups to NFSv4 either, I use it often.

[...]

>>
>> Can you try reading/testing the corrupted backup in sequential read
>> mode (add --sequential-read option when testing the archive)?
>
> /backup # dar --sequential-read -t backup-roothomeFull
> Final memory cleanup...
> FATAL error, aborting operation: Error while reading in-place path
> using tape marks//.localbackup-snapshots/localbackup-20251215-100708
> is an not a valid path: Empty string as subdirectory does not make a
> valid path

OK, this is a known bug (you were using 2.7.12 here), fixed in 2.7.18: the archive can be read normally, in sequential-read mode only, using dar version >= 2.7.18.

This bug was triggered by the double slash in the path.

With a fixed release, you can repair the archive/backup to also be able to read it in direct access mode (if only using sequential read mode, no need to repair):

dar -y fixed_backup -A broken_backup [... other options if needed]

Refer to the documentation for the available options in repair mode.

> interesting, that suggests something different?

I've tested in non sequential-read mode: this same bug leads to dar reporting a corrupted archive the same way you reported. This is the root cause of the problem you met.

>>> Do you have any idea? consulting search machines and chatGPT didn't
>>> result in anything useful.
>>
>> Well ChatGPT is good to make relations between existing information
>> and find the statistical most probable answer... but not to
>> investigate nor to solve problems that no one else ever met so far ;)
>>
>> My suggestions (ChatDenis's suggestions):
>>
>> 1/ Could you try reading in sequential read mode (add
>> --sequential-read option when testing the archive)?
>>
>> 2/ Could you use the official 2.7.19 dar_static version to build your
>> backup.
>> https://dar.edrusb.org/dar.linux.free.fr/Releases/Dar_static/
>>
>> We will see the next step with the result of these two suggestions.
>
> Thank you for the effort. based on your suggestions and the changes I
> did I'll try these things:
> -installed dar + backup to a local file system (btrfs)
> -installed dar + gzip instead of bzip2
> -dar_static with existing options and maybe the other, if those also
> fail.
> and I might take a look in the patches in those -bp... rpms if error
> doesn't occur with the dar_static.
>
> When starting those tests I noticed some flaws in my backup scripts I
> need to fix along. I will report as soon I have above results. Might
> take a day.

For the dar side of the problem, no need to go further: this is a known bug fixed in release 2.7.18 (May 2025). You should upgrade your version to 2.7.19 or 2.8.2.

> Best wishes
> Paul

Cheers,
Denis
From: Paul N. <ma...@pa...> - 2025-12-17 14:28:36

On Wed, 17 Dec 2025 13:43:25 +0100, Paul Neuwirth via Dar-support <dar...@li...> wrote:

> Thank you for the effort. based on your suggestions and the changes I
> did I'll try these things:
> -installed dar + backup to a local file system (btrfs)

result unchanged

> -installed dar + gzip instead of bzip2

result unchanged

> -dar_static with existing options and maybe the other, if those also
> fail.

copied https://dar.edrusb.org/dar.linux.free.fr/Releases/Dar_static/x86_64_GNU_Linux/dar_static_2.7.19.libssh1_x86_64_GNU_Linux to /usr/local/bin/dar_static

The backup (bzip9, archive location on NFS mount) ran fine. These "Failed resaving uncompressed the inode data" errors are also gone.

> and I might take a look in the patches in those -bp... rpms if error
> doesn't occur with the dar_static.

Here I ran into an issue on openSUSE's website software.opensuse.org: while those packages exist in official repositories (e.g. https://download.opensuse.org/distribution/leap/16.0/repo/oss/x86_64/dar-2.7.15-bp160.1.3.x86_64.rpm ), they aren't listed on https://software.opensuse.org/package/dar so I will need to download the src.rpm and examine the contained patches directly. Will do that tomorrow.

Thank you
Paul
From: Paul N. <ma...@pa...> - 2025-12-17 12:43:48

On Tue, 16 Dec 2025 16:15:00 +0100, Denis Corbin via Dar-support <dar...@li...> wrote:

>> exact command run:
>> dar -c /backup/backup-roothomeFull -@ /backup/backup-roothomeFull.cat
>> -R //.localbackup-snapshots/localbackup-20251215-100708 -zbzip2:9 -m 0
>> -v -D -N -w -Q --hash sha1 --user-comment '%h %d %c' --retry-on-change
>> 3 -g"root" -P "*/.Trash*" -P ".Trash*" -P "*/.mozilla/*/[Cc]ache" -P
>> "*/.opera/[Cc]ache*" -P "*/.pan/*/[Cc]ache" -P "*/.thumbnails" -P
>> "*/.beagle" -P "root/.opera/" -P "root/.cache/" -P
>> "root/clamav-quarantine/" -an -Z "*.dar" -Z "*.crypt" -Z "*.arj" -Z
>> "*.bz2" -Z "*.bz" -Z "*.Z" -Z "*.tgz" -Z "*.taz" -Z "*.cpio" -Z "*.deb"
>> -Z "*.gtar" -Z "*.gz" -Z "*.lzh" -Z "*.lhz" -Z "*.rar" -Z "*.rpm" -Z
>> "*.shar" -Z "*.sv4cpi" -Z "*.sv4crc" -Z "*.ustar" -Z "*.zoo" -Z "*.zip"
>> -Z "*.jar" -Z "*.jpg" -Z "*.gif" -Z "*.mpg" -Z "*.mpeg" -Z "*.avi" -Z
>> "*.ram" -Z "*.rm" -Z "*.7z" -Z "*.xz" -Z "*.lz" -Z "*.lzma" -Z "*.lz4"
>> -Z "*.zst" -Z "*.txz" -Z "*.tbz" -Z "*.tbz2" -Z "*.apk" -Z "*.war" -Z
>> "*.msi" -Z "*.cab" -Z "*.pkg" -Z "*.snap" -Z "*.jpeg" -Z "*.jpe" -Z
>> "*.png" -Z "*.webp" -Z "*.tif" -Z "*.tiff" -Z "*.mp3" -Z "*.ogg" -Z
>> "*.oga" -Z "*.opus" -Z "*.flac" -Z "*.m4a" -Z "*.aac" -Z "*.wma" -Z
>> "*.mp4" -Z "*.m4v" -Z "*.mkv" -Z "*.mov" -Z "*.wmv" -Z "*.flv" -Z
>> "*.webm" -Z "*.docx" -Z "*.xlsx" -Z "*.pptx" -Z "*.odt" -Z "*.ods"
>> -Z "*.odp" -ac
>>
>> (/.localbackup-snapshots/localbackup-20251215-100708/ in this case
>> is a btrfs snapshot created by my backup script of the root
>> filesystem)
>
> btrfs and snapshot is not a problem, they appear to dar as normal
> filesystem to backup.
>
> This is more the filesystem where dar backups are stored that are of
> interest here, which one it is?

the destination is a NFSv4 mount, btrfs on the nfs server.

>>
>> everything looks fine besides these "Failed resaving uncompressed
>> the inode data", but the created .dar files are unusable!?
>
> Then "Failed resaving uncompressed" are effectively weird:
> - when a compressed file takes more space than its uncompressed size,
> dar truncates the dar backup to the location where the backup for
> that file was starting and then restart the backup without
> compression that particular file.
>
> Either truncate() system call failed, or something in the lower
> libdar layers (caching layer, here there is no slicing so this is
> even simpler)
>
> Can you try reading/testing the corrupted backup in sequential read
> mode (add --sequential-read option when testing the archive)?

/backup # dar --sequential-read -t backup-roothomeFull
Final memory cleanup...
FATAL error, aborting operation: Error while reading in-place path using tape marks//.localbackup-snapshots/localbackup-20251215-100708 is an not a valid path: Empty string as subdirectory does not make a valid path

interesting, that suggests something different?

>>
>> Do you have any idea? consulting search machines and chatGPT didn't
>> result in anything useful.
>
> Well ChatGPT is good to make relations between existing information
> and find the statistical most probable answer... but not to
> investigate nor to solve problems that no one else ever met so far ;)
>
> My suggestions (ChatDenis's suggestions):
>
> 1/ Could you try reading in sequential read mode (add
> --sequential-read option when testing the archive)?
>
> 2/ Could you use the official 2.7.19 dar_static version to build your
> backup.
> https://dar.edrusb.org/dar.linux.free.fr/Releases/Dar_static/
>
> We will see the next step with the result of these two suggestions.

Thank you for the effort. Based on your suggestions and the changes I did, I'll try these things:
- installed dar + backup to a local file system (btrfs)
- installed dar + gzip instead of bzip2
- dar_static with the existing options, and maybe the other one if those also fail
- and I might take a look at the patches in those -bp... rpms if the error doesn't occur with the dar_static.

When starting those tests I noticed some flaws in my backup scripts I need to fix along the way. I will report as soon as I have the above results. Might take a day.

Best wishes
Paul
From: John G. <jgo...@co...> - 2025-12-16 16:06:56

On Tue, Dec 16 2025, Denis Corbin via Dar-support wrote:
> at current point of investigation I would not target at btrfs (as source of the
> backup). Compression and resaving uncompressed are the most probable cause of
> the problem, though this is something that exists for long and used by may
> users... but who knows.

For what it's worth, I've been using dar over btrfs on Debian for years and have never had a problem. That includes both backing up btrfs snapshots and storing archives on btrfs filesystems.

- John
From: Denis C. <dar...@fr...> - 2025-12-16 15:15:13

On 16/12/2025 at 09:05, Paul Neuwirth via Dar-support wrote:
> affected/installed dar versions: dar-2.7.12-bp156.1.4 and
> dar-2.7.15-bp160.1.3 from official openSUSE repositories
> openSUSE Leap 15.6 and openSUSE Leap 16.0
>
> Hello,

Hi,

> dar (on some of my backups, not all) creates corrupted archives:

First, thanks for your feedback!

I have checked the Changelog since 2.7.12 was released, in case a known bug had been fixed in that area, but could not find anything obviously relevant.

Version 2.7.12 is not very recent (October 2023; the latest are 2.7.19 and 2.8.2) but this is not a problem to keep using it for the investigations. However, what I wonder is what the "-bp156.1.4" suffix in the version name means in terms of patching applied by Suse...

> ls and dar -t output (dar -l same output)
> # ls -l backup-roothome*
> -rw------- 1 root root 30 15. Dez 10:14 backup-roothomeDataBase
> -rw------- 1 root root 28427612 15. Dez 10:14 backup-roothomeFull.1.dar
> -rw------- 1 root root 68 15. Dez 10:14 backup-roothomeFull.1.dar.sha1
> -rw------- 1 root root 40873 15. Dez 10:14 backup-roothomeFull.cat.1.dar
> -rw------- 1 root root 72 15. Dez 10:14 backup-roothomeFull.cat.1.dar.sha1
> (0)(0)xxx:/backup # dar -t backup-roothomeFull
> Final memory cleanup...
> FATAL error, aborting operation: Cannot open catalogue: incoherent catalogue structure
> (2)(0)xxx:/backup # dar -t backup-roothomeFull.cat
> Final memory cleanup...
> FATAL error, aborting operation: Cannot open catalogue: incoherent catalogue structure
>
> checksums are ok.

so this is not likely a file system issue nor a disk issue

> exact command run:
> dar -c /backup/backup-roothomeFull -@ /backup/backup-roothomeFull.cat
> -R //.localbackup-snapshots/localbackup-20251215-100708 -zbzip2:9 -m 0
> -v -D -N -w -Q --hash sha1 --user-comment '%h %d %c' --retry-on-change
> 3 -g"root" -P "*/.Trash*" -P ".Trash*" -P "*/.mozilla/*/[Cc]ache" -P
> "*/.opera/[Cc]ache*" -P "*/.pan/*/[Cc]ache" -P "*/.thumbnails" -P
> "*/.beagle" -P "root/.opera/" -P "root/.cache/" -P
> "root/clamav-quarantine/" -an -Z "*.dar" -Z "*.crypt" -Z "*.arj" -Z
> "*.bz2" -Z "*.bz" -Z "*.Z" -Z "*.tgz" -Z "*.taz" -Z "*.cpio" -Z "*.deb"
> -Z "*.gtar" -Z "*.gz" -Z "*.lzh" -Z "*.lhz" -Z "*.rar" -Z "*.rpm" -Z
> "*.shar" -Z "*.sv4cpi" -Z "*.sv4crc" -Z "*.ustar" -Z "*.zoo" -Z "*.zip"
> -Z "*.jar" -Z "*.jpg" -Z "*.gif" -Z "*.mpg" -Z "*.mpeg" -Z "*.avi" -Z
> "*.ram" -Z "*.rm" -Z "*.7z" -Z "*.xz" -Z "*.lz" -Z "*.lzma" -Z "*.lz4"
> -Z "*.zst" -Z "*.txz" -Z "*.tbz" -Z "*.tbz2" -Z "*.apk" -Z "*.war" -Z
> "*.msi" -Z "*.cab" -Z "*.pkg" -Z "*.snap" -Z "*.jpeg" -Z "*.jpe" -Z
> "*.png" -Z "*.webp" -Z "*.tif" -Z "*.tiff" -Z "*.mp3" -Z "*.ogg" -Z
> "*.oga" -Z "*.opus" -Z "*.flac" -Z "*.m4a" -Z "*.aac" -Z "*.wma" -Z
> "*.mp4" -Z "*.m4v" -Z "*.mkv" -Z "*.mov" -Z "*.wmv" -Z "*.flv" -Z
> "*.webm" -Z "*.docx" -Z "*.xlsx" -Z "*.pptx" -Z "*.odt" -Z "*.ods" -Z
> "*.odp" -ac
>
> (/.localbackup-snapshots/localbackup-20251215-100708/ in this case is
> a btrfs snapshot created by my backup script of the root filesystem)

btrfs and snapshots are not a problem, they appear to dar as a normal filesystem to back up.

This is more the filesystem where the dar backups are stored that is of interest here, which one is it?

> dar returned 0, suggesting success, its output:
> No user target found on command line
> The following user comment will be placed in clear text in the archive:
> theta Mon Dec 15 10:07:13 2025 "dar" "-c" "/backup/backup-roothomeFull"
> "-@" "/backup/backup-roothomeFull.cat" "-R"
> "//.localbackup-snapshots/localbackup-20251215-100708" "-zbzip2:9" "-m"
> "0" "-v" "-D" "-N" "-w" "-Q" "--hash" "sha1" "--user-comment" "%h %d
> %c" "--retry-on-change" "3" "-groot" "-P" "*/.Trash*" "-P" ".Trash*"
> "-P" "*/.mozilla/*/[Cc]ache" "-P" "*/.opera/[Cc]ache*" "-P"
> "*/.pan/*/[Cc]ache" "-P" "*/.thumbnails" "-P" "*/.beagle" "-P"
> "root/.opera/" "-P" "root/.cache/" "-P" "root/clamav-quarantine/" "-an"
> "-Z" "*.dar" "-Z" "*.crypt" "-Z" "*.arj" "-Z" "*.bz2" "-Z" "*.bz" "-Z"
> "*.Z" "-Z" "*.tgz" "-Z" "*.taz" "-Z" "*.cpio" "-Z" "*.deb" "-Z"
> "*.gtar" "-Z" "*.gz" "-Z" "*.lzh" "-Z" "*.lhz" "-Z" "*.rar" "-Z"
> "*.rpm" "-Z" "*.shar" "-Z" "*.sv4cpi" "-Z" "*.sv4crc" "-Z" "*.ustar"
> "-Z" "*.zoo" "-Z" "*.zip" "-Z" "*.jar" "-Z" "*.jpg" "-Z" "*.gif" "-Z"
> "*.mpg" "-Z" "*.mpeg" "-Z" "*.avi" "-Z" "*.ram" "-Z" "*.rm" "-Z" "*.7z"
> "-Z" "*.xz" "-Z" "*.lz" "-Z" "*.lzma" "-Z" "*.lz4" "-Z" "*.zst" "-Z"
> "*.txz" "-Z" "*.tbz" "-Z" "*.tbz2" "-Z" "*.apk" "-Z" "*.war" "-Z"
> "*.msi" "-Z" "*.cab" "-Z" "*.pkg" "-Z" "*.snap" "-Z" "*.jpeg" "-Z"
> "*.jpe" "-Z" "*.png" "-Z" "*.webp" "-Z" "*.tif" "-Z" "*.tiff" "-Z"
> "*.mp3" "-Z" "*.ogg" "-Z" "*.oga" "-Z" "*.opus" "-Z" "*.flac" "-Z"
> "*.m4a" "-Z" "*.aac" "-Z" "*.wma" "-Z" "*.mp4" "-Z" "*.m4v" "-Z"
> "*.mkv" "-Z" "*.mov" "-Z" "*.wmv" "-Z" "*.flv" "-Z" "*.webm" "-Z"
> "*.docx" "-Z" "*.xlsx" "-Z" "*.pptx" "-Z" "*.odt" "-Z" "*.ods" "-Z"
> "*.odp" "-ac"
> [...]
>
> everything looks fine besides these "Failed resaving uncompressed
> the inode data", but the created .dar files are unusable!?

Then the "Failed resaving uncompressed" messages are effectively weird:
- when a compressed file takes more space than its uncompressed size, dar truncates the dar backup to the location where the backup for that file was starting, and then restarts the backup of that particular file without compression.

Either the truncate() system call failed, or something in the lower libdar layers did (caching layer; there is no slicing here, so this is even simpler).

Can you try reading/testing the corrupted backup in sequential read mode (add the --sequential-read option when testing the archive)?

> I have the same problem on 2 different computers/servers with the above
> mentioned versions/distributions for two different backups (one machine
> backup /root/ false, on the other backup /etc/ fails (same Failure
> mode).

It is good that the problem is reproducible, it will be easier to understand and fix.

> Never had the issue before. I created a new backup script, which runs
> fine on an older install and dar-2.4.18-2.3. and deployed it after
> much testing to other machines. what essentially changed from before
> is compression (from gzip to bzip2) and the use of btrfs snapshots.

At the current point of investigation I would not target btrfs (as source of the backup). Compression and resaving uncompressed are the most probable cause of the problem, though this is something that has existed for long and is used by many users... but who knows.

> It happens both with -c and -A (differential to older, not-corrupted
> Full-backup)

Yes, this is not related to what you decide to back up but to how the selected data is stored/compressed on the filesystem.

> Do you have any idea? consulting search machines and chatGPT didn't
> result in anything useful.

Well, ChatGPT is good at making relations between existing information and finding the statistically most probable answer... but not at investigating nor solving problems that no one else has ever met so far ;)

My suggestions (ChatDenis's suggestions):

1/ Could you try reading in sequential read mode (add the --sequential-read option when testing the archive)?

2/ Could you use the official 2.7.19 dar_static version to build your backup:
https://dar.edrusb.org/dar.linux.free.fr/Releases/Dar_static/

We will see the next step with the result of these two suggestions.

> Thank you,
> Paul

Cheers,
Denis
From: Paul N. <ma...@pa...> - 2025-12-16 08:21:30

affected/installed dar versions: dar-2.7.12-bp156.1.4 and
dar-2.7.15-bp160.1.3 from official openSUSE repositories
openSUSE Leap 15.6 and openSUSE Leap 16.0
Hello,
dar (on some of my backups, not all) creates corrupted archives:
ls and dar -t output (dar -l same output)
# ls -l backup-roothome*
-rw------- 1 root root 30 15. Dez 10:14 backup-roothomeDataBase
-rw------- 1 root root 28427612 15. Dez 10:14 backup-roothomeFull.1.dar
-rw------- 1 root root 68 15. Dez 10:14 backup-roothomeFull.1.dar.sha1
-rw------- 1 root root 40873 15. Dez 10:14 backup-roothomeFull.cat.1.dar
-rw------- 1 root root 72 15. Dez 10:14 backup-roothomeFull.cat.1.dar.sha1
(0)(0)xxx:/backup # dar -t backup-roothomeFull
Final memory cleanup...
FATAL error, aborting operation: Cannot open catalogue: incoherent catalogue structure
(2)(0)xxx:/backup # dar -t backup-roothomeFull.cat
Final memory cleanup...
FATAL error, aborting operation: Cannot open catalogue: incoherent catalogue structure
checksums are ok.
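(These .sha1 companion files should be verifiable with sha1sum -c; a runnable sketch, where a throwaway file stands in for the real slice and the names are illustrative:)

```shell
# Sketch: verify a slice against its .sha1 companion file. demo.1.dar is
# a stand-in for a real slice such as backup-roothomeFull.1.dar.
dir=$(mktemp -d) && cd "$dir"
printf 'slice bytes' > demo.1.dar
sha1sum demo.1.dar > demo.1.dar.sha1   # same "<hex>  <name>" list format
sha1sum -c demo.1.dar.sha1             # prints "demo.1.dar: OK"
```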
exact command run:
dar -c /backup/backup-roothomeFull -@ /backup/backup-roothomeFull.cat
-R //.localbackup-snapshots/localbackup-20251215-100708 -zbzip2:9 -m 0
-v -D -N -w -Q --hash sha1 --user-comment '%h %d %c' --retry-on-change
3 -g"root" -P "*/.Trash*" -P ".Trash*" -P "*/.mozilla/*/[Cc]ache" -P
"*/.opera/[Cc]ache*" -P "*/.pan/*/[Cc]ache" -P "*/.thumbnails" -P
"*/.beagle" -P "root/.opera/" -P "root/.cache/" -P
"root/clamav-quarantine/" -an -Z "*.dar" -Z "*.crypt" -Z "*.arj" -Z
"*.bz2" -Z "*.bz" -Z "*.Z" -Z "*.tgz" -Z "*.taz" -Z "*.cpio" -Z "*.deb"
-Z "*.gtar" -Z "*.gz" -Z "*.lzh" -Z "*.lhz" -Z "*.rar" -Z "*.rpm" -Z
"*.shar" -Z "*.sv4cpi" -Z "*.sv4crc" -Z "*.ustar" -Z "*.zoo" -Z "*.zip"
-Z "*.jar" -Z "*.jpg" -Z "*.gif" -Z "*.mpg" -Z "*.mpeg" -Z "*.avi" -Z
"*.ram" -Z "*.rm" -Z "*.7z" -Z "*.xz" -Z "*.lz" -Z "*.lzma" -Z "*.lz4"
-Z "*.zst" -Z "*.txz" -Z "*.tbz" -Z "*.tbz2" -Z "*.apk" -Z "*.war" -Z
"*.msi" -Z "*.cab" -Z "*.pkg" -Z "*.snap" -Z "*.jpeg" -Z "*.jpe" -Z
"*.png" -Z "*.webp" -Z "*.tif" -Z "*.tiff" -Z "*.mp3" -Z "*.ogg" -Z
"*.oga" -Z "*.opus" -Z "*.flac" -Z "*.m4a" -Z "*.aac" -Z "*.wma" -Z
"*.mp4" -Z "*.m4v" -Z "*.mkv" -Z "*.mov" -Z "*.wmv" -Z "*.flv" -Z
"*.webm" -Z "*.docx" -Z "*.xlsx" -Z "*.pptx" -Z "*.odt" -Z "*.ods" -Z
"*.odp" -ac
(/.localbackup-snapshots/localbackup-20251215-100708/ in this case is
a btrfs snapshot created by my backup script of the root filesystem)
dar returned 0, suggesting success; its output:
No user target found on command line
The following user comment will be placed in clear text in the archive:
theta Mon Dec 15 10:07:13 2025 "dar" "-c" "/backup/backup-roothomeFull"
"-@" "/backup/backup-roothomeFull.cat" "-R"
"//.localbackup-snapshots/localbackup-20251215-100708" "-zbzip2:9" "-m"
"0" "-v" "-D" "-N" "-w" "-Q" "--hash" "sha1" "--user-comment" "%h %d
%c" "--retry-on-change" "3" "-groot" "-P" "*/.Trash*" "-P" ".Trash*"
"-P" "*/.mozilla/*/[Cc]ache" "-P" "*/.opera/[Cc]ache*" "-P"
"*/.pan/*/[Cc]ache" "-P" "*/.thumbnails" "-P" "*/.beagle" "-P"
"root/.opera/" "-P" "root/.cache/" "-P" "root/clamav-quarantine/" "-an"
"-Z" "*.dar" "-Z" "*.crypt" "-Z" "*.arj" "-Z" "*.bz2" "-Z" "*.bz" "-Z"
"*.Z" "-Z" "*.tgz" "-Z" "*.taz" "-Z" "*.cpio" "-Z" "*.deb" "-Z"
"*.gtar" "-Z" "*.gz" "-Z" "*.lzh" "-Z" "*.lhz" "-Z" "*.rar" "-Z"
"*.rpm" "-Z" "*.shar" "-Z" "*.sv4cpi" "-Z" "*.sv4crc" "-Z" "*.ustar"
"-Z" "*.zoo" "-Z" "*.zip" "-Z" "*.jar" "-Z" "*.jpg" "-Z" "*.gif" "-Z"
"*.mpg" "-Z" "*.mpeg" "-Z" "*.avi" "-Z" "*.ram" "-Z" "*.rm" "-Z" "*.7z"
"-Z" "*.xz" "-Z" "*.lz" "-Z" "*.lzma" "-Z" "*.lz4" "-Z" "*.zst" "-Z"
"*.txz" "-Z" "*.tbz" "-Z" "*.tbz2" "-Z" "*.apk" "-Z" "*.war" "-Z"
"*.msi" "-Z" "*.cab" "-Z" "*.pkg" "-Z" "*.snap" "-Z" "*.jpeg" "-Z"
"*.jpe" "-Z" "*.png" "-Z" "*.webp" "-Z" "*.tif" "-Z" "*.tiff" "-Z"
"*.mp3" "-Z" "*.ogg" "-Z" "*.oga" "-Z" "*.opus" "-Z" "*.flac" "-Z"
"*.m4a" "-Z" "*.aac" "-Z" "*.wma" "-Z" "*.mp4" "-Z" "*.m4v" "-Z"
"*.mkv" "-Z" "*.mov" "-Z" "*.wmv" "-Z" "*.flv" "-Z" "*.webm" "-Z"
"*.docx" "-Z" "*.xlsx" "-Z" "*.pptx" "-Z" "*.odt" "-Z" "*.ods" "-Z"
"*.odp" "-ac"
Creating low layer: Writing archive into a plain file object...
Adding a new layer on top: Caching layer for better performances...
Writing down the archive header...
Adding a new layer on top: Escape layer to allow sequential reading...
Adding a new layer on top: compression...
Adding a streamed compression layer
All layers have been created successfully
Building the catalog object...
Processing files for backup...
Adding folder to archive: //.localbackup-snapshots/localbackup-20251215-100708/root
Saving Filesystem Specific Attributes for //.localbackup-snapshots/localbackup-20251215-100708/root
Adding folder to archive:
//.localbackup-snapshots/localbackup-20251215-100708/root/.dbus
[...]
Adding file to archive: //.localbackup-snapshots/localbackup-20251215-100708/root/.gnupg/pubring.gpg
//.localbackup-snapshots/localbackup-20251215-100708/root/.gnupg/pubring.gpg : Failed resaving uncompressed the inode data
[this kind of error, which I never noticed before, repeats for a lot of files]
//.localbackup-snapshots/localbackup-20251215-100708/root/LOCK-backup : Failed resaving uncompressed the inode data
Saving Filesystem Specific Attributes for //.localbackup-snapshots/localbackup-20251215-100708/root/LOCK-backup
Writing down archive contents...
Closing the compression layer...
Closing the escape layer...
Writing down the first archive terminator...
Writing down archive trailer...
Writing down the second archive terminator...
Closing archive low layer...
Archive is closed.
--------------------------------------------
1497 inode(s) saved
including 0 hard link(s) treated
0 inode(s) changed at the moment of the backup and could not be saved properly
0 byte(s) have been wasted in the archive to resave changing files
0 inode(s) with only metadata changed
0 inode(s) not saved (no inode/file change)
0 inode(s) failed to be saved (filesystem error)
29 inode(s) ignored (excluded by filters)
0 inode(s) recorded as deleted from reference backup
--------------------------------------------
Total number of inode(s) considered: 1526
--------------------------------------------
EA saved for 0 inode(s)
FSA saved for 1362 inode(s)
--------------------------------------------
Making room in memory (releasing memory used by archive of reference)...
Now performing on-fly isolation...
Creating low layer: Writing archive into a plain file object...
Adding a new layer on top: Caching layer for better performances...
Writing down the archive header...
Adding a new layer on top: Escape layer to allow sequential reading...
Adding a new layer on top: compression...
Adding a streamed compression layer
All layers have been created successfully
Writing down archive contents...
Closing the compression layer...
Closing the escape layer...
Writing down the first archive terminator...
Writing down archive trailer...
Writing down the second archive terminator...
Closing archive low layer...
Archive is closed.
Final memory cleanup...
everything looks fine besides these "Failed resaving uncompressed
the inode data" messages, but the created .dar files are unusable!?
I have the same problem on 2 different computers/servers with the above
mentioned versions/distributions for two different backups (on one machine
the backup of /root/ fails, on the other the backup of /etc/ fails, with
the same failure mode).
Never had the issue before. I created a new backup script, which runs
fine on an older install with dar-2.4.18-2.3, and deployed it after
much testing to other machines. What essentially changed from before
is the compression (from gzip to bzip2) and the use of btrfs snapshots.
It happens both with -c and -A (differential to older, not-corrupted
Full-backup)
Do you have any idea? Consulting search engines and ChatGPT didn't
result in anything useful.
Thank you,
Paul
Further info: ldd and installed rpm versions on the newer install:
# ldd /usr/bin/dar
linux-vdso.so.1 (0x00007fedcb19c000)
libdar64.so.6000 => /lib64/libdar64.so.6000 (0x00007fedcae00000)
libstdc++.so.6 => /lib64/libstdc++.so.6 (0x00007fedcaa00000)
libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00007fedcb0c9000)
libc.so.6 => /lib64/libc.so.6 (0x00007fedca806000)
libargon2.so.1 => /lib64/libargon2.so.1 (0x00007fedcb0be000)
libthreadar.so.1000 => /lib64/libthreadar.so.1000 (0x00007fedcb0ae000)
libgpgme.so.11 => /lib64/libgpgme.so.11 (0x00007fedcada8000)
libcurl.so.4 => /lib64/libcurl.so.4 (0x00007fedcacd0000)
librsync.so.2 => /lib64/librsync.so.2 (0x00007fedcacc2000)
libgcrypt.so.20 => /lib64/glibc-hwcaps/x86-64-v3/libgcrypt.so.20.5.1 (0x00007fedca668000)
liblz4.so.1 => /lib64/glibc-hwcaps/x86-64-v3/liblz4.so.1.10.0 (0x00007fedcac9a000)
libzstd.so.1 => /lib64/glibc-hwcaps/x86-64-v3/libzstd.so.1.5.7 (0x00007fedca5b4000)
liblzma.so.5 => /lib64/glibc-hwcaps/x86-64-v3/liblzma.so.5.8.1 (0x00007fedca57a000)
liblzo2.so.2 => /lib64/glibc-hwcaps/x86-64-v3/liblzo2.so.2.0.0 (0x00007fedcac77000)
libbz2.so.1 => /lib64/glibc-hwcaps/x86-64-v3/libbz2.so.1.0.6 (0x00007fedca561000)
libz.so.1 => /lib64/glibc-hwcaps/x86-64-v3/libz.so.1.2.13 (0x00007fedca547000)
libm.so.6 => /lib64/libm.so.6 (0x00007fedca45f000)
/lib64/ld-linux-x86-64.so.2 (0x00007fedcb19e000)
libassuan.so.9 => /lib64/libassuan.so.9 (0x00007fedca44a000)
libgpg-error.so.0 => /lib64/libgpg-error.so.0 (0x00007fedca41e000)
libnghttp2.so.14 => /lib64/libnghttp2.so.14 (0x00007fedca3f2000)
libidn2.so.0 => /lib64/libidn2.so.0 (0x00007fedca3d0000)
libssh.so.4 => /lib64/libssh.so.4 (0x00007fedca352000)
libpsl.so.5 => /lib64/libpsl.so.5 (0x00007fedca33e000)
libssl.so.3 => /lib64/glibc-hwcaps/x86-64-v3/libssl.so.3.5.0 (0x00007fedca21d000)
libcrypto.so.3 => /lib64/glibc-hwcaps/x86-64-v3/libcrypto.so.3.5.0 (0x00007fedc9a00000)
libgssapi_krb5.so.2 => /lib64/libgssapi_krb5.so.2 (0x00007fedca1c9000)
libldap.so.2 => /lib64/libldap.so.2 (0x00007fedca166000)
liblber.so.2 => /lib64/liblber.so.2 (0x00007fedca155000)
libbrotlidec.so.1 => /lib64/glibc-hwcaps/x86-64-v3/libbrotlidec.so.1.1.0 (0x00007fedca147000)
libjitterentropy.so.3 => /lib64/libjitterentropy.so.3 (0x00007fedca13c000)
libunistring.so.5 => /lib64/libunistring.so.5 (0x00007fedc9819000)
libkrb5.so.3 => /lib64/libkrb5.so.3 (0x00007fedca06d000)
libk5crypto.so.3 => /lib64/libk5crypto.so.3 (0x00007fedc9801000)
libcom_err.so.2 => /lib64/libcom_err.so.2 (0x00007fedcac6b000)
libkrb5support.so.0 => /lib64/libkrb5support.so.0 (0x00007fedca05d000)
libsasl2.so.3 => /lib64/libsasl2.so.3 (0x00007fedc97e2000)
libbrotlicommon.so.1 => /lib64/glibc-hwcaps/x86-64-v3/libbrotlicommon.so.1.1.0 (0x00007fedc97bf000)
libkeyutils.so.1 => /lib64/libkeyutils.so.1 (0x00007fedc97b8000)
libresolv.so.2 => /lib64/libresolv.so.2 (0x00007fedc97a7000)
libselinux.so.1 => /lib64/libselinux.so.1 (0x00007fedc9775000)
libpcre2-8.so.0 => /lib64/libpcre2-8.so.0 (0x00007fedc96bd000)
# for i in $(ldd /usr/bin/dar | awk '{ print ($3 ? $3 : $1) }' | sort ); do echo -e "$(rpm -qf "${i}")\t${i}"; done
libbrotlicommon1-x86-64-v3-1.1.0-160000.2.2.x86_64 /lib64/glibc-hwcaps/x86-64-v3/libbrotlicommon.so.1.1.0
libbrotlidec1-x86-64-v3-1.1.0-160000.2.2.x86_64 /lib64/glibc-hwcaps/x86-64-v3/libbrotlidec.so.1.1.0
libbz2-1-x86-64-v3-1.0.8-160000.2.2.x86_64 /lib64/glibc-hwcaps/x86-64-v3/libbz2.so.1.0.6
libopenssl3-x86-64-v3-3.5.0-160000.3.2.x86_64 /lib64/glibc-hwcaps/x86-64-v3/libcrypto.so.3.5.0
libgcrypt20-x86-64-v3-1.11.1-160000.2.2.x86_64 /lib64/glibc-hwcaps/x86-64-v3/libgcrypt.so.20.5.1
liblz4-1-x86-64-v3-1.10.0-160000.2.2.x86_64 /lib64/glibc-hwcaps/x86-64-v3/liblz4.so.1.10.0
liblzma5-x86-64-v3-5.8.1-160000.2.2.x86_64 /lib64/glibc-hwcaps/x86-64-v3/liblzma.so.5.8.1
liblzo2-2-x86-64-v3-2.10-160000.3.2.x86_64 /lib64/glibc-hwcaps/x86-64-v3/liblzo2.so.2.0.0
libopenssl3-x86-64-v3-3.5.0-160000.3.2.x86_64 /lib64/glibc-hwcaps/x86-64-v3/libssl.so.3.5.0
libz1-x86-64-v3-1.2.13-160000.2.2.x86_64 /lib64/glibc-hwcaps/x86-64-v3/libz.so.1.2.13
libzstd1-x86-64-v3-1.5.7-160000.2.2.x86_64 /lib64/glibc-hwcaps/x86-64-v3/libzstd.so.1.5.7
glibc-2.40-160000.2.2.x86_64 /lib64/ld-linux-x86-64.so.2
libargon2-1-20190702-160000.2.2.x86_64 /lib64/libargon2.so.1
libassuan9-3.0.2-160000.2.2.x86_64 /lib64/libassuan.so.9
libcom_err2-1.47.0-160000.3.2.x86_64 /lib64/libcom_err.so.2
glibc-2.40-160000.2.2.x86_64 /lib64/libc.so.6
libcurl4-8.14.1-160000.3.1.x86_64 /lib64/libcurl.so.4
libdar64-6000-2.7.15-bp160.1.3.x86_64 /lib64/libdar64.so.6000
libgcc_s1-15.1.1+git9973-160000.2.2.x86_64 /lib64/libgcc_s.so.1
libgpg-error0-1.54-160000.2.2.x86_64 /lib64/libgpg-error.so.0
libgpgme11-1.24.3-160000.3.1.x86_64 /lib64/libgpgme.so.11
krb5-1.21.3-160000.2.2.x86_64 /lib64/libgssapi_krb5.so.2
libidn2-0-2.3.8-160000.2.2.x86_64 /lib64/libidn2.so.0
libjitterentropy3-3.6.3-160000.2.2.x86_64 /lib64/libjitterentropy.so.3
krb5-1.21.3-160000.2.2.x86_64 /lib64/libk5crypto.so.3
libkeyutils1-1.6.3-160000.3.2.x86_64 /lib64/libkeyutils.so.1
krb5-1.21.3-160000.2.2.x86_64 /lib64/libkrb5.so.3
krb5-1.21.3-160000.2.2.x86_64 /lib64/libkrb5support.so.0
libldap-2-2.6.10+10-160000.2.2.x86_64 /lib64/liblber.so.2
libldap-2-2.6.10+10-160000.2.2.x86_64 /lib64/libldap.so.2
glibc-2.40-160000.2.2.x86_64 /lib64/libm.so.6
libnghttp2-14-1.64.0-160000.2.2.x86_64 /lib64/libnghttp2.so.14
libpcre2-8-0-10.45-160000.2.2.x86_64 /lib64/libpcre2-8.so.0
libpsl5-0.21.5-160000.3.2.x86_64 /lib64/libpsl.so.5
glibc-2.40-160000.2.2.x86_64 /lib64/libresolv.so.2
librsync2-2.3.4-160000.3.2.x86_64 /lib64/librsync.so.2
libsasl2-3-2.1.28-160000.3.1.x86_64 /lib64/libsasl2.so.3
libselinux1-3.8.1-160000.2.2.x86_64 /lib64/libselinux.so.1
libssh4-0.11.2-160000.2.2.x86_64 /lib64/libssh.so.4
libstdc++6-15.1.1+git9973-160000.2.2.x86_64 /lib64/libstdc++.so.6
libthreadar1000-1.5.0-bp160.1.3.x86_64 /lib64/libthreadar.so.1000
libunistring5-1.3-160000.3.2.x86_64 /lib64/libunistring.so.5
error: file /root/linux-vdso.so.1: No such file or directory
linux-vdso.so.1
|
|
From: John G. <jgo...@co...> - 2025-12-11 13:14:33
|
On Sat, Nov 29 2025, Denis Corbin via Dar-support wrote: > this now is fixed in git (branch_2.8.x) and also available as release candidate > 1 package for version 2.8.2 which you can grab from here: > https://dar.edrusb.org/dar.linux.free.fr/Interim_releases/ Thank you very much for the rapid feature add! I appreciate it. - John |
|
From: Denis C. <dar...@fr...> - 2025-12-06 22:51:01
|
And feel free to send me your feedback by direct email, as some already did. This will avoid increasing this mailing-list activity, which might annoy those not interested in this project. Thanks On 06/12/2025 at 18:36, Denis Corbin via Dar-support wrote: > Dear dar/libdar users, > > I'm looking for use cases over the original target dar was initially > built for (home backup on CD/DVD). I remember some recently mentioned > HPC (High Performance Computing) use case, some others I think, but > that's quite long ago now, were using dar for archiving large volume of > telescope pictures...(?) > > My objective is to add a page on dar's home page listing original, > unexpected uses of dar/libdar. This may be directly using dar or through > a frontend/external application (SaraB/Baras, gdar, dar-backup...) > or from your own code in C/C++, Python or even now D language. > > If you're interested in that project, you could for example give the > following infos: > > - company/username/or stay anonymous if you prefer > - at what time this use case started (just the year is fine) and > eventually ended > - use case description > - typical size created for backups/archives > - media used (disk/tape/cloud...) > - key dar/libdar features/differentiators that lead you choosing dar for > this use case > - everything else that does not fit in the previous items > ---- > I will then make a 7 columns table of this info, if there enough entry > for it makes sense, of course. > > I have no objection to make advertisement to private or public > organization in return, thus you are welcome to add an URL/link to your > organization or home page. > > Cheers, > Denis > |
|
From: Denis C. <dar...@fr...> - 2025-12-06 17:36:47
|
Dear dar/libdar users, I'm looking for use cases beyond the original target dar was initially built for (home backup on CD/DVD). I remember some recently mentioned an HPC (High Performance Computing) use case, and some others, I think, though that's quite long ago now, were using dar for archiving large volumes of telescope pictures...(?) My objective is to add a page on dar's home page listing original, unexpected uses of dar/libdar. This may be directly using dar or through a frontend/external application (SaraB/Baras, gdar, dar-backup...) or from your own code in C/C++, Python or even now the D language. If you're interested in that project, you could for example give the following info: - company/username/or stay anonymous if you prefer - at what time this use case started (just the year is fine) and eventually ended - use case description - typical size created for backups/archives - media used (disk/tape/cloud...) - key dar/libdar features/differentiators that led you to choose dar for this use case - everything else that does not fit in the previous items ---- I will then make a 7-column table of this info, if there are enough entries for it to make sense, of course. I have no objection to making advertisements for private or public organizations in return, thus you are welcome to add an URL/link to your organization or home page. Cheers, Denis |
|
From: Denis C. <dar...@fr...> - 2025-11-29 14:52:54
|
On 28/11/2025 at 21:13, Denis Corbin via Dar-support wrote: > On 28/11/2025 at 15:52, John Goerzen via Dar-support wrote: [...] >> I noticed that dar --diff will notice if a file present in the catalog >> is missing from the filesystem, but will not notice if there is a file >> in the filesystem that was missing from the archive. > > I've double checked and you are right, I will see how to fix that. This is now fixed in git (branch_2.8.x) and also available as a release candidate 1 package for version 2.8.2, which you can grab from here: https://dar.edrusb.org/dar.linux.free.fr/Interim_releases/ [...] >> Could diff be enhanced to do it? > I will have a look at how to enhance that, but this is more complicated > because actually the process is to scan the catalog for each entry it > has and check whether there is a corresponding entry in the filesystem. [...] Feature request added for the next major release. >> >> Thanks! >> >> John >> > > Cheers, > Denis > |
|
From: Denis C. <dar...@fr...> - 2025-11-28 20:13:37
|
On 28/11/2025 at 15:52, John Goerzen via Dar-support wrote: > Hi, Hi John, > > I'm looking into how to verify that a filesystem matches the original > source material, especially when the target is the result of applying > differential atop differential maybe every day for years. > > Previously I had been using mtree to do this, but it occurred to me that > the dar catalog for each differential actually captures the full state > of the filesystem. > > I noticed that dar --diff will notice if a file present in the catalog > is missing from the filesystem, but will not notice if there is a file > in the filesystem that was missing from the archive. I've double checked and you are right, I will see how to fix that. > > The notes hinted at doing a differential in dry-run mode, but even when > run with -v, it gives the opposite: files that were added to the > filesystem but not ones that were omitted. > > It would be great if --diff could notice both. > > I've found a workaround in: > > dar -v -c - --on-fly-isolate diff --ref t2 -R directory > /dev/null > > Then I can do: > > dar -as -l diff > > which is approximately what I'm after. This has the inefficiency that > it will try to read every modified file, which is strictly unnecessary > for this operation, but does seem to work. Yes, that's correct. Note also that the dry-run method reads the data as if it was performing a normal backup; it is only the last stage (the lower archive layer) that is replaced by a "null_file" object instead of a plain file (fichier object). So your approach is no worse than the dry-run differential backup, assuming I fix the missing display of the files recorded as removed since the backup of reference was made. > > Is there a better way to do this? Today, in the short term/current release, I do not see a better solution. > Could diff be enhanced to do it? 
I will have a look at how to enhance that, but this is more complicated because currently the process is to scan the catalog for each entry it has and check whether there is a corresponding entry in the filesystem. To detect newly added files, libdar would have to scan the other way: check each entry present in the filesystem and compare it to what's available in the catalog (which then does not see removed files). Well, it should be possible to leverage some directory caching used in libdar and perform this reverse check at the end of each directory scan, to avoid reading the metadata from the filesystem again... this is to be confirmed and will take place, if possible, in the next major release (2.9.0). > > Thanks! > > John > Cheers, Denis |
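The reverse check described above (filesystem entries absent from the catalog) can be approximated today outside of dar by diffing a sorted path list extracted from the catalog (e.g. parsed from `dar -l` output) against a `find` listing of the filesystem. A runnable sketch with made-up paths and scratch files:

```shell
# Sketch: report files present on the filesystem but absent from the
# backup catalog. The tree and list contents are illustrative.
tree=$(mktemp -d)
touch "$tree/a" "$tree/b" "$tree/new"
printf 'a\nb\n' > /tmp/catalog.lst                        # paths the backup recorded
(cd "$tree" && find . -type f | sed 's|^\./||' | sort) > /tmp/fs.lst
comm -13 /tmp/catalog.lst /tmp/fs.lst                     # only-in-filesystem entries
```

Here `comm -13` suppresses lines unique to the catalog and lines common to both, leaving only paths that exist on disk but were never archived; both inputs must be sorted.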
|
From: John G. <jgo...@co...> - 2025-11-28 15:12:47
|
Hi, I'm looking into how to verify that a filesystem matches the original source material, especially when the target is the result of applying differential atop differential maybe every day for years. Previously I had been using mtree to do this, but it occurred to me that the dar catalog for each differential actually captures the full state of the filesystem. I noticed that dar --diff will notice if a file present in the catalog is missing from the filesystem, but will not notice if there is a file in the filesystem that was missing from the archive. The notes hinted at doing a differential in dry-run mode, but even when run with -v, it gives the opposite: files that were added to the filesystem but not ones that were omitted. It would be great if --diff could notice both. I've found a workaround in: dar -v -c - --on-fly-isolate diff --ref t2 -R directory > /dev/null Then I can do: dar -as -l diff which is approximately what I'm after. This has the inefficiency that it will try to read every modified file, which is strictly unnecessary for this operation, but does seem to work. Is there a better way to do this? Could diff be enhanced to do it? Thanks! John |
|
From: Denis C. <dar...@fr...> - 2025-10-26 09:19:10
|
On 25/10/2025 19:32, John Goerzen via Dar-support wrote: > Hi Denis, Hi John, > > Thanks again for all the work on dar! I have a bug report for you. > Following the recipe in the FAQ, I tried to use: > > dar -v -+ archive2 -A archive -zzstd:12 > > Which crashed with: > > Aborting program. An error occurred while calling libdar: Compression_level must be between 1 and 9 included > > -zzstd:12 works fine in conjunction with -c, as it should (zstd levels > go above 9). > > This is with 2.7.17 but I haven't seen anything in the 2.8.1 changelog > to indicate it's fixed there. Fixing it in the 2.8.x branch would be > fine (I wouldn't see the need for a backported fix to 2.7) This seems to correspond to a bug fixed in 2.7.19, and as 2.8.0 is more recent than 2.7.19, release 2.8.0 (major release) received all fixes from 2.7.x up to 2.7.19. Yes, this was not obvious: I avoid copying into the Changelog of major releases (those with 0 as last digit) all the bugs fixed in the minor releases of the previous branch (here 2.7.1 up to 2.7.19), even though they have indeed been merged into the new branch, so as to better highlight the new features brought in it. For 2.8.1 and the following ones, the Changelog will receive propagated bug fixes from future releases of the 2.7.x branch, and the Changelog will be updated accordingly. This is a manual operation, as 'git merge' merges the changes in the Changelog under 2.7.x and I have to copy them under the 2.8.x minor release... I just have to try not to forget to do it during the period of time both 2.7.x and 2.8.x will be maintained (~ more than one year, maybe 2 years). But if by mischance I forget to do it, you will see in the Changelog of a future 2.8.x the information on the latest 2.7.x and the bugs fixed in it, and you can assume all of these have been integrated (git-merged) into this 2.8.x release. > > Incidentally, thanks for 2.8.1 and webdar! I have uploaded 2.8.1 to > Debian unstable and will work on webdar shortly. 
> Then I will get them both into trixie-backports as well. Great news!!! Thanks a lot for your work as Debian maintainer for dar, libdar and libthreadar :) > > - John > Cheers, Denis |
|
From: John G. <jgo...@co...> - 2025-10-25 17:47:50
|
Hi Denis, Thanks again for all the work on dar! I have a bug report for you. Following the recipe in the FAQ, I tried to use: dar -v -+ archive2 -A archive -zzstd:12 Which crashed with: Aborting program. An error occurred while calling libdar: Compression_level must be between 1 and 9 included -zzstd:12 works fine in conjunction with -c, as it should (zstd levels go above 9). This is with 2.7.17 but I haven't seen anything in the 2.8.1 changelog to indicate it's fixed there. Fixing it in the 2.8.x branch would be fine (I wouldn't see the need for a backported fix to 2.7) Incidentally, thanks for 2.8.1 and webdar! I have uploaded 2.8.1 to Debian unstable and will work on webdar shortly. Then I will get them both into trixie-backports as well. - John |
|
From: Denis C. <dar...@fr...> - 2025-08-29 17:57:54
|
On 29/08/2025 17:12, Petr Skoda wrote: > Dear Denis, Hi Petr, > > after several years I have again to backup many TB of data on tape and so I > started to look for progress in tape handling. I have installed 2.8.0 > without > problem. > > Looking for recent changes I have noticed you have now option to read the > first slice to get catalogue instead of last one. I am not sure if this help > in my problem anyway. This feature was requested to address huge backups stored over several removable media (probably tapes, or maybe remote storage that needs data "transfer" like SFTP; I should dig into the dar-support mailing-list archive to find the original need with more details). The context was that the reading was done from local disk storage after transfer, upon dar requesting a particular slice (see the -E option). The problem was that dar always requested the last slice, even when an isolated catalogue was used, because dar read the slice layout from there: the first few tens of bytes located at the beginning of each slice, information that is duplicated across all slices of a given backup. The solution brought by this feature, to remove the systematic request to load the possibly huge last slice, is to set up the backup with a small initial slice size (uppercase -S option with a few hundred bytes for example) and keep this first slice beside the isolated catalog on the local storage. With the new option, the slice layout is now fetched from the first slice, which avoids the extra work and delay of loading the big last slice. Then dar only asks for the slice(s) required to restore the requested data, which may be huge but are absolutely necessary. So in your context, it may make sense to use that approach... though this is not a revolution regarding tape use with dar! > > I would like to ask whether there is progress in working with tape > slices. 
As > you remember from our discussion in 2022 my idea was to create slices of > equal > size (= size of tape) and store each on one LTO (2.5TB - LTO6) tape. But at > that time it was quite complicated due to the need of making symbolic links > etc - in fact it never worked for me in order to be satisfied. I am not sure I understand whether it was not working, or whether it was working but you didn't find this symlink-based approach simple enough...? > > Just for reminder - I wanted to make the multislice (with -s) (say 2.5TB) > use dd if=slice.n.dar of=/dev/nst0 (+ some bs=256K ...). Should I understand that you want to identify the possible solutions to use dar with your tape system, or do you want to stick to this approach (slicing + dd)? > Then I would be happy to use the first tape get from it the catalogue > (at that > time it was in the last slice so required reading all tapes but still the > problems was with the generic name of the backup - not reflecting the .n > parts ) requiring ln -s in an advanced hacking. This is still the same as with the 2.7.x releases: - you can fetch the catalog from the last slice(s) - you can fetch the catalog reading from the first slice up to the last slice (sequential-reading mode). But in your case I would rather rely on on-fly isolation, which, during the backup operation, creates an isolated catalog beside the backup itself (see the --on-fly-isolate option). As for the symlink, it can be automated using the -E/-F/-~ family of options: -p -E 'rm "%p/%b."*".%e" ; ln -s /dev/tape "%p/%b.%N.%e"' (to be tested, but this is the idea: remove any previous symlink of the form path+slicename, create the needed one pointing to the tape device), then wait for the user to confirm the needed tape is loaded (-p option). > > > Do you have some new recommendations how to deal with tape slices ? Or > is the > only simple way to use dar_split as in > https://dar.sourceforge.io/doc/FAQ.html#tapes ? Recommendations would vary depending on the use case, context and user affinity. 
Thus I'll stick to the facts and let you decide what's best for you to address multi-tape backups. The possibilities I'm aware of are: - dar_split - the -E/-F/-~ options to fetch the requested slice from a remote system (through sftp or other storage that does not allow exposing the slice as a file, like tape, and unlike what NFS does). - the -E/-F/-~ options to create symlinks to tape devices and pause for the user to change the tape accordingly. Maybe some users have found and use other methods to address that multi-tape backup need? > > (BTW in this FAQ section is a number of typos - I would suggest to > expand that > LTO FAQ by more details about using slices (or warning against it) etc... Please let me know the typos you have seen, thanks, and sure, I'll update the FAQ with the outcome of this thread. > > Just I will repeat while the dar_split is not an ideal solution. Yes, this depends on the context. > From my experiments followed the need to have the *.dar file on fast > array on > machine hosting the LTO tapedrive. otherwise the transfer from remote > machine > is slow and the tape starts to stops and starts so called "shoe - shining". This problem is not directly linked to dar, but to the way (the speed) the data is transferred: a rate limiter is to be used to avoid this problem AFAIK (this was the subject of a past discussion on this mailing-list). > But if you are not able to create one large *.dar file on the hosting > computer array you cannot use dar_split. dar_split precisely removes the need to store the dar backup as files: you need local storage neither on the host where dar runs nor on the one where dar_split runs with the tape drive attached, if you can pipe the data between dar and dar_split. > (what I use is the nc on ports or ssh piping running dar -c on the backuped > computer and sending it to staging array on the host (with tape) . 
Yes, if the tape is remote from dar, netcat or ssh can provide the pipe I mentioned above between dar and dar_split. Same thing if you use dar's slice feature and have the ability to store one slice on the host dar is running on: you don't need temporary storage on the remote node where the tape is attached if you can pipe the data between dd and the tape device (using netcat or ssh as you mentioned). > > OTOH using slices would allow to store on the array only few slices, write > them to tapes, delete and continue creating new slices (it is excellent you > have the -E -F options here !) You can also reduce the temporary storage requirement, whether it is local to the host where dar runs or remote on the host that has the tape attached, by selecting a slice size that is a divisor of the tape size: from having used tapes very long ago, the 'mt' command lets you add an EOF mark after a file's data, and this way lets you add a new file on the same tape; you can then seek over a file's data to reach the next mark, and so forth. This way you can store several slices on a single tape, with the only penalty being the few bytes used by the EOF marks separating them on the tape. This makes the scripts more complicated to set up (especially when it comes to fetching a given slice in the middle of a tape), but that's an approach to consider. > > So I hope that you will return to this issue soon ! The only issue I see is about tapes and their limitations! :) Everything you hit using dar would also be hit using tar for example, not because of tar nor dar, but because of tapes, right? > > Best regards > > Petr > > Regards, Denis |
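The symlink rotation performed by the -E hook suggested earlier in this thread can be sketched outside of dar; here /dev/null stands in for the tape device and all names are illustrative (in a real hook, dar substitutes %p/%b/%N/%e itself):

```shell
# Sketch of the per-slice hook: drop stale slice symlinks, then point the
# expected slice file name at the tape device so dar writes straight to it.
dir=$(mktemp -d)
base="$dir/backup"; ext=dar; N=3       # N plays the role of dar's %N macro
rm -f "$base."*".$ext"                 # remove any previous slice symlink
ln -s /dev/null "$base.$N.$ext"        # /dev/null stands in for /dev/tape
readlink "$base.$N.$ext"
```

With the real device path in place of /dev/null, dar's write to backup.3.dar lands directly on the tape, and the -p pause gives the operator time to load the right cartridge first.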
|
From: Petr S. <sez...@se...> - 2025-08-29 15:32:07
|
Dear Denis,

After several years I again have to back up many TB of data to tape, so I started to look for progress in tape handling. I have installed 2.8.0 without problem. Looking for recent changes I noticed you now have an option to read the first slice to get the catalogue instead of the last one. I am not sure whether this helps with my problem anyway.

I would like to ask whether there is progress in working with tape slices. As you remember from our discussion in 2022, my idea was to create slices of equal size (= size of a tape) and store each on one LTO (2.5TB - LTO6) tape. But at that time it was quite complicated due to the need of making symbolic links etc. - in fact it never worked for me satisfactorily. Just as a reminder - I wanted to make a multi-slice backup (with -s, say 2.5TB) and use dd if=slice.n.dar of=/dev/nst0 (+ some bs=256K ...). Then I would be happy to use the first tape and get the catalogue from it (at that time it was in the last slice, so it required reading all tapes; but still the problem was with the generic name of the backup - not reflecting the .n parts - requiring ln -s and advanced hacking).

Do you have some new recommendations on how to deal with tape slices? Or is the only simple way to use dar_split as in https://dar.sourceforge.io/doc/FAQ.html#tapes ? (BTW there are a number of typos in this FAQ section - I would suggest to expand that LTO FAQ with more details about using slices (or a warning against it) etc...

I will just repeat why dar_split is not an ideal solution. From my experiments followed the need to have the *.dar file on a fast array on the machine hosting the LTO tape drive; otherwise the transfer from the remote machine is slow and the tape repeatedly stops and starts, so-called "shoe-shining". But if you are not able to create one large *.dar file on the hosting computer's array you cannot use dar_split. (What I use is nc on ports or ssh piping, running dar -c on the backed-up computer and sending it to a staging array on the host (with the tape).
OTOH using slices would allow storing only a few slices on the array, writing them to tapes, deleting them and continuing to create new slices (it is excellent that you have the -E -F options here!)

So I hope that you will return to this issue soon!

Best regards
Petr |
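For the write-a-slice-then-delete-it workflow described here, dar's -E option runs a user command after each completed slice, substituting %p (slice directory), %b (base name), %n (slice number) and %e (extension). Below is a hedged sketch only: the helper name to_tape.sh, the staging path and the dd parameters are assumptions, and a DRY_RUN switch is included so the command can be inspected without a tape drive attached:

```shell
# Hypothetical per-slice helper for something like:
#   dar -c /stage/backup -s 2500G -E "./to_tape.sh %p %b %n %e"
# dar runs it after each completed slice; it streams the slice to tape
# with dd, then deletes the slice to free the staging array.
cat > to_tape.sh <<'EOF'
#!/bin/sh
slice="$1/$2.$3.$4"                      # %p/%b.%n.%e
cmd="dd if=$slice of=${TAPE:-/dev/nst0} bs=256K"
if [ "${DRY_RUN:-0}" = 1 ]; then
    echo "$cmd"                          # only show what would run
else
    $cmd && rm -f "$slice"               # free staging space once on tape
    printf 'slice %s written, change tape if full\n' "$3"
fi
EOF
chmod +x to_tape.sh

# Dry-run: the command that would be issued after slice 3 completes
DRY_RUN=1 ./to_tape.sh /tmp/stage backup 3 dar
```

The same hook point is where one could pause for a tape change, and a matching -E script on extraction could fetch the requested slice back from tape.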
|
From: Thomas <dar...@ra...> - 2025-06-03 09:15:58
|
Hi Denis! I can confirm my minimal example works with v2.7.19.RC1 with all three variants like v2.7.16. It is good to know that there is no difference internally between cdrom and cdrom/. But it feels right that both are working. Thanks a lot for your support! Cheers, Tom On Mon, Jun 02, 2025 at 05:29:50PM +0200, Denis Corbin wrote: > Hi, > > I have reviewed the implementation to accept trailing slashes, this is fixed > in 2.7.19.RC1 (git and > https://dar.edrusb.org/dar.linux.free.fr/Interim_releases ) > > However, note that dar never considers "bar/" different from "bar". File > selection is only based on: > - path, (-P, -g, -[, -], options...) > - filename, (-X, -I options) > - eventually the presence of a nodump flag (--nodump) > - eventually the presence of a given Extended Attribute (--exclude-by-ea > options) > - mount-point / filesystem location (--mount-points option) > > File selection is not based on the inode type > (directory/file/symlink/chardev/blockdev/named pipe/door inode...) and the > fact you have specified a trailing slash or not in dar's filtering > mechanism. > > But yes, at restoration time, when a file has to be updated (differential > backup and even binary delta), dar checks the nature of the current entry in > filesystem (and also that the CRC matches before and after applying a binary > patch), same thing when a file has to be removed during a restoration (its > inode type should match), and this is (hopefully) here too, independent from > the presence of a trailing slash, if ever you were restoring with some > filtering mechanism (to only restore some files/directory/named > pipes/chardev/blockdev...) > > Cheers, > Denis > > On 02/06/2025 13:05, Graham Cobb wrote: > > I *always* type directory paths with a trailing slash. 
Mostly for the > > reason John gives but also because that is the way my mind works while I > > am typing filenames (probably inherited from my earlier RSX and VAX/VMS > > experience where directories are entered differently if you are > > operating on the directory or the files within it). > > > > While I realize that I could leave the trailing slash out, after over 40 > > years of using Unix I don't think I can retrain my fingers! > > > > I would certainly prefer if it was possible to fix the DAR filtering > > problem without breaking trailing slashes. > > > > Regards > > Graham > > > > On 01/06/2025 20:53, John Goerzen via Dar-support wrote: > > > I'll just note that a lot of shell expansion will add the trailing slash > > > for directories. It is also something I often use when I want to force > > > something to be a directory; for instance, "mv foo bar/" ensures that I > > > don't overwrite a file named bar with the file named foo, and instead > > > move foo into the directory named bar. > > > > > > - John > > > > > > On Sun, Jun 01 2025, Denis Corbin wrote: > > > > > > > Hi Thomas, > > > > > > > > Changelog for 2.7.17 reports > > > > - fixed bug where -R path ending with // was breaking the path filtering > > > > mechanism (-P/-g/-[/-] options). > > > > > > > > see commit 503326bf8735d8eab48d4ff0ab9c000ffa031dec > > > > > > > > The fix was needed for the filtering mechanism to work with > > > > those uncommon but > > > > valid paths (some//path)... sorry if this impacts trailing > > > > slashes. Is it a big > > > > problem to avoid using trailing slash in paths? > > > > > > > > Regards, > > > > Denis > > > > > > > > On 01/06/2025 16:53, Thomas wrote: > > > > > Hi! > > > > > ;TLDR > > > > > ===== > > > > > With v2.7.17 and later one can't use '-P cdrom/' anymore. 
Dar aborts > > > > > with the error message: > > > > > ,---- [ ] > > > > > | Parse error: cdrom/ is an not a valid path: Empty string > > > > > as subdirectory > > > > > | does not make a valid path > > > > > `---- > > > > > The following works: > > > > > '-P cdrom' > > > > > or > > > > > '-P cdrom/*' > > > > > Bug or feature? > > > > > Long version > > > > > ============ > > > > > I have a couple of directories I want to have in the backup > > > > > but without > > > > > the files. > > > > > I used for a long time for example '--empty-dir -P cdrom/' without > > > > > problems. > > > > > Now with v2.7.17 and v2.7.18 I only get an error message: > > > > > ,---- [ ] > > > > > | Parse error: cdrom/ is an not a valid path: Empty string > > > > > as subdirectory > > > > > | does not make a valid path > > > > > `---- > > > > > What went wrong? The message does not enlighten me. I would say cdrom/ > > > > > is a perfect path and is not empty either. > > > > > How to reproduce: > > > > > ----------------- > > > > > export BASE=/tmp/dar_debug > > > > > mkdir -p ${BASE}/bak ${BASE}/files/cdrom > > > > > touch ${BASE}/files/file1 ${BASE}/files/cdrom/cdromfile1 > > > > > dar --version > > > > > dar --create ${BASE}/bak/full_with_slash --empty-dir > > > > > --fs-root ${BASE}/files/ -P cdrom/ > > > > > dar --create ${BASE}/bak/full_wo_slash --empty-dir > > > > > --fs-root ${BASE}/files/ -P cdrom > > > > > dar --create ${BASE}/bak/full_with_asterisk --empty-dir > > > > > --fs-root ${BASE}/files/ -P cdrom/* > > > > > Results with v2.7.16 > > > > > -------------------- > > > > > All three create-commands are working as expected. They generate a > > > > > backup file including file1 and an empty cdrom directory. > > > > > cdromfile1 is not included as expected. 
> > > > > dar --list ${BASE}/bak/full_with_slash > > > > > [Data ][D][ EA ][FSA][Compr][S]| Permission | User | Group > > > > > | Size | Date | filename > > > > > --------------------------------+------------+-------+------- > > > > > +---------+-------------------------------+------------ > > > > > [Saved][ ] [---][ ][ ] -rw-rw-r-- 1000 > > > > > 1000 0 Sun Jun 1 12:49:26 2025 file1 > > > > > [Saved][-] [---][ ][ ] drwxrwxr-x 1000 > > > > > 1000 0 Sun Jun 1 12:49:26 2025 cdrom > > > > > dar --list ${BASE}/bak/full_wo_slash > > > > > [Data ][D][ EA ][FSA][Compr][S]| Permission | User | Group > > > > > | Size | Date | filename > > > > > --------------------------------+------------+-------+------- > > > > > +---------+-------------------------------+------------ > > > > > [Saved][ ] [---][ ][ ] -rw-rw-r-- 1000 > > > > > 1000 0 Sun Jun 1 12:49:26 2025 file1 > > > > > [Saved][-] [---][ ][ ] drwxrwxr-x 1000 > > > > > 1000 0 Sun Jun 1 12:49:26 2025 cdrom > > > > > dar --list ${BASE}/bak/full_with_asterisk > > > > > [Data ][D][ EA ][FSA][Compr][S]| Permission | User | Group > > > > > | Size | Date | filename > > > > > --------------------------------+------------+-------+------- > > > > > +---------+-------------------------------+------------ > > > > > [Saved][ ] [---][ ][ ] -rw-rw-r-- 1000 > > > > > 1000 0 Sun Jun 1 12:49:26 2025 file1 > > > > > [Saved][-] [---][ ][ ] drwxrwxr-x 1000 > > > > > 1000 0 Sun Jun 1 12:49:26 2025 cdrom > > > > > Results with v2.7.17 and v2.7.18 > > > > > -------------------------------- > > > > > The first create command failes with the error message above. > > > > > The second and third are working as expected. > > > > > dar --list ${BASE}/bak/full_with_slash > > > > > No backup file is present in file:///tmp/dar_debug/bak for archive > > > > > full_with_slash, please provide the last file of the set. 
> > > > > [return = YES | Esc > > > > > = NO] > > > > > dar --list ${BASE}/bak/full_wo_slash > > > > > [Data ][D][ EA ][FSA][Compr][S]| Permission | User | Group > > > > > | Size | Date | filename > > > > > --------------------------------+------------+-------+------- > > > > > +---------+-------------------------------+------------ > > > > > [Saved][ ] [---][ ][ ] -rw-rw-r-- 1000 > > > > > 1000 0 Sun Jun 1 12:49:26 2025 file1 > > > > > [Saved][-] [---][ ][ ] drwxrwxr-x 1000 > > > > > 1000 0 Sun Jun 1 12:49:26 2025 cdrom > > > > > dar --list ${BASE}/bak/full_with_asterisk > > > > > [Data ][D][ EA ][FSA][Compr][S]| Permission | User | Group > > > > > | Size | Date | filename > > > > > --------------------------------+------------+-------+------- > > > > > +---------+-------------------------------+------------ > > > > > [Saved][ ] [---][ ][ ] -rw-rw-r-- 1000 > > > > > 1000 0 Sun Jun 1 12:49:26 2025 file1 > > > > > [Saved][-] [---][ ][ ] drwxrwxr-x 1000 > > > > > 1000 0 Sun Jun 1 12:49:26 2025 cdrom > > > > > Something between v2.7.16 and v2.7.17 has changed. > > > > > I would say this is a bug but it could be intentional. So my question > > > > > is: > > > > > Is it a bug or a feature? > > > > > Thanks for your support. > > > > > Tom > > > > > > > > > > > > > |
|
From: Denis C. <dar...@fr...> - 2025-06-02 15:30:03
|
Hi, I have reviewed the implementation to accept trailing slashes, this is fixed in 2.7.19.RC1 (git and https://dar.edrusb.org/dar.linux.free.fr/Interim_releases ) However, note that dar never considers "bar/" different from "bar". File selection is only based on: - path, (-P, -g, -[, -], options...) - filename, (-X, -I options) - eventually the presence of a nodump flag (--nodump) - eventually the presence of a given Extended Attribute (--exclude-by-ea options) - mount-point / filesystem location (--mount-points option) File selection is not based on the inode type (directory/file/symlink/chardev/blockdev/named pipe/door inode...) and the fact you have specified a trailing slash or not in dar's filtering mechanism. But yes, at restoration time, when a file has to be updated (differential backup and even binary delta), dar checks the nature of the current entry in filesystem (and also that the CRC matches before and after applying a binary patch), same thing when a file has to be removed during a restoration (its inode type should match), and this is (hopefully) here too, independent from the presence of a trailing slash, if ever you were restoring with some filtering mechanism (to only restore some files/directory/named pipes/chardev/blockdev...) Cheers, Denis On 02/06/2025 13:05, Graham Cobb wrote: > I *always* type directory paths with a trailing slash. Mostly for the > reason John gives but also because that is the way my mind works while I > am typing filenames (probably inherited from my earlier RSX and VAX/VMS > experience where directories are entered differently if you are > operating on the directory or the files within it). > > While I realize that I could leave the trailing slash out, after over 40 > years of using Unix I don't think I can retrain my fingers! > > I would certainly prefer if it was possible to fix the DAR filtering > problem without breaking trailing slashes. 
> > Regards > Graham > > On 01/06/2025 20:53, John Goerzen via Dar-support wrote: >> I'll just note that a lot of shell expansion will add the trailing slash >> for directories. It is also something I often use when I want to force >> something to be a directory; for instance, "mv foo bar/" ensures that I >> don't overwrite a file named bar with the file named foo, and instead >> move foo into the directory named bar. >> >> - John >> >> On Sun, Jun 01 2025, Denis Corbin wrote: >> >>> Hi Thomas, >>> >>> Changelog for 2.7.17 reports >>> - fixed bug where -R path ending with // was breaking the path filtering >>> mechanism (-P/-g/-[/-] options). >>> >>> see commit 503326bf8735d8eab48d4ff0ab9c000ffa031dec >>> >>> The fix was needed for the filtering mechanism to work with those >>> uncommon but >>> valid paths (some//path)... sorry if this impacts trailing slashes. >>> Is it a big >>> problem to avoid using trailing slash in paths? >>> >>> Regards, >>> Denis >>> >>> On 01/06/2025 16:53, Thomas wrote: >>>> Hi! >>>> ;TLDR >>>> ===== >>>> With v2.7.17 and later one can't use '-P cdrom/' anymore. Dar aborts >>>> with the error message: >>>> ,---- [ ] >>>> | Parse error: cdrom/ is an not a valid path: Empty string as >>>> subdirectory >>>> | does not make a valid path >>>> `---- >>>> The following works: >>>> '-P cdrom' >>>> or >>>> '-P cdrom/*' >>>> Bug or feature? >>>> Long version >>>> ============ >>>> I have a couple of directories I want to have in the backup but >>>> without >>>> the files. >>>> I used for a long time for example '--empty-dir -P cdrom/' without >>>> problems. >>>> Now with v2.7.17 and v2.7.18 I only get an error message: >>>> ,---- [ ] >>>> | Parse error: cdrom/ is an not a valid path: Empty string as >>>> subdirectory >>>> | does not make a valid path >>>> `---- >>>> What went wrong? The message does not enlighten me. I would say cdrom/ >>>> is a perfect path and is not empty either. 
>>>> How to reproduce: >>>> ----------------- >>>> export BASE=/tmp/dar_debug >>>> mkdir -p ${BASE}/bak ${BASE}/files/cdrom >>>> touch ${BASE}/files/file1 ${BASE}/files/cdrom/cdromfile1 >>>> dar --version >>>> dar --create ${BASE}/bak/full_with_slash --empty-dir --fs-root >>>> ${BASE}/files/ -P cdrom/ >>>> dar --create ${BASE}/bak/full_wo_slash --empty-dir --fs-root >>>> ${BASE}/files/ -P cdrom >>>> dar --create ${BASE}/bak/full_with_asterisk --empty-dir --fs-root >>>> ${BASE}/files/ -P cdrom/* >>>> Results with v2.7.16 >>>> -------------------- >>>> All three create-commands are working as expected. They generate a >>>> backup file including file1 and an empty cdrom directory. >>>> cdromfile1 is not included as expected. >>>> dar --list ${BASE}/bak/full_with_slash >>>> [Data ][D][ EA ][FSA][Compr][S]| Permission | User | Group | >>>> Size | Date | filename >>>> --------------------------------+------------+-------+------- >>>> +---------+-------------------------------+------------ >>>> [Saved][ ] [---][ ][ ] -rw-rw-r-- 1000 1000 0 >>>> Sun Jun 1 12:49:26 2025 file1 >>>> [Saved][-] [---][ ][ ] drwxrwxr-x 1000 1000 0 >>>> Sun Jun 1 12:49:26 2025 cdrom >>>> dar --list ${BASE}/bak/full_wo_slash >>>> [Data ][D][ EA ][FSA][Compr][S]| Permission | User | Group | >>>> Size | Date | filename >>>> --------------------------------+------------+-------+------- >>>> +---------+-------------------------------+------------ >>>> [Saved][ ] [---][ ][ ] -rw-rw-r-- 1000 1000 0 >>>> Sun Jun 1 12:49:26 2025 file1 >>>> [Saved][-] [---][ ][ ] drwxrwxr-x 1000 1000 0 >>>> Sun Jun 1 12:49:26 2025 cdrom >>>> dar --list ${BASE}/bak/full_with_asterisk >>>> [Data ][D][ EA ][FSA][Compr][S]| Permission | User | Group | >>>> Size | Date | filename >>>> --------------------------------+------------+-------+------- >>>> +---------+-------------------------------+------------ >>>> [Saved][ ] [---][ ][ ] -rw-rw-r-- 1000 1000 0 >>>> Sun Jun 1 12:49:26 2025 file1 >>>> [Saved][-] [---][ ][ ] drwxrwxr-x 
1000 1000 0 >>>> Sun Jun 1 12:49:26 2025 cdrom >>>> Results with v2.7.17 and v2.7.18 >>>> -------------------------------- >>>> The first create command failes with the error message above. >>>> The second and third are working as expected. >>>> dar --list ${BASE}/bak/full_with_slash >>>> No backup file is present in file:///tmp/dar_debug/bak for archive >>>> full_with_slash, please provide the last file of the set. [return = >>>> YES | Esc >>>> = NO] >>>> dar --list ${BASE}/bak/full_wo_slash >>>> [Data ][D][ EA ][FSA][Compr][S]| Permission | User | Group | >>>> Size | Date | filename >>>> --------------------------------+------------+-------+------- >>>> +---------+-------------------------------+------------ >>>> [Saved][ ] [---][ ][ ] -rw-rw-r-- 1000 1000 0 >>>> Sun Jun 1 12:49:26 2025 file1 >>>> [Saved][-] [---][ ][ ] drwxrwxr-x 1000 1000 0 >>>> Sun Jun 1 12:49:26 2025 cdrom >>>> dar --list ${BASE}/bak/full_with_asterisk >>>> [Data ][D][ EA ][FSA][Compr][S]| Permission | User | Group | >>>> Size | Date | filename >>>> --------------------------------+------------+-------+------- >>>> +---------+-------------------------------+------------ >>>> [Saved][ ] [---][ ][ ] -rw-rw-r-- 1000 1000 0 >>>> Sun Jun 1 12:49:26 2025 file1 >>>> [Saved][-] [---][ ][ ] drwxrwxr-x 1000 1000 0 >>>> Sun Jun 1 12:49:26 2025 cdrom >>>> Something between v2.7.16 and v2.7.17 has changed. >>>> I would say this is a bug but it could be intentional. So my question >>>> is: >>>> Is it a bug or a feature? >>>> Thanks for your support. >>>> Tom >>>> >> > > |