Merge pull request 'fully independent chunking for resilient disc recovery' (#1) from chunking into master
Reviewed-on: #1
Commit: 5c258bac2f

**README.md** (167 lines):
# backup2mdisc

A sample Bash script and accompanying guide that demonstrate one way to automate encrypted, chunked backups onto 100GB M-Disc media.
## Purpose:

1. Scans all files in a source directory.
2. Groups them into "chunks" so that each chunk is <= a specified size (default 100GB).
3. Creates a TAR archive of each chunk, compresses it with `lz4`, and encrypts it with GPG (AES256).
4. Each `.tar.lz4.gpg` is fully independent (no other parts/discs needed to restore that chunk).
5. (Optional) Creates ISO images from each encrypted chunk if `--create-iso` is provided.
6. (Optional) Burns each chunk or ISO to M-Disc if `--burn` is provided.
> **Important**
> - This script is written in Bash for Linux/macOS compatibility. It should also work on FreeBSD with minimal (if any) modifications, but you may need to install or adjust the relevant tools.
> - The script focuses on automating the chunking and encryption steps, as well as generating a manifest.
> - Burning to M-Disc varies across platforms. We show examples using `growisofs` (common on Linux) and `hdiutil` (macOS). Adjust as needed.
> - For best security, do **not** hardcode your passphrase in the script. You should be prompted for it.
## How It Works

1. **File Collection & Sorting**
   - The script uses `find` to list all files in your `SOURCE_DIR` with their sizes.
   - It sorts them in ascending order by size so it can pack smaller files first (you can remove `| sort -n` if you prefer a different method).

2. **Chunk Accumulation**
   - It iterates over each file, summing file sizes into a "current chunk."
   - If adding a new file would exceed `CHUNK_SIZE` (default 100GB), it **finalizes** the current chunk (creates `.tar.lz4.gpg`) and starts a new one.

3. **Archive, Compress, Encrypt**
   - For each chunk, it creates a `.tar.lz4.gpg` file. Specifically:
     1. `tar -cf - -T $TMP_CHUNK_LIST` (archive of the files in that chunk)
     2. Pipe into `lz4 -c` for fast compression
     3. Pipe into `gpg --batch -c` (symmetric encryption with AES256, using your passphrase)
   - The result is a self-contained file like `chunk_001.tar.lz4.gpg`.

4. **Checksums & Manifest**
   - It calculates the SHA-256 sum of each chunk archive and appends it to a manifest file along with the list of included files.
   - That manifest is stored in `$WORK_DIR`.

5. **Optional ISO Creation** (`--create-iso`)
   - After each chunk is created, the script can build an ISO image containing just that `.tar.lz4.gpg`.
   - This step uses `genisoimage` (or `mkisofs`). The resulting file is `chunk_001.iso`, etc.

6. **Optional Burning** (`--burn`)
   - If you specify `--burn`, the script pauses after creating each chunk/ISO and prompts you to insert a fresh M-Disc.
   - On **Linux**, it tries `growisofs`.
   - On **macOS**, it tries `hdiutil` (if creating an ISO).
   - If it doesn't find these commands, it instructs you to burn manually.

7. **Repeat**
   - The script loops until all files have been placed into chunks.
---
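The chunk-accumulation logic described above can be sketched in isolation. This is illustrative only: `plan_chunks` is not part of the script, and the real loop tars, compresses, and encrypts each finalized chunk instead of just printing a summary.

```bash
# Sketch of the greedy chunk-accumulation step. Input lines are
# "<size-in-bytes> <path>", sorted ascending as `find ... | sort -n` produces.
plan_chunks() {
  local max_bytes=$1 chunk_no=1 current=0 count=0 size path
  while read -r size path; do
    # If this file would overflow the current chunk, finalize it first.
    if (( count > 0 && current + size > max_bytes )); then
      printf 'chunk_%03d: %d files, %d bytes\n' "$chunk_no" "$count" "$current"
      chunk_no=$((chunk_no + 1)); current=0; count=0
    fi
    current=$((current + size)); count=$((count + 1))
  done
  # Finalize the last, partially filled chunk.
  if (( count > 0 )); then
    printf 'chunk_%03d: %d files, %d bytes\n' "$chunk_no" "$count" "$current"
  fi
}

# Example: with a 100-byte limit, five files of 30..70 bytes yield four chunks.
printf '%s\n' '30 a' '40 b' '50 c' '60 d' '70 e' | plan_chunks 100
```

Note that a single file larger than the limit still becomes its own (oversized) chunk, which is exactly the large-file caveat discussed later.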
## Usage:

`./backup2mdisc.sh /path/to/source /path/to/destination [CHUNK_SIZE] [--create-iso] [--burn]`
1. **Install Dependencies**

   Make sure the following tools are installed on your system(s):

   - **tar**
   - **lz4**
   - **gpg**
   - **sha256sum** (or `shasum` on FreeBSD/macOS)
   - **genisoimage** or **mkisofs** (for creating ISOs if desired)
   - **growisofs** (Linux) or **hdiutil** (macOS) for burning.
## Examples:

`./backup2mdisc.sh /home/user/data /mnt/backup 100G --create-iso`

`./backup2mdisc.sh /data /backup 50G --burn`
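Chunk sizes use IEC (powers-of-1024) suffixes, which the script converts to bytes with `numfmt` from GNU coreutils. Be aware that IEC `100G` is about 107.4 decimal gigabytes, slightly more than the roughly 100 x 10^9 bytes a "100GB" BD-XL disc holds, so you may want a smaller limit such as `93G`:

```bash
# How the script interprets CHUNK_SIZE (GNU coreutils numfmt, IEC = 1024-based):
numfmt --from=iec 100G   # 107374182400 bytes -- more than a "100GB" disc
numfmt --from=iec 93G    # 99857989632 bytes -- fits within 100 * 10^9
```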
2. **Make the Script Executable**

   ```bash
   chmod +x backup2mdisc.sh
   ```

3. **Run the Script**

   ```bash
   ./backup2mdisc.sh /path/to/source /path/to/destination 100G --create-iso --burn
   ```

   - **`/path/to/source`**: The directory you want to back up.
   - **`/path/to/destination`**: Where to store the backup chunks before burning.
   - **`100G`**: The chunk size. Adjust if you're using different-capacity discs.
   - **`--create-iso`** (optional): Create ISO images from each chunk for more convenient burning.
   - **`--burn`** (optional): Attempt to burn each chunk/ISO to disc automatically.

4. **Enter Your GPG Passphrase**

   - The script will prompt for a passphrase. This passphrase encrypts your data. Keep it safe!

5. **Wait for the Script to Finish**

   - The `tar` + `lz4` + `gpg` pipeline can take considerable time, depending on your data size.
   - Chunks are finalized one by one, each capped at the chunk size (default 100GB).
   - The script generates a **manifest** with the SHA-256 checksum and file list of each chunk.

6. **Burn to M-Disc**

   - If you used `--burn`, the script will prompt you to insert an M-Disc for each chunk or ISO.
   - On Linux, it uses `growisofs`. On macOS, it attempts `hdiutil` if ISO files exist.
   - If you prefer manual burning, skip `--burn` and burn the `.iso` files with your favorite tool.

7. **Store the Manifest Safely**

   - The manifest (`manifest_individual_chunks.txt`) in the work directory includes:
     - Checksums for each chunk.
     - The list of files in each chunk.
     - The original source path and a timestamp.
   - Keep this manifest (and the passphrase!) somewhere secure. To restore a given chunk, you need only that chunk's file plus the passphrase.
## Dependencies:

- `bash`
- `gpg` (for encryption)
- `lz4` (for fast compression)
- `tar`
- `split` or a file-based grouping approach
- `sha256sum` (or `shasum -a 256` on macOS/FreeBSD)
- `genisoimage` or `mkisofs` (for creating ISOs if `--create-iso`)
- `growisofs` (Linux) or `hdiutil` (macOS) for burning if `--burn`
---
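Before the first multi-hour run, a small pre-flight loop can confirm the core tools are installed. This is a sketch, not part of the script; extend the list with `genisoimage`, `growisofs`, etc. as needed for your platform.

```bash
# Quick pre-flight check: report any missing core tools before starting.
for tool in tar lz4 gpg; do
  command -v "$tool" >/dev/null 2>&1 || echo "missing: $tool"
done
```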
## Restoring Your Data

To **restore** from these discs:

- **Each disc is self-contained**: If you have disc #4 containing `chunk_004.tar.lz4.gpg`, you can restore it independently of the others.
- **Decrypt & Extract**:

  ```bash
  gpg --decrypt chunk_004.tar.lz4.gpg | lz4 -d | tar -xvf -
  ```

  This will prompt for the passphrase you used during backup.
- If one disc is lost, you only lose the files in that chunk; all other chunks remain restorable.
---
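To check a burned disc, you can recompute a chunk's SHA-256 and look it up in the manifest. The helper below is illustrative (not part of the script); the `SHA256: <hash>  <path>` line format matches what the script writes, but the mount path is an example.

```bash
# Illustrative helper: recompute a chunk's SHA-256 and look for a matching
# "SHA256: <hash>  <path>" line in the manifest the script produced.
verify_chunk() {
  local chunk=$1 manifest=$2 actual
  if command -v sha256sum >/dev/null 2>&1; then
    actual=$(sha256sum "$chunk" | awk '{print $1}')
  else
    actual=$(shasum -a 256 "$chunk" | awk '{print $1}')
  fi
  if grep -q "SHA256: ${actual} " "$manifest"; then
    echo "OK: $(basename "$chunk") matches manifest"
  else
    echo "MISMATCH: $(basename "$chunk")" >&2
    return 1
  fi
}

# e.g. after mounting a disc:
#   verify_chunk /mnt/disc/chunk_004.tar.lz4.gpg manifest_individual_chunks.txt
```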
## Why lz4?

- **Speed**: `lz4` is extremely fast at both compression and decompression.
- **Lower compression ratio** than `xz`, but if your priority is speed (and 100GB of disc space is enough), `lz4` is a great choice.
- For maximum compression at the cost of time, you could replace `lz4` with `xz -9` in the pipeline (and `lz4 -d` with `xz -d` on restore), but expect slower backups and restores.
---

## Tips & Caveats

- **Testing**: Test with a small directory first (say 1GB) and 100MB "chunks" to ensure your workflow is correct, then proceed to the full data.
- **M-Disc Drive Compatibility**: Make sure your optical drive explicitly supports writing to 100GB BD-XL M-Disc media. Standard Blu-ray or DVD burners often do not support the higher-capacity discs.

1. **Large Files**
   - A single file larger than your chunk size (e.g., a 101GB file with a 100GB chunk limit) won't fit. This script doesn't handle that gracefully. You'd need to split such a file (e.g., with `split`) before archiving, or use a backup tool that supports partial-file splitting.
2. **Verification**
   - Always verify your discs after burning. Mount them and compare each chunk's SHA-256 against the manifest to confirm data integrity.
3. **Incremental or Deduplicated Backups**
   - For advanced features (incremental backups, deduplication, partial-chunk checksums), consider specialized backup programs like Borg, restic, or Duplicati. However, they usually produce multi-volume archives that need **all** volumes to restore.

4. **Cross-Platform**
   - On FreeBSD or macOS, you may need to tweak the hashing command (`sha256sum` vs. `shasum -a 256`) or the ISO tool (`mkisofs` vs. `genisoimage`). Note that `find -printf` (used to collect file sizes) is a GNU extension; install GNU findutils or adapt that line on BSD systems.
   - For burning, Linux uses `growisofs`, macOS uses `hdiutil`, and FreeBSD may require `cdrecord` or another tool.
---
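For the large-file caveat (tip 1 above), one workaround is to pre-split oversized files with `split` before the backup and reassemble them with `cat` after restore. The sketch below demonstrates the round trip at a tiny scale; in practice you would use something like `-b 90G` to stay under a 100G chunk limit.

```bash
# Pre-split / reassemble round trip, demonstrated with a 1 MB file and 400 kB
# pieces (stand-ins for a >100GB file and e.g. 90G pieces).
set -eu
workdir=$(mktemp -d)
head -c 1000000 /dev/urandom > "$workdir/bigfile.bin"

# Split into pieces small enough to fit in a chunk.
split -b 400000 --numeric-suffixes=1 -a 3 "$workdir/bigfile.bin" "$workdir/bigfile.bin.part."

# ...back up the pieces as ordinary files; after restoring, reassemble:
cat "$workdir"/bigfile.bin.part.* > "$workdir/restored.bin"
cmp -s "$workdir/bigfile.bin" "$workdir/restored.bin" && echo "reassembled OK"
```

The shell glob expands the pieces in lexicographic order, which is why fixed-width numeric suffixes matter here.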
**Now you can enjoy the best of both worlds**:

- **Independently decryptable** (and restorable) archives on each M-Disc.
- Automatic ISO creation and optional disc burning in the same script.
- Fast compression via `lz4`.

This gives you a **self-contained** backup on each disc, with no chain of dependencies across your entire 2TB backup set!
**backup2mdisc.sh** (326 lines):

@@ -1,68 +1,50 @@

```bash
#!/usr/bin/env bash
#
# backup2mdisc.sh
#
# Purpose:
# 1. Scans all files in a source directory.
# 2. Groups them into "chunks" so that each chunk is <= a specified size (default 100GB).
# 3. Creates a TAR archive of each chunk, compresses it with lz4, and encrypts it with GPG (AES256).
# 4. Each .tar.lz4.gpg is fully independent (no other parts/discs needed to restore that chunk).
# 5. (Optional) Creates ISO images from each encrypted chunk if --create-iso is provided.
# 6. (Optional) Burns each chunk or ISO to M-Disc if --burn is provided.
#
# Usage:
#   ./backup2mdisc.sh /path/to/source /path/to/destination [CHUNK_SIZE] [--create-iso] [--burn]
#
# Examples:
#   ./backup2mdisc.sh /home/user/data /mnt/backup 100G --create-iso
#   ./backup2mdisc.sh /data /backup 50G --burn
#
# Dependencies:
# - bash
# - gpg (for encryption)
# - lz4 (for fast compression)
# - tar
# - split or file-based grouping approach
# - sha256sum (or 'shasum -a 256' on macOS/FreeBSD)
# - genisoimage or mkisofs (for creating ISOs if --create-iso)
# - growisofs (Linux) or hdiutil (macOS) for burning if --burn
#
# Notes:
# - This script sorts files by size and accumulates them until the chunk is "full."
# - If a file alone is bigger than CHUNK_SIZE, this script won't handle it gracefully.
# - Each chunk gets a separate .tar.lz4.gpg file. If one disc is lost, only that chunk's files are lost.
# - Keep your GPG passphrase safe; you'll need it to decrypt any chunk.

set -e

#####################################
#     CONFIGURATION & DEFAULTS      #
#####################################

DEFAULT_CHUNK_SIZE="100G"  # Adjust if you want a different default
MANIFEST_NAME="manifest_individual_chunks.txt"

#####################################
#             FUNCTIONS             #
#####################################

function usage() {
```
@@ -72,16 +54,31 @@ function usage() {

```bash
  exit 1
}

# Cross-platform SHA-256
function compute_sha256() {
  if command -v sha256sum >/dev/null 2>&1; then
    sha256sum "$1"
  else
    shasum -a 256 "$1"
  fi
}

#####################################
#            MAIN SCRIPT            #
#####################################

# Parse primary arguments
SOURCE_DIR="$1"
DEST_DIR="$2"
CHUNK_SIZE="${3:-$DEFAULT_CHUNK_SIZE}"

# Shift away the first 3 arguments if present
shift 3 || true

CREATE_ISO=false
BURN_MEDIA=false

# Parse flags
for arg in "$@"; do
  case "$arg" in
    --create-iso)
```
@@ -110,134 +107,173 @@ if [[ ! -d "$DEST_DIR" ]]; then

```bash
  exit 1
fi

# Prompt for GPG passphrase
echo -n "Enter GPG passphrase (will not be displayed): "
read -s GPG_PASSPHRASE
echo

# Create a working directory
WORK_DIR="${DEST_DIR}/individual_chunks_$(date +%Y%m%d_%H%M%S)"
mkdir -p "$WORK_DIR"
cd "$WORK_DIR"

# Create a manifest file to track chunk -> files mapping and checksums
MANIFEST_FILE="${WORK_DIR}/${MANIFEST_NAME}"
touch "$MANIFEST_FILE"
echo "Manifest for independent chunks backup" > "$MANIFEST_FILE"
echo "Source: $SOURCE_DIR" >> "$MANIFEST_FILE"
echo "Timestamp: $(date)" >> "$MANIFEST_FILE"
echo "Chunk size limit: $CHUNK_SIZE" >> "$MANIFEST_FILE"
echo >> "$MANIFEST_FILE"

# Step 1: Collect all files with their sizes and sort them (ascending by size).
# Note: 'find -printf' is a GNU extension; on macOS/FreeBSD, install GNU
# findutils or adapt this line.
TEMP_FILE_LIST=$(mktemp)
find "$SOURCE_DIR" -type f -printf "%s %p\n" | sort -n > "$TEMP_FILE_LIST"

CHUNK_INDEX=1
CURRENT_CHUNK_SIZE=0
TMP_CHUNK_LIST=$(mktemp)

function bytes_from_iec() {
  # Convert something like '100G' or '50G' into bytes using numfmt
  numfmt --from=iec "$1"
}

MAX_CHUNK_BYTES=$(bytes_from_iec "$CHUNK_SIZE")

function start_new_chunk() {
  rm -f "$TMP_CHUNK_LIST"
  touch "$TMP_CHUNK_LIST"
  CURRENT_CHUNK_SIZE=0
}

function finalize_chunk() {
  # Called when we have a list of files in TMP_CHUNK_LIST and we want to
  # 1) TAR them
  # 2) Compress with lz4
  # 3) Encrypt with GPG
  # 4) Possibly create ISO
  # 5) Possibly burn
  # 6) Update manifest

  local chunk_name
  chunk_name=$(printf "chunk_%03d.tar.lz4.gpg" "$CHUNK_INDEX")

  echo "==> Creating chunk #$CHUNK_INDEX: $chunk_name"

  # Tar + lz4 + gpg pipeline
  tar -cf - -T "$TMP_CHUNK_LIST" \
    | lz4 -c \
    | gpg --batch --yes --cipher-algo AES256 --passphrase "$GPG_PASSPHRASE" -c \
    > "${WORK_DIR}/${chunk_name}"

  # Generate a SHA-256 sum
  local chunk_path="${WORK_DIR}/${chunk_name}"
  local sum_line
  sum_line=$(compute_sha256 "$chunk_path")

  # Add chunk info to manifest
  echo "Chunk #$CHUNK_INDEX -> $chunk_name" >> "$MANIFEST_FILE"
  echo "Files in this chunk:" >> "$MANIFEST_FILE"
  cat "$TMP_CHUNK_LIST" >> "$MANIFEST_FILE"
  echo "" >> "$MANIFEST_FILE"
  echo "SHA256: $sum_line" >> "$MANIFEST_FILE"
  echo "-----------------------------------" >> "$MANIFEST_FILE"
  echo >> "$MANIFEST_FILE"

  # Optionally create ISO
  local iso_name
  iso_name=$(printf "chunk_%03d.iso" "$CHUNK_INDEX")
  if [ "$CREATE_ISO" = true ]; then
    echo "==> Creating ISO for chunk #$CHUNK_INDEX"
    mkdir -p "${WORK_DIR}/iso_chunks"
    local temp_iso_dir="${WORK_DIR}/temp_iso_dir_$CHUNK_INDEX"
    mkdir -p "$temp_iso_dir"

    # Copy the encrypted archive into a temp directory
    cp "$chunk_path" "$temp_iso_dir"/

    # Build the ISO
    local iso_output="${WORK_DIR}/iso_chunks/${iso_name}"
    if command -v genisoimage >/dev/null 2>&1; then
      genisoimage -quiet -o "$iso_output" -V "ENCRYPTED_BACKUP_${CHUNK_INDEX}" "$temp_iso_dir"
    else
      # Try mkisofs
      mkisofs -quiet -o "$iso_output" -V "ENCRYPTED_BACKUP_${CHUNK_INDEX}" "$temp_iso_dir"
    fi
    rm -rf "$temp_iso_dir"

    # If --burn is also requested, burn the ISO
    if [ "$BURN_MEDIA" = true ]; then
      echo
      echo "Please insert a blank M-Disc for chunk #$CHUNK_INDEX (ISO): $iso_name"
      read -rp "Press [Enter] when ready to burn..."
      if command -v growisofs >/dev/null 2>&1; then
        growisofs -Z /dev/sr0="$iso_output"
      elif [[ "$OSTYPE" == "darwin"* ]]; then
        # macOS example
        hdiutil burn "$iso_output"
      else
        echo "No recognized burner found. Please burn ${iso_output} manually."
      fi
    fi
  else
    # If we are not creating an ISO but we are burning the chunk file directly
    if [ "$BURN_MEDIA" = true ]; then
      echo
      echo "Please insert a blank M-Disc for chunk #$CHUNK_INDEX: $chunk_name"
      read -rp "Press [Enter] when ready to burn..."
      if command -v growisofs >/dev/null 2>&1; then
        growisofs -Z /dev/sr0="$chunk_path"
      elif [[ "$OSTYPE" == "darwin"* ]]; then
        # hdiutil doesn't burn a raw file easily; it typically expects an .iso
        echo "On macOS, consider creating an ISO or using a different burning tool for $chunk_name."
      else
        echo "No recognized burner found. Please burn ${chunk_path} manually."
      fi
    fi
  fi

  ((CHUNK_INDEX++))
  start_new_chunk
}

# Initialize the first chunk
start_new_chunk

# Step 2: Go through each file; add it to the chunk if it fits, otherwise finalize and start a new chunk.
while IFS= read -r line; do
  FILE_SIZE=$(echo "$line" | awk '{print $1}')
  FILE_PATH=$(echo "$line" | cut -d' ' -f2-)

  # If adding this file would exceed the chunk limit, finalize the current chunk now
  if [[ $((CURRENT_CHUNK_SIZE + FILE_SIZE)) -gt $MAX_CHUNK_BYTES ]]; then
    # Finalize the current chunk if it has at least 1 file
    if [[ $(wc -l < "$TMP_CHUNK_LIST") -gt 0 ]]; then
      finalize_chunk
    fi
  fi

  # Add the file to the chunk
  echo "$FILE_PATH" >> "$TMP_CHUNK_LIST"
  CURRENT_CHUNK_SIZE=$((CURRENT_CHUNK_SIZE + FILE_SIZE))
done < "$TEMP_FILE_LIST"

# Finalize the last chunk if it has leftover files
if [[ $(wc -l < "$TMP_CHUNK_LIST") -gt 0 ]]; then
  finalize_chunk
fi

echo
echo "=== All chunks created ==="
echo "Your chunks (and possibly ISOs) are located in:"
echo "  $WORK_DIR"
echo
echo "Manifest: $MANIFEST_FILE"
echo "-----------------------------------"
echo "Done!"

# Cleanup
rm -f "$TEMP_FILE_LIST" "$TMP_CHUNK_LIST"

exit 0
```