Topic: Script backup linux
cauet
That's why I no longer use InnoDB...
ovh
I wasn't using InnoDB, and I used that rsync approach back in the day, not any more.
zion
ovh> And if you use InnoDB, you can throw the whole thing in the bin: your backup is unreadable anywhere else.
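For InnoDB specifically, a logical dump stays readable anywhere; a minimal sketch, assuming local credentials are already set up in ~/.my.cnf:

#!/bin/bash
# Consistent dump of InnoDB tables without blocking writes; the resulting .sql.gz restores on any MySQL server
mysqldump --single-transaction --all-databases | gzip > /backup/all_databases.sql.gz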
ovh
At one point I did indeed back up my MySQL databases by rsyncing the /var/lib/mysql/mabase directories directly to the backup machine, brute-force style. And it seemed to work.
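A raw copy of /var/lib/mysql is only consistent if mysqld is not writing while rsync runs; a minimal sketch, with the init script path and backup host purely illustrative:

#!/bin/bash
# Quiesce the server, copy the data files as-is, then bring it back up
/etc/init.d/mysql stop
rsync -a --delete /var/lib/mysql/ backuphost:/backup/mysql/
/etc/init.d/mysql start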
gizmo
blietaer> For the directory, incremental with rsync, and for the SQL, just dumping a mysqldump into the mix is enough...
For the DB, I don't know whether MySQL supports hot dumping, but if it does, you might as well use rsync for that too, it will be much faster.
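For the incremental part, rsync's --link-dest option does the hard-link snapshot trick in a single pass; a minimal sketch, with hypothetical paths:

#!/bin/bash
# Unchanged files become hard links into the previous snapshot; only changed files use new space
rsync -a --delete --link-dest=/backup/www/yesterday /var/www/ /backup/www/today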
blietaer
For the directory, incremental with rsync, and for the SQL, just dumping a mysqldump into the mix is enough...
#!/bin/bash
# ----------------------------------------------------------------------
# mikes handy rotating-filesystem-snapshot utility
# ----------------------------------------------------------------------
# this needs to be a lot more general, but the basic idea is it makes
# rotating backup-snapshots of /var/www and /depot whenever called
# ----------------------------------------------------------------------
unset PATH # suggestion from H. Milz: avoid accidental use of $PATH
# ------------- system commands used by this script --------------------
ID=/usr/bin/id;
ECHO=/bin/echo;
MOUNT=/bin/mount;
UMOUNT=/bin/umount;
RM=/bin/rm;
MV=/bin/mv;
CP=/bin/cp;
TOUCH=/bin/touch;
DPKG=/usr/bin/dpkg
AWK=/usr/bin/awk
RSYNC=/usr/bin/rsync;
TAR=/bin/tar
# ------------- file locations -----------------------------------------
MOUNT_DEVICE=/dev/sdb1;
SNAPSHOT_RW=/mnt/backup;
EXCLUDES=/usr/local/backup/exclude;
BCKDIR=/depot/tools/linux/backup
# ------------------ the backup of config files ------------------------
$DPKG -l | $AWK '{print $2}' > $BCKDIR/packages.list
$CP /etc/fstab $BCKDIR
$CP /boot/conf* $BCKDIR
$CP /boot/grub/menu.lst $BCKDIR
$CP /etc/X11/xorg.conf* $BCKDIR
$CP /root/.vimrc $BCKDIR/_vimrc
$CP /root/.bashrc $BCKDIR/_bashrc
$CP /usr/local/backup/* $BCKDIR
$CP /bin/maj.sh $BCKDIR
$CP /etc/apt/sources.list $BCKDIR
# ------------- the script itself --------------------------------------
# make sure we're running as root
if (( `$ID -u` != 0 )); then { $ECHO "Sorry, must be root. Exiting..."; exit 1; } fi
# attempt to mount the backup device read-write; else abort
$MOUNT -t ext3 $MOUNT_DEVICE $SNAPSHOT_RW ;
if (( $? )); then
{
$ECHO "snapshot: could not mount $SNAPSHOT_RW read-write";
exit 1;
}
fi;
#######################################################################################
# rotating snapshots of /var/www
#######################################################################################
# step 1: delete the oldest snapshot, if it exists:
if [ -d $SNAPSHOT_RW/www/hourly.3 ] ; then \
$RM -rf $SNAPSHOT_RW/www/hourly.3 ; \
fi ;
# step 2: shift the middle snapshot(s) back by one, if they exist
if [ -d $SNAPSHOT_RW/www/hourly.2 ] ; then \
$MV $SNAPSHOT_RW/www/hourly.2 $SNAPSHOT_RW/www/hourly.3 ; \
fi;
if [ -d $SNAPSHOT_RW/www/hourly.1 ] ; then \
$MV $SNAPSHOT_RW/www/hourly.1 $SNAPSHOT_RW/www/hourly.2 ; \
fi;
# step 3: make a hard-link-only (except for dirs) copy of the latest snapshot,
# if that exists
if [ -d $SNAPSHOT_RW/www/hourly.0 ] ; then \
$CP -al $SNAPSHOT_RW/www/hourly.0 $SNAPSHOT_RW/www/hourly.1 ; \
#cd $SNAPSHOT_RW/home/hourly.0 && find . -print | cpio -dpl $SNAPSHOT_RW/home/hourly.1
fi;
# step 4: rsync from the system into the latest snapshot (notice that
# rsync behaves like cp --remove-destination by default, so the destination
# is unlinked first. If it were not so, this would copy over the other
# snapshot(s) too!
$RSYNC \
-va --delete --delete-excluded \
--exclude-from="$EXCLUDES" \
/var/www/ $SNAPSHOT_RW/www/hourly.0 ;
# step 5: update the mtime of hourly.0 to reflect the snapshot time
$TOUCH $SNAPSHOT_RW/www/hourly.0 ;
# and that's it for www.
#######################################################################################
# rotating snapshots of /depot
#######################################################################################
# step 1: delete the oldest snapshot, if it exists:
if [ -d $SNAPSHOT_RW/depot/hourly.3 ] ; then \
$RM -rf $SNAPSHOT_RW/depot/hourly.3 ; \
fi ;
# step 2: shift the middle snapshot(s) back by one, if they exist
if [ -d $SNAPSHOT_RW/depot/hourly.2 ] ; then \
$MV $SNAPSHOT_RW/depot/hourly.2 $SNAPSHOT_RW/depot/hourly.3 ; \
fi;
if [ -d $SNAPSHOT_RW/depot/hourly.1 ] ; then \
$MV $SNAPSHOT_RW/depot/hourly.1 $SNAPSHOT_RW/depot/hourly.2 ; \
fi;
# step 3: make a hard-link-only (except for dirs) copy of the latest snapshot,
# if that exists
if [ -d $SNAPSHOT_RW/depot/hourly.0 ] ; then \
$CP -al $SNAPSHOT_RW/depot/hourly.0 $SNAPSHOT_RW/depot/hourly.1 ; \
fi;
# step 4: rsync from the system into the latest snapshot (notice that
# rsync behaves like cp --remove-destination by default, so the destination
# is unlinked first. If it were not so, this would copy over the other
# snapshot(s) too!
$RSYNC \
-va --delete --delete-excluded \
--exclude-from="$EXCLUDES" \
/depot/ $SNAPSHOT_RW/depot/hourly.0 ;
# step 5: update the mtime of hourly.0 to reflect the snapshot time
$TOUCH $SNAPSHOT_RW/depot/hourly.0 ;
# and that's it for depot.
#######################################################################################
$UMOUNT $SNAPSHOT_RW;
# now remount the RW snapshot mountpoint as readonly
#$MOUNT -o remount,ro $MOUNT_DEVICE $SNAPSHOT_RW ;
#if (( $? )); then
#{
# $ECHO "snapshot: could not remount $SNAPSHOT_RW readonly";
# exit;
#} fi;
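The exclude file the script reads ($EXCLUDES) is just one rsync pattern per line, and an hourly cron entry drives the rotation; a minimal sketch, with the patterns and install path purely illustrative:

# /usr/local/backup/exclude -- one rsync exclude pattern per line (lines starting with # are ignored)
*.tmp
cache/
logs/*.log

# root crontab entry, assuming the script is saved as /usr/local/bin/snapshot.sh
0 * * * * /usr/local/bin/snapshot.sh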
Jean-Christophe
Ah, great!
Thanks!
zion
A script I found a long time ago, used for a while and tweaked to my liking. There's no config file, but you'll see, it's dead simple.
#! /bin/bash
#
# creates backups of essential files
#
DATA="/home/ /root /var/www/html /var/spool/mail"
CONFIG="/etc /var/cache /usr/local/bin"
LIST="/tmp/backlist_$$.txt"
DATE_START=`date`
#
set $(date)   # positional params become the date fields: $1=weekday, $2=month, $3=day, $6=year (English/C locale)
#
if test "$1" = "Sun" ; then
# weekly: a full backup of all data and config settings:
#
tar cfz "/backup/data/$6-$2-$3_data_full.tgz" $DATA
#rm -f /backup/data/data_diff*
#
tar cfz "/backup/config/$6-$2-$3_config_full.tgz" $CONFIG
#rm -f /backup/config/config_diff*
BACK_TYPE="Full Backup"
else
# incremental backup:
#
find $DATA -depth -type f \( -ctime -1 -o -mtime -1 \) -print > $LIST
tar cfzT "/backup/data/$6-$2-$3_data_diff.tgz" "$LIST"
rm -f "$LIST"
#
find $CONFIG -depth -type f \( -ctime -1 -o -mtime -1 \) -print > $LIST
tar cfzT "/backup/config/$6-$2-$3_config_diff.tgz" "$LIST"
rm -f "$LIST"
BACK_TYPE="Incremental Backup"
fi
#
# create sql dump of databases:
mysqldump -u root --opt unebase > "/backup/database/temp/$6-$2-$3_db_unebase.sql"
mysqldump -u root --all-databases > "/backup/database/temp/$6-$2-$3_db_all.sql"
# gzip the database dumps
gzip "/backup/database/temp/$6-$2-$3_db_unebase.sql"
gzip "/backup/database/temp/$6-$2-$3_db_all.sql"
tar -cf /backup/database/$6-$2-$3_db.tar /backup/database/temp/*
rm -f /backup/database/temp/*
# Upload to home
tar -cf /backup/ftp/p4_$6-$2-$3.tar /backup/database/$6-$2-$3* /backup/config/$6-$2-$3* /backup/data/$6-$2-$3*
#scp -B /backup/ftp/p4_$6-$2-$3.tar backup@yourhostname.com:p4_$6-$2-$3.tar
ftp -n -v yourhostname.com <<EOF
user yourlogin yourpassword
lcd /backup/ftp
binary
cd HD_a2/p4
put p4_$6-$2-$3.tar
quit
EOF
rm -f /backup/ftp/*
rm -f /backup/config/*
rm -f /backup/database/*
rm -f /backup/data/*
It's rudimentary, but you can start from something like that.
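Two notes on the script above: the bare mysqldump -u root calls assume the password comes from ~/.my.cnf (mysqldump reads the [client] section), and the Sunday test implies it runs from a daily cron job. A minimal sketch of both, with the install path hypothetical:

# ~/.my.cnf (chmod 600) -- lets mysqldump authenticate without a password on the command line
[client]
user=root
password=yourpassword

# daily root crontab entry, assuming the script is saved as /usr/local/bin/nightly-backup.sh
0 3 * * * /usr/local/bin/nightly-backup.sh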
Jean-Christophe
Yo!
I'd like to write a small "generic" backup script that would take care of backing up a MySQL database and a directory.
The goal is to have separate, automated backups of the various intranet/internet sites I "look after".
The idea is to back up the DB and the root folder. If I can keep the permissions, even better. The target is a Samba share or an FTP server.
In the end, I'd like something I can feed a config file that would contain:
DB_Server :
DB_Name :
DB_User :
DB_Password :
Files-Folder :
Backup_Target : smb/ftp
Backup_Path :
Backup_User :
Backup_Password :
Full_copy_to_Keep :
Does this kind of thing already exist?
If so, where?
If not, is it complicated?
Thanks for your ideas
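A minimal sketch of the wrapper described above, assuming the config file is a plain shell fragment defining those fields (Files-Folder becomes Files_Folder, since hyphens aren't valid in shell variable names; the FTP upload uses curl and the SMB upload uses smbclient; the Full_copy_to_Keep rotation is left out):

#!/bin/bash
# Generic site backup: dump one MySQL database, archive one directory, push both to an SMB share or FTP server.
# Usage: backup.sh /path/to/site.conf
# Example site.conf:
#   DB_Server=localhost
#   DB_Name=mydb
#   DB_User=backup
#   DB_Password=secret
#   Files_Folder=/var/www/mysite
#   Backup_Target=ftp                      # or: smb
#   Backup_Path=ftp.example.com/backups    # or: //server/share
#   Backup_User=backupuser
#   Backup_Password=secret
set -e -o pipefail   # abort on any error, including inside the dump pipeline

CONF="$1"
. "$CONF"

STAMP=$(date +%Y-%m-%d)
WORK=$(mktemp -d)

# 1) dump the database, compressed
mysqldump -h "$DB_Server" -u "$DB_User" -p"$DB_Password" "$DB_Name" \
    | gzip > "$WORK/${DB_Name}_$STAMP.sql.gz"

# 2) archive the site directory; tar records ownership and permissions in the archive
tar czpf "$WORK/files_$STAMP.tgz" -C "$(dirname "$Files_Folder")" "$(basename "$Files_Folder")"

# 3) upload every produced file to the configured target
for f in "$WORK"/*; do
    case "$Backup_Target" in
        ftp)
            curl -sS -T "$f" "ftp://$Backup_Path/" --user "$Backup_User:$Backup_Password"
            ;;
        smb)
            smbclient "$Backup_Path" "$Backup_Password" -U "$Backup_User" -c "put $f $(basename "$f")"
            ;;
    esac
done

rm -rf "$WORK"

Run from cron with one config file per site and you get the separate, automated per-site backups described above.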