ESPEasy Backup script


ESPEasy Backup script

#1 Post by _Cyber_ » 16 Feb 2021, 07:05

As I did not find any skeleton for an automated ESPEasy backup script for its configuration, I am sharing my working bash script for this here.

Usable on Linux hosts, e.g. as a daily cron job.
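
For example, a crontab entry like the following runs it every night at 03:00 (the script path is just an example):

Code: Select all

0 3 * * * /home/backups/espeasy_backup.sh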

It fetches config.dat and rules*.txt from the ESPEasy nodes, compresses the fetched files, checks for duplicates (keeping only the original if nothing has changed) and echoes the list of *new* backups.

Configure the path and the IPs. The "alwayson" IPs are used to fetch the Inter-ESPEasy Network nodes registered at each of them; those nodes are added to the backup list automatically. So if you use the Inter-ESPEasy Network functionality, just configure one IP and all your nodes registered at the same port will be backed up. With different Inter-ESPEasy Networks (different ports), just add one node from each network to the "alwayson" configuration.
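
For reference, the script scrapes plain-text patterns from each node's /json page; trimmed down, the keys it greps for look roughly like this (IPs and names are just examples):

Code: Select all

{
 "System": { "Unit Name": "ESP_Node1", ... },
 "nodes": [ { "ip": "192.168.10.234", ... },
            { "ip": "192.168.10.235", ... } ]
}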

If you want more output, uncomment the "echo" lines in the script. It is this silent on purpose: if you have configured your cron daemon to send you emails, you will only get an email when new backups were made.

Feel free to copy, share, modify or do whatever else you want with it. :-)

Code: Select all

#!/bin/bash

# configure here your directory for the backups
DIRECTORY="/home/backups/ESPEasy"
DATE=$(date +%Y_%m_%d)

# configure the nodes for Inter-ESPEasy network nodes. They will also be backed up, and all their listed Nodes
declare -a espalwayson=(
                        "192.168.10.244"
                        "192.168.10.243"
                        "192.168.10.233"
                       )

# have some nodes which are not part of a Network? Just add them here.
declare -a alwaysbackupespips=(
                   "192.168.10.234"
                   "192.168.10.235"
                  )

for (( h=1;h<${#espalwayson[@]}+1; h++ )); do
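 # fetch /json with a 1 second timeout and a single try; wget leaves an
 # empty file behind on failure, so the || removes it again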
 wget http://${espalwayson[$h-1]}/json -q -T 1 -t 1 -O ${DIRECTORY}/json_grepIPs || rm -f ${DIRECTORY}/json_grepIPs
 if [ -f ${DIRECTORY}/json_grepIPs ]; then
  #echo "Successful fetched json-file from ${espalwayson[$h-1]}"
  INTERESPNW=`grep -E '"ip"' ${DIRECTORY}/json_grepIPs | cut -d : -f2 | awk -F\" '{print $2}'`
  if [ ! -z "$INTERESPNW" ]; then
   INTERESPNWCOUNT=`echo -n "$INTERESPNW" | grep -c '^'`
   #echo "Aways-on-Node ${espalwayson[$h-1]} has ${INTERESPNWCOUNT} Inter-ESP Network IPs."
   while IFS= read -r line; do
    alwaysbackupespips+=("$line")
    #echo "Added node ${line} from Aways-on-Node ${espalwayson[$h-1]}"
   done <<< "$INTERESPNW"
  #else
   #echo "Always-on node ${espalwayson[$h-1]} has no Inter-ESP Network Nodes in list."
  fi
  rm -f ${DIRECTORY}/json_grepIPs
 fi

done

eval espips=($(printf "%q\n" "${alwaysbackupespips[@]}" | sort -u))

declare -a espdownloadfiles=(
                             "json"
                             "config.dat"
                             "rules1.txt"
                             "rules2.txt"
                             "rules3.txt"
                             "rules4.txt"
                            )

for (( i=1; i<${#espips[@]}+1; i++ )); do
 for (( j=1; j<${#espdownloadfiles[@]}+1; j++ )); do
  wget http://${espips[$i-1]}/${espdownloadfiles[$j-1]} -q -T 1 -t 1 -O ${DIRECTORY}/${espdownloadfiles[$j-1]} || rm -f ${DIRECTORY}/${espdownloadfiles[$j-1]}
 done
 if [ -f ${DIRECTORY}/json ]; then
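  # name the archive after the node: prefer "Unit Name", fall back to "Hostname"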
  UNITNAME=`grep -E '"Unit Name"' ${DIRECTORY}/json | cut -d : -f2 | awk -F\" '{print $2}'`
  if [ -z "${UNITNAME}" ]; then
   UNITNAME=`grep -E '"Hostname"' ${DIRECTORY}/json | cut -d : -f2 | awk -F\" '{print $2}'`
  fi
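  # a fixed mtime plus gzip without timestamp (GZIP=-n) make unchanged configs
  # produce bit-identical archives, which the md5 dedup below relies on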
  (cd ${DIRECTORY}; GZIP=-nq tar --mtime='1970-01-01' -czf ${UNITNAME}_${espips[$i-1]}_${DATE}.tgz config.dat rules*.txt)
  #echo "Backup successful for ${UNITNAME}"
 #else
  #echo "Could not reach ${espips[$i-1]} for backup"
 fi
 rm -f ${DIRECTORY}/config.dat ${DIRECTORY}/json ${DIRECTORY}/rules*.txt
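 # keep only the first (oldest) file of each group of byte-identical
 # archives and delete the newer duplicates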
 (cd ${DIRECTORY}; comm -13 <(md5sum * | sort | uniq -w 32 -d) <(md5sum * | sort | uniq -w 32 -D) | cut -f 3- -d" " | xargs -d '\n' rm -f)
done

files=( "${DIRECTORY}"/*${DATE}.tgz )

for file in "${files[@]}"; do
 if [ -f "${file}" ]; then
  echo "${file}"
 fi
done

TD-er

Re: ESPEasy Backup script

#2 Post by TD-er » 16 Feb 2021, 08:44

Just a suggestion for you to also have a look at, as you probably use these kinds of scripts for other backups as well :)
Have you ever heard of Dirvish?
What this does is quite simple:
- Create a new directory with hard links to all files in the previous dir
- Copy over using rsync with some special flags which only unlinks files that have been changed

This way you have a full directory tree of all your files and only the changes need storage space.
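
In shell terms the two steps look roughly like this (example paths; Dirvish automates and manages this for you):

Code: Select all

# 1) start the new snapshot as hard links to the previous one
cp -al /backup/2021-02-15 /backup/2021-02-16
# 2) rsync then only replaces (unlinks) the files that changed
rsync -a --delete /data/ /backup/2021-02-16/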

Just share these trees via a read-only share and you have your old versions always at hand.
Somewhat like Git avant la lettre. :)

I have used it for way over a decade and it works perfectly.
See my howto (in Dutch)

_Cyber_

Re: ESPEasy Backup script

#3 Post by _Cyber_ » 16 Feb 2021, 09:06

TD-er wrote: 16 Feb 2021, 08:44 Have you ever heard of Dirvish? [...]
Regrettably I am still really backup-lazy, so I always hope my RAID devices do not die at the same time, and that nobody accidentally deletes my data there.
But as I have already suffered three times from broken SD cards in my Raspberry Pis, I started to dump them monthly. These are also bash scripts which simply do dd over ssh and compress the image afterwards as squashfs.
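
A minimal sketch of such a dump (hostname, device and compression options are just examples):

Code: Select all

IMG=rpi1_$(date +%Y_%m_%d).img
# stream the whole SD card over ssh into an image file
ssh root@rpi1 "dd if=/dev/mmcblk0 bs=4M" > ${IMG}
# pack the image into a compressed squashfs afterwards
mksquashfs ${IMG} ${IMG%.img}.sqfs -comp xz && rm ${IMG}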
And all RPis which support it are completely storageless, booting via TFTP with an NFS root fs. On the NFS server each of them owns a loop-mounted dd image as root fs, which can simply be "cp"-ed before doing an update where I do not know whether all my config will survive.
