
SSH known hosts verification failure one liner


WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!

Those who regularly build and rebuild machines or virtual machines on a DHCP network will probably be faced with this quite often. It happens because the fingerprint recorded for the previous host differs from that of the new host which has acquired the same IP address.

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ECDSA key sent by the remote host is
c5:ab:00:3c:88:7e:18:8f:46:49:1d:af:f1:8b:4e:98.
Please contact your system administrator.
Add correct host key in /root/.ssh/known_hosts to get rid of this message.
Offending ECDSA key in /root/.ssh/known_hosts:66
ECDSA host key for 192.168.1.165 has changed and you have requested strict checking.
Host key verification failed.

There is an option to have SSH ignore these when connecting; however, I find cleaning out the old line before connecting far quicker, and I do this with a sed one-liner.

The line number we are interested in within the known_hosts file is given at the end of this line of the error output:

Offending ECDSA key in /root/.ssh/known_hosts:66

66 in this case, so we can get sed to simply delete that line using:

sed -i '66d' ~/.ssh/known_hosts

An SSH session can now be opened without the "Host key verification failed" error.
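
As an aside, OpenSSH's ssh-keygen can do the same cleanup by hostname or IP rather than by line number; this isn't the one-liner above, just an alternative worth knowing:

# Remove every key stored for that host/IP from known_hosts
ssh-keygen -R 192.168.1.165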

Hope this helps someone.

rsync with spaces in filenames fun


While I was trying to copy a load of files from one server to another, I ran into a problem I’ve seen time and time again: rsync with filenames that have spaces in them. Normally this can be easily fixed via a number of methods:

Continue reading “rsync with spaces in filenames fun”
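
In the meantime, the usual quick fixes look roughly like this (a sketch only, not necessarily the methods the full post settles on; the paths are placeholders):

# Escape the space so the remote shell doesn't split the path
rsync -av user@server:"/path/with\ space/" /local/dest/
# Or let rsync pass the remote path through without shell word-splitting
rsync -av --protect-args user@server:"/path/with space/" /local/dest/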

Hourly Backup Script for Email

So I was working on a script a while ago to back up all my email on my Linux box. I put together a fairly simple bash script to delve into the home directories, pick out each Maildir and back it up. Great!

DRIVE="/dev/sdb1"
HOME="/home/"
LIST="/tmp/backlist_$$.txt"
#
mount $DRIVE /backup
set $(date)   # $1=day name, $2=month, $3=day of month, $6=year
for DIR in `find /home/ -name "Maildir" -print`; do
    NAME=`echo $DIR | awk -F/ '{print $3}'`   # third path element is the username
    tar vcfz "/backup/Mail/mail_full_${NAME}_$6-$2-$3.tgz" $DIR
done
umount /backup

So it was doing a full backup each day, which quickly became stoopid in size, so I decided to go for a weekly full and daily differentials.

DRIVE="/dev/sdb1"
HOME="/home/"
LIST="/tmp/backlist_$$.txt"
#
mount $DRIVE /backup
set $(date)   # $1=day name, $2=month, $3=day of month, $6=year
#
if test "$1" = "Sat" ; then
    # Weekly: a full backup of all data and config settings

    # Backup of users' Mail directories
    for DIR in `find /home/ -name "Maildir" -print`; do
        NAME=`echo $DIR | awk -F/ '{print $3}'`
        tar vcfz "/backup/Mail/mail_full_${NAME}_$6-$2-$3.tgz" $DIR
    done
    rm -f /backup/home/Mail/mail_diff*
# Daily differential backup
else
    # Incremental Maildir: only files changed or modified in the last day
    for DIR in `find /home/ -name "Maildir" -print`; do
        NAME=`echo $DIR | awk -F/ '{print $3}'`
        find $DIR -depth -type f \( -ctime -1 -o -mtime -1 \) -print > $LIST
        tar vcfzT "/backup/Mail/mail_diff_${NAME}_$6-$2-$3.tgz" "$LIST"
    done

fi
sleep 5
umount /backup

Nice! So now I’ve got a nice little backup solution. I have a function which rotates the archives, which I have yet to add. Then it will be nearly perfect…
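
As a placeholder for that missing rotation, something along these lines would prune old archives (the retention periods and paths are my assumption, not the author's function):

# Sketch only: drop full backups older than ~5 weeks and differentials older than a week
find /backup/Mail -name 'mail_full_*.tgz' -mtime +35 -delete
find /backup/Mail -name 'mail_diff_*.tgz' -mtime +7 -delete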

 

Or will it?

So what happens if I receive an email during the day, then accidentally delete it before the daily backup? Err… uh-oh!

OK, so a secondary mail store would now work a treat:

for DIR in `find /home/ -name "Maildir" -print`; do
    NAME=`echo $DIR | awk -F/ '{print $3}'`
    # Mirror the live Maildir, skipping Dovecot's index/uidlist files
    /usr/bin/rsync -a $DIR/. /backup/Maildirs/$NAME/Maildir/ --delete --exclude="dovecot.index*" --exclude="dovecot-uidlist"
done

Dropping that into a separate script and adding an hourly cron job would make sure I could go back an hour. But only an hour… hmm.
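
For illustration, the hourly cron entry might look like this (the script path is a placeholder of mine, not from the post):

# Hypothetical crontab line: run the hourly Maildir copy at the top of every hour
0 * * * * /usr/local/bin/maildir-hourly.sh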

So, utilising a handy feature of *nix filesystems (inodes and hard links), I created a section which keeps several generations of the files as hard-linked copies, only actually storing the differences between them, and thus not eating up precious disk space…

for DIR in `find /home/ -name "Maildir" -print`; do
    NAME=`echo $DIR | awk -F/ '{print $3}'`
    # NAME="accounts"
    echo "Running hourly backup for $NAME"
    # Rotate the snapshots: the oldest directory gets recycled as the new Maildir.0
    mv /backup/Maildirs/$NAME/Maildir.{12,tmp}
    mv /backup/Maildirs/$NAME/Maildir.{11,12}
    mv /backup/Maildirs/$NAME/Maildir.{10,11}
    mv /backup/Maildirs/$NAME/Maildir.{9,10}
    mv /backup/Maildirs/$NAME/Maildir.{8,9}
    mv /backup/Maildirs/$NAME/Maildir.{7,8}
    mv /backup/Maildirs/$NAME/Maildir.{6,7}
    mv /backup/Maildirs/$NAME/Maildir.{5,6}
    mv /backup/Maildirs/$NAME/Maildir.{4,5}
    mv /backup/Maildirs/$NAME/Maildir.{3,4}
    mv /backup/Maildirs/$NAME/Maildir.{2,3}
    mv /backup/Maildirs/$NAME/Maildir.{1,2}
    mv /backup/Maildirs/$NAME/Maildir.{0,1}
    mv /backup/Maildirs/$NAME/Maildir.{tmp,0}
    # Hard-link the previous snapshot's files into the new Maildir.0
    cp -al /backup/Maildirs/$NAME/Maildir.{1/.,0}
    # Sync the live Maildir over it; unchanged files stay hard-linked, changed ones are rewritten
    /usr/bin/rsync -a $DIR/. /backup/Maildirs/$NAME/Maildir.0/ --delete --exclude="dovecot.index*" --exclude="dovecot-uidlist"
    echo `date` >/backup/Maildirs/$NAME/Maildir.0/dumptime
done

I probably could have made this a lot more elegant (and probably will put a loop in there), but for now it does the trick. It keeps 12 hourly generations of the backup, yet really only stores the full copy once plus the differences between generations, by hard-linking unchanged files to the same inodes rather than creating new copies each time.
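
For what it's worth, the rotation above could be collapsed into a loop along these lines (a sketch only, mirroring the paths and generation count of the script above):

# Sketch: the same snapshot rotation as above, done with a loop
BASE="/backup/Maildirs/$NAME/Maildir"
mv "$BASE.12" "$BASE.tmp"
for i in `seq 12 -1 1`; do
    mv "$BASE.$((i-1))" "$BASE.$i"
done
mv "$BASE.tmp" "$BASE.0"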

My next task to make this even better is to integrate the two scripts, pretty them up, and use hourly/daily/weekly versions rather than the tarballs.