This little problem bugged me for ages (having to type the password by hand every single time).
Today I finally unblocked the "meridians", as the kung-fu novels say,
so here is a post to commemorate it.

0. Prerequisite (generate the public/private key pair)

ssh-keygen -t rsa
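
If you are scripting the whole thing, ssh-keygen can also run with no prompts at all. A minimal sketch using only standard OpenSSH flags (-N '' sets an empty passphrase, -q is quiet; the existence check avoids the interactive overwrite prompt):

# generate the key pair only if one does not exist yet
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -b 4096 -N '' -q -f ~/.ssh/id_rsa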

1. Install the dependencies

yum install expect tcl-devel -y
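
A quick sanity check that expect actually landed on the box (command -v is POSIX; expect -v just prints the version string, which varies by distro):

command -v expect || echo "expect is not installed"
expect -v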

2. The Ctrl+C / Ctrl+V script

vim auto_ssh.sh

#!/usr/bin/expect

set timeout 10
set username [lindex $argv 0]
set password [lindex $argv 1]
set hostname [lindex $argv 2]

spawn ssh-copy-id -i /root/.ssh/id_rsa.pub $username@$hostname
expect {
    # first connection: host key not yet in ~/.ssh/known_hosts
    # (substring pattern, so it also matches the newer "(yes/no/[fingerprint])?" prompt)
    "Are you sure you want to continue connecting" {
        send "yes\r"
        expect "password:"
        send "$password\r"
    }
    # host key already in ~/.ssh/known_hosts: straight to the password prompt
    "password:" {
        send "$password\r"
    }
    # key already authorized: nothing to do
    "Now try logging into the machine" {}
}
expect eof
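
Remember chmod +x auto_ssh.sh before running it. Side note: if you would rather not depend on expect at all, sshpass can drive ssh-copy-id the same way. A rough stand-in sketch, assuming sshpass is available (on CentOS it usually comes from EPEL) and that your ssh-copy-id forwards -o options to ssh:

#!/bin/bash
# hypothetical sshpass-based stand-in for auto_ssh.sh
# usage: ./auto_ssh_sshpass.sh <user> <password> <host>
user=$1; password=$2; host=$3
# accept-new auto-answers the first-connection host-key prompt (OpenSSH 7.6+);
# on older clients fall back to StrictHostKeyChecking=no
sshpass -p "$password" ssh-copy-id -i ~/.ssh/id_rsa.pub \
    -o StrictHostKeyChecking=accept-new "$user@$host"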

3. Serving it up

  • Usage example:

./auto_ssh.sh hadoop '123456' 192.168.0.102

Argument         Meaning
./auto_ssh.sh    the script itself
hadoop           username
'123456'         password
192.168.0.102    target IP address
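
Why the single quotes around the password: they stop the shell from expanding characters like $ and ! before the script ever sees them. A hypothetical example with a messier password:

./auto_ssh.sh hadoop 'p@$$w0rd!' 192.168.0.102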


  • Usage in detail:
[hadoop@192.168.0.101 ~]$ cat /etc/hosts
192.168.0.101 hadoop101
192.168.0.102 hadoop102
[hadoop@192.168.0.101 ~]$ cat ~/.ssh/config
Host hadoop101
Hostname 192.168.0.101
Port 22
User hadoop

Host hadoop102
Hostname 192.168.0.102
Port 22
User hadoop

[hadoop@192.168.0.101 ~]$ ll
total 4
-rwxrwxrwx. 1 hadoop hadoop 864 Mar 29 21:40 auto_ssh.sh
[hadoop@192.168.0.101 ~]$ cat auto_ssh.sh
#!/usr/bin/expect

set timeout 10
set username [lindex $argv 0]
set password [lindex $argv 1]
set hostname [lindex $argv 2]
spawn ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub $username@$hostname
expect {
#first connect, no public key in ~/.ssh/known_hosts
"Are you sure you want to continue connecting (yes/no)?" {
send "yes\r"
expect "password:"
send "$password\r"
}
#already has public key in ~/.ssh/known_hosts
"password:" {
send "$password\r"
}
"Now try logging into the machine" {
#it has authorized, do nothing!
}
}
expect eof
[hadoop@192.168.0.101 ~]$ ./auto_ssh.sh hadoop '123456' 192.168.0.102
spawn ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub hadoop@192.168.0.102
/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/hadoop/.ssh/id_rsa.pub"
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
hadoop@192.168.0.102's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'hadoop@192.168.0.102'"
and check to make sure that only the key(s) you wanted were added.

[hadoop@192.168.0.101 ~]$ ssh hadoop102
[hadoop@192.168.0.102 ~]$
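
To verify the result across every host in one shot, BatchMode=yes makes ssh fail instead of prompting, so any machine that still wants a password shows up immediately (the host list here is just this post's two example machines):

for h in hadoop101 hadoop102
do
    ssh -o BatchMode=yes "$h" hostname || echo "FAILED: $h"
done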

4. Batch use (hundreds of machines)

  • Usage example:

vim ssh_no_passwd_pro.sh

#!/bin/bash

./auto_ssh.sh hadoop '1234562' 192.168.0.102
./auto_ssh.sh hadoop '1234563' 192.168.0.103
./auto_ssh.sh hadoop '1234564' 192.168.0.104
./auto_ssh.sh hadoop '1234565' 192.168.0.105
./auto_ssh.sh hadoop '1234566' 192.168.0.106
./auto_ssh.sh hadoop '1234567' 192.168.0.107
# ... another hundred machines omitted here ...
./auto_ssh.sh hadoop '12345602' 192.168.1.202
./auto_ssh.sh hadoop '12345603' 192.168.1.203
./auto_ssh.sh hadoop '12345604' 192.168.1.204
./auto_ssh.sh hadoop '12345605' 192.168.1.205
./auto_ssh.sh hadoop '12345606' 192.168.1.206
./auto_ssh.sh hadoop '12345607' 192.168.1.207

Look at that. Writing it this way is pretty lame, garbage tier, so let's optimize it a bit. One catch: the three arrays below must stay paired, so loop over the indices. Naively nesting three loops would fire every user and every password at every host (7 x 7 x 7 = 343 runs instead of 7).

#!/bin/bash

users=(
hadoop1
hadoop2
hadoop3
hadoop4
hadoop5
hadoop6
hadoop7
)

pwds=(
12345601
12345602
12345603
12345604
12345605
12345606
12345607
)

hosts=(
hadoop101
hadoop102
hadoop103
hadoop104
hadoop105
hadoop106
hadoop107
)

# loop over the indices so user, password and host stay paired,
# and quote the expansions so odd characters in a password survive
for i in "${!hosts[@]}"
do
    ./auto_ssh.sh "${users[$i]}" "${pwds[$i]}" "${hosts[$i]}"
done

One more round of optimization, and it's time to bring in regular expressions.

  • Usage example:

vim ssh_no_passwd_pro_plus.sh

#!/bin/bash

# build hosts.txt: one "IP user password" line per machine
seq 254 > hosts.txt
sed -i 's#^#192.168.0.#g' hosts.txt
sed -i 's#$# test 123456#g' hosts.txt

# read one line at a time so IP, user and password stay paired
while read -r host user pwd
do
    ./auto_ssh.sh "$user" "$pwd" "$host"
done < hosts.txt
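
With hundreds of machines, running the copies one after another gets slow. A minimal parallel sketch, assuming the same "IP user password" layout in hosts.txt (xargs -P is standard findutils; 10 is an arbitrary concurrency cap):

# each input line yields three tokens: $1=host, $2=user, $3=password
xargs -P 10 -n 3 sh -c './auto_ssh.sh "$2" "$3" "$1"' _ < hosts.txt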

This post builds on:

The article below, which I wrote earlier, plus the pain of typing passwords on hundreds of machines, is where the inspiration came from.

"The good SSH remote-access tutorial, bar none. You deserve it."

Thanks to my former team lead!

Efficiency is what you are after; let efficiency become your pursuit!

See you next time, bye!

Word is that everyone who leaves a like never has downtime and never hits a bug.


Note:

If you can't follow this post, never mind. No explanations will be given and no rebuttals accepted.

Hereby declared.