Using ssh with Amazon AWS EC2 Instances
Amazon’s Web Services (aka AWS) make it very easy to fire up as many servers as you like. I fired up a few dozen over the course of two days while perfecting the scripts that make our standard server build. Our Behavior Targeting team regularly fires up thousands.
Once you’ve fired up an instance you can access it via a very long generated domain name, or you can attach it to an Elastic IP (read: static IP address) and access it that way. I even assigned these static IP addresses domain names like a.emarmite.com, b.emarmite.com, etc.
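If you’re scripting this with the classic ec2-api-tools, allocating an address and pinning it to an instance looks roughly like this (the instance ID here is made up, and the output is trimmed):

$ ec2-allocate-address
ADDRESS 1.2.3.4
$ ec2-associate-address -i i-abc1234 1.2.3.4
ADDRESS 1.2.3.4 i-abc1234

An ordinary DNS A record then maps the friendly name to that address.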
There’s only one problem with this approach:
- Create a node on AWS and assign it to elastic IP address 1.2.3.4, domain name alpha.acme.com
- Log in to your new node
- Destroy the node, build another and assign it to the same elastic IP
- Try to log in again
You won’t be able to without first removing the original node’s host key from ~/.ssh/known_hosts. This check exists to protect you from ‘man in the middle’ attacks, where someone impersonates a computer you want to connect to (via a DNS hijack, for instance) and gets you to enter your user ID and password before you realize you’re not where you think you are.
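Concretely, the second login attempt trips OpenSSH’s standard warning (abridged here):

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!

The manual fix is to delete the stale key before each reconnect, e.g.:

$ ssh-keygen -R alpha.acme.com

which gets old fast when you’re rebuilding nodes all day.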
After a bit of digging, I added the following to ~/.ssh/config:
Host *.amazonaws.com a.acme.com b.acme.com
    ForwardAgent yes
    CheckHostIP no
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
    User root
    IdentityFile ~/.ssh/pk-ec2.pem
What this does is: (a) StrictHostKeyChecking no lets you log in to the server even though there is no entry for it in ~/.ssh/known_hosts, and (b) UserKnownHostsFile /dev/null directs your ssh client to record the server’s host key in /dev/null, i.e. not at all. Together these mean that when you log back in to the same domain name on a fresh AWS node, you won’t trip up on the host key check. (CheckHostIP no likewise stops ssh from complaining when the same name resolves to a different IP address.)
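If you’d rather test the effect before touching your config file, the same options can be passed ad hoc on the command line (using the example hostname and key from above):

$ ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i ~/.ssh/pk-ec2.pem root@alpha.acme.com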
Make sure to change Host to list whichever domain names you set up for your AWS instances, and IdentityFile to point to the private key you set up the AWS instance with.
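With the config in place, a login to a freshly rebuilt node sails straight through; ssh just notes (into /dev/null) that it added the key. Expect something like the following, though the key type and prompt will vary with your AMI:

$ ssh alpha.acme.com
Warning: Permanently added 'alpha.acme.com' (RSA) to the list of known hosts.
root@alpha:~#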