Could someone help me understand what I'm missing to allow SSH into my EC2 instances in my CloudFormation template? I'm guessing my connections are being blocked somewhere, but I can't find any network configuration options for the VPC or subnet https://gist.github.com/zackteo/cc5f3d718f4ada748c1d56805cc97a37
I managed to get it to work already 🙂 https://www.reddit.com/r/aws/comments/k3mu8r/cloudformation_lacking_permissions_to_ssh_into/
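For anyone who hits the same thing: the part that's easy to miss is a security group with an inbound rule for port 22 (plus a public IP and a route to an internet gateway). A minimal sketch of the security group piece - the resource names and CIDR below are placeholders, not from my actual template:

```yaml
# Illustrative sketch only: a security group that permits inbound SSH.
Resources:
  SshSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow SSH from a trusted IP
      VpcId: !Ref MyVpc                # assumes a VPC defined elsewhere
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 22
          ToPort: 22
          CidrIp: 203.0.113.7/32       # placeholder: your own IP
```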
Also, does anyone know the best way to automate copying the public key from the namenode to the worker nodes in an EC2 cluster in CloudFormation?
I want to run `ssh hadoop-worker-1 'cat >> ~/.ssh/authorized_keys' < ~/.ssh/id_rsa.pub`
on my namenode, but it gets stopped by a password prompt.
I believe I need my AWS key, but I'm not sure of a reasonable way for my namenode to get it, or what a good workaround might be
I believe AWS Systems Manager is what you're looking for - it can run scripts etc. on your behalf across your EC2 instances https://docs.aws.amazon.com/systems-manager/latest/userguide/execute-remote-commands.html Another approach is to use user-data and cloud-init to build the authorized_keys on machine boot (that's something I used to do: we'd pull public keys from GitHub's user API and bake in an authorized_keys file for a jump host)
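If it helps, here's a rough sketch of the user-data route in CloudFormation - this assumes you pass the namenode's public key in as a template parameter, and all the names here are illustrative, not from your gist:

```yaml
# Illustrative sketch: seed each worker's authorized_keys at boot via UserData,
# so the namenode can SSH in without a password prompt.
Parameters:
  NamenodePublicKey:
    Type: String
    Description: Contents of the namenode's ~/.ssh/id_rsa.pub

Resources:
  WorkerNode1:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-0abcdef1234567890   # placeholder AMI ID
      InstanceType: t3.medium
      UserData:
        Fn::Base64: !Sub |
          #!/bin/bash
          # Append the namenode's public key for the default user at first boot
          echo "${NamenodePublicKey}" >> /home/ec2-user/.ssh/authorized_keys
          chmod 600 /home/ec2-user/.ssh/authorized_keys
```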
I dimly remember that there's a way to SSH into an EC2 instance using an AMI instead of dealing with keys etc.
You can configure your EC2 instance to allow an SSH key when building it, or pre-bake the authorized_keys file into your AMI
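For the first option that's just the KeyName property on the instance - a minimal sketch (the key pair name and AMI ID are placeholders):

```yaml
# Sketch: reference an existing EC2 key pair so EC2 seeds the default user's
# authorized_keys at launch. Values below are placeholders.
Resources:
  NameNode:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-0abcdef1234567890   # placeholder AMI ID
      InstanceType: t3.medium
      KeyName: my-aws-key              # an existing key pair in this region
```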
Nowadays the best practice is to go through AWS SSM (Session Manager) and not use SSH keys at all, delegating instead to your AWS credentials/identity
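On the CloudFormation side the setup is mostly IAM - a sketch of the minimum wiring, assuming your AMI already ships the SSM agent (Amazon Linux does); names are illustrative:

```yaml
# Sketch: an instance profile whose role carries AmazonSSMManagedInstanceCore,
# which is what Session Manager needs to reach the instance.
Resources:
  SsmInstanceRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: ec2.amazonaws.com
            Action: sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore

  SsmInstanceProfile:
    Type: AWS::IAM::InstanceProfile
    Properties:
      Roles:
        - !Ref SsmInstanceRole

  # Attach via the IamInstanceProfile property on the AWS::EC2::Instance.
```

Then you connect with `aws ssm start-session --target <instance-id>` instead of opening port 22 at all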
Something I really miss from Google Cloud - you can magically SSH to an instance, provided you have access to the GCP project
In AWS it requires some setup