Session Manager
AWS Systems Manager is a great tool with many capabilities. I’ll be covering one particular capability, Session Manager, as an outright replacement for OpenSSH’s remote shell on EC2. By default, Session Manager uses TLS 1.2 to encrypt session data transmitted between the local machines of users in your account and your EC2 instances. By using the AWS CLI with the ssm-session-manager-plugin, properly configured policies, and ssm-agent running on EC2, we are able to jump right into our EC2 instances just as if we were exposing SSH on them.
The SSM Agent comes preinstalled on Amazon Linux 2 and many other AMIs. We opt to build our EC2 AMIs on top of NixOS, so here’s a snippet for enabling ssm-agent in our configuration.nix:
services = {
  openssh = {
    enable = true;
    settings.PasswordAuthentication = false;
    openFirewall = false; # SSH is done via SSM proxy
  };
  ssm-agent.enable = true;
};
When deploying an AMI with ssm-agent installed onto an EC2 instance, we are able to deny all ingress connectivity on the security group associated with the instance. This HCL snippet is an example of what that security group looks like:
resource "aws_security_group" "vm" {
name_prefix = "vm"
vpc_id = data.aws_vpc.this.id
egress {
protocol = "-1"
from_port = 0
to_port = 0
cidr_blocks = ["0.0.0.0/0"]
}
}
Note that we are not defining any ingress rules for the security group, so all inbound traffic is blocked by default.
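The catch-all egress could likely be tightened as well: as noted in the Troubleshooting section later in this post, ssm-agent only needs outbound HTTPS. Here’s a sketch of the same resource with its egress narrowed accordingly; this is an optional hardening step, not part of the original setup:
resource "aws_security_group" "vm" {
  name_prefix = "vm"
  vpc_id      = data.aws_vpc.this.id

  # ssm-agent only needs outbound HTTPS (443) to reach the SSM
  # endpoints (see the Troubleshooting section below).
  egress {
    protocol    = "tcp"
    from_port   = 443
    to_port     = 443
    cidr_blocks = ["0.0.0.0/0"]
  }
}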
AWS IAM Managed Policy
This EC2 instance needs an instance profile associated with it that allows the ssm-agent to communicate with the Session Manager service inside of AWS.
AWS provides many pre-baked IAM Managed Policies so that users can leverage services quickly. This all sounds great until you discover the managed policy AmazonSSMManagedInstanceCore and the exhaustive list of IAM actions defined inside:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ssm:DescribeAssociation",
        "ssm:GetDeployablePatchSnapshotForInstance",
        "ssm:GetDocument",
        "ssm:DescribeDocument",
        "ssm:GetManifest",
        "ssm:GetParameter",
        "ssm:GetParameters",
        "ssm:ListAssociations",
        "ssm:ListInstanceAssociations",
        "ssm:PutInventory",
        "ssm:PutComplianceItems",
        "ssm:PutConfigurePackageResult",
        "ssm:UpdateAssociationStatus",
        "ssm:UpdateInstanceAssociationStatus",
        "ssm:UpdateInstanceInformation"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "ssmmessages:CreateControlChannel",
        "ssmmessages:CreateDataChannel",
        "ssmmessages:OpenControlChannel",
        "ssmmessages:OpenDataChannel"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2messages:AcknowledgeMessage",
        "ec2messages:DeleteMessage",
        "ec2messages:FailMessage",
        "ec2messages:GetEndpoint",
        "ec2messages:GetMessages",
        "ec2messages:SendReply"
      ],
      "Resource": "*"
    }
  ]
}
The above policy appears to be a generic one that encompasses all of the capabilities the AWS Systems Manager service provides. It contains a ton of actions that are unnecessary when only granting remote shell access via ssm-agent. So what is the bare minimum that would meet our needs for remote access?
Trimmed Down IAM Policy
We decided to go digging to see exactly which actions were needed in the IAM instance profile to allow a shell via Session Manager. After some troubleshooting and comparing our findings with other sources online, we were able to trim the policy down to just 5 actions for a standard remote shell:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "ssmmessages:CreateDataChannel",
        "ssmmessages:OpenDataChannel",
        "ssmmessages:CreateControlChannel",
        "ssmmessages:OpenControlChannel",
        "ssm:UpdateInstanceInformation"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
Here’s a snippet of the same policy defined as HCL:
resource "aws_iam_policy" "vm" {
name_prefix = "vm"
policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = [
"ssmmessages:CreateDataChannel",
"ssmmessages:OpenDataChannel",
"ssmmessages:CreateControlChannel",
"ssmmessages:OpenControlChannel",
"ssm:UpdateInstanceInformation",
]
Effect = "Allow"
Resource = "*"
},
]
})
}
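The policy on its own does nothing until it’s attached to a role the instance can assume via its instance profile. Here’s a rough sketch of how that wiring might look; the role, attachment, and instance profile resources (all named vm) are illustrative, not taken from the original setup:
# Illustrative wiring; resource names are hypothetical.
resource "aws_iam_role" "vm" {
  name_prefix = "vm"

  # Allow EC2 to assume this role through the instance profile.
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action    = "sts:AssumeRole"
        Effect    = "Allow"
        Principal = { Service = "ec2.amazonaws.com" }
      },
    ]
  })
}

resource "aws_iam_role_policy_attachment" "vm" {
  role       = aws_iam_role.vm.name
  policy_arn = aws_iam_policy.vm.arn
}

resource "aws_iam_instance_profile" "vm" {
  name_prefix = "vm"
  role        = aws_iam_role.vm.name
}
The instance would then reference the profile, e.g. iam_instance_profile = aws_iam_instance_profile.vm.name on an aws_instance resource.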
Connecting with SSM Plugin
After applying the trimmed policy and provisioning a new EC2 instance, we can try to establish a remote connection via our local machine, the AWS CLI, and the ssm-session-manager-plugin. We’ll set up a nix shell to ensure that we have all the requirements locally:
$ nix shell nixpkgs#awscli2 nixpkgs#ssm-session-manager-plugin
Then establish the connection with the remote EC2 instance:
$ aws ssm start-session --target i-0123456789abcdefg
Starting session with SessionId: josh-0123456789abcdefg
sh-5.2$ uname -a
Linux mu 6.1.19 #1-NixOS SMP PREEMPT_DYNAMIC Mon Mar 13 09:21:32 UTC 2023 x86_64 GNU/Linux
sh-5.2$ whoami
ssm-user
Awesome, the connection was successful with our trimmed down IAM policy!
Troubleshooting
- If you find yourself wanting to iterate on these IAM policies with a live EC2 instance, we highly recommend restarting ssm-agent.service after adjusting your policy to verify that the ssm-agent is making use of the latest IAM policy changes. We noticed ssm-agent wasn’t always honoring our policy changes after we updated them:
$ systemctl restart ssm-agent.service
It’s a good idea to have alternative modes of connectivity (OpenSSH) into the EC2 instance while iterating on policies. It’s easy to lock yourself out!
- Because the ssm-agent running on the instance has to establish a connection with the SSM service in AWS, it will need, at minimum, egress HTTPS (port 443) to:
ec2messages.region.amazonaws.com
ssm.region.amazonaws.com
ssmmessages.region.amazonaws.com
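If your instances live in private subnets with no route to the Internet, VPC interface endpoints for those three services are one common way to satisfy this requirement. A rough sketch; the subnet and security group references (aws_subnet.private, aws_security_group.endpoints) are placeholders, not part of the original setup:
data "aws_region" "this" {}

# Interface endpoints so ssm-agent can reach SSM without Internet egress.
resource "aws_vpc_endpoint" "ssm" {
  for_each = toset(["ssm", "ssmmessages", "ec2messages"])

  vpc_id              = data.aws_vpc.this.id
  service_name        = "com.amazonaws.${data.aws_region.this.name}.${each.key}"
  vpc_endpoint_type   = "Interface"
  subnet_ids          = [aws_subnet.private.id]
  security_group_ids  = [aws_security_group.endpoints.id]
  private_dns_enabled = true
}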
- You’ll receive the following error if you don’t have the Session Manager plugin installed or correctly configured locally with the AWS CLI:
$ aws ssm start-session --target i-0123456789abcdefg
SessionManagerPlugin is not found. Please refer to SessionManager Documentation here: http://docs.aws.amazon.com/console/systems-manager/session-manager-plugin-not-found
SSH via Session Manager
Session Manager can also be combined with SSH via SSH’s proxy functionality. This uses Session Manager as a transport to reach the EC2 instance, then leverages SSH on loopback and SSH authentication (an SSH key in our case) for access. From there you can leverage all the things you would expect via SSH, like ssh-agent forwarding, all without ever exposing SSH via ingress security group rules.
Add the following host configuration to your SSH config, ~/.ssh/config:
host i-* mi-*
    ProxyCommand sh -c "aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters 'portNumber=%p'"
The --document-name option is important since it defines the type of session. Without a document specified, a standard shell session on the managed node is launched by default.
Assuming we have an SSH key in place for a user, josh, you can connect to the instance like so:
$ ssh i-0123456789abcdefg
Last login: Tue May 23 23:04:58 2023 from ::1
[josh@nwi:~]$
Or you can opt to call the proxy command inline with SSH without modifying ~/.ssh/config:
$ ssh -o ProxyCommand="aws ssm start-session --target %h \
--document-name AWS-StartSSHSession --parameters 'portNumber=%p'" \
josh@i-0123456789abcdefg
Last login: Tue May 23 23:04:58 2023 from ::1
[josh@nwi:~]$
Conclusion
We’ve covered AWS Systems Manager Session Manager as a replacement for OpenSSH with minimal IAM permissions. I hope this overview has given you some new insights into how to rethink gaining remote access to your EC2 instances and how to increase your overall security posture while operating inside AWS.