EC2 Linux instances are our kingdom, and we as engineers rule over it. If we make a small mistake in the sshd_config file and forget to add our default user (centos / ubuntu / ec2-user) to the AllowUsers section, we lose all SSH access to the instance as that user.
Now, how can we conquer it back without losing any data?
Detach the EBS Volume
Stop your affected EC2 instance and detach its root EBS volume from it.
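If you prefer doing this from the AWS CLI, a minimal sketch looks like the following; the instance and volume IDs are placeholders you would replace with your own.
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0
aws ec2 detach-volume --volume-id vol-0123456789abcdef0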
Then launch a new rescue instance from the same base AMI, in the same Availability Zone as the affected instance. For example, if your affected instance runs CentOS 9, launch another CentOS 9 instance.
Attach your EBS volume to this newly launched instance.
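With the CLI, attaching the volume to the rescue instance could look like this; the volume ID, rescue-instance ID, and the /dev/sdf device name are example values.
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0fedcba9876543210 --device /dev/sdf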
Mounting the volume
Now that the EBS volume is attached to this new instance, list all the block devices on it and note the device name of the newly attached volume.
lsblk -f
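The output will look roughly like the sketch below (device names and UUIDs are illustrative); here xvda1 is the rescue instance's own root filesystem and xvdf1 is the partition on the volume we just attached. Note that both carry the same filesystem UUID, which is why we will need the nouuid option when mounting.
NAME    FSTYPE LABEL UUID          MOUNTPOINT
xvda
└─xvda1 xfs          <same-uuid>   /
xvdf
└─xvdf1 xfs          <same-uuid>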
Create a mount directory, so that we can mount the affected instance's EBS volume there.
Why mount?
- Mounting attaches a storage device to a specified directory, making the files and directories inside it accessible to the user and the system.
mkdir /mnt/rescue
Now mount the volume at this directory
mount -t xfs -o nouuid /dev/xvdf1 /mnt/rescue/
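One assumption here is that the volume shows up as /dev/xvdf1; on Nitro-based instance types it will instead appear as an NVMe device, so the same mount would look something like this (take the exact nvme name and partition number from the lsblk output above).
mount -t xfs -o nouuid /dev/nvme1n1p1 /mnt/rescue/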
Making changes to the file
We can now edit the sshd_config file and add the default user to the AllowUsers line.
vim /mnt/rescue/etc/ssh/sshd_config
If our default user is centos, and we also have a user stalin, then the line should look like this
AllowUsers centos stalin
This will allow the default user (centos over here) and the user stalin to SSH into the instance. PasswordAuthentication should be set to yes if you want the normal user stalin to access this instance with password authentication.
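If you do want password-based login for stalin, the second relevant directive in the same file is this one:
PasswordAuthentication yes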
Unmount and attach volume back
Now that we have corrected the file, we can unmount the volume and attach it back to our original affected instance.
To unmount -
umount /mnt/rescue/
Detach the EBS volume from the rescue instance, attach it back to the original instance as its root device (the same device name it had before, typically /dev/sda1 or /dev/xvda), and start the instance.
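With the CLI, reattaching and starting back up could look like the following sketch; the IDs are placeholders, and /dev/xvda assumes that was the original root device name (the console or aws ec2 describe-instances will show the real one).
aws ec2 detach-volume --volume-id vol-0123456789abcdef0
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0aaaabbbbcccc1111 --device /dev/xvda
aws ec2 start-instances --instance-ids i-0aaaabbbbcccc1111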
System files should always be edited carefully and double-checked before you restart the sshd service.
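One practical way to do that double-check is sshd's built-in test mode, which validates the configuration syntax. You could run it against the mounted file while still on the rescue instance, before unmounting (treat it mainly as a syntax check, since paths in the config resolve against the rescue instance's own filesystem):
sshd -t -f /mnt/rescue/etc/ssh/sshd_config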