Run a named pod instead of a named deployment in the "Create the SSH connection" section
kubectl run -it --rm --image=debian --generator=run-pod/v1 aks-ssh -- sh -c "apt-get update && apt-get install openssh-client -y && sh"
This way the pod (and its single container) will be named aks-ssh.
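For context (a sketch; exact behavior depends on your kubectl version): without --generator=run-pod/v1, older kubectl versions create a deployment, so the pod gets a generated suffix instead of the exact name. You can see the difference via the run label that kubectl run applies:
# With the pod generator: a single pod named exactly aks-ssh
kubectl get pod aks-ssh
# With the older deployment default: pods named like aks-ssh-<hash>-<hash>
kubectl get pods -l run=aks-ssh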
Here are some other improvements from the guide I use on Linux:
Based on https://docs.microsoft.com/en-us/azure/aks/ssh
but improved for Bash with fewer manual steps.
Steps:
[Terminal Session 1] Set convenience variables
Sets the cluster info: SUB_ID, CLUSTER_NAME, RG_CLUSTER. Also locates the first available NODE_IP and NODE_NAME.
If you use non-default SSH keys, change KEY_PUB_PATH and KEY_PRIV_PATH.
SUB_ID=<subscription_id_with_resources> \
&& CLUSTER_NAME=<aks_cluster_name> \
&& RG_CLUSTER=<resource_group_where_aks_cluster_resource_is_located> \
&& NODE_NUMBER=0 \
&& RG_AKS=$(az aks show -g $RG_CLUSTER -n $CLUSTER_NAME --subscription $SUB_ID --query 'nodeResourceGroup' -otsv) \
&& KEY_PUB_PATH="$HOME/.ssh/id_rsa.pub" \
&& KEY_PRIV_PATH="$HOME/.ssh/id_rsa" \
&& NODE_IP=$(az vm list-ip-addresses --resource-group $RG_AKS --subscription $SUB_ID --query "[$NODE_NUMBER].virtualMachine.network.privateIpAddresses[0]" -otsv) \
&& NODE_NAME=$(az vm list-ip-addresses --resource-group $RG_AKS --subscription $SUB_ID --query "[$NODE_NUMBER].virtualMachine.name" -otsv)
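Optionally, a quick sanity check that everything resolved (plain echo, no assumptions beyond the variables set above):
echo "RG_AKS=$RG_AKS NODE_NAME=$NODE_NAME NODE_IP=$NODE_IP"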
[Terminal Session 1] (Optionally) Change desired cluster node
List all nodes
az vm list-ip-addresses --resource-group $RG_AKS --subscription $SUB_ID --query "[].virtualMachine.name"
Change NODE_NUMBER from 0 to the desired node's position in the array (0-based)
NODE_NUMBER=<desired_node_index> \
&& NODE_IP=$(az vm list-ip-addresses --resource-group $RG_AKS --subscription $SUB_ID --query "[$NODE_NUMBER].virtualMachine.network.privateIpAddresses[0]" -otsv) \
&& NODE_NAME=$(az vm list-ip-addresses --resource-group $RG_AKS --subscription $SUB_ID --query "[$NODE_NUMBER].virtualMachine.name" -otsv)
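If you want to see the 0-based indexes next to the names, one possible sketch is to number the tsv output (assumes an nl that supports -v for the start value, as GNU coreutils does):
az vm list-ip-addresses --resource-group $RG_AKS --subscription $SUB_ID --query "[].virtualMachine.name" -o tsv | nl -v 0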
[Terminal Session 1] (Optionally) Add an SSH public key to the node (if it doesn't already have one from AKS creation)
az vm user update \
--resource-group $RG_AKS \
--name $NODE_NAME \
--subscription $SUB_ID \
--username azureuser \
--ssh-key-value $KEY_PUB_PATH
[Terminal Session 2] In a separate terminal window, run a Debian pod with openssh-client
kubectl run -it --rm --image=debian --generator=run-pod/v1 aks-ssh -- sh -c "apt-get update && apt-get install openssh-client -y && sh"
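Note: newer kubectl releases no longer accept the --generator flag and create a bare pod by default, so on those versions the same step should simply be:
kubectl run -it --rm --image=debian aks-ssh -- sh -c "apt-get update && apt-get install openssh-client -y && sh"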
[Terminal Session 1] Once you are in the container shell session in Terminal Session 2, copy the SSH private key into the container
kubectl cp $KEY_PRIV_PATH aks-ssh:/id_rsa
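kubectl cp relies on tar being available in the container, which the debian image includes. If you started the pod in a non-default namespace, prefix the pod name with it (the namespace name here is illustrative):
kubectl cp $KEY_PRIV_PATH my-namespace/aks-ssh:/id_rsa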
[Terminal Session 1] Print NODE_IP
echo $NODE_IP
[Terminal Session 2] Set permissions on the private key and SSH to the node IP printed by the previous command in Terminal Session 1
chmod 0600 /id_rsa && ssh -i /id_rsa azureuser@<node_ip>
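Since the pod is throwaway, you may also want to skip the interactive host-key prompt (a standard OpenSSH option, nothing AKS-specific):
ssh -i /id_rsa -o StrictHostKeyChecking=no azureuser@<node_ip>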
[Terminal Session 2] Finishing up
Run exit to end the SSH session, then exit again to leave the container shell; the pod will be removed automatically.
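Because the pod was started with --rm, the private key copy disappears along with it. A quick way to confirm cleanup (expects a NotFound error once the pod is gone):
kubectl get pod aks-ssh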
Thanks for the feedback! I have assigned the issue to the content author to investigate further and update the document as appropriate.
Thanks for the feedback, @choovick. Some of this feedback is already underway, such as setting convenience variables. The problem is that it makes the doc more dependent on a given shell and abstracts away a little too much of what the user is doing.
When it comes to the security of the nodes, we've found that combining multiple commands as you propose makes it harder to understand what's happening and what is being done to the cluster, but it's absolutely something you're welcome to script and combine once you understand what is being done.
As we already have some improvements lined up in this doc to help with readability, @MicahMcKittrick-MSFT #please-close
@choovick - There is an even simpler solution if you want to update the user on all nodes, documented here:
az vm user update -u username --ssh-key-value "$(< ~/.ssh/id_rsa.pub)" --ids $(az vm list -g MyResourceGroup --query "[].id" -o tsv)
This doesn't work with CMD but thankfully there is WSL ;)
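Wiring that into the variables from the guide above would look roughly like this (same command, just scoped to the AKS node resource group; an untested sketch):
az vm user update -u azureuser --ssh-key-value "$(< $KEY_PUB_PATH)" --ids $(az vm list -g $RG_AKS --subscription $SUB_ID --query "[].id" -o tsv)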