K8s: transfer files from a stand-alone server to a Kubernetes cluster


Cloud technologies aren’t new nowadays, but when you have to do some administrative work on your Kubernetes cluster, it isn’t always obvious what to do.

I needed to transfer media files from an old server to a new EKS cluster. Really, it doesn’t matter where the cluster is hosted: AWS, Google Cloud, or Azure; either way you will work with kubectl.

Set up public key authentication

Most likely your Pods are placed in a private subnet, so there is no way to connect to them from the internet. Instead, I’ll pull the files into the cluster from a container running inside it.

Connect to container

NOTE: You should mount a persistent volume into your Pod, otherwise all your data will be lost when the Pod is recreated.

First, check which Pods are running:

$ kubectl get po
NAME                          READY   STATUS    RESTARTS   AGE
web-7759d99865-czh74          2/2     Running   0          4h40m

Then connect to the container:

$ kubectl exec -it web-7759d99865-czh74 -c nginx -- bash

Note the -c nginx flag: it specifies the container you want to connect to. You can list a Pod’s containers with kubectl describe deploy web.

kubectl exec -it is the analog of docker exec -it, with the difference that it connects to a container inside the K8s cluster.

Create public key

Most likely your container is based on a slim Linux image, which means only a minimal number of packages is preinstalled. ssh-keygen ships in the openssh-client package, so install it first:

$ apt-get update -y
$ apt-get install -y openssh-client

On Alpine-based images, use apk add openssh-client instead.

Now we can create an SSH key with:

$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:r0vRbo6AnwHYkNle7SozAKbcydIVxJlj759beiQNQh4 root@web-7777777777-czh74
The key's randomart image is:
+---[RSA 2048]----+
|    ooo          |
|   + *.E         |
|..+ o.* o        |
|+.+=o. = o       |
|.oo=+ . S +      |
|  .. o o = o     |
|    = + o B.     |
|     = = Bo.     |
|      o +++      |
+----[SHA256]-----+

Now copy your public key to the clipboard; in the next step we will place it on the local machine and then add it to the old server through ssh-copy-id.

$ cat ~/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCtGRDGXYfaaTFJIBSPe1aGoXXin/+0qBDYqGxQ3j4p2Vsb9L1L6pJA2bUx7v9GFyJNrFEI6RQnG54iOF4a4mo20ZFbzbZT0xI2TJNxhW13VJmGvhamdd/hjIfwxDZB4ZWs4T+l4+cCWzdUmH0OOpwmxEag5pqIhIYaG4h6cKa+m7wFqrZFd6EXPChSMFaLx494H5o+A6XvJNaQjCm7rQzdv0EecJcoeYFgk8ecaygYPPWmdLbutWAJwFz0vGnF6mVbvZgW3aZfgk55GY/Iw85I6rTGHnNOOraJezQu7cHXymexryWQd3VzFa4Wgh90hvTBdLydYRyXen2wRYV7xXfB root@web-7777777777-czh74
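If you want to double-check a key you created, ssh-keygen itself can print its fingerprint, just like the one shown in the generation output above. A minimal local sketch (the /tmp/demo_key path is made up for illustration):

```shell
# Generate a throwaway RSA key non-interactively (-N "" sets an empty
# passphrase, -q suppresses the banner), then print its fingerprint.
ssh-keygen -t rsa -b 2048 -N "" -f /tmp/demo_key -q
# Prints something like: 2048 SHA256:... user@host (RSA)
ssh-keygen -lf /tmp/demo_key.pub
```

The -lf option reads a key file and reports its bit length, SHA256 fingerprint, and comment, which is handy for matching a public key against the one you pasted onto the old server.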

Add public key to old server

Go to your local machine and paste the clipboard content into ~/.ssh/web-node.pub.

a. Add the key to authorized_keys with the following command:

$ cat ~/.ssh/web-node.pub | ssh <user>@<hostname> 'umask 0077; mkdir -p .ssh; cat >> .ssh/authorized_keys && echo "Key copied"'

b. The same thing can be done with a simpler command:

$ ssh-copy-id  -i ~/.ssh/web-node.pub <user>@<hostname>

  • Add -p <port-number> if the SSH port differs from the default 22.
  • Use the -f switch, which copies just the public key to the server without validating that the matching private key exists (you need a new enough OpenSSH installed).

Use whichever command you like more.
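The umask 0077 in variant (a) is what makes the one-liner safe: everything created under it gets no group or other permissions, which is exactly what sshd demands of ~/.ssh and authorized_keys. A quick local sketch using GNU stat (the /tmp/demo_home path is hypothetical):

```shell
# Create an .ssh directory and key file under a restrictive umask, as the
# piped ssh command does on the remote server, then inspect the modes.
(umask 0077; mkdir -p /tmp/demo_home/.ssh; touch /tmp/demo_home/.ssh/authorized_keys)
stat -c '%a' /tmp/demo_home/.ssh                  # 700
stat -c '%a' /tmp/demo_home/.ssh/authorized_keys  # 600
```

With looser permissions on either path, sshd typically refuses public key authentication, so the umask saves you a debugging session.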

Transfer media files

To copy the entire send directory from your remote server into the put directory on the destination server, enter:

with scp

$ scp -rpC <user>@<remotehost>:/folder/to/send /where/to/put

When the copy process is done, you will find a directory named send, with all its files, on the destination server. The send folder is created automatically.

  • -r recursively copies the source directory and its content.
  • -p preserves the modification times, access times, and modes of the original files.
  • -C enables compression. If you are copying a lot of files across the network, -C helps decrease the total transfer time.

with rsync

$ rsync -chavzP --ignore-existing --stats --exclude='*.log' <user>@<remotehost>:/folder/to/send /where/to/put/

The send directory will appear inside the put directory on the destination host.

An explanation of the command: explainshell.com/…

I recommend the rsync approach because, in case of an error, you can simply continue copying the remaining files. With scp you would have to copy everything all over again.

That’s all. Now you just have to wait until the files are copied.
