apiVersion: v1
kind: Namespace
metadata:
  name: sftp
---
kind: Service
apiVersion: v1
metadata:
  name: sftp
  namespace: sftp
  labels:
    environment: production
spec:
  type: "LoadBalancer"
  ports:
    - name: "ssh"
      port: 22
      targetPort: 22
  selector:
    app: sftp
status:
  loadBalancer: {}
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: sftp
  namespace: sftp
  labels:
    environment: environment: production
    app: sftp
spec:
  # how many pods and indicate which strategy we want for rolling update
  replicas: 1
  minReadySeconds: 10
  template:
    metadata:
      labels:
        environment: production
        app: sftp
      annotations:
        container.apparmor.security.beta.kubernetes.io/sftp: runtime/default
    spec:
      # secrets and config
      volumes:
        - name: sftp-public-keys
          configMap:
            name: sftp-public-keys
      containers:
        # the sftp server itself
        - name: sftp
          image: atmoz/sftp:latest
          imagePullPolicy: Always
          env:
          # - name: PASSWORD
          #   valueFrom:
          #     secretKeyRef:
          #       name: sftp-server-sec
          #       key: password
          args: ["myUser::1001:100:incoming,outgoing"] # create users and dirs
          ports:
            - containerPort: 22
          volumeMounts:
            - mountPath: /home/myUser/.ssh/keys
              name: sftp-public-keys
              readOnly: true
          securityContext:
            capabilities:
              add: ["SYS_ADMIN"]
          resources: {}
I have some problems with this. The challenging part is figuring out how to mount a directory that contains our Python scripts.
Basically, I want to mount a directory with scripts and then start two Python scripts that listen on an input directory and an output directory inside that mounted directory in the container.
In a local environment this is easy, because I start my Docker container like this:
docker run -v //c/Users/..../model:/home/foo/upload -p 2222:22 -d testko:1.0.0 foo:pass:1001
The user setup is kept simple for now; I don't want to bother with password security yet, I will cover that once I get this working.
And this is all working...
#1.1 Do I need to create an Azure file share on AKS and then a Persistent Volume? What would all of this look like?
I am quite new to Azure and Kubernetes; I have learned a lot in the last few days, but maybe there is someone here who has worked on something like this?
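For what it's worth, a rough sketch of how that could look with Azure Files on AKS follows. The storage class assumes the built-in azurefile-csi class; the claim name sftp-scripts and the mount path are made up for illustration and are not part of the gist:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sftp-scripts              # hypothetical name for the scripts share
  namespace: sftp
spec:
  accessModes:
    - ReadWriteMany               # Azure Files supports shared read/write mounts
  storageClassName: azurefile-csi # assumes the AKS built-in Azure Files CSI storage class
  resources:
    requests:
      storage: 5Gi

And in the deployment's pod spec, alongside the existing configMap volume:

      volumes:
        - name: scripts
          persistentVolumeClaim:
            claimName: sftp-scripts
      containers:
        - name: sftp
          volumeMounts:
            - mountPath: /home/myUser/upload   # mirrors the docker -v example above
              name: scripts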
This example is working perfectly for me. But running under Azure I experience the following problem: each node in the cluster is issuing a TCP connect to the running pod. This results in the following log message spamming the ELK stack:
Did not receive identification string from 10.240.0.4 port 50255
10.240.0.4 is the IP of one of the cluster nodes. The message is repeated once per minute by every node. Pretty annoying. A solution would be to reduce the log level of the SSH daemon. Any ideas how to accomplish this?
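If those connects come from the load balancer's health probes being forwarded through every node (the default externalTrafficPolicy: Cluster behaviour of a LoadBalancer service), one thing that may be worth trying, besides lowering sshd's LogLevel, is restricting traffic to the nodes that actually run the pod. This is only a sketch; whether it actually silences the probes depends on how the Azure health checks are wired:

kind: Service
apiVersion: v1
metadata:
  name: sftp
  namespace: sftp
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # only nodes hosting the pod forward traffic; also preserves client source IPs
  ports:
    - name: ssh
      port: 22
      targetPort: 22
  selector:
    app: sftp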
What should be done to allow anonymous PUT/GET?
Thanks
@jujhars13 I made a slightly improved version of this here; if you are interested, you can copy the changes back here.
Changes I made:
- Update some things to allow it to work with newer Kubernetes versions
- Fix environment: environment: production
- Only pull the image if it isn't already present
- Rename the sftp-public-keys ConfigMap to sftp-client-public-keys and change it to a generic secret
- Add a generic secret called sftp-host-keys for the server's host keys (a rough sketch of the secret setup follows this list)
- Make the user directory a persistent volume
- Change the SFTP port to 23 to allow connecting with SSH to the Kubernetes node this runs on
- Disable AppArmor because I couldn't get it to work
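A rough sketch of that secret setup (the key file names are assumptions; the host-key mount points follow what the atmoz/sftp README describes for providing your own host keys):

kubectl -n sftp create secret generic sftp-client-public-keys \
  --from-file=myUser.pub
kubectl -n sftp create secret generic sftp-host-keys \
  --from-file=ssh_host_ed25519_key --from-file=ssh_host_rsa_key

And in the pod spec:

      volumes:
        - name: sftp-client-public-keys
          secret:
            secretName: sftp-client-public-keys
        - name: sftp-host-keys
          secret:
            secretName: sftp-host-keys
      containers:
        - name: sftp
          volumeMounts:
            - mountPath: /home/myUser/.ssh/keys
              name: sftp-client-public-keys
              readOnly: true
            - mountPath: /etc/ssh/ssh_host_ed25519_key
              name: sftp-host-keys
              subPath: ssh_host_ed25519_key
              readOnly: true
            - mountPath: /etc/ssh/ssh_host_rsa_key
              name: sftp-host-keys
              subPath: ssh_host_rsa_key
              readOnly: true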
@ToMe25 I don't think it's necessary to change sftp-client-public-keys and sftp-host-keys to secrets. It doesn't matter if someone else sees the public keys.
How do I run this pod (sftp) with a non-root user?
I deployed in OpenShift, and when I ssh I end up getting Permission denied (publickey,gssapi-keyex,gssapi-with-mic). Anybody getting the same issue?
Are you trying to connect to the node or the pod using ssh?
If the node, are you trying to connect to the port specified for the pod?
This yaml file sets the port on which you can connect to the pod to port 22.
This is the default ssh port.
That means unless you changed that you won't be able to connect to the node using ssh while the pod is running.
This is because trying to connect to the node will instead connect you to the pod.
If you are trying to connect to the pod using ssh, that can't work.
The SFTP pod is configured to only allow SFTP connections, no ssh connections.
This might cause a different error message tho, I can't remember.
@riprasad as I said in my last message, you shouldn't try to ssh into the sftp server.
The sftp server only allows SFTP connections.
If that isn't what you tried, I'm sorry, but I have no idea what you tried then.
I don't know much about OpenShift tho, so if that wasn't what you tried I probably can't help anyways.
@ToMe25 Sorry, I deleted my last comment since I was not doing a few things right. OpenShift by default runs the image as user 1001 and doesn't allow root access. With a few tweaks here and there I was able to deploy the server. And you were right, I was trying to ssh into the sftp server. Now I am able to connect by doing sftp to port 23 and the CLUSTER-IP of the service.
But connecting to the sftp server using the CLUSTER-IP is possible only if I am logged into the internal OpenShift network. How do I connect to it remotely from some other machine? I tried exposing the service using an OpenShift Route and connecting to it via sftp, but that isn't working (OpenShift Routes are equivalent to Ingress in Kubernetes). I am getting the same error: Permission denied (publickey,gssapi-keyex,gssapi-with-mic)
@riprasad I have just tested it, and the error message for connecting to the sftp server using ssh is a different one.
Also I have never used OpenShift, so this is just my guess, but I have an idea why it doesn't work using the Route.
If a Route really is something similar to a Kubernetes Ingress, then as far as I know it can't work with SSH connections.
This is because the Ingress layer uses the requested hostname to decide which pod to route a connection to.
A plain TCP connection does not carry that information; only some higher-level protocols add it.
HTTP adds it (in the Host header), so it can work with Ingress-like structures; SSH does not, as far as I know, so it cannot.
Simply put, SSH never tells the proxy which domain on that IP it wants to reach, so there is nothing an Ingress-like component can use to route the connection to the correct target.
What you have to do instead is reserve some port on the host exclusively for connections to the sftp server.
That is the only way I know of, at least.
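Concretely, "reserving a port" could look something like the NodePort service below. The service name and the node port 30022 are arbitrary choices; the node port just has to fall inside the cluster's NodePort range (30000-32767 by default):

kind: Service
apiVersion: v1
metadata:
  name: sftp-nodeport
  namespace: sftp
spec:
  type: NodePort
  ports:
    - name: ssh
      port: 22
      targetPort: 22
      nodePort: 30022   # exposed on every node's IP
  selector:
    app: sftp

Clients would then connect with something like sftp -P 30022 myUser@<node-ip>.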
That makes sense. Thanks for the explanation @ToMe25
Also, these lines from the documentation pretty much confirm that:
An Ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically uses a service of type Service.Type=NodePort or Service.Type=LoadBalancer.
Is it possible to get some help with the tweaks you made to get it working on OpenShift?
@afshinyavari Sure. You'll basically have to create a service account and grant it the anyuid SCC to bypass the default security constraints in OpenShift. You can run the commands below as admin to achieve this:
$ oc create serviceaccount sftp-sa
$ oc adm policy add-scc-to-user anyuid -z sftp-sa
Use the created service account in your deployment. In addition, you will also need to configure the security context for the container. Here's the snippet:
spec:
  serviceAccountName: sftp-sa
  containers:
    - name: sftp
      securityContext:
        privileged: true
@afshinyavari Also, I found this project, which is compatible with OpenShift: https://github.com/drakkan/sftpgo
I did not find time to deploy it, but please feel free to explore it, since it is OpenShift-compatible out of the box and offers better features too. Let me know if you're able to deploy it successfully, in case you decide to choose it over atmoz-sftp.
yea, sftpgo indeed is an interesting project! Do share the manifests if you decide to give it a shot :)
sftpgo is all fine, sadly, until you actually need to debug it - drakkan/sftpgo#1412
@salesh - Yes, it worked for me; that was an issue on our side regarding the Azure ALB. It was a small POC that I did at the time.