Expose existing deployment with minikube

in addition to https://lwpro2.dev/2020/03/23/expose-existing-deployment/, if the cluster is created by minikube, there are a few more options for exposing the deployment.

Similar to `kubectl port-forward svc/local-files-12a341c023 8889:8889`, which exposes the service on localhost.
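for example (the curl check below is just an illustration, assuming the service serves plain HTTP on 8889):

# forward local port 8889 to port 8889 of the service
kubectl port-forward svc/local-files-12a341c023 8889:8889

# in another terminal, the service is now reachable on localhost
curl http://localhost:8889/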

Minikube can expose a service in a similar way with:

`minikube service local-test-ecd44fa2fe --url`

for example, for an existing service:

NAME                    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
local-test-ecd44fa2fe   ClusterIP   10.96.222.209   <none>        8501/TCP,1337/TCP   15d

we can patch it to NodePort:

kubectl patch svc local-test-ecd44fa2fe -p '{"spec": {"type": "NodePort"}}'

then run the minikube service command:

minikube service local-test-ecd44fa2fe --url

which would then give us the URL for accessing the svc:

http://192.168.64.9:31012
http://192.168.64.9:31458

the svc is now updated with the node ports:

NAME                    TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)                         AGE
local-test-ecd44fa2fe   NodePort   10.96.222.209   <none>        8501:31012/TCP,1337:31458/TCP   15d
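with the NodePort in place, the service can also be hit directly on the node IP and node port from the URLs above (a sketch, assuming port 8501 serves plain HTTP):

curl http://192.168.64.9:31012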

alternatively, we could also do tunnelling with minikube.

for example, if we patch the existing svc:

kubectl patch svc local-files-12a341c023 -p '{"spec": {"type": "LoadBalancer"}}'

it would update the svc with the ports. At the same time, if we run the tunnel:

minikube tunnel

it will give us the external IP (otherwise it would stay pending):

NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                         AGE
local-new-0608a5336b   LoadBalancer   10.96.117.204   10.96.117.204   8501:30556/TCP,1337:32335/TCP   10d

now we will be able to reach the svc using the external-ip:port.
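for example (a sketch, again assuming the 8501 port serves plain HTTP):

curl http://10.96.117.204:8501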

at the same time, we can still run:

minikube service local-new-0608a5336b --url

which would give us the 192.168.x:port URL.

note: 192.168.x is the cluster IP:

Kubernetes master is running at https://192.168.xx.x:8443

from the host, we can access that IP to get into the cluster.

while the 10.96.xx IP is the within-cluster IP, which, with the tunnel running, is also exposed to the host.
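a quick way to see the two addresses side by side (a sketch; the service name is taken from the output above):

# the 192.168.x address: the minikube node/VM hosting the cluster
minikube ip

# the 10.96.x address: the in-cluster service IP, reachable from the host only while the tunnel runs
kubectl get svc local-new-0608a5336b -o jsonpath='{.spec.clusterIP}'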

python websocket client with auth header

import ssl

import websocket  # pip install websocket-client


def on_message(ws, message):
    print('message received ..')
    print(message)


def on_error(ws, error):
    print('error happened .. ')
    print(error)


def on_close(ws, close_status_code=None, close_msg=None):
    print("### closed ###")


def on_open(ws):
    print('Opening Websocket connection to the server ... ')
    # the session key needs to be passed in the websocket header instead of via ws.send
    ws.send("testing message here")


websocket.enableTrace(True)

token = "........"
auth = "Authorization: Bearer " + token
ws = websocket.WebSocketApp("wss://APISERVER:8443/api/v1/namespaces/default/services/the-service:8889/proxy/websocket?token=123",
                            on_open=on_open,
                            on_message=on_message,
                            on_error=on_error,
                            on_close=on_close,
                            header=[auth]
                            )

# Note: this is the equivalent of the --insecure flag in curl, i.e. tell the client not to verify the ssl certificate
ws.run_forever(sslopt={"cert_reqs": ssl.CERT_NONE})

get the APISERVER and TOKEN values using:

APISERVER=$(kubectl config view --minify | grep server | cut -f 2- -d ":" | tr -d " ")
SECRET_NAME=$(kubectl get secrets | grep ^default | cut -f1 -d ' ')
TOKEN=$(kubectl describe secret $SECRET_NAME | grep -E '^token' | cut -f2 -d':' | tr -d " ")

curl $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure

Expose existing deployment

expose the deployment, pod, or replica set using the expose command:

kubectl expose replicasets.apps existing-rc --port=9000 --target-port=9000 --type=NodePort --name=testport

otherwise, if you already have a service running, you can update its type:

`kubectl patch svc existing-service -p '{"spec": {"type": "NodePort"}}'`

Note: if you already have a cluster IP, you can also use LoadBalancer.
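for example, the same patch with a different type (existing-service is a placeholder name):

kubectl patch svc existing-service -p '{"spec": {"type": "LoadBalancer"}}'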

after that, you should have the service with the necessary port information


if you are using minikube locally, you can get the URL as:

minikube service --url the-new-service


alternatively, you could also achieve this by port forwarding:

kubectl port-forward svc/existing-service host-port:container-port

as such, the service could be reached at hostname:host-port, for example: 127.0.0.1:host-port
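a concrete sketch, using a hypothetical port 9000 on both sides:

kubectl port-forward svc/existing-service 9000:9000

# from another terminal on the host
curl http://127.0.0.1:9000/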

expose more ports for existing service

first, check the current service spec:

kubectl get service local-files-12a341c023 -o yaml



then patch according to the spec:

kubectl patch services local-files-12a341c023 --type='json' -p='[{"op": "add", "path": "/spec/ports/-", "value": {"name":"tornado","port":8889,"targetPort": 8889,"protocol":"TCP"}}]'
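to confirm the new port was appended, a quick check with kubectl's jsonpath output (the "tornado" name comes from the patch above):

kubectl get service local-files-12a341c023 -o jsonpath='{.spec.ports[*].name}'
# should list the existing port names plus the new tornado entry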


docker container to talk to Kubernetes Cluster

There could be a need to talk to the Kubernetes cluster from segregated docker containers. It's possible to do so:


build the pipe from the container to the host machine

There are several ways to connect to the host machine. Since the container runs alongside the host, behaving as if it is on the same subnet, you can reach the host through its public IP.

otherwise, more elegantly, you can leverage host.docker.internal to talk to the host.


proxy the resources for the Kubernetes cluster

you can start a Kubernetes proxy to talk to the cluster:

kubectl proxy --port=8080 --disable-filter &


then, to talk to the resources in the cluster from the container, you can use a URL like:

host.docker.internal:8080/api/v1/namespaces/default/services/pod:{port}/proxy/
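a quick sketch of hitting the proxy from a throwaway container (busybox and the host-gateway mapping are for illustration only; on Docker Desktop host.docker.internal resolves out of the box, and kubectl proxy may need --address/--accept-hosts so that non-localhost traffic is accepted; substitute your own service name and port):

docker run --rm --add-host=host.docker.internal:host-gateway busybox \
  wget -qO- http://host.docker.internal:8080/api/v1/namespaces/default/services/the-service:8889/proxy/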


update dockerfile within docker compose

I have encountered some issues with a stale dockerfile. It turns out docker compose actually caches previous builds (this is not stated in the doc).

so to keep it updated, we need to run the build without the cache and then bring it up:


docker-compose build --no-cache && docker-compose up