Configure Docker for a Node.js API and deploy it on AKS

In this post, we will configure Docker for a Node.js API, run it locally, and then deploy it to Azure Kubernetes Service (AKS).

To start with Docker, we first need to install it. Docker Desktop for Windows can be installed from the following link: https://docs.docker.com/docker-for-windows/install/

Once Docker is installed, we can verify the version by running the following command from PowerShell:

> docker --version

Configure Docker in Node.js API project

I am assuming you are using VS Code for the Node.js API, so first open the Node.js API project in VS Code and install the Docker extension.

Once the extension is installed, we will add a Dockerfile to the project with the build instructions for our API.

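A minimal Dockerfile along these lines works for a typical Node.js API; the node:12 base image, the package.json copy step, and the server.js entry point are assumptions, so adjust them to match your project:

FROM node:12
# Set the working directory inside the container
WORKDIR /src
# Copy the package manifests and install dependencies first so this layer is cached
COPY package*.json ./
RUN npm install
# Copy the rest of the application source
COPY . .
# The API listens on port 3000
EXPOSE 3000
# Command executed when the container starts
CMD ["node", "server.js"]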

The Dockerfile above pulls the Node.js base image from Docker Hub, sets the working directory to /src, and copies the project files into it. The port we will be using is 3000, and CMD specifies the command that is executed to run the API.

Next, add a .dockerignore file to exclude the files that should not be copied into the image.

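A typical .dockerignore for a Node.js project keeps locally installed dependencies and editor files out of the build context; the exact entries here are an assumption:

node_modules
npm-debug.log
.git
.vscode
Dockerfile
.dockerignore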

Build Docker Image

To build the Docker image, go to the directory where the Dockerfile resides and run the following command:

> docker build -t <username>/<servicename> .

Replace <username> with your username (it can be anything) and <servicename> with the name of the service; the trailing dot tells Docker to use the current directory as the build context. This builds the image and stores it in the local Docker image cache.

Once this is done, we can confirm the build succeeded by listing the local images.

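The standard command for listing local images is:

> docker images

The newly built <username>/<servicename> image should appear in the output.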

Run the Docker Image

Once the image is built and available locally, we can run it in a Docker container by executing the following command:

> docker run -p 3000:3000 -d <username>/<servicename>

The -p flag maps a public (host) port to a private port inside the container. In our case, we are using the same port, 3000, for both.
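You can confirm the container is up by listing the running containers:

> docker ps

The container should show a port mapping along the lines of 0.0.0.0:3000->3000/tcp.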

You can now run and test your application; it should be reachable on port 3000.
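As a quick smoke test, call one of the API routes; the root path here is just an assumption, so substitute a route your API actually exposes:

> curl http://localhost:3000/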

Create ACR in Azure

Now, since we need to deploy this to Azure using AKS (Azure Kubernetes Service), we will first create a registry and push the image there. To create the registry, we will use the Azure Container Registry (ACR) service.

Provisioning ACR is simple: log in to the Azure portal, create a new resource, and choose Container Registry.

Hit Create and enable the Admin user.

Once it is created, go to the Settings section and click Access Keys. We need these credentials to authenticate when pushing the image to ACR.
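If you prefer the Azure CLI over the portal, the same registry can be created with the admin user already enabled; the resource group and registry names below are placeholders:

> az acr create --resource-group <resourcegroup> --name <registryname> --sku Basic --admin-enabled true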

Push Docker Image To ACR

In order to push the Docker image, we first need to tag the local image with the ACR login server name. To do this, open PowerShell and execute the following command:

> docker tag <localrepositoryname> <ACRURL>/<servicename>

For example:

> docker tag ovaismehboob/auctionservice ovaismehboob.azurecr.io/auctionservice

Now, log in to the ACR registry using docker login as shown below:

> docker login ovaismehboob.azurecr.io

It will prompt for the admin username and password; copy them from the Access Keys section in ACR and use them to authenticate.

Once this is authenticated, we will push the image as shown below:

> docker push ovaismehboob.azurecr.io/auctionservice

Verify it in ACR in the Azure portal; the image should be listed under the Repositories section.

Set up an Azure Kubernetes Service (AKS) Cluster

To create a new AKS cluster, click Create a resource in the Azure portal and search for Kubernetes Service. Accept the default values and provide settings such as the resource group, Kubernetes cluster name, region, etc.
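The cluster can also be provisioned from the Azure CLI; a minimal example, with the resource group, cluster name, and node count as placeholders, looks like this:

> az aks create --resource-group <resourcegroup> --name <clustername> --node-count 2 --generate-ssh-keys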

Deploy Docker Image to AKS

First, we install kubectl, the Kubernetes CLI, locally on our PC by running the following Azure CLI command.

> az aks install-cli
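To let kubectl talk to the new cluster, we also need to download its credentials into the local kubeconfig; the resource group and cluster names are placeholders:

> az aks get-credentials --resource-group <resourcegroup> --name <clustername>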

Next, we will create a secret holding the ACR credentials, which Kubernetes will use to pull the image from our ACR repository.

Run the following to create the secret (the trailing backticks are PowerShell line continuations):

kubectl create secret docker-registry <SECRET_NAME> `
  --docker-server=<REGISTRY_NAME>.azurecr.io `
  --docker-email=<YOUR_MAIL> `
  --docker-username=<SERVICE_PRINCIPAL_ID> `
  --docker-password=<YOUR_PASSWORD>

For example:

kubectl create secret docker-registry onlineauctionacr --docker-server=ovaismehboob.azurecr.io --docker-email=ovaismehboob@hotmail.com --docker-username=onlineauctionregistry --docker-password=CRj+++76yW5kAdEkrhJn4S4LNNRn+++
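As an alternative to a pull secret, newer versions of the Azure CLI can grant the cluster pull access to the registry directly (the names below are placeholders); if you go this route, the imagePullSecrets section in the deployment below is not required:

> az aks update --resource-group <resourcegroup> --name <clustername> --attach-acr <registryname>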

Now we need to deploy to Kubernetes. For this, we create a YAML file with the kind set to Deployment. Notice that onlineauctionacr, the secret we just created, is referenced under the imagePullSecrets section in the script below.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    name: auctionservice
  name: auctionservice
spec:
  replicas: 3
  selector:
    matchLabels:
      name: auctionservice
  template:
    metadata:
      labels:
        name: auctionservice
    spec:
      containers:
        - image: ovaismehboob.azurecr.io/auctionservice
          name: auctionservice
          ports:
            - containerPort: 3000
      imagePullSecrets:
        - name: onlineauctionacr

Save this file with a .yaml extension and execute the following command to deploy it to the Azure Kubernetes cluster.

> kubectl apply -f <filename>.yaml

Once this is done, run the following command to see the deployment

> kubectl get deployments

Since we set replicas to 3, three pods will be created. We can verify this by running the following command:

> kubectl get pods

Finally, we will expose this deployment as a service. To do that, we create another YAML file with the following content:

apiVersion: v1
kind: Service
metadata:
  name: auctionservice
  labels:
    name: auctionservice
spec:
  ports:
    - port: 3000
      targetPort: 3000
      protocol: TCP
  type: LoadBalancer
  selector:
    name: auctionservice

The service port is 3000 and the target port is 3000, which means the external and internal ports are the same, i.e. 3000.

Next, we run the following command to create the service, which can then be used to access our API:

> kubectl apply -f <servicefilename>.yaml

We can verify that the service is running with the following command:

> kubectl get services

The above command lists the external IP address that we will use to access our API. In our case, the API will be available at http://<externalIPAddress>:3000.
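The external IP can take a few minutes to be assigned and shows as pending until then; you can wait for it with the --watch flag:

> kubectl get service auctionservice --watch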

Hope this helps!