In a previous blog, we discussed the configuration and use of fluentbit with AWS Elasticsearch.

https://medium.com/@bahubalishetti/configuring-fluentbit-on-kubernetes-for-aws-elasticsearch-bec486bcc727

It provided a basic configuration of “logging” from a Kubernetes cluster. “Logging” is one aspect of “Observability” in Kubernetes. Let’s review:

Observability for the cluster and the application covers three areas:

  1. Monitoring metrics — pulling metrics from the cluster through cAdvisor, metrics-server, and/or Prometheus, along with application data, which can be aggregated across clusters in Wavefront by VMware.
  2. Logging data — whether it’s cluster logs or application log information like syslog, these data sets are important for analysis.
  3. Tracing data — generally obtained with tools like Zipkin, Jaeger, etc., which provide detailed flow information about the application.

In this blog we will round out the “logging” section by describing an alternative to fluentbit: fluentd.

What’s the difference between fluentd and fluentbit?

There is a great comparison here:

https://fluentbit.io/documentation/0.8/about/fluentd_and_fluentbit.html

The summary is that fluentbit is designed for more lightweight deployments (IoT, lambda, and even Kubernetes), while fluentd is generally used in VM-based deployments and Kubernetes. The Kubernetes community is slowly adding and increasing support for fluentbit, which still has roughly half the number of plugins that fluentd has.

Why Elasticsearch?

Compared to more common architectures that run an individual Elasticsearch instance per cluster, using a central AWS-based Elasticsearch instance is simpler and easier to scale, especially when you have multiple clusters being deployed as part of your application or rollout.

Other logging endpoints that can be used include:

  1. Splunk
  2. Logz.io
  3. etc.

There are two possible configurations of AWS Elasticsearch:

  1. public configuration of AWS Elasticsearch
  2. secured configuration of AWS Elasticsearch

We will explore the use of a public configuration of AWS Elasticsearch, since the secured VPC configuration restricts it to a non-SaaS-like setup. There are alternatives to AWS Elasticsearch, but we chose it because we were already running on AWS.

Component basics:

The following is a quick overview of the main components used in this blog: Kubernetes logging, Elasticsearch, and Fluentd.

Kubernetes Logging:

Log output, whether it is system level, application based, or cluster based, is aggregated in the cluster and managed by Kubernetes.

As noted in Kubernetes documentation:

  1. Application based logging —

Everything a containerized application writes to stdout and stderr is handled and redirected somewhere by the container engine. For example, the Docker container engine redirects those two streams to a logging driver, which is configured in Kubernetes to write to a file in JSON format.
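
To see what this looks like in practice, here is a minimal example along the lines of the counter pod from the Kubernetes documentation (the pod and container names are arbitrary); anything the container prints is captured by the runtime and can be read back with kubectl logs:

    apiVersion: v1
    kind: Pod
    metadata:
      name: counter
    spec:
      containers:
      - name: count
        image: busybox
        # Print a line to stdout every second; the container runtime captures
        # the stream and writes it to a JSON log file on the node.
        args: [/bin/sh, -c, 'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done']

    # Read the captured stdout back through the Kubernetes API
    $ kubectl logs counter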

  2. System logs —

There are two types of system components: those that run in a container and those that do not run in a container. For example:

The Kubernetes scheduler and kube-proxy run in a container.

The kubelet and container runtime, for example Docker, do not run in containers.

On machines with systemd, the kubelet and container runtime write to journald. If systemd is not present, they write to .log files in the /var/log directory. System components inside containers always write to the /var/log directory, bypassing the default logging mechanism.
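
If you want to verify this on a node yourself, a couple of commands illustrate the difference (assuming a systemd-based machine with a Docker runtime):

    # kubelet runs outside a container and logs to journald on systemd machines
    $ journalctl -u kubelet

    # container log files collected on the node end up under /var/log
    $ ls /var/log/containers/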

Elasticsearch:

Elasticsearch is a search engine based on Lucene. It aggregates data from multiple locations, parses it, and indexes it, enabling the data to be searched. The input can come from anywhere and anything; log aggregation is just one of many use cases for Elasticsearch. There is an open-source version and a commercial one from elastic.co.

AWS provides users with the ability to stand up an Elasticsearch “cluster” on EC2. AWS installs, manages, scales, and monitors this cluster, taking the intricacies out of operating Elasticsearch.
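
Once a domain has been created, a quick sanity check with the AWS CLI shows its status and the endpoint that fluentd will ultimately write to (the domain name below is a placeholder):

    $ aws es describe-elasticsearch-domain --domain-name my-logging-domain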

Fluentd:

Fluentd is a unified logging data aggregator that allows you to collect multiple disparate data sources and send this data to the appropriate endpoint(s) for storage, analysis, etc.

In our configuration we will collect data from two main sources:

  1. the Kubernetes cluster
  2. the application running in Kubernetes

We will then output this data to Amazon Web Services Elasticsearch. A minimal configuration sketch follows the feature list below.

Fluentd provides several key features:

  1. unified logging with a simple-to-use structure — JSON
  2. numerous plugins (approx. 500+ community-based plugins)
  3. minimal development effort
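
To make the data flow concrete, below is a heavily simplified, hypothetical fluentd configuration: tail the container logs on each node, enrich them with Kubernetes metadata, and ship them to Elasticsearch. The Helm chart used later generates a far more complete version of this for you, and the host value here (an in-cluster proxy service) is an assumption explained later in this blog:

    # Collect container logs written on each node
    <source>
      @type tail
      path /var/log/containers/*.log
      pos_file /var/log/fluentd-containers.log.pos
      tag kubernetes.*
      <parse>
        @type json
      </parse>
    </source>

    # Enrich records with pod, namespace, and label metadata
    <filter kubernetes.**>
      @type kubernetes_metadata
    </filter>

    # Ship everything to Elasticsearch (here: an in-cluster es-proxy)
    <match **>
      @type elasticsearch
      host es-proxy
      port 9200
      logstash_format true
    </match>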

Prerequisites

Before working through the configuration, the blog assumes the following:

  1. application logs are output to stdout from the containers — a great reference is found here in the Kubernetes documentation
  2. privileged access to install the fluentd DaemonSet (this blog deploys it into a “logging” namespace)

Privileged access may require different configurations on different platforms (a quick check is shown below):

  1. kops — open-source Kubernetes installer and manager — if you are the installer then you will have admin access
  2. GKE — turn off the standard fluentd DaemonSet preinstalled in the GKE cluster. Follow the instructions here.
  3. VMware Cloud PKS — ensure you are running privileged clusters

This blog will use VMware Cloud PKS, which is a conformant Kubernetes service.
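
Whichever platform you are on, a quick way to verify you have enough privilege before going further is to ask the API server directly (the namespace shown is the one used later in this blog):

    # Should print "yes" if you are allowed to create the fluentd DaemonSet
    $ kubectl auth can-i create daemonsets --namespace logging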

Application and Kubernetes logs in Elasticsearch

Before we dive into the configuration, it’s important to understand what the output looks like.

I have configured my standard fitcycle application (as used in other blogs) to log to stdout: https://github.com/bshetti/container-fitcycle.

I’ve also configured fluentd on VMware Cloud PKS using Helm and added a proxy to enable access to my ES cluster.

I have configured AWS Elasticsearch as a public deployment (vs. VPC), but with Cognito configured for security.

As you can see above, AWS Elasticsearch provides me with a rich interface to review and analyze the logs for both application and system.

Configuring and deploying fluentd for AWS Elasticsearch

In this solution, I am using the Helm chart for fluentd along with an es-proxy that allows me to connect to the AWS Elasticsearch endpoint and write data into it.

  1. fluentd helm chart — https://github.com/helm/charts/tree/master/stable/fluentd-elasticsearch
  2. es-proxy — https://github.com/bshetti/es-proxy-vke
  3. total configuration — https://github.com/bshetti/fluentd-helm-vke

Setting up and configuring AWS Elasticsearch

The first step is properly configuring AWS Elasticsearch.

Configure AWS Elasticsearch as public access but with Cognito Authentication

This removes the need to place the Elasticsearch cluster in a specific VPC. You can still use the VPC configuration; I just chose not to for simplicity.

Configure authentication with Cognito

Once that is set up, you need to follow the steps from AWS to set up your ES access policy, IAM roles, user pools, and users.

https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-cognito-auth.html
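
The end result of those steps is that the domain’s access policy grants the Cognito authenticated IAM role access to the domain. As a rough sketch of what that resource-based policy looks like (the account ID, role name, region, and domain name are all placeholders):

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "AWS": "arn:aws:iam::111122223333:role/Cognito_LoggingAuth_Role"
          },
          "Action": "es:*",
          "Resource": "arn:aws:es:us-east-1:111122223333:domain/my-logging-domain/*"
        }
      ]
    }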

Setup user with policy and obtain keys

Once Elasticsearch is set up with Cognito, your cluster is secure. In order for the fluentd configuration to access Elasticsearch, you need to create a user that has Elasticsearch access privileges and obtain the Access Key ID and Secret Access Key for that user.

The policy to assign to the user is AmazonESCognitoAccess (this is set up by Cognito).
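
Once the user exists, its key pair can be generated from the IAM console or the CLI (the user name below is a placeholder); these two values are what the es-proxy deployment will need later:

    # Returns an AccessKeyId and SecretAccessKey for the logging user
    $ aws iam create-access-key --user-name fluentd-es-user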

Deploying fluentd with es-proxy

This section uses the instructions outlined in:

https://github.com/bshetti/fluentd-helm-vke

Now that you have successfully set up Elasticsearch on AWS, we will deploy fluentd with an Elasticsearch proxy.

Fluentd does not support AWS authentication, and even with Cognito turned on, access to the Elasticsearch indices is restricted to AWS authentication (i.e. key pairs). Key pairs etc. are not yet supported (at the time of writing this blog) in fluentd.

Hence we must front-end fluentd with an Elasticsearch proxy that has the AWS authentication built in.

I’ve developed a kubernetes deployment for the following open source aws-es-proxy.

https://github.com/abutaha/aws-es-proxy

My aws-es-proxy kubernetes deployment files are located here:

https://github.com/bshetti/fluentd-helm-vke

  1. First step is configuring the Kubernetes cluster for fluentd

Fluentd must be deployed as a DaemonSet, so that it will be available on every node of your Kubernetes cluster. To get started, run the following command to create the namespace:

$ kubectl create namespace logging

Next, initialize Helm:

    $ helm init
  2. Configure and run the es-proxy

Change the following parameters in the es-proxy-deployment.yaml file to your own values:

    - name: AWS_ACCESS_KEY_ID
      value: "YOURAWSACCESSKEY"
    - name: AWS_SECRET_ACCESS_KEY
      value: "YOURAWSSECRETACCESSKEY"
    - name: ES_ENDPOINT
      value: "YOURESENDPOINT"
  3. Install fluentd

$ helm install --name my-release -f values-es.yaml stable/fluentd-elasticsearch --namespace=logging
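
The important part of values-es.yaml is that it points the chart at the in-cluster proxy instead of the AWS endpoint directly. The exact key names vary between versions of the fluentd-elasticsearch chart, so treat the snippet below purely as a sketch (the service name es-proxy and port 9200 are assumptions matching the proxy above):

    elasticsearch:
      host: "es-proxy"   # in-cluster proxy Service, not the AWS endpoint
      port: 9200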

You should now start seeing output similar to the charts shown earlier in this blog.
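
If nothing shows up in Elasticsearch, a couple of quick checks usually point at the culprit (pod names will differ in your cluster):

    # One fluentd pod per node should be Running
    $ kubectl get pods --namespace logging -o wide

    # Look for connection or authentication errors toward the es-proxy
    $ kubectl logs <fluentd-pod-name> --namespace logging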

For more information on VMware Cloud PKS:

https://cloud.vmware.com/vmware-cloud-pks