Elasticsearch Monitoring
TCG Elastic Connector
The Elastic Connector is implemented in the GroundWork Transit Connection Generator (TCG), and allows a unique type of service to be deployed into GroundWork Monitor: Elasticsearch Lucene queries. While we won't describe how to create more than an example or two in Elasticsearch, we will show you exactly how to take the searches you run against the data flowing into Elastic from logs, traps, and other systems, and bring their results directly into GroundWork, unifying monitoring with open source software across the enterprise.
First, a short background on the components.
Elasticsearch, Logstash and Kibana
To put it simply, Elasticsearch is a database, Logstash is a forwarder, and Kibana is a query and visualization front end. Together, they are the Elastic Stack. You can run them together on your own hardware or in cloud services, and when you do you have the ability to aggregate logs and events, and store them for searching and analysis.
GroundWork 8.x bundles these open source components in containerized versions you can use for lightweight aggregation. We use them as a way to monitor the containers in GroundWork itself, which is a great example. For anything larger, however, you will need a separate Elastic cluster, typically composed of several hosts (or VMs, or containers) to handle the traffic from the many systems reporting in.
So much for the Elastic components. On the GroundWork side we use TCG to connect the Elastic Stack to GroundWork. The queries you save in Kibana are available to a connected GroundWork server to be brought in as services, with the only metric being the number of matches returned. This may sound simple, but it is beautiful that way. The matches can be anything you can search for, such as failed logins in the last 10 hours, or cache misses less than 5 minutes old. The range of possibilities is huge. We simply enable you to see these issues, and get alerts and reports on them in context with the other monitoring you are doing with GroundWork connected applications.
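To make that concrete, the query behind such a service simply counts matching documents. The following is a minimal sketch using the Elasticsearch _count API; the index pattern logstash-*, the host name elasticsearch, and the message text are assumptions for illustration only:

# Count "authentication failure" events from the last 10 hours (illustrative field values)
curl -s -H 'Content-Type: application/json' \
  "http://elasticsearch:9200/logstash-*/_count" -d '
{
  "query": {
    "bool": {
      "must":   [ { "match_phrase": { "message": "authentication failure" } } ],
      "filter": [ { "range": { "@timestamp": { "gte": "now-10h" } } } ]
    }
  }
}'

The connector effectively does the same thing for each saved query, reporting the count as the service's single metric.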
Requirements
- GroundWork server: 8.1.0 or later (8.1.2 or later for the containerized connector described here)
- An Elastic cluster: For testing or lightweight aggregation you can use the existing internal GroundWork Elastic Stack
- Network connectivity: Connectivity to the Elastic cluster on port 9200, and to Kibana over HTTP(S), from the location where you run the connector (a quick reachability check is sketched after this list)
- Disk space: 50GB per connector instance; see Transit Connection Generator (TCG) for details
- TCG: If running as a Linux service, you will also need the TCG Elastic Connector binary for your platform (currently only Linux is supported)
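Before going further, you can verify that the connector host can actually reach both services. A minimal sketch; the host names elasticsearch and kibana and the /kibana base path match the containerized example later on this page and may differ in your environment:

curl -s http://elasticsearch:9200
# should return cluster name and version information as JSON
curl -s -o /dev/null -w "%{http_code}\n" http://kibana:5601/kibana/api/status
# 200 means Kibana is reachable at that path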
Setting Up TCG
An Example
For an example, we will configure the existing Elastic Stack inside of GroundWork with a few queries, and connect them as services to the GroundWork server itself. We will run TCG on the GroundWork server host for simplicity, so in our example, it will all be running on one system. In production, it doesn't matter where you run TCG, as long as it meets the connectivity requirements. In 8.1.2 and above, GroundWork can manage the Elastic connector as a container under Docker Compose.
The GroundWork host is a good choice, but so is an independent host, or even a container or other instance. We will show you how to fire up the container version on the GroundWork server itself. For a description of how to run the connector as a Linux service, see Running the Elastic Connector as a Linux Service, later on this page.
Preparing the Container
To prepare your Elastic Connector container to run on the GroundWork server:
Edit the docker-compose.override.yml file in the gw8 directory, and uncomment (or add, if they are not there) the following lines in the services section, including the services: line itself if it's not already present:
services:
  # Uncomment to enable Elastic connector
  tcg-elastic:
    image: groundworkdevelopment/tcg:${TAG}
    volumes:
      - tcg-var:/tcg
    entrypoint: ["/app/docker_cmd.sh", "elastic-connector"]
Also, in the volumes: section, uncomment the tcg-var: line for the TCG shared volume:
volumes:
  # Uncomment to enable tracing
  # jaegertracing:
  # Uncomment when adding any TCG connectors
  tcg-var:
As this is a .yml file, pay special attention to the indenting. You might get errors on the next step if the format is wrong.
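One way to catch indentation mistakes before restarting is to let Compose parse the files. A minimal sketch, assuming you run it from your gw8 directory:

cd /path/to/gw8    # adjust to where your gw8 deployment lives
docker-compose config --quiet && echo "compose files parse cleanly"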
Restart GroundWork 8:
docker-compose down
docker-compose up -d
The container dockergw8_tcg-elastic_1 will show as started when the system starts.
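To confirm it from the command line, you can list the container with Docker; the exact container name comes from this example's Compose project and may differ in your deployment:

docker ps --filter name=tcg-elastic
# the tcg-elastic container should be listed with a status of Up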
Setting Up the Connector
The first thing you will need (if you don't already have it for other connectors) is a credentialed user. This is easy to create, just like a normal user, under the Administration > Users menu. The username and password you use do not matter, but make sure you assign it the Operator role. Also, make sure you check the box for a credentialed user in the dialog.
One credentialed user
There can be only one such Credentialed user in any GroundWork server, and it must be a local (not an LDAP-controlled) user account.
- Within the GroundWork Monitor UI, go to Configuration > Connectors, and click the Add button, then select Elastic from the listed connector types:
Fill out the form as follows:
Field/Option Purpose
Enabled?
This is a way to either prepare the connector for operation and not start it, or to stop it temporarily. It is effective when you click Save. Generally you leave this checked.
Connector Name
This is up to you, but no two connectors can have the same name.
TCG Host Address
Since you can run TCG on any accessible Linux system, this is where you specify where that is, from the point of view of the GroundWork containers. We use the host name of the connector's container in this example, since it's addressable by name within the container network. Make sure you specify the hostname or IP address and port number, e.g.:
tcg-elastic:8099
Service Names
This is a list of Elastic cluster nodes, from the point of view of the connector. Yes, you can attach to several! Since we are running on the same server, again we use the elasticsearch container host name, and the default port of 9200. Note: You need to press Enter on each node name to be able to save it.
elasticsearch:9200
Interval
The number of minutes between polling for query results. Minimum is 1. We recommend 5.
Timeout
The number of seconds to wait for a connection. Depending on how busy your systems are, this might be as long as 5-10 seconds. 10 is the conservative default.
Retries
This has a checkbox and a numerical input. If you uncheck it, the connector will keep retrying as long as it is running. If you want the related services to show they are not receiving updates when the Elastic cluster is not responding to the connector's attempts, however, leave it checked and set a relatively low number of attempts. This will ensure your service goes to an Unknown state, and allow you to correct the issue with the Elastic Stack.
Kibana Server
Since you can connect to any Kibana server, this is where you input its address and port. Note this is also from the connector's point of view: you are telling the connector where to look for its Kibana server. We use the host name of the kibana container and the default port of 5601 in our example, and the URL path includes /kibana, though this is often different or not needed, depending on your installation: http://kibana:5601/kibana/
Kibana User Name and Kibana Password
On GroundWork Kibana, these are not used. If your Kibana server requires authentication, you should provide a valid username and password that can access the queries you need to use as services.
Always override time filter
This is an advanced option. It goes with the next two options:
Time Filter from and Time Filter to
Time range to use as an override of the query's own time range. You can leave this off unless you always want to use the same interval for historical queries with this connector.
Host Name Label Path
You can give the connector hostnames to create with a label in Kibana. Just leave this at the default for our example, but any valid field in your index can be used as a host name.
User Defined Host Group Name
This is a checkbox, and if checked it will use what you type in the next field as the hostgroup name for all hosts this connector adds.
Host Group Label Path
You can give the connector hostgroup names to create with a label in Kibana, or specify your own (with the checkbox above). Just leave this at the default for this example.
Log Level
Unless you are having issues with this connector, leave this at the Error level.
Save the connector. If all is well, the dialog will update and the connector status will turn Green. If there are issues, then a message will appear in the logs on the Status tab with an idea about what is wrong, allowing you to correct it and move on.
If you see that the status of the connection is red or grey, check all your inputs. You may also need to go to GroundWork Connections and click Connect on the new connector to get it to turn green after resolving a configuration issue.
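If the status stays red or grey after you have checked the form, the connector container's own log usually shows the underlying error. A quick sketch, assuming the containerized deployment from this example (the container name may differ):

docker logs --tail 50 dockergw8_tcg-elastic_1
# look for connection, authentication, or TLS errors against Elasticsearch or Kibana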
Setting Up Metric Queries, Filters and Searches
At this point, your Elastic connector is set to capture the results of queries. For this to work, you need some active queries in Kibana. Here's how you create some to get started. If you are already familiar with Kibana, you can create a few queries of your own and skip to Defining Metrics below.
- Go to Dashboards > Log Analysis; this will log you in to Kibana.
- Along the left side of the screen, click the last icon, Management.
- Select Index Patterns and click Create index pattern.
- In the first step, for Index pattern, type logstash-*.
- Click Next step, select @timestamp from the time filter field drop-down, and click Create index pattern:
- Click the top icon for Discover, and you can enter search text to create your own filters.
- We have included a couple of sample queries for you. To import these:
- Download the export.ndjson and check that the MD5 sum matches the comment on the attachment.
- Click Open, then Manage searches.
- Click Import, then select and import the file. You should see a message for successful import, and then there should be two new filters and a new search in the list:
- Searches are useful in setting up dashboards and visualizations in Kibana. If you want to use a search in GroundWork as a service, save it as a query (optionally with filters attached).
- If you have not yet used Kibana, we suggest you explore these options. Note the host filter example, where you can specify which hosts the query applies to, and the fact that a search can be saved as a query. It will benefit you in the future to gain some familiarity with creating these objects now. Kibana is a very powerful log analysis tool, and this example will help you gauge its value to your organization. (An optional command-line check of the imported objects is sketched just after this list.)
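If you prefer the command line, you can confirm that the imported searches are present via the Kibana saved-objects API. This is a minimal sketch; the base URL and the /kibana path match the containerized example on this page, may differ in your installation, and a secured Kibana will also require authentication:

curl -s "http://kibana:5601/kibana/api/saved_objects/_find?type=search&per_page=50" \
  -H 'kbn-xsrf: true' | grep -o '"title":"[^"]*"'
# each imported search should appear by title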
Defining Metrics
Here's how to set up the queries you entered or imported as services in GroundWork.
- In GroundWork Monitor, go to Configuration > Connectors and select the connector you created above.
- Select the Metrics tab.
- Click the Update Metrics icon to query for your recently added queries.
- Click Add Metric (plus icon); this brings up the New Metric dialog.
Start typing the name of the query you want to add as a metric. All matching names will show in the drop-down list. Select the one you want.
You can fill out the form as follows:
Field Purpose
Metric name
The actual name of the query you imported or created in Kibana, e.g., query_error
Metric Format String
An optional C-style format descriptor, e.g., %d.2
Display name
An optional name for the service that will report this metric; the metric name is used if you don't enter one
Monitor
Enable or disable the monitoring of this metric
Graph
Enable or disable the graphing of this metric
Delta
Treat this metric as a delta from the previous reading (generally used for numbers that are cumulatively increasing)
Default Warning Threshold
Set a number above which (or below which, if it is greater than the Critical threshold) the service that reports on this metric will be in a Warning state. Note you can override this with the Edit function in individual cases from the Status dashboard. This is optional, and a -1 will disable it.
Default Critical Threshold
Set a number above which (or below which, if it is less than the Warning threshold) the service that reports on this metric will be in a Critical state. Note you can override this with the Edit function in individual cases from the Status dashboard. This is optional, and a -1 will disable it.
Description
Optional text describing this metric
- Click Save to save your new metrics.
- Browse to Status to see your new metrics start to populate on the interval you set for the connector. These metrics will show up against the GroundWork containers, since these are the only hosts logging to Elastic in this example.
There are many possible uses for the Elastic connector. For example, Elastic can receive log messages from specific hosts using Filebeat or Winlogbeat agents. You can then query for critical messages using Elastic, and report them into GroundWork using the Elastic connector.
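As a rough sketch of that pattern, a minimal Filebeat configuration could ship a host's system log to the same Elasticsearch cluster the connector queries. The paths and output host below are assumptions and need to be adjusted to your environment:

# filebeat.yml (minimal sketch)
filebeat.inputs:
  - type: log
    paths:
      - /var/log/messages
output.elasticsearch:
  hosts: ["elasticsearch:9200"]

Once events like these are indexed, a saved Kibana query for the critical messages can be added as a metric exactly as described above.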
Running the Elastic Connector as a Linux Service
In some cases, you may wish to run the Elastic connector on a host other than the GroundWork host. You might want to offload work from a lightly resourced GroundWork server, or otherwise isolate the connector in a separate subnet. This can be done by running the connector as a Linux service.
To prepare your connector for this, you need to:
- Download the external connector binary from the support downloads page
- Install the connector as a service on a Linux host (you will need root access)
- Configure the connector as described above, substituting in the connector host and elastic host addresses. You generally will not want to connect to the GroundWork instance of Elastic Stack when running the connector as a service, since it requires opening ports to Kibana and Elasticsearch containers, and the GroundWork instance is most easily accessed with the container approach anyway.
For steps on doing this, read on.
Install Steps
- Download the TCG Elastic Connector to the Linux system on which you will run it.
- To download TCG, go to Downloads and find the Elastic connector; this will always be the latest version.
- The download contains three files in a tar archive.
Un-tar these files into a directory called elastic on the Linux system you wish to run it on, as the user you wish to run it as.
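The commands might look like the following sketch; the archive file name is an assumption, so substitute the name of the file you actually downloaded:

mkdir -p ~/tcg/elastic && cd ~/tcg/elastic
tar xf ~/Downloads/tcg-elastic-connector.tar.gz    # archive name is an assumption
ls -l    # should show elastic-connector, tcg_config.yaml, and tcg-elastic.service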
Running more than one connector
If you need to run more than one connector, you can. You do need to run them in separate directories, as the NATS pools will conflict if you run them in the same location.
- The files you will have after expansion include:
- The binary of the connector itself: elastic-connector
- The default configuration file for the connector: tcg_config.yaml
- The systemctl services file for making TCG run at system startup: tcg-elastic.service
To enable the Elastic Connector to run, first change the mode to executable on the binary:
chmod +x elastic-connector
To have TCG start automatically when the system restarts, follow these steps:
Modify the tcg-elastic.service file. Change the lines:
WorkingDirectory=
User=
Group=
# Set environment (path)
Environment='TCG_CONFIG=tcg_config.yaml'
# TCG up
ExecStart=elastic-connector
to reflect the location in which you have installed TCG. For example:
# Configure to match TCG installation
WorkingDirectory=/home/ec2-user/tcg/elastic
Group=docker
# Set environment (path)
Environment='TCG_CONFIG=/home/ec2-user/tcg/elastic/tcg_config.yaml'
# TCG up
ExecStart=/home/ec2-user/tcg/elastic/elastic-connector
# TCG down
ExecStop=/bin/kill -2 $MAINPID
Copy the tcg-elastic.service file to the systemctl services directory and make root the owner:
sudo cp tcg-elastic.service /etc/systemd/system/
sudo chown root:root /etc/systemd/system/tcg-elastic.service
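If systemd does not pick up the new unit right away, reloading the unit files is a safe extra step (general systemd practice, not a GroundWork-specific requirement):

sudo systemctl daemon-reload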
Enable the service to run at startup:
sudo systemctl enable tcg-elastic.service
Start the connector:
sudo systemctl start tcg-elastic.service
You can see the results by looking at the log:
sudo journalctl -u tcg-elastic.service
At this point, the connector is started in a waiting state, which you will see indicated in the last line of the log (empty GWConnections). It is listening on the port you configured in tcg_config.yaml, 8099 by default, which must be allowed through the firewall for incoming connections from the GroundWork server. Next, you will need to connect GroundWork to this instance of TCG and generate a connection to the Elastic and Kibana servers; see above for details.
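As an illustration of that firewall requirement, on a host using firewalld the port could be opened like this; this is a sketch only, so use your own firewall tooling and verify the port matches your tcg_config.yaml:

sudo firewall-cmd --permanent --add-port=8099/tcp
sudo firewall-cmd --reload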