TCG Connectors
About Transit Connection Generator (TCG)
The connection between GroundWork servers and monitoring data sources such as Nagios is facilitated by a component called TCG, the Transit Connection Generator (see https://github.com/gwos/tcg). Built on the NATS project, TCG creates resilient connections that automatically re-establish when broken, and buffers data to ensure a lossless stream of metrics. TCG is easy to configure and flexible.
The GroundWork TCG is open source and available on GitHub.
TCG does not support looped connections (sending data to itself).
Standalone GroundWork Servers (8.1.0 and later) and TCG
The typical default standalone GroundWork server uses TCG to forward data from Nagios to GroundWork Foundation, our normalization and aggregation layer. You will see a connection called “Local Nagios” under the Connectors menu, which is the local internal connection that does this.
Parent Child Servers and TCG
Child servers use TCG to forward Nagios state data and metrics to Parent servers.
Types of TCG Connections
The connection types supported in TCG as of GroundWork Monitor 8.1.0:
Nagios
This is a connector to a local Nagios with the Data Geyser enhancement running.
You can also connect this to a Parent server to create an independently managed GroundWork Child.
Nagios Parent Managed Child
This is a special connection type for sending data from a Child to a Parent. It is special because it is provisioned and managed on the Parent, not the Child.
Elastic
This is similar to a Cloud Hub connector, in that it connects to a source of monitoring data, Elastic Stack in this case. It uses TCG to send its state and metric data to GroundWork.
Server
This connector allows a TCG instance running on a Linux server to forward basic resource monitoring metrics to GroundWork. Since TCG is a Go program, it runs as a single executable, is easily daemonized, and is configured with a single config file. Please note this connector is experimental and unsupported, and is not intended for production use.
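Daemonizing a single Go binary is typically done with a service manager. The unit file below is a hypothetical sketch only: the install path /opt/tcg, the service name, and the assumption that tcg reads its tcg_config.yaml from the working directory are illustrative, not part of the TCG documentation.

```ini
# /etc/systemd/system/tcg-server.service -- hypothetical example unit
# Assumes the tcg binary and its tcg_config.yaml live in /opt/tcg.
[Unit]
Description=TCG Server Connector (experimental)
After=network-online.target

[Service]
Type=simple
WorkingDirectory=/opt/tcg
ExecStart=/opt/tcg/tcg
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After placing such a file, the usual `systemctl daemon-reload` and `systemctl enable --now tcg-server` would start it at boot.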
SNMP
This connector pulls metrics for the systems under monitoring directly from the NeDi database. For details see SNMP Monitoring.
APM
The Application Performance Monitoring connector with Jaeger also uses TCG, along with a Prometheus client to forward custom metrics from your applications to GroundWork for monitoring.
Configuration Details
TCG uses NATS for guaranteed delivery of monitoring messages. The NATS component uses on-disk queues to save messages that can't be immediately delivered. These queues can be limited in scope by size and by age, separately. Here's how these settings can be adjusted for your TCG implementations.
Defaults
The maximum age a message can be held in TCG is 10 days (240 hours), and the maximum size a queue can grow to is 50 GB. This means that by default you need to set aside 50 GB of disk per connector. In a standalone GroundWork installation, this 50 GB requirement is easily covered by the 200 GB minimum needed for running GroundWork 8.x. However, when one or more of the optional connectors above is used, more space must be provisioned.
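The two defaults correspond directly to the values shown later in the config file; a quick shell check of the arithmetic (nothing here is TCG-specific):

```shell
# 10 days expressed in hours -> the 240h in natsStoreMaxAge
echo $((10 * 24))                    # prints 240
# 50 GB expressed in bytes -> the default natsStoreMaxBytes
echo $((50 * 1024 * 1024 * 1024))    # prints 53687091200
```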
Changing Maximum Age and Queue Size for Nagios Connections
The Nagios instances on GroundWork Standalone or Child servers will queue and retransmit monitoring results using TCG. If you have Child servers, you may want to adjust the settings for how much data to queue and for how long.
To adjust the Nagios connector size on a Standalone or Child GroundWork server:
Access the command line of your GroundWork server and change to the gw8 directory:
$ cd gw8
Edit the datageyser_tcg_config.yaml file in the nagios container:
docker-compose exec nagios vi /usr/local/groundwork/config/datageyser_tcg_config.yaml
Adjust the following lines as needed:
natsStoreMaxAge: 240h0m0s
natsStoreMaxBytes: 53687091200
For example, to set the maximum age to 3 days and the maximum size to 1 GB:
natsStoreMaxAge: 72h0m0s
natsStoreMaxBytes: 1073741824
and save the file.
Restart GroundWork to make the changes take effect:
$ docker-compose down
$ docker-compose up -d
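If you prefer a scripted edit over vi, the same change can be made with sed. The sketch below runs against a local sample file for illustration; on a live system you would run the sed commands inside the container instead, e.g. via docker-compose exec nagios, against /usr/local/groundwork/config/datageyser_tcg_config.yaml.

```shell
# Illustration only: create a local sample file with the default values
CFG=datageyser_tcg_config.yaml
printf 'natsStoreMaxAge: 240h0m0s\nnatsStoreMaxBytes: 53687091200\n' > "$CFG"

# Set the maximum age to 3 days and the maximum size to 1 GB
sed -i 's/^natsStoreMaxAge:.*/natsStoreMaxAge: 72h0m0s/' "$CFG"
sed -i 's/^natsStoreMaxBytes:.*/natsStoreMaxBytes: 1073741824/' "$CFG"

cat "$CFG"
```

As with the manual edit, a restart of GroundWork is still required for the change to take effect.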
Changing Maximum Age and Queue Size for Containerized Connections
If you run connectors as containers on your GroundWork server, these connections will queue and retransmit monitoring results using TCG. You may want to adjust how much data is queued and for how long, since local connectors can consume a lot of disk space if they are unable to communicate with the GroundWork system for an extended period.
These settings are saved in tcg_config.yaml (or datageyser_tcg_config.yaml for the Nagios connector). This file is typically located in the container running the connector, or in the directory where the connector runs when it runs as a Linux service.
For example, to adjust the Elasticsearch connector maximum age and queue size on a GroundWork server:
Access the command line of your GroundWork server and change to the gw8 directory:
$ cd gw8
Edit the tcg_config.yaml file in the connector container. For example, if you are running the Elastic connector as a container, the container will be called tcg-elastic and the file to edit will be /tcg/elastic-connector/tcg_config.yaml:
docker-compose exec tcg-elastic vi /tcg/elastic-connector/tcg_config.yaml
Adjust the following lines as needed:
natsStoreMaxAge: 240h0m0s
natsStoreMaxBytes: 53687091200
For example, to set the maximum age to 3 days and the maximum size to 1 GB:
natsStoreMaxAge: 72h0m0s
natsStoreMaxBytes: 1073741824
and save the file.
Restart GroundWork to make the changes take effect:
docker-compose down
docker-compose up -d
Changes to values in this file will be preserved across restarts of GroundWork, but other changes such as comments will not.
Appendices
Appendix A: Installing Containerized TCG Post GroundWork 8 Install
During an initial GroundWork Monitor 8 installation, you will be prompted for optional services to install; any of these services can also be installed later. This is easiest when your GroundWork server can pull the required containers from the Docker Hub repository where GroundWork publishes them (free to download), but some customers deploy their GroundWork Monitor system in offline environments where this isn't possible. This section steps through installing the TCG connectors without any Internet access, as long as you have the installer file present. This pertains to:
- TCG APM: Install TCG APM Connector to use along with a Prometheus client to forward custom metrics from your applications to GroundWork for monitoring
- TCG ELASTIC: Install TCG Elastic Connector if you are using Elastic
- TCG SNMP: Install TCG SNMP Connector if you are using NeDi for Network Monitoring
To install these optional containers, you will need to rerun the installer with --noexec, which unpacks the container images, then load them as follows:
Place the installer file in the directory immediately above the gw8 directory, and run it:
./gw8setup-8.2.0-GA.run --noexec
Then, change to the gw8 directory, where you will see several .bz2 archives.
cd gw8
Enter the following to load the tcg image:
docker load -i gw8images-tcg.tar.bz2
Next, edit the docker-compose.override.yml file.
Include the required text to start the container and deploy the volumes, depending on which one you need. Here's what you need for each:
# APM Connector
tcg-apm:
image: groundworkdevelopment/tcg:${TAG}
entrypoint: ["/app/docker_cmd.sh", "apm-connector"]
volumes:
- tcg-var:/tcg
# Elastic Connector
tcg-elastic:
image: groundworkdevelopment/tcg:${TAG}
entrypoint: ["/app/docker_cmd.sh", "elastic-connector"]
volumes:
- tcg-var:/tcg
# SNMP Connector
tcg-snmp:
image: groundworkdevelopment/tcg:${TAG}
entrypoint: ["/app/docker_cmd.sh", "snmp-connector"]
environment:
- NEDI_CONF_PATH=/usr/local/groundwork/config/nedi
volumes:
- tcg-var:/tcg
- ulg:/usr/local/groundwork/config/
Also include the volumes section with the tcg-var volume, if it isn't already present. This is typically near the end of the file:
volumes:
tcg-var:
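Putting the pieces together, a docker-compose.override.yml enabling just the Elastic connector might look like the sketch below. Note this is an illustration, not a file shipped by GroundWork: the service entry nests under a top-level services: key, ${TAG} is resolved from the .env file in the gw8 directory, and the version line is an assumption that should match your existing docker-compose.yml.

```yaml
# Sketch only: a minimal docker-compose.override.yml enabling the
# Elastic connector. ${TAG} is resolved from the .env file in gw8.
version: '3.7'   # assumption -- match the version in your docker-compose.yml
services:
  tcg-elastic:
    image: groundworkdevelopment/tcg:${TAG}
    entrypoint: ["/app/docker_cmd.sh", "elastic-connector"]
    volumes:
      - tcg-var:/tcg
volumes:
  tcg-var:
```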
Restart GroundWork to make the changes take effect:
docker-compose down
docker-compose up -d