This page provides detailed reference material for Auto Setup files, their directives, and resource-discovery sensors.

We use the term "auto-discovery" in this page to emphasize that the auto-setup resource-probing process is a distinct phase of operation.

Auto Discovery Instructions File Format

An auto-discovery instructions file contains three sections of declarations:

  • global directives (of the form option = value)
  • host-sensor definitions (a set of <host "name"> declarations)
  • service-sensor definitions (a set of <service "name"> declarations)

Host-sensor definitions (<host "name"> objects) are intended to decide what single host profile should be applied to the machine's configuration. That host profile may itself reference an arbitrary number of ancillary service profiles, as it is set up within Monarch.

Service-sensor definitions (<service "name"> objects) are intended to decide which service profiles or individual services, if any, should be applied to the machine's configuration. There is no constraint on the number of such service profiles or services that may be applied to the host as a result of processing the auto-discovery results.

The tag names in <host "mytag"> and <service "mytag"> declarations must be unique across all <host> and <service> objects in the file, respectively. They are used to label the outcome of discovery on the respective sensors, but do not otherwise affect the matching of client resources.

Comments may be used anywhere within the auto-discovery instructions file, to document specific pieces or to disable certain sections. Comments begin with an unescaped "#" character and run to the end of the line. Unfortunately, that is true even within an option value, whether or not that value is quoted. The option value will be truncated at that point, and a leading double-quote character that you expected to enclose the value (but not be part of it) will instead become the first character of the value. To work around this and get a "#" character included in an option value, you must escape it with a preceding backslash character:

pattern = "this\#that"

Because backslashes are used when reading the config file for purposes like that, there is a level of interpreting backslashes throughout the instructions file, before the discovery code itself ever sees your option values. That means that if you truly want to get a single backslash character into an option value, you must double it in the instructions file to get it seen as a single backslash character in the option value seen by the discovery code. Furthermore, there is often an additional layer of interpreting backslashes as escape characters, within the discovery code. For instance, trying to match a literal backslash as part of a sensor pattern requires that you follow the ordinary Perl rules for regular expressions, meaning that the regular expression compiler must see a doubled backslash to match a single literal backslash. In a similar manner, the file_name, symlink_name, directory_name, and file_content sensor types use backslashes as escapes for resource fileglob metacharacters. Again, you would need to double the backslashes to get a single backslash recognized as a literal backslash in a filepath. Combining those two levels of interpretation, you will end up with quadrupled backslashes in the instructions file:

resource = " "C:\\\\program files\\\\outlook*" "

This is insane, making the instructions file far less readable and maintainable than it should be. So we have provided a workaround; see the description of the file_name sensor type for details.

For convenience of exposition, we follow a standard pattern of indentation in the file, showing subsidiary objects as indented. This leading whitespace is not formally necessary, but it makes the file much easier for humans to read and understand, and the convention is recommended for general use.

With respect to individual sensor definitions, if a single sensor definition is matched by the resources on the GDMA client, that suffices to activate inclusion of the associated host profile or service in the generated configuration. In the current implementation, there is no support for logical combinations of matching or not matching multiple sensor definitions being used for the final activation decision.

The auto-discovery instructions file must be internally tagged with a version number in the format_version directive. This declares the kinds of capabilities that may be present in the file, and provides a means to support backward compatibility as the software is extended.

Here is a sample auto-discovery instructions file, so you can see how one is constructed. We cover the individual directives below.

# The file-format version number is here to provide some clue to the software that
# reads this file as to what capabilities to expect.
format_version = "1.0"

<host "Linux">
    type = os_type
    pattern = "linux"
    host_profile = "gdma-linux-host"

<host "Windows">
    type = os_type
    pattern = "windows"
    host_profile = "gdma-windows-host"

<host "Solaris">
    type = os_type
    pattern = "solaris"
    host_profile = "gdma-solaris-host"

<host "AIX">
    type = os_type
    pattern = "aix"
    host_profile = "gdma-aix-host"

# All of the above could have been collapsed into just one sensor:
<host "GDMA host">
    type = os_type
    pattern = "(.*)"
    host_profile = "gdma-$SANITIZED1$-host"
    # Don't actually have this sensor in play, inasmuch as it would
    # collide with the results from the preceding sensors.
    enabled = false

<service "Apache 2.2">
    type = full_process_command
    # Match a filepath ending in a particular filename along with either
    # an extra space (preceding process arguments) or the end of the
    # entire command line.  Capture the "httpd.bin" portion for logging
    # purposes, and the config file path (value of the "-f" option) to
    # perhaps distinguish what kind of web server is being run.
    pattern = "/(httpd\.bin)(?:\s+-f\s+(\S+)|$)"
    service = "apache-web-server"

<service "MySQL">
    type = full_process_command
    pattern = "/(mysqld)"
    service_profile = "mysql-server"

<service "Cassandra">
    type = file_name
    resource = "/path-to/cassandra.yaml"
    pattern = "/cassandra.yaml$"
    service_profile = "cassandra-server"

<service "DNS Client">
    type = running_system_service
    pattern = "^DNS Client$"
    service_profile = "windows-dns-client"

<service "Search Indexer">
    type = full_process_command
    pattern = "^searchindexer.exe$"
    service_profile = "windows-search-indexer"

Catalog of Possible Auto Discovery Sensor Definition Directives

Inside the <host "mytag">...</host> or <service "mytag">...</service> wrapper for each sensor definition, you must include some number of sensor definition directives. The full set of available directives is listed in this section. See the Catalog of Supported Sensor Types section below for the available sensor types.


type

This directive specifies the kind of resource to be probed. It is mandatory for every sensor definition.


resource

This directive is required for a few particular sensor types. For those, it specifies key details of how the resource is to be located, and provides some level of filtering before pattern matching is attempted.


cardinality

This directive is supported in a <host> sensor only in a form that yields a single result (i.e., exactly one application of the host profile). That is, its value cannot be multiple in that context, because the overall discovery results must yield exactly one host profile, even across multiple <host> sensors that may be defined in the discovery instructions.

This directive is supported and generally recommended in a <service> sensor. The possible values are:

single: means that there can only be one copy of the resource that matches the pattern. If more than one copy is found, the auto-discovery (or more properly, the analysis of the discovery results) will fail. For either a <host> sensor or a <service> sensor, this is the default value of the cardinality if none is specified in the sensor definition.

first: means that there can be more than one copy of the resource that matches the pattern. If more than one copy is found, only the first one (in some arbitrary unspecified order) will be processed; the rest will be silently ignored.

multiple: means that there can be more than one copy of the resource that matches the pattern. For this kind of cardinality, an instance_suffix directive is mandatory because multiple service instances may be generated and there must be some means to distinguish them.
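
As a sketch (the sensor tag, pattern, and service name here are hypothetical), a sensor that accepts multiple matching resources might look like:

<service "Tomcat instances">
    type = full_process_command
    # Capture the Catalina base directory, so that multiple Tomcat
    # processes on the same machine can be told apart.
    pattern = "-Dcatalina\.base=(\S+)"
    cardinality = "multiple"
    # Turn directory-separator slashes into underscores before sanitization.
    transliteration = "{/}{_}"
    # Mandatory with multiple cardinality, to distinguish the generated
    # service instances.
    instance_suffix = "_$SANITIZED1$"
    service = "tomcat-server"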


pattern

This directive is mandatory for sensor definitions, and provides the details against which the resource probing will be matched to determine if the resource is present on the machine. The exact nature of the pattern depends on the sensor type, but generally a Perl regular expression is used for string matching and possible substring capture. This makes the pattern matching very flexible.

If parts of the pattern are captured during pattern matching, they can be referenced in other directives as $MATCHED1$, $MATCHED2$, and so forth. Through a series of steps, this allows details of discovered resources to be propagated all the way through to service setup in Monarch, so the specific identified instances of matched resources can be selected during ongoing monitoring.  As a trivial example, consider this pattern:

pattern = "(.+)_(.+)-(\d+)"

when matched against a resource with value foo_bar-23.  You would end up with $MATCHED1$ set to foo, $MATCHED2$ set to bar, and $MATCHED3$ set to 23.  $MATCHED#$ values can be further refined by using the transliteration and sanitization directives, to produce corresponding $SANITIZED#$ values.  See the descriptions of those directives, just below.


transliteration

This directive is optional. For convenience, it may be specified multiple times in a sensor definition to define a series of separate transforms, either to implement some especially complex character substitutions or simply to make the intent clearer. It specifies how to substitute individual characters in pattern-match string results before applying sanitization. You might use this, for instance, to translate slashes in directory paths into underscores. The value of this option is any argument that the Perl "tr" operator will accept, including trailing modifiers (except the /r modifier, which is disallowed). Note that to specify a pound sign (#) in this option, you must precede it with a backslash, to escape interpretation of the pound sign as the beginning of a comment by the code that reads the auto-discovery instructions file. Also, to specify a backslash (\) in this option, you must quadruple it to get it past the multiple layers of interpretation in the software. Here are some useful examples.

# Turn all ASCII uppercase into lowercase characters; also turn backslashes
# into underscores.
transliteration = "{A-Z\\\\}{a-z_}"

# Same thing, but performed as separate successive steps in the order given.
transliteration = "{A-Z}{a-z}"
transliteration = "{\\\\}{_}"

# This transliteration and sanitization do exactly the same thing: delete all
# characters which are not either alphanumeric or just a few allowed punctuation
# characters: - . @ _
transliteration = "/-.@_a-zA-Z0-9//cd"
sanitization = "-.@_a-zA-Z0-9"

# Change all space characters to underscores.  In the process of doing so,
# squash each sequence of consecutive spaces down to a single underscore.
transliteration = "/ /_/s"

# Change all non-alphanumeric characters to underscores.
transliteration = "/a-zA-Z0-9/_/c"

Transliteration can help in handling backslashes within Windows pathnames. While their use is conventional and widespread on this platform, the Windows kernel itself accepts forward slashes as directory separators in filepaths passed to system calls. It's the cmd.exe shell that has some difficulty with forward slashes, although there are some places where it swallows them without burping. To avoid problems with backslash escaping and simplify processing elsewhere, you might want to consider transforming backslashes in matched strings to forward slashes, by using the transliteration directive in a sensor definition:

transliteration = "{\\\\}{/}"

Our matched-value character-cleanup model is presently primitive; the same transliteration and sanitization options in a sensor definition will apply uniformly to all of the pattern-match values for that sensor. If that turns out to be too restrictive in practice, we will look at modifying this facility.

Implementation note: This option effectively specifies Perl code that will be taken directly from the user and executed. Before the transliteration is run, the software validates that the value constitutes a proper construction of "tr" arguments, without any security holes. Sandboxes are used both for validation testing and for sensor evaluation.


sanitization

This directive is optional. It specifies which characters are to be preserved in pattern-match string results when they are referenced in $SANITIZED#$ macros. This is often used, for instance, to suppress shell metacharacters, which can lead to dangerous security holes if used in arbitrary shell commands.

The value of this option is the SEARCHLIST part of a Perl "tr" operator. Which is to say, you specify exactly the set of characters you wish to keep. Character ranges in the form "a-z" are supported. If you wish to preserve the "-" character itself, it must be specified as either the first or last character in the value of this option. To specify a backslash (\) or pound sign (#) in this option, you must precede it with a backslash to escape interpretation of these characters by the code that reads the auto-discovery instructions file.
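
For example, following the rules just described, keeping alphanumerics, dots, and the hyphen itself requires placing "-" at one end of the list:

# Keep letters, digits, and "."; "-" is listed last so that it is treated
# as a literal character rather than as a range metacharacter.
sanitization = "a-zA-Z0-9.-"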

Implementation note: The same code-crash and security issues arise as with transliteration, so sandboxes are used here as well.


host_profile

This directive is mandatory for a <host> sensor definition and prohibited in a <service> sensor definition. It specifies the host profile to be applied to this host if this sensor definition finds a matching resource. If multiple <host> sensors match their respective resources, their host_profile values must also match, and any argument processing which in practice affects any aspect of the host profile must also match, so there is no conflict between the matching sensors. If such a conflict is detected, this pass of auto-discovery is declared as having failed.


service_profile

This directive is optional for a <host> sensor definition. A <service> sensor definition must include either the service_profile directive or the service directive (but not both). The service_profile directive specifies a service profile to be applied to this host if this sensor definition finds a matching resource. If multiple sensors that name the same service_profile all match their respective resources, any argument processing which in practice affects any aspect of the service profile is merged, if possible. If a conflict arises and merging is not possible, this pass of auto-discovery is declared as having failed.

Merging of match results from multiple sensor definitions that name the same service_profile could be used to create different instances of the same base service, by judicious use of the instance_suffix directive. Or it could be used to create the same base service or service instance(s) but under different conditions.

In general, use of the service_profile directive is considered to be anachronistic. It is provided mainly to support an early design for the GDMA Auto-Setup facility. Using this directive requires that the results of applying the other directives like externals_arguments line up exactly across all the services assigned to the service profile. In general, that requirement would be difficult to set up and guarantee that it is maintained over time. To work around that, you would end up creating one service profile per service. But the service profile layer in this case does not add anything significant to the configuration and just creates more administrative burden. So it's usually better to skip this layer and just use the service directive instead.


service

This directive is optional for a <host> sensor definition. A <service> sensor definition must include either the service_profile directive or the service directive (but not both). (Most <service> sensors include a service directive.) The service directive specifies an individual service to be applied to this host if this sensor definition finds a matching resource. If multiple sensors that name the same service all match their respective resources, any argument processing which in practice affects any aspect of the service is merged, if possible. If a conflict arises and merging is not possible, this pass of auto-discovery is declared as having failed.

Merging of match results from multiple sensor definitions that name the same service could be used to create different instances of the same base service, by judicious use of the instance_suffix directive. Or it could be used to create the same base service or service instance(s) but under different conditions.

check_command

This directive is optional, and probably rarely used. If supplied, it overrides the server-side "Check command" selected in every host service generated by this sensor definition. (Of course, that means that the sensor's profile should either expand to only one service, or all the services so generated must be checkable by exactly the same form of a check command.) We might only support this directive if we construct an actual useful use case for it, and show to what degree it can and must be macro-izable.

command_line

This directive is optional. If supplied, it is combined with the value of the check_command option (as supplied either explicitly in the sensor definition or in each service definition on the server side), to override the server-side "Command line" setting in each host service generated by this sensor definition. (For this to be useful, the sensor's profile should either expand to only one base service, or all the services so generated must have exactly the same forms for their check commands.) The main reason to do so would be to provide a macro-ized version of the command line that, when expanded, would identify a particular service instance that needs to be freshness-checked. The base-service version of the command line would be set up to operate when there are no service instances, and the version supplied in this directive would be set up to operate when there is at least one service instance defined for that service. We might only support this directive once we construct a working use case for it, showing the true need for this flexibility that could not be readily solved by other means.


externals_arguments

This directive is optional, and is only allowed in a sensor definition with a service_profile or service defined. If present, it sets the values that are plugged into the base-service externals-arguments configuration for each service specified in the service profile. If absent, the base service is set to inherit its externals arguments from the generic service it represents on this host.

If the final results of all sensor resource matching include multiple matching sensors (whether or not they share the same service_profile or service) whose service profiles reference the same service, but whose expanded externals_arguments settings do not match exactly, the configuration phase of auto-setup will fail. This includes having externals arguments defined for one matching sensor and not for another, at least if the generic-service externals arguments do not match the externals arguments defined by GDMA auto-discovery. Since the GDMA client has no knowledge of which services are included in each service profile, nor of what externals arguments are defined for generic services on the GroundWork server, these comparisons can only be run on the server after auto-discovery results are sent to it.

The format of the externals_arguments value is a "!"-separated list of strings. Ultimately, these values are used on the GroundWork server to provide values to $ARG#$ macro references in service externals when they are built, unless the service externals are applied to a service instance and you have externals arguments defined at the service-instance level (see the instance_ext_args directive, below).

There is no default value of externals_arguments. If no value is supplied for this directive, that will later result in the externals arguments at the host-service level being inherited from those defined by the original generic service. If you want to define this directive to override that inheritance and capture all of the cleaned-up strings from your sensor pattern matching and nothing else, something like the following would do:

externals_arguments = "$SANITIZED1$!$SANITIZED2$!$SANITIZED3$"

for as many elements as are matched by the sensor pattern. $ARG#$ macro references that correspond to fields beyond the last one defined in this directive are substituted with a simple empty string.

For clarity, it helps to understand which macro references, if specified in the externals_arguments directive, get expanded during auto-setup. $MATCHED#$ and $SANITIZED#$ macro references are expanded on the GDMA client within externals_arguments directives during auto-discovery sensor matching. The substituted values become part of the discovery results. No macros get substituted when host services are configured on the server. Any other macro references within the externals_arguments are passed through unchanged to the service configuration stored in the Monarch database, and are substituted as appropriate when externals are built.
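
To make this concrete (the $SSH_PORT$ macro here is purely a hypothetical stand-in for whatever server-side macros your configuration defines):

externals_arguments = "$SANITIZED1$!$SSH_PORT$"
# $SANITIZED1$ is replaced on the GDMA client during discovery, and its
# value travels to the server as part of the discovery results.  $SSH_PORT$
# is passed through unchanged into the service configuration stored in the
# Monarch database, and is substituted, if at all, only when externals are
# built.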


instance_suffix

This directive is supported in any sensor definition, be it <host> or <service>. It specifies how a service-instance suffix is to be constructed for every service which is generated from this sensor definition. Typically, this value is specified to begin with an underscore, for better readability of the full service-instance name.

As an example, for an open_local_port sensor, where the pattern matching implicitly assigns the open-port number to the $MATCHED2$ macro, you might use:

instance_suffix = "_$SANITIZED2$"

to append an underscore and the found open-port number to the name of the base service to create the full service-instance name. (As noted above, $MATCHED#$ and $SANITIZED#$ macro references are expanded on the GDMA client during auto-discovery sensor matching, and the substituted strings are sent to the server as part of the auto-discovery results. In this case, that happens within instance_suffix directives.)

For any sensor definition that includes a service_profile directive, the instance_suffix directive is applied to the services mentioned in that service profile. For a <host> sensor definition, it is applied as well to all services mentioned in service profiles assigned to the host_profile for the sensor definition.

This directive is mandatory if the sensor cardinality is "multiple". In that case, a service instance is generated for every service even if only one service instance is present in the configuration. This directive is optional if the sensor cardinality is "single" or "first". In that case, if the instance_suffix directive is present, it forces a service instance to be created (instead of just using the base service) even if only one instance of the service is generated.
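
As an example (names hypothetical), even a sensor with "first" cardinality can force a service instance to be created:

<service "MySQL port">
    type = full_process_command
    pattern = "/(mysqld)\s+.*--port=(\d+)"
    cardinality = "first"
    # Optional with "first" cardinality; when present, it forces creation
    # of a service instance (for example, "mysql-server-process_3306")
    # rather than just configuring the base service.
    instance_suffix = "_$SANITIZED2$"
    service = "mysql-server-process"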


This directive is optional. If present, it provides the per-service-instance command arguments for each generated service derived from this sensor definition. Its format is a "!"-separated set of strings. Macro references in these strings are expanded in the same way as those in an externals_arguments directive.

Normally, there shouldn't be any need for this option; it is supported more or less to cover the waterfront, in case we realize later on that there is some supporting use case. These command arguments would only come into play for a command run directly on the GroundWork server, such as a freshness check for the service instance. So perhaps that might be the important supporting use case. In contrast, most of the calculation done by GDMA Auto-Setup is targeted at configuring service externals that define what commands will be run on the GDMA client machine. For configuring those on a per-service-instance basis, see the instance_ext_args directive.


instance_ext_args

This directive is optional. If present, it provides the per-service-instance externals arguments for each generated service derived from this sensor definition. Its format is a "!"-separated set of strings. Macro references in these strings are expanded in the same way as those in an externals_arguments directive.

If this directive is not supplied, service-instance-level externals arguments are set to be inherited from those for the base service.

The usual caveats apply about correspondence and conflicts if multiple sensor definitions generate the same service instance suffix. Merging of identical setup for multiple copies of the same service instance is allowed, and conflicting setup causes the configuration phase to fail.


enabled

This directive is optional, and defaults to a positive value ("yes", "on", "true", or "1"). It can be included in a sensor definition in one of those forms, or it can alternatively be set to a negative value ("no", "off", "false", or "0") to disable pattern matching and processing for this sensor definition. This is more convenient than commenting out the entire sensor definition if you want to leave the definition in the file for comparison or possible future use, but make it inactive.

Interactions Between Auto Discovery Sensor Definition Directives

Auto-discovery fails if sensor matching produces conflicting values of the host_profile directive within the final overall discovery results. (Multiple <host> sensors that all produce the same value of the host_profile on a given machine are allowed; the duplicate values are reduced to a single copy when discovery results are submitted to the server.) In contrast, multiple service_profile and service values can be sent in as part of the discovery results. That said, if two <service> sensors both match and have the same service_profile or the same service, the fully-evaluated externals_arguments must match for every generated instance_suffix that is shared between the matching <service> sensors. If this condition is not met, auto-discovery fails because the downstream code is unable to resolve the evident conflict.
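
As an illustration of the service-sensor side of this rule (sensor tags and resource paths hypothetical), the following two sensors would cause such a failure if both matched, because both generate the bare base service for the same service name but with different fully-evaluated externals_arguments:

<service "Apache, distro A">
    type = file_name
    resource = "/etc/httpd/conf/httpd.conf"
    pattern = "."
    externals_arguments = "80"
    service = "apache-web-server"

<service "Apache, distro B">
    type = file_name
    resource = "/etc/apache2/apache2.conf"
    pattern = "."
    externals_arguments = "8080"
    service = "apache-web-server"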

Catalog of Supported Sensor Types

In addition to the fixed sensor types already listed in this catalog, one might conceive of some sort of "always present" resource that could be used to force certain configuration to be present under conditions that do not match any of the existing sensor types. We don't provide such a type because it would be ripe for abuse, causing people to not write the proper resource pattern matching into sensor definitions and then later being surprised when those resources should no longer be monitored. If there are additional sensor types that make sense but are not being supported, talk to GroundWork and we'll see what can be done about extending the set.

Fixed Sensors

Sensor Type / Value
type = os_type

This value is determined by internal logic within Perl code. Possible sensor values to be matched against using the pattern directive regular expression are:


type = os_version

This value is determined by programmatically probing the operating system in an OS-specific manner. Note that this value reflects the kernel version more than anything else. It does not distinguish, for instance, between Linux distributions (Red Hat Enterprise Linux, Ubuntu, SLES, etc.). If it comes to pass that an additional os_distribution sensor is needed for such distinctions, we will define and implement it at that time.


On AIX, we use the equivalent of:

`uname -v`.`uname -r`

Typical sensor values on this platform are strings like:



On HP-UX, the proper derivation is unknown at this time. For the moment, we use the equivalent of:

`uname -r`


On Linux, we use:

`lsb_release -r -s`

Typical sensor values on this platform are strings like:

7.4.1708  (on CentOS 7.4)
11        (on SLES 11.1)
16.04     (on Ubuntu 16.04.3 LTS)

We might have used the output from "uname -r", but that is just the kernel version, which is generally less interesting. Note that the operating-system package providing lsb_release might not be installed by default on your Linux machine. On CentOS 7, for instance, the relevant package is redhat-lsb-core.


On Solaris, we use the equivalent of:

`uname -r` `uname -v`

Typical sensor values on this platform are strings like:

5.10 Generic
5.11 11.3


On Windows, programmatically obtaining the OS version is somewhat difficult. There is a standard Perl package for that purpose, but it is subject to confusion within Windows itself. Using that mechanism, we get a string typically beginning with "Win2003", "Win2008", or "Win2012", such as "Win2003 Enterprise Edition Service Pack 2". But note that the initial "Win2012" string is also generated for Windows 2016, so it is not completely definitive on this platform. See the article Finding your Operating System version programmatically for more information on the mess that is Windows versions in this regard.

We have instead implemented this sensor for Windows by using a WMI call. Together with a little editing of our own inside the sensor, it results in strings similar to these:

Microsoft Windows Server 2003, Enterprise Edition Service Pack 2 Version 5.2.3790
Microsoft Windows Server 2008 Standard Service Pack 2 Version 6.0.6002
Microsoft Windows Server 2012 R2 Standard Version 6.3.9600
Microsoft Windows Server 2016 Datacenter Version 10.0.14393

If you need to make distinctions based on the OS version for this platform, you should try a run of discovery on each of the OS versions that matter to you, and see what OS version string is determined during discovery. Then you can develop sensor patterns to match against whatever strings you obtained.
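
As a sketch based on the version strings shown above (the host profile name is hypothetical):

<host "Windows 2016">
    type = os_version
    # Match on the marketing name rather than the internal version number.
    pattern = "Windows Server 2016"
    host_profile = "gdma-windows2016-host"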

type = os_bitwidth

This value will be either 32 or 64. It is determined by the Perl code, through platform-specific means. The pattern directive regular expression can be used in the usual manner to match either value.

type = machine_architecture

This value is determined by internal logic within Perl code. Possible sensor values to be matched against using the pattern directive regular expression are:


Static Sensors

Sensor Type / Value
type = file_name
type = symlink_name
type = directory_name
resource = "fileglobs"

For all of these sensor types, the value to match with the pattern directive regular expression is some part of a filepath defined by the resource. If the resource alone is sufficient to describe the path(s) to be tested for presence on the system, and you don't care about capturing detail of the path(s) for substitution into macros, you can provide a very simple pattern just matching the initial character in the path (pattern = ".").

The sensor resource may be an ordinary Perl file glob, not just one path. It can even contain more than one glob pattern, with globs separated by spaces. For the sensor to match as a whole, the resource-filepath globs must match some files in the filesystem, which are then further filtered by matching against the specified sensor pattern. Finally, the file type is checked against the sensor type (regular file, symlink, or directory). Each glob within the resource must specify an absolute pathname, because there is no single fixed base directory against which a relative path could be rooted.

A fileglob is a means of specifying a potentially partially-wildcarded filepath, such as:

resource = "/etc/*.conf"

In brief, the conventions for metacharacter interpretation within a Perl glob are as follows:

  • [abc] matches any one of the individual characters enclosed within the square brackets; character ranges like [a-z] are allowed, where the "-" is treated as a metacharacter with the obvious interpretation
  • {abc,def,ghi} matches any of the comma-separated strings listed within the braces
  • * matches any string of zero or more characters
  • ? matches any single character
  • ~ alone matches the current user's home directory
  • ~/ matches the current user's home directory at the beginning of the glob
  • ~thatuser alone matches that user's home directory; not available on Windows
  • ~thatuser/ matches that user's home directory at the beginning of the glob; not available on Windows
  • \ is used to quote the next double-quote character or glob metacharacter (including \ itself), removing its nature as a metacharacter and treating it as just an ordinary character in the filepath or character range
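Combining several of these metacharacters, a hypothetical resource glob such as the following would match any .conf file directly under either an httpd or an apache2 directory in /etc:

    resource = "/etc/{httpd,apache2}/*.conf"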

With respect to backslashes in globs, to avoid confusion and confine the use of an initial backslash to only mean it is escaping the interpretation of the next character, we only allow a backslash to appear immediately before either a double-quote character or one of the glob metacharacters:

" \ * ? ~ { } , [ ] -

The sensor will claim you have a bad filepath glob if you use an unescaped backslash before any other character.

See the description of the full_process_command sensor for notes on the construction of the sensor pattern.

There are various complications with glob matching on Windows.

  • Support for tilde expansion is limited under Windows. Only the current user's home directory can be matched using this syntax. But the home directory of the user running the discovery process is not likely to be a useful part of sensor matching.

  • If you specify multiple globs in the resource string, they are separated by spaces. If you need to include a space character in a particular glob, that glob must be enclosed in double-quote characters. Those enclosing double quotes will nest within the overall double-quoted resource string without problem. Thus you might say either of:

    resource = " "C:\\\\program files\\\\outlook*" "
    resource = " "C:/program files/outlook*" "
  • That last example illustrates a practical issue with using backslashes in filepaths. No matter where they appear in the instructions file, they need to be doubled to escape the interpretation of backslashes as escape characters as the instructions file is read. Then in a glob pattern, they must be doubled again because backslash is used to quote glob metacharacters. (In the glob shown, the user wants to match a single backslash used as a directory level, as a literal character in the filepath.) Similarly, in a sensor match pattern, they must also be doubled, because backslash is used to quote pattern metacharacters. Having to quadruple the backslashes like this is so completely insane that we have implemented a sensible workaround: you can use forward slashes for directory levels on Windows, just as you can on UNIX-like systems. So the second form shown is perfectly acceptable. In fact, the same is true for the sensor pattern as well, for these sensor types. (We automatically modify the sensor pattern you provide before using it for matching.) These two patterns for matching the filename portion of the filepath have exactly the same effect, but the second one is far more readable.

    pattern = "\\\\([^\\\\]+)$"
    pattern = "/([^/]+)$"
  • Note that in spite of using forward slashes in resource globs and sensor patterns, the paths that are effectively returned by the globbing and effectively matched by the pattern will end up appearing as though they used backslashes as directory levels. Thus, any part of a filepath captured by the pattern that includes directory separators will produce strings using backslashes at those points in the captured string. We do this because we don't know how the captured string will be used and possibly substituted into some Windows command line. In that regard, it is best to follow the usual platform conventions. (The Windows kernel does support using slashes for directory levels, but some Windows command-line tools do not.) The upshot is that if you used:

    resource = " "C:/program files/outlook*" "
    pattern = "(/[^/]+)$"

    to capture the directory separator right before the last component of the filepath, the captured string would be:

    \Outlook Express

    and not:

    /Outlook Express

    These conventions might seem a little odd at first, but they are designed to make the instructions file much easier to write and read, while maintaining the best possible compatibility with downstream uses of matched patterns.

    • On Windows only, filepath globbing is case-insensitive.
    • On Windows, except when a leading ~ is used in a glob, the drive prefix ought generally to be specified as the initial part of each glob. Otherwise, some system default is used, and that might not be where the path of interest resides.
    • Note: On the Windows platform, support for symlinks was historically weak, few people know about them, they required elevated privileges to create, and they are little-used. That situation is changing a bit in Windows 10, which is dropping the need to elevate to administrator privileges as long as some administrator enables "Developer Mode" on the machine. But given the paucity of current usage, for the time being GroundWork has not implemented support for the symlink_name sensor under Windows, and it will fail to match any filepaths on that platform. One might imagine that we could generalize this sensor type to cover Windows shortcuts, directory junction points, NTFS reparse points, and similar objects. If you believe you have a valid need for sensing such filesystem objects that is not adequately covered by the other sensor types, talk to GroundWork.

type = mounted_filesystem
resource = "filesystem types"

For this sensor type, the specified pattern is matched against all the active filesystem mount points on the machine. The practical intent is generally to match physical or network filesystems, not virtual filesystems like the /proc tree on UNIX-like systems. You may restrict the set of filesystems to be examined by providing a comma-separated or space-separated list of filesystem types as the sensor's resource directive. Filesystem types are short words like these (this list is not exhaustive), with the resource matching being case-insensitive:

autofs       ctfs      ext4             jfs2    objfs       swap
binfmt_misc  debugfs   fd               mntfs   proc        sysfs
cachefs      devfs     fuse.gvfsd-fuse  mqueue  procfs      tmpfs
cdfs         devpts    fusectl          nfs     pstore      udfs
cdrfs        devtmpfs  hpfs             nfs3    rpc_pipefs  ufs
cgroup       ext2      hugetlbfs        nfs4    securityfs  usbfs
cifs         ext3      jfs              ntfs    sfs         zfs
Extra leading and trailing separators are allowed within the overall list of filesystem types. Using the resource directive is recommended where practical and sensible, but optional; if it is not provided, the mount points for all filesystems mounted on the system are matched against the sensor pattern.
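As an illustrative sketch (the filesystem-type list and regex here are hypothetical choices, not a recommendation), a sensor that examines only NFS mounts and matches mount points under /home could be written as:

    type = mounted_filesystem
    resource = "nfs, nfs3, nfs4"
    pattern = "^/home(/|$)"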

Possibly in a future release, we might provide support for special values of the resource directive that could restrict the mounted filesystems to be matched to only regular filesystems, or to only special filesystems, or to only particular filesystems associated with some system file like /etc/fstab, /etc/vfstab, or /etc/filesystems.

On UNIX-like systems, the implementation of this sensor type does identify /proc and a bunch of other filepaths as potential mount points for matching against your sensor resource and pattern. That implementation should not interfere with your own choice of mount points via the sensor resource and pattern. Just make your pattern select only the particular filepaths you care about, and the fact that there were extra possibilities should be of no concern.

On Windows, the initial implementation of this sensor type identifies a set of mount points, but some of them might not be currently active with a mounted filesystem behind them. This is typically the case, for instance, for a drive letter that represents a CD-ROM drive, where there may be no disc currently in the drive but the drive path will still be listed in the potential results for your sensor pattern to match. Also, paths to UNC shares may not be listed. These issues are present because of difficulties in identifying ways to probe the Windows system for the relevant data to adjust the list of possible mount-point paths. A future version of the discovery code may do better at fulfilling the original intent of this sensor.

The value to match with the pattern directive regular expression is the filepath of the mount point, not the name of the device mounted at that location. On Windows, that would ordinarily be the drive letter and the root directory, and possibly some additional path components, perhaps like "C:\smalldisk\". But to avoid the ugliness of needing to double each backslash in a pattern regex and then double it again to escape backslash interpretation as the instructions file is read, we support using forward slashes in such paths in your regex: "C:/smalldisk/". The forward slashes are automatically modified so they match the backslashes in the filepath, without your needing to take any extra effort. That said, if you want to match a path like \\?\Volume{2bac18e8-2b26-11e8-81d8-806e6f6e6963}\ then you must double-backslash-escape the literal question-mark in that path so the escaping gets past interpretation while the file is read and thereafter the question mark is not interpreted as a pattern regex quantifier.

pattern = "//\\?/Volume{2bac18e8-2b26-11e8-81d8-806e6f6e6963}/"

Unlike the sensor pattern, if you want to specify a literal backslash in a transliteration or sanitization directive, it must be quadrupled in the instructions file. The specified transliteration is applied against the actual strings captured from the sensor pattern match, and as such under Windows, those strings may contain backslash characters. To get one literal backslash character into the transliteration argument, we must double the backslash once to get past escaping while the instructions file is read, and double backslashes again so the backslash that makes it through is not interpreted as an escape character within the transliteration argument. Providing the same workaround for transliteration and sanitization directives for certain sensors on the Windows platform as we do for pattern matching is under consideration.

type = file_content
resource = "fileglobs"

For this sensor type, the pattern directive regular expression is applied against all lines of the file content. This regex is tested only once against each line of a file that matches the specified resource, so it cannot be used to match more than one copy of the pattern on a single line. (If there is a good use case, we could relax that constraint, and perform a global pattern match that recognizes multiple instances of the pattern in a single line of the file. Talk to GroundWork if you see such a need.) But the pattern can match multiple lines of the file and thereby generate multiple service instances.

The sensor resource may be an ordinary Perl file glob, not just one path. It can even contain more than one glob pattern, with globs separated by spaces. Each component of the resource must be an absolute pathname, because there is no single fixed base directory against which a relative path could be rooted. To have the sensor match, the resource-filepath globs as a whole must match exactly one file in the filesystem. (Given that this sensor can have cardinality = "multiple", it probably makes sense to allow the resource to match multiple files, and to scan each of the matching files in the same way. We would like feedback on real-world cases where it might be useful.)

The resource globs for this sensor type follow the same conventions for metacharacter interpretation, and are subject to the same restrictions on the presence of backslash characters, as documented for the file_name sensor type.
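As a sketch of how these directives combine (the filepath and regex are illustrative, and the cardinality setting assumes the directive described above), a sensor that generates one service instance per "server" line in an NTP configuration file might look like:

    type = file_content
    cardinality = "multiple"
    resource = "/etc/ntp.conf"
    pattern = "^server\\s+(\\S+)"

Each matching line captures the server address, which would then be available for substitution into macros.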

Dynamic Sensors

type = running_system_service

System services are an OS-specific type of object. We only match against running services because most operating systems have many disabled services, and there is little point in matching those entries.

AIX: "lssrc -a" is used to list the available services. (This only shows services that are registered with the system resource controller. There may be others, such as those run by inetd. And if a service is started manually by running the executable from the command line, it may not show up in the lssrc output.) Before pattern matching, the list is filtered to include only active services, not inoperative services. The pattern directive regular expression is matched against the service subsystem name, and not against any other part of the lssrc output.


Linux: The command used to probe will probably need to vary depending on the OS distribution and version, to accommodate both init-based and systemd-based operating systems. Look at these commands as possibilities:

chkconfig --list 2>/dev/null | fgrep :on | awk '{print $1}'
systemctl list-unit-files --full --state static,enabled,indirect \
    --no-pager --no-legend | awk '{print $1}'

Also look at the service command. See the external articles Check running services in Fedora / RHEL / CentOS Linux server and Ubuntu full list of available services for more possibilities.

Solaris: "svcs -H -o FMRI" will be used, which will list both legacy_run and online services, and ignore disabled services. The value to match with the pattern directive regular expression will be the FMRI (Fault Management Resource Identifier), such as "svc:/network/nfs/server:default".


Windows: We use the equivalent of "sc query" to list the running services, though the actual implementation relies on system calls rather than running that command. We filter to only match if the service STATE is RUNNING. The value to be matched with the pattern directive regular expression will be the SERVICE_NAME as listed in the output from "sc query". A simpler way to view this data is to look at the output from:

wmic service get name,state | findstr Running

type = process_command_name

This sensor type is not supported. That is to say, we considered and rejected providing a sensor type that would match just against the command name of the process. The reason we rejected this idea is that the command to fetch this data, "ps -e -o comm=", truncates the returned data to quite possibly a useless extent. On Linux you get only 15 characters max for each command name; on AIX, 32 characters; on Solaris, 79 characters. Also, modern UNIXes tend to show the underlying interpreter running a script (e.g., bash or perl) instead of the script name as the command. There's no point in struggling with a broken idea because of such limitations. Instead, use the full_process_command sensor and construct an anchored pattern that matches the processes you wish to identify.
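For instance, rather than depending on a truncated command name, a full_process_command sensor can anchor the pattern at the start of the command line (the path here is illustrative):

    type = full_process_command
    pattern = "^/usr/sbin/sshd\\b"

The leading "^" ensures that only the exact executable path matches, not some other command that merely mentions that string somewhere in its arguments.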

type = full_process_command
resource = "userlist"

This sensor matches the regular expression given as the pattern directive against the full command line of each process. You may restrict the set of processes to be examined by providing a comma-separated or space-separated list of user names as the sensor's resource directive. Extra leading and trailing separators are allowed within the overall userlist. Using the resource directive is recommended where practical and sensible, but optional; if it is not provided, the command lines for all processes running on the system are examined.

On UNIX-like platforms, the command "ps -u 'userlist' -o args=" is used to find the command lines to match against. You can consider that while developing your pattern-directive regex. If the resource directive is not provided, "ps -e -o args=" is used instead.

On Solaris, the command string retrieved by "ps" is truncated to 79 characters, so trying to match something farther out in the command line will be fruitless. This is a limitation of Solaris, not of GDMA. If you need to identify individual processes, make sure the arguments are arranged so the unique parts fit within this constraint. Other platforms (Linux, AIX) do not appear to have this problem.

On Solaris, there is currently no support for limiting the processes under consideration to only those running in a particular zone. If such a capability is needed, talk to GroundWork.

On Windows, a username may be specified as just the user account or prepended with a domain name in the form domain\user as you would expect. That said, there is a level of backslash interpretation that occurs when the instructions file is read, that will tend to swallow a single backslash used in that fashion. Therefore, to specify both a domain and user, double the backslash between them to specify a domain\\user instead. If a simple user account is named, without a prefixed domain name, that is the only component matched against process owners. If both a domain name and a user account are provided in a given user name, then both components are matched against process owners.

On Windows, it is possible for domain names and user account names to contain spaces. That conflicts with the possible use of spaces to separate consecutive user names within the resource definition. To get around that, on this platform we support double-quoting a full username. If either the domain name or the user account name contains a space character, such double-quoting is mandatory for that username. The double quotes used in this fashion nest within the overall double quotes which contain the full resource list of user names without needing to be themselves escaped by backslashes, so you may have a specification like this:

resource = " "NT AUTHORITY\\LOCAL SERVICE" Administrator "

On Windows, you may use "." as an abbreviation for the unqualified hostname used as a domain name. So for instance, on a machine named MyMachine, this resource:

resource = " ".\\MyUser" "

would match processes running under the "MyMachine\MyUser" account. This kind of substitution can be used to restrict the resource matching to only local accounts, while still keeping the instructions file portable across many Windows machines.

On Windows, the ability to access process command-line and user-account information for processes not owned by the currently running user may be limited by the set of privileges assigned to the account. This is commonly evident if you run the GDMA service as a non-admin user. Such restrictions severely limit the utility of this sensor type in such a context. The proper treatment of Windows privileges is beyond the scope of this document.

On UNIX-like platforms, user name matching for this sensor is case-sensitive. On Windows, it is case-insensitive.

On UNIX-like platforms, all the user names you specify must exist on the platform you are probing. Otherwise, the "ps" command might fail and not produce the desired list of processes. On Windows, we use a different mechanism to check for full process commands, and the matching process ends up ignoring user names that do not exist on the machine whose assets are being discovered. This means that any typos made when specifying user names on Windows are much less likely to be noticed, and may simply produce discovery results which omit some expected sensor matches. You are forewarned.

There is an important consideration when writing the sensor pattern. This applies to any sensor type, but is most commonly an issue with these sensors, where the sensor pattern may be at least partially matched against filepaths:

type = file_name
type = symlink_name
type = directory_name
type = mounted_filesystem
type = full_process_command
type = open_named_socket

You may be tempted to backslash-escape the forward slashes in pathnames within your sensor patterns, because you are used to doing so in ordinary Perl regular expressions that are delimited by slashes. Thus for instance, for a full_process_command sensor, you may be tempted to write:

pattern = "\/usr\/bin\/python -Es \/usr\/sbin\/firewalld"

The backslashes there would be pretty much useless, because those backslashes would be swallowed by quote-escape interpretation as the instructions file is read, and never reach the regex processing. So you might try doubling them, so single backslashes become part of the regular expression used by the pattern matching:

pattern = "\\/usr\\/bin\\/python -Es \\/usr\\/sbin\\/firewalld"

While that would work, it is completely unnecessary. The manner in which the regular expression you provide is used does not treat slashes within the regex as delimiters. So it is simplest and most readable to just reference paths in sensor patterns in their usual unadorned form:

pattern = "/usr/bin/python -Es /usr/sbin/firewalld"

On Windows, for all of those listed sensor types except full_process_command, you can use forward slashes in your sensor patterns to match backslashes in actual filepaths. That is possible because the entirety of the resource being matched against is just a filepath, and we can therefore make automated adjustments to the pattern before we use it. In contrast, for the full_process_command sensor, that convention is not possible, because we cannot anticipate what other use might be made of forward slashes in the command line. So on Windows, if you need to match some part of a process command that includes a filepath, you must quadruple every backslash in the filepath in the sensor pattern in the instructions file, to match one literal backslash in the filepath on the command line.

On Windows particularly, since the filesystem normally treats filepath components in a case-insensitive fashion, the issue of how to match command names is likely to come up when you construct your sensor pattern. The sensor pattern processing does not impose any modifiers such as case-insensitivity on the entire pattern, because we cannot guess your intentions. However, such modifiers can be cloistered inside the pattern if desired, by constructions such as:

# Match the command name case-insensitively, without capturing it.
pattern = "\\\\(?i:searchindexer.exe)\\s"

# Make the entire pattern case-insensitive, without capturing any of it.
pattern = "(?i)\\\\searchindexer.exe\\s"

type = open_local_port
resource = "IP address blocks"

This sensor looks to see whether a particular port or set of ports are open (generally in a LISTEN state, not just in general use on an ESTABLISHED connection) on the local machine. For this sensor type, the resource specifies a list of space-separated IP address blocks, in CIDR-block notation, used to qualify the port matching (e.g., to specify on what network interface the port is open). The pattern specifies one or more port numbers to probe. Broadly speaking, a matched port must be open on some network address within at least one address range which is positively specified by the resource, and within no address range which is negatively specified by the resource. In that regard, IPv4 and IPv6 addresses are handled separately, though their respective CIDR blocks can be mixed arbitrarily in the specified sensor resource. Each individual port which is found to be open will ultimately generate a separate service instance for each service in the sensor definition's service_profile or service; this sensor type used alone is not intended to test that multiple ports are all open before instantiating a single copy of each service in the service_profile or the single named service. If we want that, we can either AND together the results of multiple <condition> objects in the sensor definition, or implement a separate open_local_port_collection sensor type.

There is no default value for the sensor resource. To match ports, you must supply a resource value. CIDR blocks are positively specified in the sensor resource by simply being present there. CIDR blocks are negatively specified in the sensor resource by being prefixed with a "!" character. This can be used, for instance, to suppress matching of ports open on the loopback interface (e.g., ! Only IPv4 CIDR blocks affect matching against ports open on IPv4 interfaces, and only IPv6 CIDR blocks affect matching against ports open on IPv6 interfaces.

A positive or ::/0 wildcard CIDR block will match a port on any IPv4 or IPv6 interface, respectively. But these wildcards are not allowed as negative CIDR blocks, because they would match all respective addresses and thus completely block any sensor matching of that address type. There is no point in that. If you want to not match any IPv4 addresses, just don't mention any IPv4 CIDR blocks, positive or negative. The same thing applies with IPv6.

Wildcards as a general idea don't just apply with respect to the sensor resources. Ports can be opened by a program either using a specific IPv4 or IPv6 address, or by using an IPv4 or IPv6 wildcarded address (effectively, all zeroes). If a port is found to be open on a wildcarded IP address, it automatically matches every non-negated CIDR block of the same address type (IPv4 or IPv6). If you wish to exclude such wildcarded IP addresses from consideration, you must provide an explicit negated CIDR block of that address type that lists the wildcard address with a full-length netmask (i.e., either ! or !::/128).

IPv4-in-IPv6 mapped addresses, namely those in the ::ffff:0:0/96, 64:ff9b::/96, and ::/96 address ranges, are not handled in any special way by the sensor matching. They are treated as ordinary IPv6 addresses. No attempt is made to recognize and handle any other special addresses (network, broadcast, multicast, etc.) in distinct ways.

Each resource component can be one of several objects:

  • a specific IPv4 CIDR block, representing some restricted range of IPv4 addresses, but also matching any wildcard IP address

  • a specific IPv6 CIDR block, representing some restricted range of IPv6 addresses, but also matching any wildcard IP address

  • a negated IPv4 CIDR block, preventing recognition of all IPv4 addresses included in the CIDR block

  • a negated IPv6 CIDR block, preventing recognition of all IPv6 addresses included in the CIDR block

  • a specific IPv4 wildcard address designator (""), corresponding to the network programming constant INADDR_ANY

  • a specific IPv6 wildcard address designator ("::/0"), corresponding to the network programming constant IN6ADDR_ANY_INIT

The address components of discovered listening ports may have any of the following forms:

  • a specific IPv4 address

  • a specific IPv6 address

  • an IPv4 wildcard address, which may vary between platforms and be seen in diagnostic output as either * or, but which is normalized (to before resource matching

  • an IPv6 wildcard address, which may vary between platforms and be seen in diagnostic output as either * or [::] or ::, but which is normalized (to ::) before resource matching

The address matching across all those possibilities can seem a bit complicated. So to see how this plays out in practice, and to guide the construction of your own sensor resource values, consider the following situation. Assume the following IPv4 ports are seen to be open on the machine: (a wildcarded IP address, visible in all domains) (an address in the loopback domain) (an address in a particular private domain)

Then assume we have the following partial sensor definition (missing cardinality and perhaps some other directives) to play with, to match one or more of those open ports as sensor instances:

# Test a port number against a defined resource, for various definitions of the
# resource. The important thing to notice here is that open ports with wildcard
# IP addresses ( will unconditionally match a resource CIDR block such as
# "" even though you didn't specify "" as part of that
# CIDR block. That is the nature of a wildcarded IP address for the open port.
<service "NTP">
    type = open_local_port

    # See below for example resource values.
    resource = " ... CIDR block(s) ... "

    # The well-known port 123 is used by NTP.
    pattern = "123"

    service = "ntp"

Here is what address:port combinations would match under various values of the sensor resource, keeping in mind that every match will result in a separate discovered sensor instance:

# Match all ports on all IPv4 network interfaces:
# matches
# matches
# matches
resource = ""

# Match all IPv4 ports in the loopback domain, plus IP addresses seen as
# wildcards since they will logically also be seen in the loopback domain:
# matches
# matches
resource = ""

# Match all IPv4 ports in a particular private domain, plus IP addresses seen as
# wildcards since they will logically also be seen in that same private domain:
# matches
# matches
resource = ""

# Match all IPv4 ports on all IPv4 network interfaces, except for ignoring
# IP addresses seen as wildcards by explicitly excluding them:
# matches
# matches
resource = " !"

# Match all IPv4 ports on all IPv4 network interfaces, except for ignoring
# IP addresses in the loopback domain by explicitly excluding them:
# matches
# matches
resource = " !"

# Match all IPv4 ports on all IPv4 network interfaces, except for ignoring
# IP addresses in the private address domain by explicitly excluding them:
# matches
# matches
resource = " !"

# Match all IPv4 ports in the loopback domain, ignoring IP addresses seen as
# wildcards by explicitly excluding them:
# matches
resource = " !"

# Match all IPv4 ports in a particular private domain, ignoring IP addresses
# seen as wildcards by explicitly excluding them:
# matches
resource = " !"

# Match only IPv4 IP addresses seen as wildcards:
# matches
resource = ""

# Match no IPv4 ports, because no positive (non-negated) CIDR block is given:
# (no ports matched)
resource = "!"

The pattern for this sensor type is a space-separated list of explicit port numbers and port ranges. Whatever local IP address is used by the open port will become the $MATCHED1$ value. Whatever port number is actually found to match will become the $MATCHED2$ value, without any explicit parentheses being used in the pattern to indicate capturing the port number.

Port ranges are included specifically because the user might want to specify a very large number of possible ports for a particular sensor. Port ranges can be specified using either the commonly understood single-dash notation or the standard Perl double-dot range operator as the punctuation between the first and last port numbers in the range. To simplify both parsing of the pattern and human understanding, no spaces are allowed around the range punctuation. Thus we have:

pattern = "7000-7500"     # Standard notation for a large set of consecutive numbers.
pattern = "7000..7500"    # Exact same thing, but expressed as a Perl range.
pattern = "7000 - 7500"   # Not allowed.
pattern = "7000 .. 7500"  # Not allowed.

If we allowed spaces around the range punctuation, then a pattern like "60 70 - 80 90" would get visually too confusing. Specifying that as "60 70-80 90" makes the grouping clearer, so we simply enforce that.

We considered adding support for well-known port names in the pattern, such as the ports listed in /etc/services on non-Windows systems, or C:\Windows\System32\drivers\etc\services on Windows. However, that would make validation of the sensor definitions more difficult, since the set of available port names may differ from machine to machine. For that reason, we have stuck to simple port numbers.




Possibly in a future release, we might support qualifying each port number or port range in the pattern with a protocol prefix, such as "tcp:" or "udp:". Using a prefix like that would allow us not to split the open_local_port sensor type into separate open_local_tcp_port and open_local_udp_port sensor types.

type = open_named_socket

Some processes listen on a port not by opening an anonymous socket, but by opening a socket which is named by a path visible in the filesystem. For instance, /usr/local/groundwork/postgresql/.s.PGSQL.5432 is used by GroundWork PostgreSQL to listen for incoming local connections. This sensor type differs from the open_local_port sensor type both because the pattern directive regular expression is matched against a filesystem path rather than a port number, and because it draws the list of candidate items (in this case, filepaths) from a source (netstat output) which does not need the kind of qualification partitioning we do with CIDR blocks for anonymous ports. (If any such partitioning is desired, it would probably be a qualification as to whether the named socket is on a locally-mounted or remote-mounted filesystem. If there turns out to be a practical use case for this or some other resource-level qualification, talk to GroundWork.)

The sensor pattern is matched against all the open named socket paths on the machine, without pre-qualification by a sensor resource definition. This sensor type might typically be used to sense an open MySQL or PostgreSQL database port.

As noted earlier, the list of candidate socket paths is drawn from netstat output instead of being generated from fileglobs. That can produce some items that don't completely look like absolute filepaths. Here are some examples, drawn from a Linux machine:
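For illustration, "netstat -x" output on a Linux machine can include lines like the following (entries invented for this sketch). Note the abstract-namespace sockets shown with a leading "@", which is one reason some items do not look like ordinary absolute filepaths:

```
unix  2      [ ACC ]     STREAM     LISTENING     10001    /usr/local/groundwork/postgresql/.s.PGSQL.5432
unix  2      [ ACC ]     STREAM     LISTENING     10002    @/tmp/.X11-unix/X0
unix  2      [ ACC ]     STREAM     LISTENING     10003    /var/run/acpid.socket
```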


Not having a sensor resource to nail down particular filepaths also means that your pattern directive regular expression will probably need to include much of the filepath(s) you intend to match to determine whether the named socket of interest is open. If you have difficulty constructing your pattern directive regex to match the named socket you expect to see, you can look at "netstat -a" output to see what it sees, and what your pattern will be up against.
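A minimal sketch of how such a pattern is applied, assuming the sensor simply tries the regex against every candidate path drawn from netstat output (the candidate list and pattern below are invented for this example):

```python
import re

# Invented candidate socket paths, as might be harvested from
# "netstat -a" output on a Linux machine.
candidates = [
    "/usr/local/groundwork/postgresql/.s.PGSQL.5432",
    "@/tmp/.X11-unix/X0",
    "/var/run/acpid.socket",
]

# A pattern usually needs to include much of the filepath to pin down
# the socket of interest; the dots are escaped so "." matches literally.
pattern = r'/postgresql/\.s\.PGSQL\.(\d+)$'

for path in candidates:
    m = re.search(pattern, path)
    if m:
        # Only the PostgreSQL socket matches; the capture is "5432".
        print(path, "-> port", m.group(1))
```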

Support for sockets named in the filesystem (that is, the AF_UNIX socket family) was only becoming available in Windows internal builds in early 2018; it is not yet in public release. Until that happens and GroundWork has a chance to test this feature, the open_named_socket sensor is not supported under Windows.

When this sensor type becomes available on Windows, named socket paths will ordinarily include backslashes, making it complicated to construct corresponding sensor patterns. So to avoid the ugliness of needing to double each backslash in a pattern match and then double it again to escape backslash interpretation as the instructions file is read, we support using forward slashes in sensor patterns for such paths even on the Windows platform:

pattern = "C:/foo/bar"

As with path matching for patterns in similar sensors, on the Windows platform the forward slashes in your pattern will match backslashes in the actual filepath, and any matched backslashes will be returned as such in strings captured as part of your sensor pattern.
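One plausible way to implement that slash handling is to treat each literal "/" in the pattern as matching either separator. This is a hypothetical sketch of the behavior described above, not the actual GDMA code:

```python
import re

def compile_windows_pattern(pattern):
    """Compile a sensor pattern so "/" matches "/" or "\\" on Windows.

    Naive sketch: replaces each literal "/" in the pattern with a
    character class matching either separator. (A real implementation
    would need to skip "/" occurring inside existing bracket classes.)
    """
    return re.compile(pattern.replace('/', r'[/\\]'))

regex = compile_windows_pattern("C:/foo/bar")
print(bool(regex.search(r'C:\foo\bar')))   # True: backslashes match
print(bool(regex.search('C:/foo/bar')))    # True: forward slashes too
```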

Auto Discovery Trigger File Format

We want to make the content of the trigger file flexible to a degree, both to allow for a few possible commands that we have in mind for the first implementation, and to allow the same file format to continue to be used as the software evolves. The data should be very simple and self-describing. Since this is intended to be just a transient trigger file, we do not currently imagine passing complex instructions in this file (such as "hey, the discovery-instructions file has rules for matching almost anything in the world, but really, for you, just pay attention to these particular areas"). As such, we do not see the need to include a format-version number in this file. Any directives which are not understood by a reader of this file are flagged as such, so the failure to process them is documented; they are then logged and otherwise ignored.

Catalog of Possible Auto Setup Trigger Directives

Currently, there are only a few directives supported in the trigger file. GroundWork will provide options to set these directives on the autosetup command that creates trigger files.


if_duplicate

Specify how the discovery results are to be handled if they simply duplicate what happened in the last round of similar-type (dry-run or live-action) discovery. (Dry-run means any value of last_step before do_configuration.) This is an optional directive; it defaults to optimize. The idea is to avoid wasted effort on both the client and server that would not change any outcome except for recording the fact that another pass of auto-discovery was run. The possible values are:

ignore: If the GDMA client finds that the discovery results are an exact match of the last time this type of auto-discovery was run, it will log that fact locally and go no further. That is to say, even if last_step is set to send_results or some later stage, and even if the GDMA client has no externals in hand and last_step is do_configuration, no results will be sent to the server if the results of this pass are identical to those of the previous pass of such auto-discovery.

optimize: Behave exactly as ignore would have, except in the special case where last_step is do_configuration and the GDMA client has no externals in hand that correspond to those exact discovery results (i.e., that are timestamped later than the earlier run that produced those results). This allows a live-action pass of auto-discovery to punch through any previous server-side problems that caused configuration to fail, and re-run the steps there so new externals can be generated and then picked up by the GDMA client, while not bothering the server if nothing would change.

force: If the GDMA client finds that the discovery results are an exact match of the last time this type of auto-discovery was run, it will log that fact locally but continue on. That is to say, if last_step is set to send_results or some later stage, the duplicate results will be sent on to the server in the usual manner, and the server should process them all the way through the specified last_step. This setting is provided to heal all prior wounds, ensuring that all components are fully synchronized, at the cost of possibly doing some work that won't change anything.

The most common setting for if_duplicate is:

if_duplicate = "optimize"


soft_error_reporting

Specify how errors will be handled if last_step is either do_analysis or test_configuration and the auto-setup processing gets that far. The possible values are:

ignore: Errors encountered while either analyzing discovery results or running a dry-run application of those results to the Monarch configuration will be ignored, with respect to reporting them to any outside agency. This is the default behavior. It is assumed that an administrator is manually supervising some testing, and will be taking the trouble to look for any problems that might appear. The autosetup tool can help with this.

post: Errors encountered when either analyzing discovery results or running a dry-run application of those results to the Monarch configuration will be posted to the production monitoring, as though they had occurred when last_step was do_configuration. The nature of the reporting will be softened in that it will result in a WARNING-level event instead of a CRITICAL-level event, since this was just a dry-run problem. This behavior might be useful for running periodic selective automated tests to verify that there would be no problems if a pass of live-action auto-discovery were to become necessary.

The most common setting for soft_error_reporting is:

soft_error_reporting = "ignore"


change_policy

Optionally, specify a change policy specific to just the pass of discovery run by this trigger, temporarily overriding the default_change_policy set in the GroundWork server's config/register_agent_by_discovery.conf file. This directive is not required to be present in the trigger file, and there is not yet any guarantee that any value of this directive other than non_destructive will be supported on the GroundWork server.

The most common setting for change_policy is:

change_policy = "non_destructive"


last_step

Specify how far to proceed with processing the auto-discovery instructions. Possible values are:

ignore_instructions: This is a no-op. The GDMA client should not try to download the instructions file. This would be used only for testing the ability of the GDMA client to find and download the trigger file itself.

fetch_instructions: The GDMA client should attempt to download the instructions file, but take things no further.

do_discovery: The GDMA client should run a pass of auto-discovery, but do nothing with the results except to capture them locally in a file. It should log errors locally if any arise, or log the successful completion of the discovery actions.

send_results: The GDMA client should send auto-discovery results, failed or not, to the GroundWork server. The results will include the values of all directives in the trigger file, so the server will know how far to proceed on its side. The server will do a minimal amount of inspection of the results, mainly to validate their construction as a proper auto-discovery packet and then to check the last_step value to see whether further work is needed on the server. If the received data constitutes a valid packet, it will be stored in the auto-discovery results directory on the server, so the results can be inspected manually and analyzed via the autosetup tool.

do_analysis: The GroundWork server should fully analyze the returned auto-discovery results, checking them for internal errors, inconsistencies, and client-side failures, but go no further. Problems will be logged on the server side, and the analysis results will be stored in the auto-discovery results directory for debugging and troubleshooting purposes.

test_configuration: Using the outcome of the analysis, the GroundWork server should do a dry run of configuring the GDMA client in Monarch, taking account of the current content of host and service profiles, service definitions, and so forth. Problems will be logged on the server side, but any changes made to the database for test purposes will be rolled back and not committed.

do_configuration: The GroundWork server should attempt to put all auto-discovery results into production, by both changing the Monarch database as needed and building externals for this GDMA client.

The ignore_instructions, fetch_instructions, and do_discovery choices for last_step are mostly for early development testing. The server has no notion of when the trigger file is downloaded or when the GDMA client takes any subsequent action, so it has no idea that anything should be done with the trigger file. If the trigger file stays around, the GDMA client will find it again on its next polling cycle and interpret it all over again. So it is up to the administrator to delete the trigger file on the server once testing is done. That can be done manually, by customer scripting, or with the autosetup tool.

For send_results or any of the later choices for the last_step, the last action on the server will be to remove the trigger file, after the other actions described above for that last_step are complete. This means that any given trigger file effectively runs a one-shot action. If the administrator wants to run another cycle of auto-discovery, another trigger file will need to be created and moved into place. That can be easily done with the autosetup tool.

As a practical matter, if you're going to do a dry run for most diagnostic purposes, you may as well set last_step to do_analysis or test_configuration. This both makes the trigger deletion automatic, and puts some data back on the server where it will be easier to access.

The most common settings for last_step are:

# Use this for a full dry run.
last_step = "test_configuration"

# Use this for a full production run.
last_step = "do_configuration"
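Putting the directives above together, a complete trigger file for a full dry run might look like this (all directive names and values are as documented above):

```
# Trigger a full dry-run pass of auto-discovery.
if_duplicate         = "optimize"
soft_error_reporting = "ignore"
change_policy        = "non_destructive"
last_step            = "test_configuration"
```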

Server-side Configuration Options

The config/register_agent_by_discovery.conf file on the GroundWork server contains a number of directives that control the handling of auto-discovery results. We list them only briefly here. See the config file for more information.

enable_processing = yes
debug_level = 1
max_logfile_size = 10000000
max_logfiles_to_retain = 5
instructions_directory = "/usr/local/groundwork/apache2/htdocs/gdma_instructions"
trigger_directory = "/usr/local/groundwork/apache2/htdocs/gdma_trigger"
results_directory = "/usr/local/groundwork/gdma/discovered"
max_input_size = 1000000 
default_change_policy = "non_destructive"
hostname_qualification = "full"
default_host_profile = "gdma-{HOST_OS}-host"
default_hostgroup = "Auto-Registration"
assign_hostgroups_to_existing_hostgroup_hosts = false
default_monarch_group = "auto-registration"
assign_monarch_groups_to_existing_group_hosts = false
customer_network_package = "AutoRegistration"
compare_to_foundation_hosts = false
match_case_insensitive_foundation_hosts = false
force_hostname_case = "lower"
force_domainname_case = "lower"
use_hostname_as_key = false
use_mac_as_key = false
host_address_selection = " ::/0"
rest_api_requestor = "agent auto-setup"
ws_client_config_file = "/usr/local/groundwork/config/"
GW_RAPID_log_level = "WARN"
log4perl_config = ...

Related Resources