About streaming data
Zenoss Cloud provides a variety of methods for gathering model, metric, and event data without a Collection Zone. Each method uses the data receiver resources of the Zenoss API to update Zenoss Cloud. Updates are either a single transaction or a regular flow of data (hence the name streaming data).
Streaming data methods are grouped into three types:
A connector gathers data from public cloud applications and requires only minimal configuration.
The Zenoss Collector framework gathers data through agents or a variety of plugins, including ZenPack Adapter. ZenPack Adapter provides the services that ZenPacks require—without a Collection Zone.
An independent agent sends data either directly to Zenoss Cloud or indirectly, through a Zenoss Collector proxy. Agents are distributed as Docker images or platform-native packages.
In addition, you can create customized applications from scratch with data receiver resources or with existing Zenoss libraries, such as the OpenCensus library for Go. All your streaming data flows can be customized with the policy service, a declarations-driven data and operations management interface.
Zenoss Cloud provides ready-made connectors for public cloud applications. To enable monitoring, you need only configure a few fields—no agent installation is required. Supported connectors:
The Zenoss Collector framework supports both agent-based and agentless collection of metric, model, and event data. The framework is a small, pre-configured Kubernetes deployment that runs under MicroK8s on a host in your environment. Zenoss Collector communicates with Zenoss Cloud using a standard HTTPS connection.
Agentless collection is supported with ZenPack Adapter, which enables the use of Zenoss ZenPacks without a Collection Zone.
Zenoss Collector is in alpha release and might not be available in your environment. For more information, please contact your Zenoss representative.
Zenoss provides the following purpose-built clients, which stream data to Zenoss Cloud through a standard HTTPS connection:
Agents are distributed either as Docker images or as platform-native packages.
The Zenoss agent for Kubernetes is a monitoring daemon that collects key cluster metrics from kube-apiserver and streams them directly to Zenoss Cloud. The agent sends both metric and model data for pods, containers, nodes, namespaces, and the cluster itself. The metric data can be viewed in dashboards or in Smart View; a Kubernetes dashboard template is available. The model data includes dependency relationships, which enable Smart View analyses of related cluster entities.
The Zenoss agent for Kubernetes supports Kubernetes 1.10 through 1.16 and requires metrics-server. You add the agent to a cluster with a deployment, and Kubernetes schedules the agent in its own pod on one node. RAM consumption scales with cluster size; in small clusters, the agent consumes approximately 15MB. Rather than polling on an interval, the agent uses incremental change notifications to collect data, so it receives updates whenever a monitored property changes. The agent sends a batch of metric data to Zenoss Cloud every 60 seconds and sends a batch of model data whenever the model changes. The source code of the agent is public, and an image containing its binary is available on Docker Hub.
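A deployment for the agent might look like the following minimal sketch. The image name, environment variable names, and secret reference are assumptions for illustration; the agent's public repository documents the exact manifest.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zenoss-agent
spec:
  replicas: 1                  # the agent runs in a single pod
  selector:
    matchLabels:
      app: zenoss-agent
  template:
    metadata:
      labels:
        app: zenoss-agent
    spec:
      containers:
        - name: zenoss-agent
          image: zenoss/zenoss-agent-kubernetes   # assumed Docker Hub name
          env:
            - name: CLUSTER_NAME       # illustrative variable names; see
              value: my-cluster        # the agent's repository for the
            - name: ZENOSS_API_KEY     # exact configuration keys
              valueFrom:
                secretKeyRef:
                  name: zenoss
                  key: api-key
```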
The StatsD agent is a Zenoss backend for the Atlassian gostatsd server. The Zenoss backend takes advantage of the tags feature of gostatsd to add information to incoming metric data. Primarily, tags uniquely identify the entity supplying metric data and provide important metadata about the metrics or entity. Optionally, tags can associate an entity with other entities that affect it (model data). Some tags can be applied to all entities by configuring gostatsd, while other tags must be applied to specific entities by modifying application code. The modifications are lightweight, and Zenoss provides examples for the most popular languages.
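For example, a client can attach tags by appending Datadog-style `|#key:value` pairs to the StatsD wire format, an extension that gostatsd parses. The following Go sketch builds such a datagram; the tag key is illustrative, not a Zenoss-defined name.

```go
package main

import (
	"fmt"
	"strings"
)

// buildStatsdLine formats a gauge in the StatsD wire format with
// Datadog-style tags (key:value pairs after "|#"), which gostatsd accepts.
func buildStatsdLine(name string, value float64, tags map[string]string) string {
	var pairs []string
	for k, v := range tags {
		pairs = append(pairs, k+":"+v)
	}
	line := fmt.Sprintf("%s:%g|g", name, value)
	if len(pairs) > 0 {
		line += "|#" + strings.Join(pairs, ",")
	}
	return line
}

func main() {
	// In a real client this string would be written to gostatsd's UDP port.
	fmt.Println(buildStatsdLine("app.queue.depth", 12, map[string]string{
		"entity": "orders-service", // illustrative tag key
	}))
}
```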
The policy service provides centralized data and operations management through declarations rather than code.
Transform incoming data with ingest policies.
Separate metadata from data in incoming metric data streams to create distinct entities and metrics, without writing custom ETL scripts. You can also add metadata to event data streams, to categorize events or to facilitate action service processing.
Identify entity types with entity policies.
Provide a tag for instances of an entity type, or standardize metadata from different sources so that, for example, all Kubernetes clusters have the same tags and can be easily found.
Manage anomaly detection with anomaly policies.
Add metrics to, or remove metrics from, the list of metrics for which the anomaly detection service trains models.
A datasource associates an incoming stream of data with one or more policies. Policies are not used unless they are included in a datasource.
When to customize
You do not need to customize any of the default policies that Zenoss provides to monitor your environment. The defaults have been tested extensively in production and work as designed.
You may wish to customize an anomaly policy to add a metric to the anomaly detection service. You are welcome to do so at any time.
If you wish to integrate an application or add a new, custom datasource, please contact Zenoss Support to arrange assistance creating a custom ingest or entity policy.