LogicMonitor.DataSDK - the Java library for the LogicMonitor Metrics ingest API

LogicMonitor is a SaaS-based performance monitoring platform that provides full visibility into complex, hybrid infrastructures, offering granular performance monitoring and actionable data and insights. Metrics-Ingest provides the entry point, in the form of public REST APIs, for ingesting metrics into LogicMonitor. To use this library, users must create an LMAuth token to obtain an access ID and key, or create a Bearer token, from santaba.

  • SDK version: 0.0.1-alpha

Metrics Ingestion Example

The SDK must be configured with the LogicMonitor.DataSDK Configuration class. When using LMv1 authentication, set the LM_ACCESS_ID and LM_ACCESS_KEY properties; when using Bearer token authentication, set the LM_BEARER_TOKEN property. The company (account) name must be passed to the LM_COMPANY property. All properties can be set using environment variables.

For metrics ingestion, the user must create objects of Resource, DataSource, DataSourceInstance, and DataPoint using the LogicMonitor.DataSDK.Model namespace. A Map should also be created in which the key holds the time (in epoch seconds) at which the data is emitted and the value holds the datapoint value.

//Pass authentication variables as environment variables.
Configuration conf = new Configuration();
final boolean batch = false;
Metrics metrics = new Metrics(conf, 10, batch, responseInterface);

Resource resource = Resource.builder().ids(resourceIds).name(resourceName).build();
DataSource dataSource = DataSource.builder().name(dataSourceName).group(dataSourceGroup).singleInstanceDS(false).build();
DataSourceInstance dataSourceInstance = DataSourceInstance.builder().name(instanceName).build();
DataPoint dataPoint = DataPoint.builder().name(cpuUsage).build();
Map<String, String> cpuUsageValue = new HashMap<>();

// Key the value by the epoch-seconds timestamp at which it was observed.
cpuUsageValue.put(String.valueOf(Instant.now().getEpochSecond()), cpuUsageMetric);
metrics.sendMetrics(resource, dataSource, dataSourceInstance, dataPoint, cpuUsageValue);

During metrics ingestion, requests can be sent either individually or in batches, where a batch bundles multiple requests into a single API call. The boolean variable "batch" determines whether requests are batched or sent individually; by default, batch is set to true.
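
For example, the batch flag is passed when constructing the Metrics client. The sketch below reuses the conf and responseInterface objects from the example above and keeps the interval value 10 as-is:

// Batched ingestion: requests are queued and flushed together in a single API call.
Metrics batchedMetrics = new Metrics(conf, 10, true, responseInterface);

// Single-request ingestion: each sendMetrics call results in its own API call.
Metrics singleRequestMetrics = new Metrics(conf, 10, false, responseInterface);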

We also have gzip functionality, where the data is sent in compressed form. Gzip compression speeds up transmission over the network and is used to increase data throughput. The boolean variable "gzip" can be set to true or false; by default, gzip is set to true.
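
The SDK applies compression internally; purely as an illustration of what gzip does to a payload, the sketch below compresses a JSON string with the standard java.util.zip.GZIPOutputStream (this is not SDK code):

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

public class GzipIllustration {
    // Compresses a JSON payload the same way a gzip-encoded request body would be.
    static byte[] gzip(String payloadJson) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(out)) {
            gz.write(payloadJson.getBytes(StandardCharsets.UTF_8));
        }
        return out.toByteArray(); // typically far smaller than the raw UTF-8 bytes
    }
}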

We have also implemented rate limiting in the Data-SDK. The integer variable "requestPerMin" sets the maximum number of requests that can be made per minute; by default, it is set to 100. This helps avoid data loss caused by new requests above the maximum limit. This is time-based rate limiting.
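
Conceptually, time-based rate limiting keeps a per-minute counter and stops accepting requests once requestPerMin is reached. The sketch below illustrates the idea only; it is not the SDK's internal implementation:

public class TimeBasedRateLimiter {
    private final int requestPerMin;
    private long windowStartMillis = System.currentTimeMillis();
    private int requestsInWindow = 0;

    public TimeBasedRateLimiter(int requestPerMin) { // the SDK defaults this to 100
        this.requestPerMin = requestPerMin;
    }

    // Returns true if the request may proceed within the current one-minute window.
    public synchronized boolean tryAcquire() {
        long now = System.currentTimeMillis();
        if (now - windowStartMillis >= 60_000) {
            windowStartMillis = now;   // start a new one-minute window
            requestsInWindow = 0;
        }
        if (requestsInWindow < requestPerMin) {
            requestsInWindow++;
            return true;
        }
        return false;                  // over the limit; the request is not sent
    }
}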

There is also a size-based rate limiting feature that limits the payload size: 104858 bytes is the payload limit with compression and 1048576 bytes is the payload limit without compression. A maximum of 100 instances is allowed per request.
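
As an illustration, a payload-size check against the limits quoted above might look like the sketch below; the SDK performs this check internally:

public class PayloadLimits {
    static final int COMPRESSED_LIMIT_BYTES = 104858;    // payload limit with gzip compression
    static final int UNCOMPRESSED_LIMIT_BYTES = 1048576; // payload limit without compression

    // Returns true when the serialized payload fits within the applicable limit.
    static boolean withinLimit(byte[] payload, boolean gzip) {
        int limit = gzip ? COMPRESSED_LIMIT_BYTES : UNCOMPRESSED_LIMIT_BYTES;
        return payload.length <= limit;
    }
}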

Read below to learn more about the models in the SDK.

Model

  • Resource

Resource resource = new Resource(ids, name, description, properties, create);

Ids(Map<String, String>):
A map of existing resource properties that will be used to identify the resource. See Managing Resources that Ingest Push Metrics for information on the types of properties that can be used. If no resource is matched and the create parameter is set to TRUE, a new resource is created with these specified resource IDs set on it. If the system.displayname and/or system.hostname property is included as resource IDs, they will be used as host name and display name respectively in the resulting resource.

Name(String):
Resource unique name. Only considered when creating a new resource.

Properties(Map<String, String>):
New properties for the resource. Updates to existing resource properties are not considered. Depending on the property name, these properties will be converted into system, auto, or custom properties.

Description(String):
Resource description. Only considered when creating a new resource.

Create(boolean):
Whether the resource should be created if no existing resource is matched.
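
For example, the constructor shown above can be filled in as follows (the identifier and property values are illustrative):

Map<String, String> ids = new HashMap<>();
ids.put("system.displayname", "my-host-01");

Map<String, String> properties = new HashMap<>();
properties.put("location", "us-west-1");

// create = true: create the resource if no existing resource matches the ids.
Resource resource = new Resource(ids, "my-host-01", "Host pushing CPU metrics", properties, true);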

  • DataSource

DataSource dataSource = new DataSource(dataSourceName, dataSourceGroup, displayName, id);

Name(String):
DataSource unique name. Used to match an existing DataSource. If no existing DataSource matches the name provided here, a new DataSource is created with this name.

DisplayName(String):
DataSource display name. Only considered when creating a new DataSource.

Group(String):
DataSource group name. Only considered when DataSource does not already belong to a group. Used to organize the DataSource within a DataSource group. If no existing DataSource group matches, a new group is created with this name and the DataSource is organized under the new group.

Id(int):
DataSource unique ID. Used only to match an existing DataSource. If no existing DataSource matches the provided ID, an error results.
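
For example (values are illustrative; the id argument is assumed to matter only when matching an existing DataSource by ID and is passed as 0 here):

// Matches or creates a DataSource named "CPU" in the "Performance" group.
DataSource dataSource = new DataSource("CPU", "Performance", "CPU Usage", 0);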

  • DataSourceInstance

DataSourceInstance dataSourceInstance = new DataSourceInstance(name, displayName, description, properties);

Name(String):
Instance name. If no existing instance matches, a new instance is created with this name.

DisplayName(String):
Instance display name. Only considered when creating a new instance.

Properties(Map<String, String>):
New properties for the instance. Updates to existing instance properties are not considered. Depending on the property name, these properties will be converted into system, auto, or custom properties.

Description(String):
Instance description. Only considered when creating a new instance.
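
For example, using the constructor shown above (values are illustrative):

Map<String, String> instanceProperties = new HashMap<>();
instanceProperties.put("core", "0");

// Matches or creates an instance named "cpu-core-0" under the DataSource.
DataSourceInstance dataSourceInstance = new DataSourceInstance("cpu-core-0", "CPU Core 0", "Per-core CPU usage instance", instanceProperties);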

  • DataPoint

DataPoint dataPoint = new DataPoint(name, description, aggregationType, type);

Name(String):
Datapoint name. If no existing datapoint matches for specified DataSource, a new datapoint is created with this name.

AggregationType(String):
The aggregation method, if any, that should be used if data is pushed in sub-minute intervals. Allowed options are “sum”, “average”, “percentile” and “none” (default), where “none” takes the last value for that minute. Only considered when creating a new datapoint. See the About the Push Metrics REST API section of this guide for more information on datapoint value aggregation intervals.

Description(String):
Datapoint description. Only considered when creating a new datapoint.

Type(String):
Metric type, in string format. Allowed options are “gauge” (default) and “counter”. Only considered when creating a new datapoint.
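
For example, following the constructor parameter order shown above (values are illustrative):

// "average" aggregates sub-minute pushes; "gauge" treats each value as an absolute reading.
DataPoint dataPoint = new DataPoint("cpu_usage", "Percent CPU in use", "average", "gauge");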

  • Value

Map<String, String> value = new HashMap<>();

Value is a map that stores the time of data emission (in epoch seconds) as the key and the metric value as the value.
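
For example, reusing the objects built in the ingestion example above (the metric value 37.5 is illustrative):

Map<String, String> values = new HashMap<>();
// Key: epoch-seconds timestamp of emission; value: the metric reading as a string.
values.put(String.valueOf(Instant.now().getEpochSecond()), "37.5");
metrics.sendMetrics(resource, dataSource, dataSourceInstance, dataPoint, values);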