PreEmptive Analytics Workbench User Guide

Transformation Configuration

Transformations manipulate the data from Queries or other Transformations to change how the data is displayed in the Portal.

The Transformation part of the pipeline is heavily based on a similar section of Vega, a third-party visualization grammar. However, not all parts of Vega are supported; please read the sections below carefully.

Transformation configuration files have the extension .transformation.

Common Syntax

The syntax for specifying a transformation:

{
    "datasets": [
        { 
            "name": "data_from_another_component",
            "input": "namespace:filename.extension>dataset",
            "transform": [{ ... }]
        },
        {
            "name": "data_from_this_transformation",
            "source": "data_inline",
            "transform": [{ ... }]
        },
        { 
            "name": "data_inline", 
            "values": [12, 23, 47, 6, 52, 19],
            "transform": [{ ... }]
        }
    ]
}

Each object within the datasets array describes a Transformation Dataset, which will be available to other Transformations and Widgets. The input to a Transformation Dataset can be any of:

  • a Query Dataset
  • a Transformation Dataset from another Transformation
  • a Transformation Dataset from the same Transformation
  • a hardcoded array of in-line data

The Dataset definition syntax is similar to Vega's Data Properties syntax with some changes:

  • name (string): the name of this Transformation Dataset.
  • a data source, specified as one of:
    • input (string): a Dataset from another component (a Query or another Transformation), specified by the Dataset referencing syntax.
    • source (string): the name of another Dataset within this Transformation.
    • values (array<object>): a hardcoded array of data.
  • transform: an array of transform operations to perform on the data set. See below for details.

Recall that before any transforms are executed, the original values of the data source are stored in a sub-object called data on each element. This applies to all three data source types, including other Transformation Datasets.

Note also that each Transformation requires at least one Dataset that takes an input data source.
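
To make the wrapping behavior concrete, here is a rough JavaScript sketch (not Workbench code; the helper name wrapRows is ours). Each original row from the data source ends up under a data sub-object on the new element:

```javascript
// Sketch: how an input Dataset's rows are conceptually wrapped before any
// transforms run. Each original row is stored under a "data" sub-object.
function wrapRows(rows) {
  return rows.map(function (row) {
    return { data: row };
  });
}

var wrapped = wrapRows([{ Time: 1416873600000, Count: 51 }]);
// wrapped[0].data.Count is 51
```

This is why transform expressions in the rest of this page refer to original field values with a data. prefix (e.g., d.data.Count).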

Available Transforms

Transformation Datasets are built using the Data Manipulation Transforms specified by Vega. Each is expressed in the following form:

{
    "type": "transform-name",
    "transform-specific-properties": ...
}

Not all transforms from Vega are supported. Transforms which are fully supported are:

  • aggregate**
  • copy
  • facet*
  • filter
  • formula*
  • log**
  • sort
  • stats
  • transpose**
  • zip

* these transforms have been modified from their original Vega forms.
** these transforms are introduced by the Workbench and not part of the original Vega framework.

We advise using the log transform when developing and debugging Transformations, then removing this logging in production.

See below for details on each of the supported transforms.

Aggregate (new)

Perform a stats transform on multiple fields.

Properties:

  • type: aggregate
  • fields (array<string>): field names for which to calculate stats
  • as (array<string>): corresponding new field names to be created, to store the calculations (must be same length as fields)

The result of the aggregate transform is the same as with the stats transform, except the various calculation results are found within sub-objects whose names are determined by the as property.

Example

preemptive:key-stats.transformation uses this transform to calculate statistics on various fields of daily session information. The transform is defined:

{
    "type":"aggregate",
    "fields":[
        "data.StopCount",
        "data.StartCount",
        "data.CompleteCount",
        "data.Count",
        "data.MinLength",
        "data.MaxLength",
        "data.NewUsers",
        "data.ReturningUsers",
        "data.UniqueUsers"
    ],
    "as":[
        "StopCount",
        "StartCount",
        "CompleteCount",
        "Count",
        "MinLength",
        "MaxLength",
        "NewUsers",
        "ReturningUsers",
        "UniqueUsers"
    ]
},

The Dataset is derived from sessions-by-day.query, so before it is modified by this transform it looks, in part, like this:

[
    {
        "data": {
            "Time": 1416873600000,
            "Count": 51,
            "UniqueUsers": 2,
            ...
        }
    },
    {
        "data": {
            "Time": 1418601600000,
            "Count": 22,
            "UniqueUsers": 1,
            ...
        }
    },
    {
        "data": {
            "Time": 1418688000000,
            "Count": 1,
            "UniqueUsers": 1,
            ...
        }
    }
]

After the aggregate transform calculates values for each group, the Dataset consists of a single element:

[
    {
        "Count": {
            "sum": 74,
            "max": 51,
            "mean": 24.666666666666664,
            "min": 1,
            ...
        },
        "UniqueUsers": {
            "sum": 4,
            "max": 2,
            "mean": 1.3333333333333333,
            "min": 1,
            ...
        },
        ...
    }
]

Statistical information has been calculated for the specified fields from the original Dataset and stored in corresponding objects in the resulting Dataset. Because our input Dataset was not faceted, the resulting Dataset only has one element, which applies to the entirety of the input Dataset. See the example for the stats transform for a case where the input Dataset is faceted, and how this affects the output.
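
The shape of this output can be sketched in plain JavaScript (aggregate here is our simplified stand-in, computing only a subset of the statistics and assuming a non-faceted input):

```javascript
// Sketch of the aggregate transform's effect: for each input field, compute
// statistics (as the stats transform would) and store them under the
// corresponding name from "as".
function aggregate(elements, fields, as) {
  var result = {};
  fields.forEach(function (field, i) {
    var values = elements.map(function (el) {
      // walk a dotted path like "data.Count"
      return field.split(".").reduce(function (o, k) { return o[k]; }, el);
    });
    var sum = values.reduce(function (a, b) { return a + b; }, 0);
    result[as[i]] = {
      count: values.length,
      sum: sum,
      min: Math.min.apply(null, values),
      max: Math.max.apply(null, values),
      mean: sum / values.length
    };
  });
  return [result]; // a non-faceted input yields a single element
}
```

Running this over the three sample elements above reproduces the sum of 74 and max of 51 shown for Count.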

Copy

Copy one or more values from a sub-object to the top-level object of each element in the array.

Properties:

  • type: copy
  • from (string): path to the sub-object to copy from
  • fields (array<string>): an array of field names whose values should be copied
  • as (optional, array<string>): an array of corresponding new field names to be created on the top-level object (must be same length as fields)
    • Defaults to the value of fields.

The result is the same as before, but with the values of each specified field (fields) copied to the top-level object under their new names (as).

Example

preemptive:sessions-by-hour-of-day.transformation copies a subset of values from the data sub-object to its parent with the following transform:

{
    "type":"copy",
    "from":["data"],
    "fields":["Time","StartCount","StopCount","Count"],
    "as":["Time","StartCount","StopCount","Count"]
}

For instance, if the initial Dataset looks like the following:

[
    {
        "data":
        {
            "Time": 20,
            "StartCount": 2,
            "StopCount": 1,
            "Count": 1
        }
    }
]

The transform will copy those fields to the parent object:

[
    {
        "Time": 20,
        "StartCount": 2,
        "StopCount": 1,
        "Count": 1,
        "data":
        {
            "Time": 20,
            "StartCount": 2,
            "StopCount": 1,
            "Count": 1
        }
    }
]
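
The copy behavior can be sketched in plain JavaScript (copyTransform is our illustrative name, and this simplified version assumes from names a single sub-object):

```javascript
// Sketch of the copy transform: copy selected fields from a sub-object
// ("from") up to the top level of each element, under the names in "as".
function copyTransform(elements, from, fields, as) {
  as = as || fields; // "as" defaults to the value of "fields"
  return elements.map(function (el) {
    var out = Object.assign({}, el);
    fields.forEach(function (field, i) {
      out[as[i]] = el[from][field];
    });
    return out;
  });
}

var copied = copyTransform(
  [{ data: { Time: 20, Count: 1 } }],
  "data",
  ["Time", "Count"]
);
// copied[0].Time is 20, and copied[0].data is left intact
```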

Facet (modified)

Group data into "facets", each having a unique set of values for specified fields. This is similar to the "group by" operation in SQL.

Properties:

  • type: facet
  • keys (array<string>): an array of field names that will be used to organize the data
  • as (optional, array<string>, new to Workbench): corresponding new field names to be created on the root of the facet for each key (must be same length as keys)
    • Defaults to not copying keys to separate fields.
  • sort (optional, array<string>): sort criteria for elements within each facet (follows same rules as sort transform)
    • Defaults to no guaranteed sort order.

The result is a Vega "facet": a hierarchical collection of arrays with the root of each array specifying the unique keys all of its elements share. This data format can be used with other transforms, most of which will operate on each facet's values separately.

Example

preemptive:version-adoption.transformation is given a set of session data, where each entry has a unique application and day. However, we want to organize this data primarily by time, and have sub-entries for each application (so it can be displayed in one graph in the preemptive:version-adoption.widget). This is done with the following facet transform:

{
    "type" : "facet",
    "keys" : ["data.Time"],
    "as" : ["Time"]
}

For instance, the initial data might look like this, where v1.0 of the application has data for two days, but v2.0 only has data for the second day:

[
    {
        "data":
        {
            "AppId_Version": { "format": "Application v1.0", ... },
            "Time": 1412553600000,
            "Count": 2
        }
    },
    {
        "data":
        {
            "AppId_Version": { "format": "Application v1.0", ... },
            "Time": 1412640000000,
            "Count": 3
        }
    },
    {
        "data":
        {
            "AppId_Version": { "format": "Application v2.0", ... },
            "Time": 1412640000000,
            "Count": 1
        }
    }
]

After the facet transform, this data would be grouped into two facets, one for each day:

{
    "key": "",
    "values" :
    [
        {
            "key" : "1412553600000",
            "Time" : 1412553600000,
            "values" :
            [
                {
                    "data":
                    {
                        "AppId_Version": { "format": "Application v1.0", ... },
                        "Time": 1412553600000,
                        "Count": 2
                    }
                }
            ]
        },
        {
            "key" : "1412640000000",
            "Time" : 1412640000000,
            "values" :
            [
                {
                    "data" :
                    {
                        "AppId_Version" : { "format" : "Application v1.0", ... },
                        "Time" : 1412640000000,
                        "Count" : 3
                    }
                },
                {
                    "data" :
                    {
                        "AppId_Version" : { "format" : "Application v2.0", ... },
                        "Time" : 1412640000000,
                        "Count" : 1
                    }
                }
            ]
        }
    ]
}

Notice that the data.Time value that was unique among all entries in each facet is now also copied to the root of the facet under the name Time, because it was specified in the as property.
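
The grouping behavior can be sketched in plain JavaScript (facet here is our simplified stand-in, handling a single key and a single as name):

```javascript
// Sketch of the facet transform: group elements by the value of a key field,
// producing Vega-style { key, values } groups under a root facet object.
function facet(elements, keyField, asField) {
  function get(el) {
    return keyField.split(".").reduce(function (o, k) { return o[k]; }, el);
  }
  var groups = {};
  var order = []; // preserve first-seen order of keys
  elements.forEach(function (el) {
    var key = String(get(el));
    if (!groups[key]) {
      groups[key] = { key: key, values: [] };
      if (asField) groups[key][asField] = get(el); // copy key to the root
      order.push(key);
    }
    groups[key].values.push(el);
  });
  return { key: "", values: order.map(function (k) { return groups[k]; }) };
}
```

Running this over the three sample elements above yields two facets, with the second facet holding both applications' entries for the second day.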

Filter

Remove elements that do not pass a predicate.

Properties:

  • type: filter
  • test (string): a string containing a JavaScript expression that evaluates to a boolean value, indicating whether an element should be retained or not
    • The element that is being evaluated is stored in the variable d.
    • All methods and constants of the JavaScript Math object are supported without needing to specify that prefix.

Note: This transform is not to be confused with the concept of a user applying "filters" (such as to only show a certain time range), which are applied on a per-Report level and automatically passed to Queries as they run. That filtering applies on the server; this transform operates on data after it has been returned by the server to the Portal, and cannot be disabled by the user of the Portal.
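
The documented expression environment (the current element available as d, Math members available without the prefix) can be sketched like this. How the Workbench actually compiles test strings is an internal detail; compilePredicate is our illustrative assumption:

```javascript
// Sketch: compile a filter "test" string into a predicate of d.
// The "with (Math)" wrapper exposes Math members (floor, abs, PI, ...)
// without the "Math." prefix, as the documentation describes.
function compilePredicate(test) {
  return new Function("d", "with (Math) { return (" + test + "); }");
}

var keep = compilePredicate("d.data.Count > floor(2.9)");
// keep({ data: { Count: 3 } }) is true; keep({ data: { Count: 2 } }) is false
```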

Example

Let's say we want to have a table that displays String-based custom data information, but only for data that was originally delivered from the Feature Tick named DialogClosed. We construct a Dataset that takes preemptive:custom-data-string-summary.query as an input, and apply the transform:

{
    "type": "filter",
    "test": "d.data.EventCode === \"Feature.Tick\" && d.data.Source === \"DialogClosed\""
}

Note a few things about our testing expression:

  • The fields that were originally stored by the input Query are referred to with the prefix d.data.: the d because we are checking the current element, and data because the input Dataset was copied into that sub-object when the Transformation Dataset was created.
  • The operators === and && carry their meanings from JavaScript (strict-equality and logical-AND, respectively).
  • Because we are defining this expression as a string, we escape our quotes that are used within the expression: i.e., \".

When this filter is applied to the following incoming data:

[
    {
        "data":
        {
            "EventCode": "Feature.Tick",
            "Source": "DialogClosed",
            "Key": "ClosedWithShortcutKey",
            "Value": "false",
            "Count": 3
        }
    },
    {
        "data":
        {
            "EventCode": "Feature.Tick",
            "Source": "DialogClosed",
            "Key": "UserSaved",
            "Value": "true",
            "Count": 5
        }
    },
    {
        "data":
        {
            "EventCode": "Feature.Tick",
            "Source": "Processing",
            ...
        }
    }
]

It is reduced to the following:

[
    {
        "data":
        {
            "EventCode": "Feature.Tick",
            "Source": "DialogClosed",
            "Key": "ClosedWithShortcutKey",
            "Value": "false",
            "Count": 3
        }
    },
    {
        "data":
        {
            "EventCode": "Feature.Tick",
            "Source": "DialogClosed",
            "Key": "UserSaved",
            "Value": "true",
            "Count": 5
        }
    }
]

Note that a single DialogClosed event can contain multiple Custom Data key-value pairs, but any associations among the pairs sent on a single event are not tracked by the server by default; therefore, the Portal cannot determine whether, for example, the 3 ClosedWithShortcutKey = false reports overlapped with any of the UserSaved = true reports. To track this, you will need to modify the data-processing pipeline, probably by using a Custom Data Filter, then make Portal components to display that information.

Formula (modified)

Create a new field on each element.

Properties:

  • type: formula
  • field (string): the name of the new field to be created
  • expr (string): a string containing a JavaScript expression that evaluates to the value that should be stored in the new field
    • The element that is being modified is stored in the variable d.
    • All methods and constants of the JavaScript Math object are supported without needing to specify that prefix.
  • data_type (optional, string, new to Workbench): sets the field's metadata type, which is used to display the data (see data_types)
    • Defaults to no type.
  • as (optional, string, new to Workbench): sets the field's metadata as, which determines the field's display name
    • Defaults to the value of field.

Example

preemptive:sessions-by-os.transformation is given session data, aggregated by operating system (OS). Each element contains the total length of all sessions, as well as the number of sessions, that ran on a particular OS. It is more useful to Portal users to see the average session length, rather than the total length, so this Transformation adds a new field for that:

{
    "type" : "formula",
    "field" : "AvgLength",
    "expr" : "d.data.TotalLength/d.data.Count",
    "data_type" : "timelength"
}

Note that the as property is not defined, so the Widget is responsible for providing a suitable display name when using this new field (or else the field's internal name, AvgLength, will be displayed).

When this transform is applied to the following incoming data:

[
    {
        "data":
        {
            "OS": "Windows 8.1",
            "TotalLength": 1230000,
            "Count": 3
        }
    },
    {
        "data":
        {
            "OS": "Windows 8",
            "TotalLength": 7008000,
            "Count": 8
        }
    }
]

It creates the field AvgLength on each element:

[
    {
        "AvgLength": 410000,
        "data":
        {
            "OS": "Windows 8.1",
            "TotalLength": 1230000,
            "Count": 3
        }
    },
    {
        "AvgLength": 876000,
        "data":
        {
            "OS": "Windows 8",
            "TotalLength": 7008000,
            "Count": 8
        }
    }
]
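
The effect of the formula transform can be approximated in plain JavaScript (formula and the inline expression function are our illustrative stand-ins; the Workbench itself compiles the expr string):

```javascript
// Sketch of the formula transform's effect: evaluate an expression for each
// element and store the result in a new top-level field.
function formula(elements, field, exprFn) {
  return elements.map(function (el) {
    var out = Object.assign({}, el);
    out[field] = exprFn(el);
    return out;
  });
}

var withAvg = formula(
  [{ data: { TotalLength: 1230000, Count: 3 } }],
  "AvgLength",
  function (d) { return d.data.TotalLength / d.data.Count; }
);
// withAvg[0].AvgLength is 410000, matching the sample output above
```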

Log (new)

Write the current contents of the Dataset to the browser's JavaScript console. This is useful when learning to use the transforms and when debugging.

Properties:

  • type: log
  • label (string): a label that will be written along with the Dataset - useful if multiple log transforms are active at once
  • meta (boolean): whether to log the metadata object, which contains information about fields (e.g., the full name)
    • Defaults to false.

This does not affect the Dataset.

Example

A simple example of using logging to determine how the facet transform works:

{
  "datasets": [
    {
      "name": "entities",
      "input": "my.query",
      "transform": [
        { "type": "log", "label": "init" },
        {
            "type" : "facet",
            "keys" : ["data.Time"],
            "as" : ["Time"]
        },
        { "type": "log", "label": "faceted" }
      ]
    }
  ]
}

The developer can view the state of the Dataset both before and after the facet transform via the JavaScript console, then remove these log transforms when the component is ready for production use.

Sort

Reorder the elements based on the value of one or more fields.

Properties:

  • type: sort
  • by (string or array<string>): the field or fields to use as sort criteria
    • Sorting is ascending by default; prepend a field name with - to indicate descending order for that field.

Example

preemptive:exceptions-summary.transformation sorts the results of preemptive:exceptions.query in descending order of exception count:

{
    "type": "sort",
    "by": "-data.Count"
}

Exceptions that are encountered more frequently will be placed at the beginning of the Dataset. Unless a Widget overrides this sorting, the data will be displayed in this order, though some Widgets (such as the table) may allow users to change the sorting themselves.
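
The sort behavior can be sketched in plain JavaScript (sortBy is our illustrative name):

```javascript
// Sketch of the sort transform: a leading "-" on a field name means
// descending order for that field; later criteria break ties.
function sortBy(elements, criteria) {
  var specs = criteria.map(function (c) {
    return c.charAt(0) === "-"
      ? { path: c.slice(1), dir: -1 }
      : { path: c, dir: 1 };
  });
  function get(el, path) {
    return path.split(".").reduce(function (o, k) { return o[k]; }, el);
  }
  return elements.slice().sort(function (a, b) {
    for (var i = 0; i < specs.length; i++) {
      var va = get(a, specs[i].path), vb = get(b, specs[i].path);
      if (va < vb) return -specs[i].dir;
      if (va > vb) return specs[i].dir;
    }
    return 0;
  });
}

var sorted = sortBy(
  [{ data: { Count: 1 } }, { data: { Count: 51 } }, { data: { Count: 22 } }],
  ["-data.Count"]
);
// sorted[0].data.Count is 51: the most frequent exception comes first
```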

Stats

Calculate statistics for a single quantitative field. The statistics calculated are: count, min, max, sum, mean, variance, stdev, and optionally median. Use the aggregate transform to perform this action on multiple fields.

Properties:

  • type: stats
  • value (string): the field name for which to calculate stats
  • median (optional, boolean): whether to calculate the median
    • Defaults to false.

If the Dataset is in a faceted (grouped) form, the result will be an array, with each element corresponding to a facet and having the facet's key, values, and the calculated statistics as fields on that element.

If the Dataset is not faceted, the result will be an array with a single object, with fields corresponding to all of the data in the Dataset before this transform. The original values will be lost.
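
The statistics themselves can be sketched in plain JavaScript. Note one inference on our part: the stdev values in the example below (e.g., 2.8284... for the counts 5 and 1) imply sample variance (dividing by n - 1), so that is what this sketch uses:

```javascript
// Sketch of the statistics computed for a single field's values
// (median omitted; it is optional in the real transform).
function stats(values) {
  var sum = values.reduce(function (a, b) { return a + b; }, 0);
  var mean = sum / values.length;
  // sample variance (n - 1), inferred from the example output below
  var variance = values.reduce(function (a, v) {
    return a + (v - mean) * (v - mean);
  }, 0) / (values.length - 1);
  return {
    count: values.length,
    min: Math.min.apply(null, values),
    max: Math.max.apply(null, values),
    sum: sum,
    mean: mean,
    variance: variance,
    stdev: Math.sqrt(variance)
  };
}
```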

Example

Unlike the other examples on this page, this one does not come from any of the default preemptive-namespace components; those instead use the aggregate transform to accomplish similar results over multiple fields at once.

Let's say we are tracking a mobile application that makes requests to a web service. The web service works best if each feature is used at a consistent rate between days (i.e., "Feature A" can be used less frequently than "Feature B" overall, but we prefer that "Feature A" is used at a similar frequency on both Monday and Tuesday of a given week).

We start with a modified version of the preemptive:features.query, which now also aggregates on Time by day:

{
  "name": "Summary",
  "domain": "PreEmptive.Features",
  "aggregate_on": [
    {
      "field": "Time",
      "options": "day"
    },
    {
      "field": "FeatureName"
    }
  ],
  "fields": [
    {
      "field": "AppId_Version"
    },
    {
      "field": "Count"
    }
  ]
}

Then, we define a new transformation that takes this new query as an input and uses the following transforms:

{"type": "facet", "keys": ["data.FeatureName"], "as": ["FeatureName"]},
{"type": "stats", "value": "data.Count" }

We first facet the query result by the Feature Name (so we can calculate statistics for each feature separately). This results in something like the following:

{
    "key": "",
    "values" :
    [
        {
            "key" : "Feature A",
            "FeatureName" : "Feature A",
            "values" :
            [
                {
                    "data":
                    {
                        "FeatureName" : "Feature A",
                        "Time" : 1418688000000,
                        "Count": 5
                    }
                },
                {
                    "data":
                    {
                        "FeatureName" : "Feature A",
                        "Time" : 1418601600000,
                        "Count": 1
                    }
                }
            ]
        },
        {
            "key" : "Feature B",
            "FeatureName" : "Feature B",
            "values" :
            [
                {
                    "data":
                    {
                        "FeatureName" : "Feature B",
                        "Time" : 1418688000000,
                        "Count": 2
                    }
                },
                {
                    "data":
                    {
                        "FeatureName" : "Feature B",
                        "Time" : 1418601600000,
                        "Count": 2
                    }
                }
            ]
        }
    ]
}

Then, we perform a stats transform on data.Count, producing metrics for that field, kept separate for each facet (i.e., for each feature):

[
    {
        "key" : "Feature A",
        "FeatureName" : "Feature A",
        "values" : [ ... ],
        "mean": 3,
        "stdev": 2.8284271247461903,
        ...
    },
    {
        "key" : "Feature B",
        "FeatureName" : "Feature B",
        "values" : [ ... ],
        "mean": 2,
        "stdev": 0,
        ...
    }
]

When displayed in a Widget (such as the Table), we can see that Feature A has a higher standard deviation than Feature B - so we can adjust our web service to account for more irregular request volume to Feature A's needed resources.

Transpose (new)

Create a new data object on each input facet, with field names and values based on existing field values. This resembles transposing two columns of data into two rows and treating one row as the field names.

Properties:

  • type: transpose
  • by (string): the field whose value to use as the new field's name
  • as (string): the field whose value to use as the new field's friendly name
  • value (string): the field whose value to use as the new field's value

The result will be an array of facet objects (functionally identical to the hierarchical facet object that was input), with an additional transposed object at the root of each facet containing the transpose results.

Example

preemptive:version-adoption.transformation facets session information by time, then transposes to create an association between application identity and the number of sessions (within each facet). This is needed because the dependent Widget, preemptive:version-adoption.widget, uses time as its x-axis, and multiple y-axis values based on this transposed table. The transform is defined:

{
    "type" : "transpose",
    "by" : "data.AppId_Version.value",
    "as" : "data.AppId_Version.format",
    "value" : "data.Count"
}

Continuing from our example for the facet transform, this transform will produce the following Dataset:

[
    {
        "key" : "1412553600000",
        "Time" : 1412553600000,
        "transposed":
        {
            "Application v1.0": 2
        },
        "values" :
        [
            {
                "data":
                {
                    "AppId_Version": { "format": "Application v1.0", ... },
                    "Time": 1412553600000,
                    "Count": 2
                }
            }
        ]
    },
    {
        "key" : "1412640000000",
        "Time" : 1412640000000,
        "transposed":
        {
            "Application v1.0": 3,
            "Application v2.0": 1
        },
        "values" :
        [
            {
                "data" :
                {
                    "AppId_Version" : { "format" : "Application v1.0", ... },
                    "Time" : 1412640000000,
                    "Count" : 3
                }
            },
            {
                "data" :
                {
                    "AppId_Version" : { "format" : "Application v2.0", ... },
                    "Time" : 1412640000000,
                    "Count" : 1
                }
            }
        ]
    }
]
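
Continuing with the same data, the construction of the transposed object can be sketched in plain JavaScript (transpose here is our simplified stand-in: it ignores the as friendly-name bookkeeping and assumes a flat array of facets):

```javascript
// Sketch of the transpose transform: for each facet, build a "transposed"
// object mapping one field's value (the new field name) to another field's
// value.
function transpose(facets, byPath, valuePath) {
  function get(el, path) {
    return path.split(".").reduce(function (o, k) { return o[k]; }, el);
  }
  return facets.map(function (f) {
    var t = {};
    f.values.forEach(function (el) {
      t[get(el, byPath)] = get(el, valuePath);
    });
    // keep the facet intact, adding the transposed object at its root
    return Object.assign({}, f, { transposed: t });
  });
}
```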

Zip

Augment this Dataset with another Dataset. The current Dataset is treated as the primary (or left-hand-side) Dataset, while the other Dataset is the secondary (or right-hand-side) Dataset.

There are two ways this transform can match the elements of the Datasets:

  • by key: elements of the two Datasets are matched if specified fields have the same values.
  • by index: the first element of this Dataset is matched to the first element of the secondary Dataset, and so on.

By Key

Augment this Dataset with another Dataset, with element matching based on specified fields.

Properties:

  • type: zip
  • with (string): the name of the secondary Dataset (which must be declared earlier in this Transformation)
  • as (string): the location within the primary Dataset elements to append the corresponding secondary Dataset
  • key (string): the field name within the primary Dataset to use as a key
  • withKey (string): the field name within the secondary Dataset to use as a key
  • default (optional, object): if there is no matching value found in the secondary Dataset, a JSON object to use as a default
    • If specified, this transform functions similarly to a SQL left-join.
    • If not specified, this transform functions similarly to a SQL inner-join.
    • Note that when specifying a default value, it is important to maintain the schema of the secondary Dataset: include a wrapping data field containing any necessary fields and their default values. Any missing fields (e.g., fields expected by consumers of the transform) will be treated as undefined.

Example

preemptive:service-level.transformation combines session and exception information into one Dataset, to calculate the average number of exceptions per session for each day. Within the aggregation Dataset:

{
    "type":"zip",
    "with":"exceptions-over-time",
    "as":"Exceptions",
    "key":"data.Time",
    "withKey":"data.Time",
    "default": {
        "data": {
            "Count": 0
        }
    }
},

Note that we declared another Dataset, exceptions-over-time, within this Transformation just for this zip transform, as the with property can only reference other Datasets within this Transformation:

{
    "name":"exceptions-over-time",
    "input":"preemptive:exceptions-over-time.query"
},

Note also our use of the default property. If a particular day has session data (in the primary dataset) but not exception data (from the secondary dataset), we set the Exceptions.Count field to 0 to reflect this.

Consider the following prior state of the aggregation (session information) Dataset:

[
    {
        "data" :
        {
            "Time" : 1416873600000,
            "Count" : 51,
            ...
        }
    },
    {
        "data" :
        {
            "Time" : 1418601600000,
            "Count" : 22,
            ...
        }
    }
]

And the following exceptions-over-time (exception information) Dataset:

[
    {
        "data" :
        {
            "Time": 1418601600000,
            "Count": 11,
            ...
        }
    }
]

When the zip transform is applied, the resulting Dataset looks like this:

[
    {
        "data" :
        {
            "Time" : 1416873600000,
            "Count" : 51
        },
        "Exceptions":
        {
            "data" :
            {
                "Count" : 0
            }
        }
    },
    {
        "data" :
        {
            "Time" : 1418601600000,
            "Count" : 22
        },
        "Exceptions":
        {
            "data" :
            {
                "Time" : 1418601600000,
                "Count" : 11,
                ...
            }
        }
    }
]

Each of the elements in the primary (session information) Dataset receives an additional sub-object, called Exceptions (based on the as property in our definition). This contains a copy of the matching element from the secondary (exception information) Dataset, or the value of the default property if no matching element was found.
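
The join behavior can be sketched in plain JavaScript (zipByKey is our illustrative name; this simplified version assumes at most one secondary element per key):

```javascript
// Sketch of the key-based zip: behaves like a SQL left-join when a default
// ("def") is given, and like an inner join when it is not.
function zipByKey(primary, secondary, as, keyPath, withKeyPath, def) {
  function get(el, path) {
    return path.split(".").reduce(function (o, k) { return o[k]; }, el);
  }
  var index = {};
  secondary.forEach(function (el) { index[get(el, withKeyPath)] = el; });
  var out = [];
  primary.forEach(function (el) {
    var match = index[get(el, keyPath)];
    if (match === undefined && def === undefined) return; // inner join: drop
    var copy = Object.assign({}, el);
    copy[as] = match !== undefined ? match : def;         // left join: default
    out.push(copy);
  });
  return out;
}
```

Running this over the two sample Datasets above attaches Exceptions with Count 0 to the day that has no exception data and the real element to the day that does.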

By Index

Augment this Dataset with another Dataset, with element matching based on the order of the Datasets.

Properties:

  • type: zip
  • with (string): the name of the secondary Dataset (which must be in this Transformation)
  • as (string): the location within the primary Dataset elements to append the corresponding secondary Dataset

If the secondary Dataset is shorter than the primary Dataset, the secondary Dataset elements will be re-used in a "looping" fashion. If the secondary Dataset only has one element (e.g., as the result of a stats transform), this causes zip to function as an "add to all" operation.
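
The looping behavior can be sketched in plain JavaScript (zipByIndex is our illustrative name):

```javascript
// Sketch of the index-based zip: secondary elements repeat in a loop when
// the secondary Dataset is shorter, so a one-element secondary Dataset is
// effectively added to every primary element.
function zipByIndex(primary, secondary, as) {
  return primary.map(function (el, i) {
    var copy = Object.assign({}, el);
    copy[as] = secondary[i % secondary.length];
    return copy;
  });
}
```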

Example

preemptive:feature-summary.transformation operates on feature data. In order to determine the percentage of sessions in which a particular feature occurred, it needs to know how many total sessions occurred in the currently-queried time period. Luckily, this sum is already available in the day_metrics Dataset from preemptive:key-stats.transformation, which produces an array with a single element containing statistical information for sessions.

We can add the contents of this one element to all elements in our feature Dataset by applying an index-based zip transform:

{
    "type" : "zip",
    "with" : "key-stats-metrics",
    "as" : "key-stats"
}

Note that we declared another Dataset, key-stats-metrics, within this Transformation just for this zip transform, as the with property can only reference other Datasets within this Transformation:

{
  "name": "key-stats-metrics",
  "input": "key-stats.transformation>day_metrics",
  "transform":[
  ]
}

Consider the following prior state of the aggregation (feature information) Dataset:

[
    {
        "data" :
        {
            "FeatureName" : "Feature A",
            "Sessions" : 2,
            ...
        }
    },
    {
        "data" :
        {
            "FeatureName" : "Feature B",
            "Sessions" : 7,
            ...
        }
    }
]

And the following key-stats-metrics (session statistics) Dataset:

[
    {
        "data" :
        {
            "Count":
            {
                "sum": 9,
                ...
            }
        }
    }
]

When the zip transform is applied, the feature entries are all augmented with the sole session entry:

[
    {
        "data" :
        {
            "FeatureName" : "Feature A",
            "Sessions" : 2,
            ...
        },
        "key-stats":
        {
            "data" :
            {
                "Count":
                {
                    "sum": 9,
                    ...
                }
            }
        }
    },
    {
        "data" :
        {
            "FeatureName" : "Feature B",
            "Sessions" : 7,
            ...
        },
        "key-stats":
        {
            "data" :
            {
                "Count":
                {
                    "sum": 9,
                    ...
                }
            }
        }
    }
]


Workbench Version 1.2.0. Copyright © 2016 PreEmptive Solutions, LLC