Re-imagined Applications Demand Re-imagined Application Analytics

February 2nd, 2015 by Sebastian Holst

Traditional applications are being replaced with the many-to-many pairing of Apps to Services, where core functionality is supplied via cloud-based software and delivered via a multitude of apps running across devices and runtimes. Beyond the obvious combinatorial complexity at runtime, the apps and services are typically developed by different organizations with independent release cycles under disparate business models. As a consequence, an application’s scope – the sum total of its software and content – has shifted from concrete to ethereal, where ingredients can change or evolve from session to session.

Outdated Analytics Patterns Can Only Offer Limited Insight

Analytics solutions built to focus on a single stack (or to analyze stacks side-by-side), e.g. mobile apps, web sites, or internal servers – or on a single stakeholder or persona, e.g. IT ops or web commerce – are poorly positioned to capture the dynamic, interoperable nature of modern application deployments or the increasingly diverse community of application stakeholders.

Shared Runtime Data is the Tie That Binds Components into an Organic Application

Apps track the services they call and services track the apps they serve through tokens and other shared parameters. Not every argument exchanged plays this role – consequently, a working knowledge of the components and their interfaces is required to effectively piece together individual sessions, users, and activities.

PreEmptive Analytics: Built for Modern Deployments and Diverse Stakeholder Requirements

PreEmptive Analytics has been built from the ground up to offer an instrument-once and distribute-many approach supporting a portfolio of analytics endpoints as dynamic as the application components it monitors.

The following working sample illustrates how PreEmptive Analytics can instrument client and cloud components to provide unprecedented insight into app design, user behavior, and IT operations. The latest version of this app, the instrumentation, and the extensions to the PreEmptive Analytics Workbench can be found at GitHub PreEmptive Analytics Use Case Example.

The Sample App and The Sample Service

The sample app lets users submit anticipated expenses for pre-approval. The user identifies the expense category and estimated expense and submits the record to a managed service for centralized approval or rejection. The approval policies reside in the hosted service as do the historical records.

Every user session has an organization and a unique user ID associated with it – this both drives policy and provides the “hook” that connects the client’s activity to the supporting software services.

The sample app is written in C#, instrumented with the PreEmptive Analytics API, and then built with Xamarin to generate both Android and iOS instances.

The sample service is written in C#, also instrumented with PreEmptive Analytics and runs in an Azure Windows VM.

PLEASE NOTE – the analytics functionality demonstrated here is in no way dependent upon or specific to C#, .NET, Xamarin, Azure, Android, or iOS – this is one specific example to illustrate the general principles and capabilities of PreEmptive Analytics that can just as readily be applied to any WPF, Java, or C++ component – running on-premises and/or distributed across cloud services and devices.

Sample App Functionality

The app allows a user to time arbitrary workflows, throw exceptions, express preferences, and – last but not least – submit an anticipated expense for pre-approval.

The sample managed service

A user selects the expense category and estimated expense and submits the information for approval.

Based on the amount and other factors, the remote software service either approves or rejects the request. The client app informs the user in one of two ways.

As mentioned above, users can track arbitrary workflows that span (or work within) page and/or method boundaries by starting and stopping the following timer.


Also, as mentioned above, each user can observe the department and ID they are working under in each session.

PreEmptive Analytics Results

The following dashboards illustrate the cross-section of analytics supporting the full spectrum of application stakeholders from dev to DevOps to business owners.


The overview page requires no special configuration and is populated immediately (with latency measured in seconds) as runtime telemetry comes in from production. All versions of all components are available for inspection across client devices and cloud services.

Even the vanilla overview page offers insights across component and stakeholder domains, as illustrated by these four “feature” stats. All relate to the “expense approval” activity, but each represents a different perspective, providing insight into all of the moving pieces that come together to create the integrated user experience.

Timing May Be Everything, but All Time Is Relative

User behavior, user experience, application service levels, and managed-service service levels

Even without any special configuration, PreEmptive Analytics automatically breaks out usage and timing of:

  1. The Azure-based approval service (item 1 above) – IT operations cares about this perspective,
  2. The client-side call up to the Azure-based approval service (item 2 above) – dev and DevOps care about this perspective,
  3. The time spent on the “mobile page” for expense approval (item 3 above) – UX design/dev care about this perspective, and
  4. The time inside the larger workflow that leads a user to the mobile page (item 4 above) – the app owner cares about this perspective.

The close-up of the feature tracking panel shows that 688ms of the 690ms client request falls outside the time actually consumed by the Azure service itself (690ms – 2ms). It also shows that once users land on the expense page, they spend almost 40 seconds filling it out – and, lastly, that the full workflow that takes a user into and out of this page averages just over 50 seconds.

Application Service Levels

Deeper analysis is readily available as well – here the max, min, and average times required to fulfill a client request are shown over time, alongside a “threshold” indicating a service-level goal for the client-side service.

Business Activity

PreEmptive Analytics combines the multi-tiered instrumentation outlined above with application-specific data capture and analysis – enabling powerful business activity insights. The following chart shows the volume, ratios, and trending of expense request approvals versus rejections over time. This particular instrumentation is generated from the cloud-based service – ensuring an enterprise-wide view across applications, platforms, and users.

Server record of expense requests


PreEmptive Analytics goes far beyond counting occurrences of application-specific data – any data point can also be used to segment runtime telemetry, providing powerful, contextual insights as illustrated below.

Usage and Experience

Recall that each client session is assigned a department (or role) and a user ID. The following panel breaks out usage, users, and exceptions by organization (a server-side lookup of the user ID) AND by role.

NOTE that these dimensions can also be used as filters, allowing stakeholders to focus on the most important organizations and the most important roles inside those organizations. Below is a view into approvals and rejections by organization and role.

Business Activity

Selecting any combination of organizations and roles sets the focus on the constituents most important to my operation – for the first time, I can segment, monitor, and optimize for the organizations and people that matter most.

Bias by Organization

A user can simply select one or more organizations (each indexed through a CRM look-up of the license key as the data streams in from production at runtime) to see the usage, stability, and quality of only the users from those organizations.

After selecting “Up And Away Inc.” you can view both system activity and a business activity summary.

Bias by Role

Similarly, selecting just the “VIP” role shows VIP activity across organizations.

Keep in mind that the data in these tables is a “joined” view combining client-side information (role and activity) and cloud-based computing (request approval statistics).

The same business optimization can be applied to production incidents to support DevOps and support teams. The following panel shows activity by user ID, drilling down into specific exceptions.

Bias for DevOps

What’s next?

If your business is (or will soon be) dependent on applications whose logic is distributed across devices and runtimes, and you believe that application development should be AT LEAST as customer-centric and attuned to your business’ priorities as any other part of your organization, then upgrading application analytics needs to be a priority – not unlike building application security into the dev process rather than bolting it on after.

Contact us to see how we’re helping organizations develop their application analytics practice to improve quality, satisfaction, and development ROI.

Welcome Xamarin Insights (seeing the forest through the trees)

October 8th, 2014 by Sebastian Holst

First, let me state for the record that I am a huge fan of Xamarin - when I say this, I mean to include both their great technology and their people (I’ve only met a few, but they’ve never disappointed). So with that out of the way, I listened with great interest as they announced Xamarin Insights at their user group this morning. As someone with a personal stake in the broad category of application analytics, you can imagine that when a company like Xamarin enters my space, they’re going to get my undivided attention.

My first reaction was that the name “Xamarin Insights” sounded a lot like Microsoft’s “Application Insights” and as I watched the presentation and then reviewed the web content, the similarities grew even stronger.

Of course, if you’re a developer on either of the (*) Insights teams, you’re going to be mildly offended by this last statement, as you no doubt see STARK differences – and, at some important level, you’re probably right. But I’m not on either dev team; I’m part of the PreEmptive Analytics team, and this is the area where I see the “STARK differences.” …and so that has prompted me to populate the following table comparing all three: Xamarin Insights, Application Insights, and PreEmptive Analytics.

I’ve tried to focus on material differences that are most likely to make one approach more effective than the other two - and to make this crystal clear - there are scenarios where each option is better suited than the other two - so understanding YOUR requirements is the first and MOST IMPORTANT step in selecting your optimal analytics solution.

Targeted appeal

  • Xamarin Insights: Enterprises and ISVs targeting modern platforms
  • Application Insights: Enterprises and ISVs with established app portfolios
  • PreEmptive Analytics: Driving large, regulated, and secure operations extending into modern/mobile

Release status

  • Xamarin Insights: Free with pricing TBD
  • Application Insights: Free with pricing TBD
  • PreEmptive Analytics: Licensed by product component

Applications supported

  • Xamarin Insights: API for C#/F# supporting native Xamarin targets (end-user apps only)
  • Application Insights: API for C/C#/F# and JavaScript supporting Microsoft targets (MODERN client-side AND server-side apps/components)
  • PreEmptive Analytics: All apps supported by (*)Insights PLUS C, C++, Java, traditional .NET, middle-tier, on-premises, etc.

Endpoint/analytics engines and portal

  • Xamarin Insights: Multi-tenant hosted by Xamarin
  • Application Insights: Multi-tenant hosted by Microsoft
  • PreEmptive Analytics: On-premises or hosted – hosting can be by a 3rd party or PreEmptive

Telemetry captured

  • Xamarin Insights: Events: atomic mobile & page. Exceptions: unhandled and caught. Custom: strings. System and performance: mobile only.
  • Application Insights: Events: atomic mobile & page. Exceptions: unhandled. Custom: strings. System and performance: Modern only.
  • PreEmptive Analytics: Events: all (*)Insights events PLUS arbitrary workflow and in-code spans. Exceptions: unhandled, caught, and thrown. Custom: strings, serialized data structures from multiple sources. System and performance: all runtimes and surfaces.

Supported organizations

  • Xamarin Insights: Xamarin devs ONLY
  • Application Insights: Microsoft-based devs ONLY
  • PreEmptive Analytics: All devs supported by (*)Insights PLUS all other enterprise, ISV, and embedded app devs

Data dimensions

  • Xamarin Insights: Only data originated inside an app can be analyzed
  • Application Insights: Data inside an app AND data accessible from within an Azure account can be analyzed
  • PreEmptive Analytics: Any data source available within an enterprise or via external services can be mashed up to enrich telemetry

Additional comparison points

  • Opt-in/out policy enforcement
  • Offline caching
  • Extensible indexing and UI on a role-by-role basis (app owner, dev mgr, etc.)
  • Injection of instrumentation for managed code
  • User and organization metrics – PreEmptive Analytics: yes, including integration with Enterprise credentials
  • Automatic creation of TFS work items based upon business rules and patterns
  • Embedded inside Visual Studio – Application Insights: starting with VS2013/14; PreEmptive Analytics: since 2010

One thing I know for sure: no one will be building applications without analytics in the next few years. Figuring this out for YOUR dev requirements will be a critical requirement soon enough – it’s not a question of IF, only WHEN. So, if applications are an important part of your life, this is something that you cannot postpone for much longer (it may already be too late!). Enjoy!

A Much Better Way to Obfuscate Windows Store / Appx Apps

July 17th, 2014 by earlz

Dotfuscator has supported Windows Store apps (i.e. Appx packages) since 2012, but integrating Dotfuscator into the development workflow has been difficult because of limitations in Visual Studio and the APPX build/publish process.

Until today. We’ve been working hard to find a way to improve this process, and we’ve found a way to make it totally automatic, after just a little bit of initial setup. As a warning, this uses an “internal” MSBuild target that may change in later versions of Visual Studio. If it does change, it’s likely that this method could be adapted to work, and we’ll update this post with new instructions.

This solution provides a number of benefits:

  • Automated obfuscation of Appx packages and AppxBundles from within Visual Studio and/or an automated build server (using MSBuild)
  • Automated obfuscation of multi-platform Appx and AppxBundle builds
  • Both of the above also work for Dotfuscator for Marketplace Apps

After the initial setup, obfuscation will be an automatic part of the “Create App Packages” process in Visual Studio. You’ll also be able to use a special build target to launch the Dotfuscator GUI to make it easy to configure obfuscation.

This solution is based on MSBuild, which is what Visual Studio uses under the covers to build your application. Knowledge of MSBuild is not required to follow this tutorial, but it might help make things easier to follow.

Supported Versions

This tutorial should work with these versions of Dotfuscator:

  • Windows 8 Appx packages: Dotfuscator Professional/Evaluation/Marketplace Apps v4.9.8500 (October 2012) and newer
  • Windows 8.1 Appx packages or AppxBundles: Dotfuscator Professional/Evaluation/Marketplace Apps v4.11 and newer
  • Windows 8.1/Windows Phone 8.1 Universal Appx packages (see other notes below): Dotfuscator Professional/Evaluation/Marketplace Apps v4.11 and newer

This will not work with Dotfuscator Community Edition because it doesn’t support MSBuild or obfuscation of marketplace applications.

Note: this is ONLY for Appx packages, not for Silverlight packages or any other package format, although the concepts could be adapted to work with other platforms.


To get to the point where everything just works, some initial setup is required. This involves manually editing some XML files, but should be fairly easy. Here is an overview of what must be done:

  1. Configure a new ManualObfuscation target in your Visual Studio solution and project
  2. Download and configure your Visual Studio project to use a custom .targets file
  3. Create or modify a Dotfuscator project file to receive a specific set of Project Properties from the new target(s)

So, let’s dive in.

Configure manual obfuscation

First, we must create the Manual Obfuscation configuration for our project. From within Visual Studio:

  1. Go to Build menu
  2. Go to Configuration Manager
  3. Click on the Configuration drop down next to your Windows Store App project
  4. Click “New…” on the drop down menu
  5. Name the new configuration “ManualObfuscation”
  6. Click the drop down to copy settings from Release (or Debug, your choice)
  7. Ensure that the “Create new solution configurations” checkbox is checked
  8. Click OK

You should see something like this at step #7:


Download the ObfuscateAppx.targets file

Next, download the ObfuscateAppx.targets file. Place ObfuscateAppx.targets in the same directory as the .csproj or .vbproj for your Windows Store app. That location is important; you’ll need to place a copy of that file in each Windows Store application project for which you want to do this. It is not required, however, for Windows Store library projects that an application depends on.

Next, you’ll want to at least take a peek at this file. The only thing that might have to be changed is the <DotfuscatorLocation> property. It’s hard-coded to the usual path for Dotfuscator under 64-bit systems. This must be changed if it is used on a 32-bit system and/or with a different version of Dotfuscator.

Optionally, DotfuscatorLocation can use an environment variable instead of a hard-coded path. This is preferable when using source control so that different environments can be used without changing this file.

For this to use an environment variable, the line with the <DotfuscatorLocation> tag must be changed to look like so:

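As a sketch (assuming the stock layout of ObfuscateAppx.targets; the markup surrounding the property in your copy may differ slightly), the edited property references the environment variable using MSBuild’s $(...) syntax:

```xml
<PropertyGroup>
  <!-- MSBuild exposes environment variables as properties, so $(DOTFUSCATOR_LOCATION)
       resolves to the value of the DOTFUSCATOR_LOCATION environment variable -->
  <DotfuscatorLocation>$(DOTFUSCATOR_LOCATION)</DotfuscatorLocation>
</PropertyGroup>
```

An environment variable set in the shell that launches Visual Studio (or the build server) will then be picked up without any further edits to the file.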

This change will make it reference the environment variable DOTFUSCATOR_LOCATION. Make sure to properly set the environment variable before continuing with this tutorial.

Configure the Visual Studio Project

Now we need to tell Visual Studio to use the new obfuscation step that is defined in the ObfuscateAppx.targets file. To do this, we’ll edit your project’s .csproj or .vbproj file.

Note that a C# (or VB) project file (.csproj/.vbproj) is just a special MSBuild file, so it’s easy to just add an extra step using the normal conventions of MSBuild.

First, right-click on the project in Solution Explorer and click the “Unload Project” option. Then right-click the project again and click “Edit…”. Scroll all the way to the bottom of the XML and copy and paste this line:

<Import Project="ObfuscateAppx.targets" />

Put it just before the </Project> closing tag. Now click save, and right click on the project and click “Reload project”.


The Dotfuscator project file

Now a project file for Dotfuscator is needed. There are two options here. The easy option is to download an empty Dotfuscator project that’s already set up. But if you already have a Dotfuscator project started, you’ll need to modify it to use the exposed project properties.

There are six project properties sent to Dotfuscator that you can use:

  • inputdir — The directory where the appx package is
  • inputfile — The appx file (without directory name) you’ll be obfuscating
  • outputdir — The directory to output the obfuscated appx package to (should be the same as inputdir)
  • pfxfile — The private key file used for signing the appx package
  • mapoutputdir — The directory the mapfile is output to
  • mapfile — The name of the mapfile output

Using the template project

If you haven’t already created a Dotfuscator project for this app, you can just download this dotfconfig.xml. Save dotfconfig.xml to the same directory as ObfuscateAppx.targets. See the “Ensure configuration filename is correct” section for how to use a filename other than dotfconfig.xml.

Initialize the Configuration

Then the blank template configuration file must be “initialized”. When you first add an input to a project, Dotfuscator automatically pulls certain metadata out of that input and puts it into the config file, and we need to make sure that metadata is created before you try to obfuscate. To do that, we’ll open the template project in the Dotfuscator GUI, modify it slightly, and save it again. That will trigger the initialization that we need.

To do this, change the solution configuration to ManualObfuscation. Then, go to publish an appx package using the “Create App Packages” wizard of Visual Studio.

At this point you should see the standalone Dotfuscator GUI:


Here you can make exclusions or other configuration options. If you don’t have any settings to change, you’ll still need to make at least one change so that Dotfuscator thinks the project needs to be saved/updated. A quick way is to just toggle “library mode” on the executable twice. After making your changes, save the project.

After you’re done, exit Dotfuscator. Then you should see the option to certify the package. You can cancel this for now.

Modifying an existing project

If a Dotfuscator project already exists for this application, it’s fairly easy to convert it to use the required project properties. It’s easiest to hand-edit the XML to get this to work.

First, I’m assuming that the project is obfuscating the Appx package directly (rather than the executable). Second, I’m assuming that the project only has one input. If multiple packages with different configurations exist, like for x86 and ARM, you should delete the inputs until you’re left with just one package.

Next, grab an XML editor, like Visual Studio, and open the Dotfuscator project file.

First, you need to add the list of properties to the Dotfuscator configuration:

    <property name="inputdir" value="" />
    <property name="outputdir" value="" />
    <property name="inputfile" value="" />
    <property name="pfxfile" value="" />
    <property name="mapoutputdir" value="" />
    <property name="mapfile" value="Map.xml" />

This should go right after the Dotfuscator tag like so:

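A sketch of the result (the version attribute is illustrative, and the rest of the file should match whatever your existing configuration already contains):

```xml
<dotfuscator version="2.3">
  <propertylist>
    <property name="inputdir" value="" />
    <property name="outputdir" value="" />
    <property name="inputfile" value="" />
    <property name="pfxfile" value="" />
    <property name="mapoutputdir" value="" />
    <property name="mapfile" value="Map.xml" />
  </propertylist>
  <!-- ...the rest of the existing configuration... -->
</dotfuscator>
```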

Afterwards, this is the next relevant bit of the project file:


These changes are needed:

  • change the CertificateFile directive to use the pfxfile property
  • change the file dir to use the inputdir property
  • change the file name to use the inputfile property

Here’s what it looks like after these changes:

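The exact schema varies between Dotfuscator versions, so treat the following as a sketch (the directory and file names in the “before” line are hypothetical). The point is simply that the hard-coded values are replaced with ${...} property references:

```xml
<!-- before: hard-coded values (hypothetical example names) -->
<file dir="C:\MyApp\AppPackages\MyApp_1.0.0.0_Test" name="MyApp_1.0.0.0_AnyCPU.appx" />

<!-- after: driven by the project properties passed in by ObfuscateAppx.targets -->
<file dir="${inputdir}" name="${inputfile}" />
```

The CertificateFile directive is changed the same way, to reference ${pfxfile}.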

Notice that the PreEmptiveStopWatch.exe attribute doesn’t need to be updated. This is because it’s within a package tag, which means that it’s an assembly inside the package.

Now the next bit of XML:


Here, change the output directory to use the ${outputdir} property:

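As a sketch, assuming the conventional output section of a Dotfuscator project file:

```xml
<output>
  <!-- write the obfuscated package back to the directory supplied by the build -->
  <dir>${outputdir}</dir>
</output>
```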

And finally, the renaming report/map file needs to be updated. It should look like so:


Change the two relevant pieces of text so that

  • The directory for the mapfile to be output becomes ${mapoutputdir}
  • The file name for the mapfile to be output becomes ${mapfile}

It should look like this afterwards:

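A sketch, assuming the conventional renaming/mapping layout of a Dotfuscator project file:

```xml
<renaming>
  <!-- any existing renaming exclusions stay exactly as they are -->
  <mapping>
    <mapoutput>
      <!-- the map file location and name are now supplied by the build -->
      <file dir="${mapoutputdir}" name="${mapfile}" />
    </mapoutput>
  </mapping>
</renaming>
```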

That was easy. Notice also that there are a few renaming exclusions for this project in the middle of all of that. These don’t need to be touched at all; they will automatically work.

Ensure configuration filename is correct

The ObfuscateAppx.targets file assumes that the Dotfuscator configuration file is named dotfconfig.xml and is located in the same directory as ObfuscateAppx.targets. So, be sure to rename the configuration file to dotfconfig.xml. Optionally, ObfuscateAppx.targets can be modified so that it expects a different configuration filename. This can be done by changing the contents of the <DotfuscatorConfig> tag in the file.

Building Your Package

This is the best part. Make sure the solution configuration is set to Debug or Release (or anything other than “ManualObfuscation”). Make a package using the “Create App Packages” wizard of Visual Studio. If you chose to build a Release configuration, then you should now get an option to certify your package.

Wait, Dotfuscator didn’t run! Actually, if all went well, it did. It runs in a completely automated fashion. You can confirm that it obfuscated the package by going into the AppPackages folder of your project. Then, select the folder for the package that was just created (the one with the highest version number). In this folder, there should be an unobfuscated directory AND a Map.xml file. If those two items exist, it’s a fairly good indication that Dotfuscator has obfuscated your package.

You can also look at the build output in Visual Studio, where you should see the log messages from Dotfuscator.

This is what the directory of an obfuscated package should look like:



To modify your Dotfuscator configuration, just change your solution configuration to “ManualObfuscation” and build. That should bring up the Dotfuscator GUI, where you can make all your changes and save. Building from within that GUI will also work. Note that after pushing the Dotfuscator “play” button, you will get a message like:

Package C:\mypackage.appx of type Appx has changed. Do you want to reload the package?

At this point you should click “No” and exit Dotfuscator afterwards. Obfuscation happens in place, meaning the source package is eventually replaced with the obfuscated package. If you push the “play” button more than once, you will be re-obfuscating a previously obfuscated package. There is not a good way of avoiding this at the moment.

Note that if you add a new assembly to the application, you’ll need to repeat the “Initialize the Configuration” procedure described above. Otherwise, the new assembly will be treated as an artifact and will not be obfuscated.

Multiple Platforms

There is only a single Dotfuscator project file. So, if you’re targeting multiple platforms, note that running Manual Obfuscation on more than one platform modifies the same Dotfuscator project. It is not currently possible to have separate Dotfuscator project files for each platform.

Also, when configuring your Dotfuscator project using Manual Obfuscation, you should only build the application package for one platform. You can do this by ensuring only one architecture has a checkbox beside it in the Select and Configure Packages screen. If you don’t do this, you may get an error from Dotfuscator, or the Dotfuscator user interface may pop up more than once.

Other Notes

Sample Project

A sample project is also available with all of this already put together. This can be downloaded here:

Note: It assumes a hardcoded path to Program Files (x86) for Dotfuscator 4.12. You may have to change this in ObfuscateAppx.targets for it to work on your system.

From the command line/build server?

This also works from the command line and is suitable for use in a build-server environment. Just run msbuild MyProject.sln from the command line. Dotfuscator will automatically run when building the package, just as it does in Visual Studio.

Does this support multiple platforms/automatic version numbers?

Yes! This will automatically run Dotfuscator multiple times for all of the platforms that Appx packages are created for. It also automatically picks up on the latest version number being used.

How do I obfuscate Universal Apps?

Universal Apps for Windows Store 8.1/Windows Phone 8.1 have a similar workflow to normal Windows Store apps. For the purposes of obfuscation, though, the two targets are treated as separate projects. This means that you will need separate Dotfuscator configuration files for the Windows Store and Windows Phone projects, and will need to repeat the process laid out in this blog post to enable obfuscation support in both the Windows Store and Windows Phone .csproj files.


Imported project was not found

If you get this error:

The imported project "C:\Foo\Bar\ObfuscateAppx.targets" was not found.
Confirm that the path in the <Import> declaration is correct, and that the file exists on disk

This means you probably copied the ObfuscateAppx.targets file to the wrong place. Ensure that it is in the same folder as your Windows Store App’s .csproj or .vbproj file.

The command exited with code 3

If you get an error like this:

The command ""C:\Program Files(x86)\PreEmptive Solutions\Dotfuscator Professional Edition 4.10\dotfuscator.exe" "C:\ ..... /g"
exited with code 3

This error means that dotfuscator.exe couldn’t be found. If using a hardcoded path for DotfuscatorLocation, ensure that it is correct. If using an environment variable, ensure that it is properly set. The path to which Dotfuscator is installed can vary depending on your operating system and version of Dotfuscator.

The “PreEmptive.Tasks.Dotfuscate” task could not be loaded

If you experience an error like this, it means that MSBuild couldn’t find the Dotfuscator MSBuild DLL. Ensure that you are using a supported version of Dotfuscator (CE will not work!). Normally, this file is located in C:\Program Files\MSBuild\PreEmptive\Dotfuscator\4.0. If this file is missing, you probably don’t have Dotfuscator installed on your system. The path to which this file is installed can vary depending on your operating system and version of Dotfuscator. If Dotfuscator is properly installed, this shouldn’t happen.

There are no assemblies to process

Ensure that your Dotfuscator configuration file is correct. This can be caused by the package <file> tag not using the correct project properties.

Xml Validation Error…

Ensure that your Dotfuscator configuration file is a valid XML file.

Dotfuscator says Warning Input Assembly appears to be obfuscated

You may see this warning if you do not have Visual Studio set to automatically increment the version number of your package AND you have not modified the source code since last building the package. It’s recommended to let Visual Studio automatically increment the version number.

If you need to rebuild and obfuscate the package without making any source changes, navigate to the project directory and delete the obj folder. After doing this, Visual Studio will be forced to rebuild the package and you will not get this warning.

There’s something else happening!?

You can also change a Visual Studio option so that MSBuild is more verbose about its messages. To set this option:

  1. Go to Tools in the menu bar.
  2. Go to Projects and Solutions. Expand it.
  3. Go to Build and Run.
  4. Change the “MSBuild project build output verbosity” so that it’s something other than quiet. Normal is useful for debugging errors.
  5. Click OK.

Now, you should be able to look in the output tab to see exactly what the Obfuscate task executes and hopefully figure out what went wrong.

You should be able to see something like this:


Cross Platform Application Analytics: Adding Meat to Pabulum

April 22nd, 2014 by Sebastian Holst

Could I have chosen a title with less meaning and greater hype? I seriously doubt it.

We have all heard that you can gauge how important a thing or concept is to a community by the number of names and terms used to describe that thing (the cliché is Eskimos and snow) – and I propose a corollary: you can gauge how poorly a community understands a thing or concept by how heavily it overloads multiple meanings onto a single name or term. …and "analytics," "platform," and even "application" all fall into this latter category.

What kind of analytics and for whom? What is a “platform?” And what does crossing one of these (or between them) even mean?

In this post, I’m going to take a stab at narrowing the meaning behind these terms just long enough to share some "tribal knowledge" on what effectively monitoring and measuring applications can mean - especially as the very notion of what an application can and should be is evolving even as we deploy the ones we’ve just built.

Application Analytics: If you care about application design and the development, test, and deployment practices that drive adoption – and if you have a stake in both the health of your applications in production and their resulting impact – then you’ll also care about the brand of application analytics that we’ll be focusing on here.

Cross Platform: If your idea of “an application” is holistic and encompasses every executable your users touch (across devices and over time) AND includes the distributed services that process transactions, publish content, and connect users to one another (as opposed to the myopic perspective of treating each of these components as standalone) – then you already understand what “a platform” really means and why, to be effective, application analytics must provide a single view across (and throughout) your application platform.

PreEmptive Analytics

At PreEmptive, we’d like to think that we’ve fully internalized this worldview where applications are defined less by any one instance of an executable or script and more meaningfully treated as a collection of components that, when taken together, address one or more business or organizational needs. …and this perspective has translated directly into PreEmptive Analytics’ feature set.

Because PreEmptive Analytics instrumentation runs inside a production application (as any application analytics instrumentation must), we find it helpful to divide our feature set into two buckets:

  1. Desired, i.e. those features that bring value to our users, like feature tracking, and
  2. Required, i.e. those features that, if they misbehave, damage the very applications they are designed to measure.

How do you decide for yourself what’s desired versus required for your organization?

The list of “desired features” can literally be endless – and a missing “desired feature” can often be overlooked and forgiven because the user can be compensated with some other awesome feature that still makes implementing PreEmptive Analytics worthwhile. On the other hand, miss ANY SINGLE “required feature” and the project is dead in the water. Violate privacy? Negatively impact performance or quality? Complicate application deployment? Generate regulatory, audit, or security risk? Any one of these issues is a deal breaker.

PreEmptive Analytics “required” cross platform feature set

Here’s a sampling of the kinds of features that our users often rely upon to hit their “required” cross platform feature set:

Platform, runtime, and marketplace coverage: will PreEmptive Analytics instrumentation support client, middle-tier, and server-side components?

PreEmptive Analytics instruments:

  • All .NET flavors (including 2.0 through WinRT and WP), C++, JavaScript, Java (including 8), iOS, and Android (plus special support for Xamarin generating native mobile apps across WP, iOS, & Android).
  • Further, our instrumentation passes Apple, Microsoft, Amazon, and Google marketplace acceptance criteria.

Network connectivity and resilience: will PreEmptive Analytics be able to capture, cache, and transport runtime telemetry across and between my users’ and our own networks?

PreEmptive instrumentation provides:

  • Automatic offline caching inside your application across all mobile, PC, cloud, and server components (with the exception of JavaScript). Special logic accommodates mobile platforms and their unique performance and storage capabilities. After automatically storing data when your application is offline, it will automatically stream the telemetry up once connectivity is reestablished.
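The cache-while-offline, stream-when-reconnected behavior described above can be sketched roughly as follows. This is a minimal illustration of the pattern only, not PreEmptive's actual implementation; the file name and function names are hypothetical:

```python
import json
import os

CACHE_FILE = "telemetry_cache.jsonl"  # hypothetical local cache location

def send(event, transmit):
    """Try to transmit an event; cache it durably if the network is down."""
    try:
        transmit(event)
    except ConnectionError:
        # Offline: append the event to a local cache file instead of losing it.
        with open(CACHE_FILE, "a") as f:
            f.write(json.dumps(event) + "\n")

def flush_cache(transmit):
    """Once connectivity is reestablished, stream cached telemetry and clear the cache."""
    if not os.path.exists(CACHE_FILE):
        return 0
    sent = 0
    with open(CACHE_FILE) as f:
        for line in f:
            transmit(json.loads(line))
            sent += 1
    os.remove(CACHE_FILE)
    return sent
```

A real implementation would also bound the cache size and respect platform-specific storage limits, which is the "special logic" for mobile platforms mentioned above.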

PreEmptive Analytics endpoints can provide:

  • Longer-term data management for networks that are completely isolated from outside networks allowing you to arrange for alternative data access or transport while respecting privacy, security, and other network-related constraints.

Privacy and security at runtime and over time: will PreEmptive Analytics provide the flexibility to enforce your current and evolving security and privacy obligations?

PreEmptive Analytics instrumentation:

  • Only collects and transmits data that has been explicitly requested by development. There is no unintended “over communication” or monitoring.
  • When data is transmitted, telemetry is encrypted over the wire.
  • Includes an extensible Opt-in switch that can be controlled by end users or through web-service calls allowing your organization to adjust and accommodate shifting opt-in and privacy policies without having to re-instrument and redeploy your applications.
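The opt-in switch described above amounts to a gate that every capture passes through. Here is a minimal sketch of that pattern with hypothetical names (not the actual PreEmptive API):

```python
class AnalyticsClient:
    """Gates all telemetry on an opt-in flag that can be flipped at runtime."""

    def __init__(self, opted_in=False):
        self.opted_in = opted_in
        self.sent = []  # stands in for the real transport layer

    def set_opt_in(self, value):
        # Could be driven by an end-user setting or a web-service call,
        # so policy changes never require re-instrumenting the app.
        self.opted_in = bool(value)

    def track(self, event_name, **data):
        if not self.opted_in:
            return False  # nothing is collected or transmitted
        self.sent.append({"event": event_name, **data})
        return True
```

Because the flag is checked at the point of capture, switching it off stops collection immediately, without redeploying the application.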

PreEmptive Analytics endpoints can:

  • Reside and be managed entirely under your control – either on-premises or inside a virtual machine hosted in a cloud under your direct control.
  • Be reconfigured, relocated, and dynamically targeted by your applications – even after your applications have been deployed.

Performance and bandwidth: will PreEmptive Analytics instrumentation impact my application’s performance from my users’ experience or across the network?

PreEmptive instrumentation:

  • Runs inside your applications’ process space in a low-priority thread – never competing for system resources.
  • Utilizes an asynchronous queue to further optimize and minimize the collection and transmission of telemetry once captured inside your application.
  • Has “safety valve” logic that will automatically begin throwing away data packets and ultimately shut itself down when system resources are deemed to be too scarce – helping to ensure that your users’ experiences are never impacted.
  • Employs OS and device-specific flavors of all of the above ensuring that – even with injection post-compile – every possible step is taken to ensure that PreEmptive Analytics’ system and network footprint remains negligible.
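The combination of a background worker thread, an asynchronous queue, and the "safety valve" described above can be sketched as follows. This is an illustrative pattern with hypothetical names, not PreEmptive's code (note that Python has no thread priorities, so a daemon thread stands in for the low-priority thread):

```python
import queue
import threading

class TelemetryQueue:
    """Bounded asynchronous queue: a background worker drains events off the
    application's critical path, and new events are dropped (the "safety
    valve") when the queue is full rather than blocking the caller."""

    def __init__(self, transmit, maxsize=100):
        self.q = queue.Queue(maxsize=maxsize)
        self.dropped = 0
        self.transmit = transmit
        # Daemon thread: never prevents application shutdown.
        self.worker = threading.Thread(target=self._drain, daemon=True)
        self.worker.start()

    def enqueue(self, event):
        try:
            self.q.put_nowait(event)   # never blocks the application thread
        except queue.Full:
            self.dropped += 1          # safety valve: shed load instead

    def _drain(self):
        while True:
            event = self.q.get()
            if event is None:          # shutdown sentinel
                break
            self.transmit(event)
            self.q.task_done()

    def close(self):
        self.q.put(None)
        self.worker.join()
```

The key design choice is `put_nowait`: under pressure the instrumentation discards telemetry rather than ever slowing the host application.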

What about the PreEmptive Analytics “desired” cross platform feature set (the features that make analytics worth doing)? As I’ve already said, this list is literally endless – if I were to list only the categories (let alone the features in each category), it would turn an already long post into a very, very long post. So the desired feature discussion will have to come later…

What’s the Bottom Line for “Cross Platform Application Analytics?”

Be consistent – make sure your application analytics technology and practice are aligned with your definition of what an application actually is – and this is especially true when evaluating “cross-platform” architectures and semantics. A mismatch here will likely wipe out any chance of a lasting analytics solution, increase the cost of application analytics over time, and add to your technical debt.

Separate “needs” from “wants” – take every action possible to ensure that your application analytics implementation does no harm to the applications being measured and monitored either directly (performance, quality, …) or indirectly (security, reputation, compliance).

Want to put us through our paces? Visit us and request an eval…

Application Analytics - Segmenting the Solutions

January 8th, 2014 by Gabriel Torok

There are many ways to get sharp insight into your production applications, including:

  • Creating your own “in house” analytics solution.
  • Using a turn-key public cloud solution (e.g. Google Analytics).
  • Using a client managed package that can be run on premises or in the “cloud” of your choice (e.g. PreEmptive Analytics).

Each one has pros and cons and I’d like to quickly look at them, as well as consider the “do nothing” approach.

Do Nothing about Application Analytics:

Many companies are busy fighting today’s “fires”, and lack the sharp insight into their applications running in production that would enable them to reduce future fires. Said another way, companies without application analytics are more likely to miss quality goals, have higher maintenance costs, and lower customer satisfaction. It’s a vicious cycle where the less time they spend implementing application analytics, the more they end up needing it.

With application analytics, they can prioritize work based on actual usage patterns and identify, triage, and resolve problems before their customers are seriously impacted. Because of these obvious benefits, high-performing companies are less and less likely to choose the “do nothing” path.

Home Grown Application Analytics:

In-house developed and maintained application analytics solutions can provide deep understanding around the unique needs of a particular company. However, application analytics is not usually a core competency for the company, so performance, maintenance, ongoing support, and new feature development tend to suffer while costs run high. The company also won’t benefit from ongoing improvements made by others.

Public Cloud Application Analytics:

Turn-key solutions like Google Analytics, New Relic, and (eventually) Microsoft’s App Insights will cost-effectively address issues relevant to the mass market. Their strength is in providing high-performance, frequently updated, rich reports with universal appeal. Their weakness is in depth of customization and client control (they can’t run on-premises or in a secure data center of the client’s choosing).

Client Managed Application Analytics:

In between these two solution types are client-managed offerings like PreEmptive Analytics, which offer the depth of control that usually comes with home-grown solutions while also offering an out-of-the-box experience similar to turn-key cloud solutions. Client-managed solutions can customize analytics more deeply than cloud solutions can. For example, cloud solutions can’t pivot on arbitrary data or integrate internal business data, because they can’t scale that across all their customers in a multi-tenant offering. Also, some organizations are regulated, security conscious, and/or have other isolation requirements that keep them from sending their application data into a public cloud infrastructure. Client-managed solutions like PreEmptive Analytics aim to retain many of the benefits of a turn-key solution while keeping the depth of customization, client control, and privacy of a home-grown solution.

Summary of the Strengths of Client Managed vs. Public Cloud solutions:


Successful companies will use application analytics to ensure their applications are performing as expected and continuously improving. Whether they build their own, use one from the public cloud, or use a client-managed solution depends on their specific requirements.