A Much Better Way to Obfuscate Windows Store / Appx Apps

July 17th, 2014 by earlz

Dotfuscator has supported Windows Store apps (i.e. Appx packages) since 2012, but integrating Dotfuscator into the development workflow has been difficult because of limitations in Visual Studio and the Appx build/publish process.

Until today. We’ve been working hard to find a way to improve this process, and we’ve found a way to make it totally automatic, after just a little bit of initial setup. As a warning, this uses an “internal” MSBuild target that may change in later versions of Visual Studio. If it does change, it’s likely that this method could be adapted to work, and we’ll update this post with new instructions.

This solution provides a number of benefits:

  • Automated obfuscation of Appx packages and AppxBundles from within Visual Studio and/or an automated build server (using MSBuild)
  • Automated obfuscation of multi-platform Appx and AppxBundle builds
  • Both of the above also work for Dotfuscator for Marketplace Apps

After the initial setup, obfuscation will be an automatic part of the “Create App Packages” process in Visual Studio. You’ll also be able to use a special build target to launch the Dotfuscator GUI to make it easy to configure obfuscation.

This solution is based on MSBuild, which is what Visual Studio uses under the covers to build your application. Knowledge of MSBuild is not required to follow this tutorial, but it might help make things easier to follow.

Supported Versions

This tutorial should work with these versions of Dotfuscator:

  • Windows 8 Appx packages: Dotfuscator Professional/Evaluation/Marketplace Apps v4.9.8500 (October 2012) and newer
  • Windows 8.1 Appx packages or AppxBundles: Dotfuscator Professional/Evaluation/Marketplace Apps v4.11 and newer
  • Windows 8.1/Windows Phone 8.1 Universal Appx packages (see other notes below): Dotfuscator Professional/Evaluation/Marketplace Apps v4.11 and newer

This will not work with Dotfuscator Community Edition because it doesn’t support MSBuild or obfuscation of marketplace applications.

Note: this is ONLY for Appx packages, not for Silverlight packages or any other package format, although the concepts could be adapted to work with other platforms.


To get to the point where everything just works, some initial setup is required. This involves manually editing some XML files, but should be fairly easy. Here is an overview of what must be done:

  1. Configure a new ManualObfuscation target in your Visual Studio solution and project
  2. Download and configure your Visual Studio project to use a custom .targets file
  3. Create or modify a Dotfuscator project file to receive a specific set of Project Properties from the new target(s)

So, let’s dive in.

Configure manual obfuscation

First, we must create the Manual Obfuscation configuration for our project. From within Visual Studio:

  1. Go to Build menu
  2. Go to Configuration Manager
  3. Click on the Configuration drop down next to your Windows Store App project
  4. Click “New…” on the drop down menu
  5. Name the new configuration “ManualObfuscation”
  6. Click the drop down to copy settings from Release (or Debug, your choice)
  7. Ensure that the “Create new solution configurations” checkbox is checked
  8. Click OK

You should see something like this at step #7:


Download the ObfuscateAppx.targets file

Next, download the ObfuscateAppx.targets file and place it in the same directory as the .csproj or .vbproj for your Windows Store app. That location is important; you’ll need to place a copy of the file in each Windows Store Application project for which you want to do this. It is not required for Windows Store Library projects that an application depends on, however.

Next, you’ll want to at least take a peek at this file. The only thing that might have to be changed is the <DotfuscatorLocation> property. It’s hard-coded to the usual path for Dotfuscator under 64-bit systems. This must be changed if it is used on a 32-bit system and/or with a different version of Dotfuscator.

Optionally, DotfuscatorLocation can use an environment variable instead of a hard-coded path. This is preferable when using source control so that different environments can be used without changing this file.

To use an environment variable, change the line with the <DotfuscatorLocation> tag to look like so:
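MSBuild reads environment variables through its standard $(...) property syntax, so the changed line would look like this (assuming the variable is named DOTFUSCATOR_LOCATION, as described below):

```xml
<DotfuscatorLocation>$(DOTFUSCATOR_LOCATION)</DotfuscatorLocation>
```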


This change will make it reference the environment variable DOTFUSCATOR_LOCATION. Make sure to properly set the environment variable before continuing with this tutorial.

Configure the Visual Studio Project

Now we need to tell Visual Studio to use the new obfuscation step that is defined in the ObfuscateAppx.targets file. To do this, we’ll edit your project’s .csproj or .vbproj file.

Note that a C# (or VB) project file (.csproj/.vbproj) is just a special MSBuild file, so it’s easy to just add an extra step using the normal conventions of MSBuild.

First, right click on the project in Solution Explorer. Click on the “Unload Project” option. Now, right click on the project again and click “Edit … “. Now, scroll all the way to the bottom of the XML. Copy and paste this line:

<Import Project="ObfuscateAppx.targets" />

Put it just before the </Project> closing tag. Now click save, and right click on the project and click “Reload project”.
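After the edit, the tail end of the project file should look something like this (the xmlns shown is the standard MSBuild namespace; keep whatever root tag your file already has):

```xml
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <!-- ...the rest of your project file... -->
  <Import Project="ObfuscateAppx.targets" />
</Project>
```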


The Dotfuscator project file

Now a project file for Dotfuscator is needed. There are two options here. The easy option is to download an empty Dotfuscator project that’s already set up. But if you already have a Dotfuscator project started, you’ll need to modify it to use the exposed project properties.

There are six project properties sent to Dotfuscator that you can use:

  • inputdir — The directory where the appx package is
  • inputfile — The appx file (without directory name) you’ll be obfuscating
  • outputdir — The directory to output the obfuscated appx package to (should be the same as inputdir)
  • pfxfile — The private key file used for signing the appx package
  • mapoutputdir — The directory the mapfile is output to
  • mapfile — The name of the mapfile output

Using the template project

If you haven’t already created a Dotfuscator project for this app, you can just download this dotfconfig.xml. Save it to the same directory as ObfuscateAppx.targets. See the “Ensure configuration filename is correct” section below for how to use a filename other than dotfconfig.xml.

Initialize the Configuration

Then the blank template configuration file must be “initialized”. When you first add an input to a project, Dotfuscator automatically pulls certain metadata out of that input and puts it into the config file, and we need to make sure that metadata is created before you try to obfuscate. To do that, we’ll open the template project in the Dotfuscator GUI, modify it slightly, and save it again. That will trigger the initialization that we need.

To do this, change the solution configuration to ManualObfuscation. Then publish an appx package using the “Create App Packages” wizard in Visual Studio.

At this point you should see the standalone Dotfuscator GUI:


Here you can set exclusions or change other configuration options. If you don’t have any settings to change, you’ll still need to make at least one change so that Dotfuscator thinks the project needs to be saved/updated. A quick way is to just toggle “library mode” on the executable twice. After making your changes, save the project.

After you’re done, exit Dotfuscator. Then you should see the option to certify the package. You can cancel this for now.

Modifying an existing project

If a Dotfuscator project already exists for this application, it’s fairly easy to convert it to use the required project properties. It’s easiest to hand-edit the XML to get this to work.

First, I’m assuming that the project is obfuscating the Appx package directly (rather than the executable). Second, I’m assuming that the project only has one input. If multiple packages with different configurations exist, like for x86 and ARM, you should delete the inputs until you’re left with just one package.

Next, grab an XML editor, like Visual Studio, and open the Dotfuscator project file.

First, you need to add the list of properties to the Dotfuscator configuration:

    <property name="inputdir" value="" />
    <property name="outputdir" value="" />
    <property name="inputfile" value="" />
    <property name="pfxfile" value="" />
    <property name="mapoutputdir" value="" />
    <property name="mapfile" value="Map.xml" />

This should go right after the Dotfuscator tag like so:
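As a sketch, the top of the config file would then look something like this (the <propertylist> container and root-element attributes shown here are typical of Dotfuscator 4.x config files; keep whatever your own file already has):

```xml
<dotfuscator version="2.3">
  <propertylist>
    <property name="inputdir" value="" />
    <property name="outputdir" value="" />
    <property name="inputfile" value="" />
    <property name="pfxfile" value="" />
    <property name="mapoutputdir" value="" />
    <property name="mapfile" value="Map.xml" />
  </propertylist>
  <!-- ...rest of the configuration... -->
</dotfuscator>
```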


Afterwards, this is the next relevant bit of the project file:


These changes are needed:

  • change the CertificateFile directive to use the pfxfile property
  • change the file dir to use the inputdir property
  • change the file name to use the inputfile property

Here’s what it looks like after these changes:
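As a rough sketch, the edited input section wires in the properties like this (the element nesting here is illustrative only; match the changes against your own file rather than copying this verbatim):

```xml
<input>
  <asmlist>
    <inputassembly>
      <!-- the CertificateFile directive, wherever it appears in your file,
           now points at the property: CertificateFile=${pfxfile} -->
      <file dir="${inputdir}" name="${inputfile}" />
      <!-- entries nested inside a package tag (assemblies inside the package)
           are left unchanged -->
    </inputassembly>
  </asmlist>
</input>
```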


Notice that the PreEmptiveStopWatch.exe attribute doesn’t need to be updated. This is because it’s within a package tag, which means that it’s an assembly inside the package.

Now the next bit of XML:


Here, change the output directory to use the ${outputdir} property:
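Assuming the usual <output> element with a nested <file> tag (check your own file for the exact structure), the result would look something like this:

```xml
<output>
  <file dir="${outputdir}" />
</output>
```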


And finally, the renaming report/map file needs to be updated. It should look like so:


Change the two relevant pieces of text so that:

  • The directory for the mapfile to be output becomes ${mapoutputdir}
  • The file name for the mapfile to be output becomes ${mapfile}

It should look like this afterwards:
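Sketched out (element names are illustrative; your renaming section will also contain your exclusions, which stay as-is), the map output would read:

```xml
<renaming>
  <!-- ...renaming exclusions, left untouched... -->
  <mapping>
    <mapoutput>
      <file dir="${mapoutputdir}" name="${mapfile}" />
    </mapoutput>
  </mapping>
</renaming>
```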


That was easy. Notice also that there are a few renaming exclusions for this project in the middle of all of that. These don’t need to be touched at all; they will work automatically.

Ensure configuration filename is correct

The ObfuscateAppx.targets file assumes that the Dotfuscator configuration file is named dotfconfig.xml and is located in the same directory as ObfuscateAppx.targets. So, be sure to rename the configuration file to dotfconfig.xml. Optionally, ObfuscateAppx.targets can be modified so that it expects a different configuration filename. This can be done by changing the contents of the <DotfuscatorConfig> tag in the file.

Building Your Package

This is the best part. Make sure the solution configuration is set to Debug or Release (or anything other than “ManualObfuscation”). Make a package using the “Create App Packages” wizard of Visual Studio. If you chose to build a Release configuration, then you should now get an option to certify your package.

Wait, Dotfuscator didn’t run! Actually, if all went well, it did. It runs in a completely automated fashion. You can confirm that it obfuscated the package by going into the AppPackages folder of your project. Then, select the folder for the package that was just created (the one with the highest version number). In this folder, there should be an unobfuscated directory AND a Map.xml file. If both exist, it’s a fairly good indication that Dotfuscator has obfuscated your package.

You can also look at the build output in Visual Studio, where you should see the log messages from Dotfuscator.

This is what the directory of an obfuscated package should look like:



To modify your Dotfuscator configuration, just change your solution configuration to “ManualObfuscation” and build. That should bring up the Dotfuscator GUI, where you can make all your changes and save. Building from within that GUI will also work. Note that after pushing the Dotfuscator “play” button, you will get a message like:

Package C:\mypackage.appx of type Appx has changed. Do you want to reload the package?

At this point you should click “No” and exit Dotfuscator afterwards. Obfuscation happens in place, meaning the source package is eventually replaced with the obfuscated package. If you push the “play” button more than once, you will be re-obfuscating a previously obfuscated package. There is not a good way of avoiding this at the moment.

Note that if you add a new assembly to the application, you’ll need to repeat the “Initialize the Configuration” procedure described above. Otherwise, the new assembly will be treated as an artifact and will not be obfuscated.

Multiple Platforms

There is only a single Dotfuscator project file. So, if you’re targeting multiple platforms, take note that running Manual Obfuscation on more than one platform modifies the same Dotfuscator project. It is not currently possible to have separate Dotfuscator project files for each platform.

Also, when configuring your Dotfuscator project using Manual Obfuscation, you should only build the application package for one platform. You can do this by ensuring only one architecture has a checkbox beside it in the Select and Configure Packages screen. If you don’t do this, you may get an error from Dotfuscator, or the Dotfuscator user interface may pop up more than once.

Other Notes

Sample Project

A sample project is also available with all of this already put together. This can be downloaded here: ObfuscateAppxSample.zip

Note: It assumes a hardcoded path to Program Files (x86) for Dotfuscator 4.12. You may have to change this in ObfuscateAppx.targets for it to work on your system.

From the command line/build server?

This also works from the command line and is suitable for use in a build-server environment. Just use msbuild MyProject.sln from the command line. Dotfuscator will automatically run when building the package just like it does in Visual Studio.

Does this support multiple platforms/automatic version numbers?

Yes! This will automatically run Dotfuscator multiple times for all of the platforms that Appx packages are created for. It also automatically picks up on the latest version number being used.

How do I obfuscate Universal Apps?

Universal Apps for Windows Store 8.1/Windows Phone 8.1 have a similar workflow to normal Windows Store apps. For the purposes of obfuscation, though, the two targets are treated as separate projects. This means that you will need separate Dotfuscator configuration files for the Windows Store and Windows Phone projects, and will basically need to repeat the process laid out in this blog post to enable obfuscation support in both the Windows Store and Windows Phone csproj files.


Imported project was not found

If you get this error:

The imported project "C:\Foo\Bar\ObfuscateAppx.targets" was not found.
Confirm that the path in the <Import> declaration is correct, and that the file exists on disk

This means you probably copied the ObfuscateAppx.targets file to the wrong place. Ensure that it is in the same folder as your Windows Store App’s .csproj or .vbproj file.

The command exited with code 3

If you get an error like this:

The command ""C:\Program Files(x86)\PreEmptive Solutions\Dotfuscator Professional Edition 4.10\dotfuscator.exe" "C:\ ..... /g"
exited with code 3

This error means that dotfuscator.exe couldn’t be found. If you are using a hardcoded path for DotfuscatorLocation, ensure that it is correct. If you are using an environment variable, ensure that it is properly set. The path to which Dotfuscator is installed can vary depending on your operating system and version of Dotfuscator.

The “PreEmptive.Tasks.Dotfuscate” task could not be loaded

If you experience an error like this, it means that MSBuild couldn’t find the Dotfuscator MSBuild DLL. Ensure that you are using a supported version of Dotfuscator (CE will not work!). Normally, this file is located in C:\Program Files\MSBuild\PreEmptive\Dotfuscator\4.0. If this file is missing, you probably don’t have Dotfuscator installed on your system. The path to which this file is installed can vary depending on your operating system and version of Dotfuscator. If Dotfuscator is properly installed, this shouldn’t happen.

There are no assemblies to process

Ensure that your Dotfuscator configuration file is correct. This can be caused by the package <file> tag not using the correct project properties.

Xml Validation Error…

Ensure that your Dotfuscator configuration file is a valid XML file.

Dotfuscator says Warning Input Assembly appears to be obfuscated

You may see this warning if you do not have Visual Studio set to automatically increment the version number of your package AND you have not modified the source code since last building the package. It’s recommended to let Visual Studio automatically increment the version number.

If for some reason you need to rebuild and obfuscate the package without making any source changes, navigate to the project directory and delete the obj folder. After doing this, Visual Studio will be forced to rebuild the package, and you will not get this warning.

There’s something else happening!?

You can also change a Visual Studio option so that MSBuild is more verbose about its messages. To set this option:

  1. Go to Tools in the menu bar.
  2. Go to Projects and Solutions. Expand it.
  3. Go to Build and Run.
  4. Change the “MSBuild project build output verbosity” setting to something other than Quiet; Normal is useful for debugging errors.
  5. Click OK.

Now, you should be able to look in the output tab to see exactly what the Obfuscate task executes and hopefully figure out what went wrong.

You should be able to see something like this:


Cross Platform Application Analytics: Adding Meat to Pabulum

April 22nd, 2014 by Sebastian Holst

Could I have chosen a title with less meaning and greater hype? I seriously doubt it.

We have all heard that you can gauge how important a thing or concept is to a community by the number of names and terms used to describe that thing (the cliché is Eskimos and ice), and I propose a corollary: you can gauge how poorly a community understands a thing or concept by how heavily it overloads multiple meanings onto a single name or term. …and "analytics," "platform," and even "application" all fall into this latter category.

What kind of analytics and for whom? What is a “platform?” And what does crossing one of these (or between them) even mean?

In this post, I’m going to take a stab at narrowing the meaning behind these terms just long enough to share some "tribal knowledge" on what effectively monitoring and measuring applications can mean - especially as the very notion of what an application can and should be is evolving even as we deploy the ones we’ve just built.

Application Analytics: If you care about application design and the development, test, and deployment practices that drive adoption – and if you have a stake in both the health of your applications in production and their resulting impact – then you’ll also care about the brand of application analytics that we’ll be focusing on here.

Cross Platform: If your idea of “an application” is holistic and encompasses every executable your users touch (across devices and over time) AND includes the distributed services that process transactions, publish content, and connect users to one another (as opposed to the myopic perspective of treating each of these components as standalone) – then you already understand what “a platform” really means and why, to be effective, application analytics must provide a single view across (and throughout) your application platform.

PreEmptive Analytics

At PreEmptive, we’d like to think that we’ve fully internalized this worldview where applications are defined less by any one instance of an executable or script and more meaningfully treated as a collection of components that, when taken together, address one or more business or organizational needs. …and this perspective has translated directly into PreEmptive Analytics’ feature set.

Because PreEmptive Analytics instrumentation runs inside a production application (as any application analytics instrumentation must), we find it helpful to divide our feature set into two buckets:

  1. Desired, i.e. those that bring value to our users, like feature tracking, and
  2. Required, i.e. those features that, if they do not behave, damage the very applications they are designed to measure.

How do you decide for yourself what’s desired versus required for your organization?

The list of “desired features” can literally be endless – and a missing “desired feature” can often be overlooked and forgiven because the user can be compensated with some other awesome feature that still makes implementing PreEmptive Analytics worthwhile. On the other hand, miss ANY SINGLE “required feature,” and the project is dead in the water – Violate privacy? Negatively impact performance or quality? Complicate application deployment? Generate regulatory, audit, or security risk? Any one of these issues is a deal breaker.

PreEmptive Analytics “required” cross platform feature set

Here’s a sampling of the kinds of features that our users often rely upon to hit their “required” cross platform feature set:

Platform, runtime, and marketplace coverage: will PreEmptive Analytics instrumentation support client, middle-tier, and server-side components?

PreEmptive Analytics instruments:

  • All .NET flavors (including 2.0 through WinRT and WP), C++, JavaScript, Java (including 8), iOS, and Android (plus special support for Xamarin generating native mobile apps across WP, iOS, & Android).
  • Further, our instrumentation passes Apple, Microsoft, Amazon, and Google marketplace acceptance criteria.

Network connectivity and resilience: will PreEmptive Analytics be able to capture, cache, and transport runtime telemetry across and between my users’ and our own networks?

PreEmptive instrumentation provides:

  • Automatic offline caching inside your application across all mobile, PC, cloud, and server components (with the exception of JavaScript). Special logic accommodates mobile platforms and their unique performance and storage capabilities. After automatically storing data when your application is offline, it will automatically stream the telemetry up once connectivity is reestablished.

PreEmptive Analytics endpoints can provide:

  • Longer-term data management for networks that are completely isolated from outside networks allowing you to arrange for alternative data access or transport while respecting privacy, security, and other network-related constraints.

Privacy and security at runtime and over time: will PreEmptive Analytics provide the flexibility to enforce your current and evolving security and privacy obligations?

PreEmptive Analytics instrumentation

  • Only collects and transmits data that has been explicitly requested by development. There is no unintended “over communication” or monitoring.
  • When data is transmitted, telemetry is encrypted over the wire.
  • Includes an extensible Opt-in switch that can be controlled by end users or through web-service calls allowing your organization to adjust and accommodate shifting opt-in and privacy policies without having to re-instrument and redeploy your applications.

PreEmptive Analytics endpoints can:

  • Reside and be managed entirely under your control – either on-premises or inside a virtual machine hosted in a cloud under your direct control.
  • They can be reconfigured, relocated, and dynamically targeted by your applications – even after your applications have been deployed.

Performance and bandwidth: will PreEmptive Analytics instrumentation impact my application’s performance from my users’ experience or across the network?

PreEmptive instrumentation:

  • Runs inside your applications’ process space in a low priority thread – never competing for system resources.
  • Utilizes an asynchronous queue to further optimize and minimize the collection and transmission of telemetry once captured inside your application.
  • Has “safety valve” logic that will automatically begin throwing away data packets and ultimately shut itself down when system resources are deemed to be too scarce – helping to ensure that your users’ experiences are never impacted.
  • Employs OS and device-specific flavors of all of the above ensuring that – even with injection post-compile – every possible step is taken to ensure that PreEmptive Analytics’ system and network footprint remains negligible.

What about the PreEmptive Analytics “desired” cross platform feature set? (The features that make analytics worth doing.) As I’ve already said, this list is literally an endless one – if I were to list only the categories (let alone the features in each category), it would make an already long post into a very, very long post. So, the desired feature discussion will have to come later…

What’s the bottom line for “Cross Platform Application Analytics”?

Be consistent – make sure your application analytics technology and practice are aligned with your definition of what an application actually is – and this is especially true when evaluating “cross-platform” architectures and semantics. A mismatch here will likely wipe out any chance of a lasting analytics solution, increase the cost of application analytics over time, and add to your technical debt.

Separate “needs” from “wants” – take every action possible to ensure that your application analytics implementation does no harm to the applications being measured and monitored either directly (performance, quality, …) or indirectly (security, reputation, compliance).

Want to put us through our paces? Visit www.preemptive.com/pa and request an eval…

Application Analytics - Segmenting the Solutions

January 8th, 2014 by Gabriel Torok

There are many ways to get sharp insight into your production applications, including:

  • Creating your own “in house” analytics solution.
  • Using a turn-key public cloud solution (e.g. Google Analytics).
  • Using a client managed package that can be run on premises or in the “cloud” of your choice (e.g. PreEmptive Analytics).

Each one has pros and cons and I’d like to quickly look at them, as well as consider the “do nothing” approach.

Do Nothing about Application Analytics:

Many companies are busy fighting today’s “fires” and lack the sharp insight into their applications running in production that would enable them to reduce future fires. Said another way, companies without application analytics are more likely to miss quality goals, have higher maintenance costs, and see lower customer satisfaction. It’s a vicious cycle: the less time they spend implementing application analytics, the more they end up needing it.

With application analytics, they can prioritize work based on actual usage patterns and identify, triage, and resolve problems before their customers are seriously impacted. Because of these obvious benefits, high-performing companies are less and less likely to choose the “do nothing” path.

Home Grown Application Analytics:

In-house developed and maintained application analytics solutions can provide deep understanding around the unique needs of a particular company. However, application analytics is not usually a core competency for the company. Therefore, performance, maintenance, ongoing support, new feature development, and high costs are usually an issue. Also, the company will not benefit from ongoing improvements done by others.

Public Cloud Application Analytics:

Turn-key solutions like Google Analytics, New Relic, and (eventually) Microsoft’s App Insights will cost-effectively address issues relevant to the mass market. Their strength is in providing high performance, frequently updated, rich reports with universal appeal. Their weakness is in depth of customization and the ability for client control (can’t run on-premises or in a secure data center of client’s choosing).

Client Managed Application Analytics:

In between these two solution-types are client managed offerings like PreEmptive Analytics which offer the depth of control that usually comes with home-grown solutions, while also offering an out-of-the box experience similar to turnkey cloud solutions. Client-managed solutions have a deeper ability to customize the analytics than cloud solutions do. For example, cloud solutions can’t pivot on any arbitrary data or integrate internal business data, because they can’t scale that across all their customers in a multi-tenant solution. Also, some organizations are regulated, security conscious, and/or have other isolated scenarios that keep them from sending their application data into a public cloud infrastructure. Client managed application analytics solutions like PreEmptive Analytics try to maintain many of the benefits of a turn-key solution, while keeping the benefits of depth of customization and client control and privacy aspects of a home grown solution.

Summary of the Strengths of Client Managed vs. Public Cloud solutions:


Successful companies will use application analytics to ensure their applications are performing as expected and continuously improving. Whether they build their own, utilize one from the public cloud, or utilize a client managed solution depends on their specific requirements.

PreEmptive Analytics Supports Xamarin Developers

December 11th, 2013 by Sebastian Holst

A first in “the last frontier” of application analytics instrumentation

Xamarin lets a developer write in C# and then generate native iOS, Android, Windows Phone, and Win8 applications. With PreEmptive Analytics API for Xamarin, the PreEmptive Analytics API (C#) can be consumed by Xamarin to produce fully instrumented native Android, iOS, Windows Phone, and WinRT apps.

PreEmptive’s application instrumentation (the portion of our analytics solution that collects usage and exceptions and transmits the resulting application telemetry for analysis) already covers virtually every contemporary runtime (.NET, Win8, Windows Phone, JavaScript, Java, Android, iOS, and C++), BUT, for each runtime supported, our instrumentation must be introduced either through post-compile injection directly into the assembly/executable (very cool in its own right) and/or via a PreEmptive API.

However, PreEmptive Analytics Instrumentation for Xamarin establishes an important precedent – it is the first application analytics instrumentation API built to work within an application generator rather than the target runtime itself. …like the rest of the Xamarin experience, application instrumentation can be a “code once” and “deploy to a heterogeneous set of optimized native apps” many times experience…

Application Instrumentation: a cornerstone of application analytics


In addition to data analysis, Application Analytics solutions must provide specialized instrumentation and telemetry transmission functionality. General purpose analytics solutions are typically built to “Ingest everything” providing “adaptors” that translate external data sources into a proprietary analytics framework. While flexible, this approach is predicated on the assumption that a safe and reliable means to collect and transport raw data is available; with application analytics, this is rarely the case.

In addition to the functional requirements to capture the right kinds of runtime telemetry, an application instrumentation solution must meet a host of performance, privacy, quality, and security requirements as well – requirements that vary wildly by industry, use case, and target audience.

Incomplete instrumentation solutions force development to instrument a single app multiple times or omit valuable telemetry from their analytics solution.

PreEmptive Analytics instrumentation is optimized to efficiently, securely, and reliably capture application telemetry without compromising user experience, privacy or compliance obligations.

PreEmptive Analytics Instrumentation for Xamarin

For more information, visit www.preemptive.com/xamarin or email sales@preemptive.com – NOTE – while registration is required, the API itself is free to download and use.

Is there a catch? Not really - but if you really want to avoid licensing fees entirely, you will want to install the Community Edition of PreEmptive Analytics for TFS (included with all SKUs of Visual Studio & TFS other than Express). You will need this to serve as the endpoint that receives your application telemetry. For a general overview of this SKU and Application Analytics in general, check out my article inside MSDN’s Visual Studio 2013 ALM site: Application Analytics: What Every Developer Should Know.

If you’re interested in scaled-up capabilities, you may want to consider PreEmptive’s commercial offerings.

In EVERY case - these endpoints can be installed on-premises and are always development managed (PreEmptive can’t touch your data).

Here are a few more technical details around the new API:

Adding Analytics

REMEMBER – code once in C# and have all of this functionality manifest inside your native iOS and Android apps!

Tracking Feature Use

The most common usage of analytics is to track which features are popular among users and how they interact with them. You can indicate that a feature was used by using the FeatureTick method. You can track the duration of a feature’s use by using FeatureStart and FeatureStop.
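As a minimal sketch of what this looks like in C# – the FeatureTick, FeatureStart, and FeatureStop method names come from the API described above, but the hosting object (here called PA) and its setup are assumptions made for illustration:

```csharp
// Sketch only: "PA" stands in for the configured analytics object;
// its actual type and initialization come from the PreEmptive API docs.
public void ExportReport()
{
    // One-shot counter: records that the feature was used at all.
    PA.FeatureTick("ExportReport");

    // Timed usage: a matched start/stop pair measures how long
    // the user spent inside the feature.
    PA.FeatureStart("ExportReport.Render");
    RenderReport();
    PA.FeatureStop("ExportReport.Render");
}
```

Because this is ordinary shared C#, the same instrumentation flows into the generated iOS and Android binaries without per-platform changes.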

Sending Custom Data

You can send custom data to the configured endpoint with any type of message. To send the data, you construct an object that holds key-value pairs. One common use case is to report the arguments a method was called with and the value it will return.
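A hedged sketch of that use case – the key-value container type and the overload that accepts it are assumptions for illustration; only the key-value-pair pattern itself comes from the description above:

```csharp
// Sketch only: "ExtendedInformation" and the two-argument FeatureTick
// overload are assumed names, not confirmed API signatures.
var extended = new ExtendedInformation();
extended.Add("inputPath", path);            // an argument the method was called with
extended.Add("rowCount", rowCount.ToString()); // what the method will return

// Any message type can carry the custom data, e.g. a feature tick.
PA.FeatureTick("ImportCsv", extended);
```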

Reporting Exceptions

The API provides a simple way to report exceptional conditions in your application. Exception reports can be used to track exceptions raised by your own application or by third-party software. A report can also carry user-supplied information to aid support staff. And of course you can always add Extended Key information to track application state.
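A sketch of the shape this takes in practice – the ReportException method name and its parameters are assumptions for illustration; the source only states that reports can include user-supplied information for support staff:

```csharp
try
{
    SyncWithServer();
}
catch (Exception ex)
{
    // Sketch only: "ReportException" and its parameter list are assumed.
    // The idea is to send the exception plus a user comment and contact
    // info so support staff can follow up.
    PA.ReportException(ex, userComment, contactEmail);
    throw; // reporting is not handling – let the app decide what to do next
}
```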

Off-line Storage

Your application is not required to always have network connectivity. By default, the API will store messages locally when the configured endpoint cannot be reached. The messages will automatically be sent and removed from offline storage once the endpoint can be reached.

Message Queuing and Transmission

Messages are not immediately sent to the configured endpoint. The API queues messages and sends them either when a certain amount of time has elapsed, or when a number of messages have accumulated. On platforms where transmission may have a performance impact, such as on mobile devices, the transmission of messages can be directly controlled by your program.
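A sketch of what program-controlled transmission might look like – the configuration property and flush method shown here are hypothetical names for illustration; the source only states that offline queuing is on by default and that transmission can be controlled directly on mobile platforms:

```csharp
// Sketch only: "OfflineStorageEnabled" and "Flush" are assumed member names.
PA.Configuration.OfflineStorageEnabled = true; // queue while the endpoint is unreachable (the default)

// Later, at a moment that won't hurt the user experience
// (e.g. the app moving to the background), push the queue out explicitly:
PA.Flush();
```

The design point is the same either way: batching by time or message count keeps radio and CPU usage low, while an explicit flush lets a mobile app pick the cheapest moment to transmit.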

Your Phone Can Be a Very Scary Place

September 10th, 2013 by Sebastian Holst

Mobile apps are changing our social, cultural, and economic landscapes – and, with the many opportunities and perks that these changes promise, come an equally impressive collection of risks and potential exploits.

This post is probably way overdue – it’s an update (supplement really) to an article I wrote for The ISSA Journal on Assessing and Managing Security Risks Unique to Java and .NET way back in ’09. The article laid out the reverse engineering and tampering risks stemming from the use of managed code (Java and .NET). The technical issues were really secondary – what needed to be emphasized was the importance of having a consistent and rational framework to assess the materiality (relative danger) of those risks (piracy, IP theft, data engineering…).

In other words, the simple fact that it’s easy to reverse engineer and tamper with a piece of managed code does not automatically lead to a conclusion that a development team should make any moves to prevent that from happening. The degree of danger (risk) should be the only motivation (justification) to invest in preventative or detective measures; and, by implication, risk mitigation investments should be in proportion to those risks (low risk, low investment).

Here’s a graphic I used in ’09 to show the progression from managed apps (.NET and Java) to the risks that stem naturally from their use.

Managed code risks in the mobile world
Of course, managed code is also playing a central role in the rise of mobile computing and the ubiquitous “app marketplace,” e.g. Android and, to a lesser degree, Windows Phone and Windows RT – and, as one might predict, these apps are introducing their own unique cross-section of potential risks and exploits.

Here is an updated “hierarchy of risks” for today’s mobile world:

I’ve highlighted risks that have either evolved or emerged within the mobile ecosystem – and these are probably best illustrated with real world incidents and/or trends:

Earlier this year, a mobile development company documented how to turn one of the most popular paid Android apps (SwiftKey Keyboard) into a keylogger (something that captures everything you do and sends it somewhere else).

This little example illustrates all of the risks listed above:

  • IP theft (this is a paid app that can now be side loaded for free)
  • Content theft (branding, documentation, etc. are stolen)
  • Counterfeiting (it is not a REAL SwiftKey instance – it’s a fake – more than a cracked instance)
  • Service theft (if the SwiftKey app makes any web service calls that the true developers must pay for – then these users are driving up cloud expenses – and if any of these users write in for support, then human resources are being burned here too)
  • Data loss and privacy violations (obviously there is no “opt-in” to the keylogging and the passwords, etc. that are sent are clearly private data)
  • Piracy (users should be paying the licensing fee normally charged)
  • Malware (the keylogging is the malware in this case)

In this scenario, the “victim” would have needed to go looking for “free versions” of the app away from the sanctioned marketplace – but that’s not always the case.

Symantec recently reported finding counterfeit apps inside the Amazon Appstore (and Amazon has one of the most rigorous curating and analysis check-in processes). I, myself, have had my content stripped and look-alike apps published across marketplaces too – see my earlier posts Hoisted by my own petard: or why my app is number two (for now) and Ryan is Lying – well, actually stealing, cheating and lying – again.

Now these anecdotes are all too real, but they are by no means unique. Trend Micro found that 1 in every 10 Android apps is malicious and that 22% of apps inappropriately leak user data – that is crazy!

For a good overview of Android threats, check out this free paper by Trend Micro, Android Under Siege: Popularity Comes at a Price.

To obfuscate (or not)?
As I’ve already written – you shouldn’t do anything simply to make reverse engineering and tampering more difficult – you should only take action if the associated risks are significant enough to you and the steps in question would reduce those risks to an acceptable level (your “appetite for risk”).

…but, seriously, who cares what I think? What do the owners of these platforms have to say?

Android “highly recommends” obfuscating all code and emphasizes this in a number of specific areas, such as: “At a minimum, we recommend that you run an obfuscation tool” when developing billing logic. They go so far as to include an open source obfuscator, ProGuard – where, again, Android “highly recommends” that all Android apps be obfuscated.

Microsoft also recommends that all modern apps be obfuscated (see Windows Phone policy) and they also offer a “community edition” obfuscator (our own Dotfuscator CE) as a part of Visual Studio.

Tamper detection, exception monitoring, and usage profiling
Obfuscation “prevents” reverse engineering and tampering; but it does not actively detect when attackers are successful (and, with enough skill and time, all attackers can eventually succeed). Nor does obfuscation defend against attacks in progress or include a notification mechanism – that’s what tamper defense, exception monitoring, and usage profiling do. If you care enough to prevent an attack, chances are you care enough to detect when one is underway or has succeeded.

Application Hardening Options (representative – not exhaustive)
If you decide that you agree with Android’s and Microsoft’s recommendations to obfuscate – then you have to decide which technology is most appropriate to meet your needs. Again, a completely subjective process to be sure, but hopefully the following table can serve as a comparative reference.