PreEmptive Analytics Supports Xamarin Developers

December 11th, 2013 by Sebastian Holst

A first in “the last frontier” of application analytics instrumentation

Xamarin lets a developer write in C# and then generate native iOS, Android, Windows Phone, and Win8 applications. With PreEmptive Analytics API for Xamarin, the PreEmptive Analytics API (C#) can be consumed by Xamarin to produce fully instrumented native Android, iOS, Windows Phone, and WinRT apps.

PreEmptive’s application instrumentation (the portion of our analytics solution that collects usage and exceptions and transmits the resulting application telemetry for analysis) already covers virtually every contemporary runtime (.NET, Win8, Windows Phone, JavaScript, Java, Android, iOS, and C++), BUT, for each runtime supported, our instrumentation must be introduced either through post-compile injection directly into the assembly/executable (very cool in its own right) or via a PreEmptive API.

However, PreEmptive Analytics Instrumentation for Xamarin establishes an important precedent – it is the first application analytics instrumentation API built to work within an application generator rather than the target runtime itself. Like the rest of the Xamarin experience, application instrumentation becomes a “code once” and “deploy many times to a heterogeneous set of optimized native apps” experience.

Application Instrumentation: a cornerstone of application analytics


In addition to data analysis, Application Analytics solutions must provide specialized instrumentation and telemetry transmission functionality. General purpose analytics solutions are typically built to “ingest everything,” providing “adaptors” that translate external data sources into a proprietary analytics framework. While flexible, this approach is predicated on the assumption that a safe and reliable means to collect and transport raw data is available; with application analytics, this is rarely the case.

In addition to the functional requirements to capture the right kinds of runtime telemetry, an application instrumentation solution must meet a host of performance, privacy, quality, and security requirements as well – requirements that vary wildly by industry, use case, and target audience.

Incomplete instrumentation solutions force development teams to instrument a single app multiple times or to omit valuable telemetry from their analytics solution.

PreEmptive Analytics instrumentation is optimized to efficiently, securely, and reliably capture application telemetry without compromising user experience, privacy or compliance obligations.

PreEmptive Analytics Instrumentation for Xamarin

For more information, visit or email – NOTE – while registration is required, the API itself is free to download and use.

Is there a catch? Not really - but if you really want to avoid licensing fees entirely, you will want to install the Community Edition of PreEmptive Analytics for TFS (included with all SKUs of Visual Studio & TFS other than Express). You will need this to serve as the endpoint that receives your application telemetry. For a general overview of this SKU and Application Analytics in general, check out my article inside MSDN’s Visual Studio 2013 ALM site: Application Analytics: What Every Developer Should Know.

If you’re interested in scaled up capabilities, you may want to consider PreEmptive’s commercial offerings:

In EVERY case, these endpoints can be installed on-premises and are always development-managed (PreEmptive can’t touch your data).

Here are a few more technical details around the new API:

Adding Analytics

REMEMBER – code once in C# and have all of this functionality manifest inside your native iOS and Android apps!

Tracking Feature Use

The most common use of analytics is to track which features are popular among users and how users interact with them. You can indicate that a feature was used by calling the FeatureTick method, and you can track the duration of a feature’s use with FeatureStart and FeatureStop.
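To make that concrete, here is a minimal sketch. The `PA` class name, the `RunSearch` helper, and the exact signatures below are assumptions for illustration only – consult the API reference for the real surface:

```csharp
// Illustrative only – class and method names are assumed, not documented here.

// A discrete event: the user invoked the PDF export feature once.
PA.FeatureTick("Export.PDF");

// A timed feature: bracket the work with Start/Stop so the endpoint
// can compute how long users spend inside the feature.
PA.FeatureStart("Search");
RunSearch();                 // your application logic
PA.FeatureStop("Search");
```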

Sending Custom Data

You can send custom data to the configured endpoint with any type of message. To send the data, you construct an object that holds key-value pairs. One common use case is to report the arguments a method was called with and what the method will return.
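As a hedged sketch (the key-value container and the overload shown are assumptions, not the documented API), reporting a method’s arguments and result might look like:

```csharp
// Illustrative only – the container type and FeatureTick overload are assumed.
var extendedKeys = new Dictionary<string, string>
{
    { "arg.poseName", poseName },                 // what the method was called with
    { "result.count", results.Count.ToString() }  // what the method will return
};
PA.FeatureTick("Pose.Lookup", extendedKeys);      // attach the pairs to the message
```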

Reporting Exceptions

The API provides a simple way to report exceptional conditions in your application. Exception reports can be used to track exceptions thrown by your own application or by third-party software. You can also attach user-supplied information to a report to aid support staff. And, of course, you can always add Extended Key information to track application state.
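A sketch of the pattern (`ReportException` and its parameters are assumptions about the API surface, not documented names):

```csharp
try
{
    SyncWithServer();   // hypothetical application operation
}
catch (Exception ex)
{
    // Illustrative only – send the exception, a user-supplied comment,
    // and Extended Key application state to the configured endpoint.
    PA.ReportException(ex, userComment,
        new Dictionary<string, string> { { "screen", "Settings" } });
}
```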

Off-line Storage

Your application is not required to always have network connectivity. By default, the API will store messages locally when the configured endpoint cannot be reached. The messages will automatically be sent and removed from offline storage once the endpoint can be reached.

Message Queuing and Transmission

Messages are not immediately sent to the configured endpoint. The API queues messages and sends them either when a certain amount of time has elapsed, or when a number of messages have accumulated. On platforms where transmission may have a performance impact, such as on mobile devices, the transmission of messages can be directly controlled by your program.
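On a mobile device you might, for example, hold messages until a low-impact moment. The property, method, and helper names here are assumptions sketching the idea, not the documented API:

```csharp
// Illustrative only – switch off automatic transmission and flush manually.
PA.TransmissionMode = TransmissionMode.Manual;

// ...messages accumulate in the local queue during normal use...

if (OnWifi() && !UserIsInteracting())   // hypothetical app-defined checks
{
    PA.Transmit();                      // send the queued messages now
}
```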

Your Phone Can Be a Very Scary Place

September 10th, 2013 by Sebastian Holst

Mobile apps are changing our social, cultural, and economic landscapes – and, with the many opportunities and perks that these changes promise, come an equally impressive collection of risks and potential exploits.

This post is probably way overdue – it’s an update (a supplement, really) to an article I wrote for The ISSA Journal on Assessing and Managing Security Risks Unique to Java and .NET back in ’09. The article laid out the reverse engineering and tampering risks stemming from the use of managed code (Java and .NET). The technical issues were really secondary – what needed to be emphasized was the importance of having a consistent and rational framework for assessing the materiality (relative danger) of those risks (piracy, IP theft, data engineering…).

In other words, the simple fact that it’s easy to reverse engineer and tamper with a piece of managed code does not automatically lead to a conclusion that a development team should make any moves to prevent that from happening. The degree of danger (risk) should be the only motivation (justification) to invest in preventative or detective measures; and, by implication, risk mitigation investments should be in proportion to those risks (low risk, low investment).

Here’s a graphic I used in ’09 to show the progression from managed apps (.NET and Java) to the risks that stem naturally from their use.

Managed code risks in the mobile world
Of course, managed code is also playing a central role in the rise of mobile computing and the ubiquitous “app marketplace,” e.g. Android and, to a lesser degree, Windows Phone and Windows RT – and, as one might predict, these apps are introducing their own unique cross-section of potential risks and exploits.

Here is an updated “hierarchy of risks” for today’s mobile world:

I’ve highlighted risks that have either evolved or emerged within the mobile ecosystem – and these are probably best illustrated with real world incidents and/or trends:

Earlier this year, a mobile development company documented how to turn one of the most popular paid Android apps (SwiftKey Keyboard) into a keylogger (something that captures everything you type and sends it somewhere else).

This one example illustrates all of the risks highlighted above:

  • IP theft (this is a paid app that can now be side loaded for free)
  • Content theft (branding, documentation, etc. are stolen)
  • Counterfeiting (it is not a REAL SwiftKey instance – it’s a fake – more than a cracked instance)
  • Service theft (if the SwiftKey app makes any web service calls that the true developers must pay for – then these users are driving up cloud expenses – and if any of these users write-in for support, then human resources are being burned here too)
  • Data loss and privacy violations (obviously there is no “opt-in” to the keylogging and the passwords, etc. that are sent are clearly private data)
  • Piracy (users should be paying the licensing fee normally charged)
  • Malware (the keylogging is the malware in this case)

In this scenario, the “victim” would have needed to go looking for “free versions” of the app away from the sanctioned marketplace – but that’s not always the case.

Symantec recently reported finding counterfeit apps inside the Amazon Appstore (and Amazon has one of the most rigorous curating and analysis check-in processes). I, myself, have had my content stripped and look-alike apps published across marketplaces too – see my earlier posts Hoisted by my own petard: or why my app is number two (for now) and Ryan is Lying – well, actually stealing, cheating and lying – again.

Now, these anecdotes are all too real, but they are by no means unique. Trend Micro found that 1 in every 10 Android apps is malicious and that 22% of apps inappropriately leak user data – that is crazy!

For a good overview of Android threats, check out this free paper by Trend Micro, Android Under Siege: Popularity Comes at a Price.

To obfuscate (or not)?
As I’ve already written – you shouldn’t do anything simply to make reverse engineering and tampering more difficult – you should only take action if the associated risks are significant enough to you and the steps in question would reduce those risks to an acceptable level (your “appetite for risk”).

…but, seriously, who cares what I think? What do the owners of these platforms have to say?

Android “highly recommends” obfuscating all code and emphasizes this in a number of specific areas, such as: “At a minimum, we recommend that you run an obfuscation tool” when developing billing logic. They go so far as to include an open source obfuscator, ProGuard – where, again, Android “highly recommends” that all Android apps be obfuscated.

Microsoft also recommends that all modern apps be obfuscated (see Windows Phone policy) and they also offer a “community edition” obfuscator (our own Dotfuscator CE) as a part of Visual Studio.

Tamper detection, exception monitoring, and usage profiling
Obfuscation “prevents” reverse engineering and tampering, but it does not actively detect when attackers are successful (and, with enough skill and time, all attackers can eventually succeed). Nor does obfuscation defend against attacks in progress or include a notification mechanism – that’s what tamper defense, exception monitoring, and usage profiling do. If you care enough to prevent an attack, chances are you care enough to detect when one is underway or has succeeded.

Application Hardening Options (representative – not exhaustive)
If you decide that you do agree with Android’s and Microsoft’s recommendation to obfuscate – then you have to decide which technology is most appropriate to meet your needs – again, a completely subjective process to be sure, but hopefully, the following table can serve as a comparative reference.

Mobile Analytics: like playing horseshoes or bocce ball? (When close is “good enough”)

June 13th, 2013 by Sebastian Holst

A recent post on Flurry’s “industry insight” blog caught my eye. The post, The iOS and Android Two-Horse Race: A Deeper Look into Market Share, called out the fact that iOS app users spend more time inside applications than their Android counterparts and then posited three potential underlying causes (condensed here – visit their post for the full narrative):

  • One was that the two dominant operating systems have tended to attract different types of users (we’ll get back to this shortly – this is close).
  • A second possible reason was that the fragmented nature of the Android ecosystem creates greater obstacles to app development and therefore limits availability of app content (suggesting app quality is the driving force).
  • The third possible explanation offered by Flurry was that iOS device owners use apps so developers create apps for iOS users and that in turn generates positive experiences, word-of-mouth, and further increases in app use (combining the two reasons above I suppose).

What struck me in this post was that, while there’s no disputing Flurry’s observation about “time spent in apps” across platforms, the lack of precision within the “2.8 billion app sessions” they track every day made genuine root cause analysis virtually impossible – and led to, in my view, an erroneous conclusion (or, more precisely, a false set of options where the real mechanics were all but invisible).

Back in January, I published the blog post Marketplaces Matter and I’ve got the analytics to prove it, where I compared two versions of one of my apps, Yoga-pedia, published through the Google Play and Amazon marketplaces. What’s noteworthy here is that the apps are genuinely identical – functionality, UX, everything – and yet the total time spent inside the app distributed through the Amazon marketplace was 40% higher than from Google Play. Put another way, total time spent inside the app sourced from Google Play was 72% of the time spent inside the (identical) app sourced from Amazon.

Now, if I’m interpreting Flurry’s graph in the above blog for January 2013 properly (when my earlier stats were generated), it shows a nearly identical ratio (the total time in “Android apps” was ~75-80% of total time in iOS). So what does that suggest?

  1. iOS users and Android users clearly use different marketplaces – but marketplace source is not something tracked.
  2. iOS apps themselves are of course always different from Android apps (I have an iOS version of Yoga-pedia that is close to my Android flavors – but even these are different). This is a major variable that Flurry analytics cannot separate out – they are looking at the roll-up of all iOS apps and comparing them to all Android apps.
  3. Treating all Android apps as a single data set (which includes multiple marketplaces) – further obscures what may be one of the key drivers of user behavior – the marketplace community.

So – going back to the first hypothesis, that Android attracts a different class of user than does iOS, I think that is as close as they could come given the kind of data available – the real answer is most likely that the Apple marketplace attracts a different kind of user than does Google Play (and the mix of Amazon Android app users is probably not significant enough to move the big needle).

…And that brings me back to my original question – is this kind of imprecise (but still accurate) intelligence “good enough” (like horseshoes, bocce ball, and nuclear war)? If this were as far as true application analytics could take me – then maybe…

BUT, once I had identified the potential role that marketplaces can play – I was able to drill down even deeper to identify other marketplace deltas that were (at least to me) extremely valuable, including:

  • Amazon click through rate (CTR) was 164% higher than the Google Play CTR
  • Google Play Ad Delivery Failure rate (ADFR) was 199% higher than the Amazon ADFR
  • Amazon user upgrade rate was 54% higher than the Google Play upgrade rate (from free to paid app version).

So, in my case, owning my own data and having an instrumentation and analytics platform able to capture data points specific to my needs (precision) turns out to be very important indeed.

So why would anyone use technology like Flurry’s? LOTS OF REASONS, relating to ad revenue and all of the other monetization services they offer app developers (that’s why they’re in business) – and that, I guess, is the big point. Services and technologies like Flurry’s are built for app monetization – and to the extent that some analytics are an important ingredient in their recipe, you can bet that they’ll nail it – but to do more would be over-engineering at best and, more likely, pose a material risk to their entire business model.

For advertising across huge inventories of mobile apps, analytics should be a bit like playing horseshoes – knowing that I can expect iOS to generally perform better than Android is useful.

On the other hand, as a development organization, if I really want to fine-tune my app and optimize for adoption, specific behaviors, and operational/development ROI – I need an application analytics solution built with that use case in mind. Not only are alternative analytics solutions missing key capabilities, there are solid business reasons why those alternative technologies should continue to avoid adding those very capabilities.

Enterprise, B2B and B2C Applications Analytics

May 14th, 2013 by Gabriel Torok

Cloud, mobile, and distributed software services have made simulating “true” production impossible, while production and release cycles have become more frequent. At the same time, communication and collaboration between development and operations have become a focal point for process improvement, spawning a trend in software development expressed by the term Development Operations (DevOps).

This is especially important as the focus shifts from long QA/user acceptance testing cycles to rapid identification and resolution of issues in production, and deployment of the fixed application back into production. This rapid identify-fix-deploy loop requires adoption of new tools and processes to be successful.

It will be increasingly important to have sharp insight into applications running in production. Without it, you will miss quality goals, have higher maintenance costs, and lower customer satisfaction. With it, you can prioritize work based on actual usage patterns, identify, triage and resolve problems before your customers are seriously impacted. You can also test changes to see how they affect user behavior and intended outcome, and drive both hard and soft costs to a minimum.

Collecting, analyzing and acting on application runtime data poses unique challenges both in terms of the types of data that need to be gathered and the metrics that measure success. Effective application analytics implementations must accommodate the diversity of today’s applications and the emergence of cloud, mobile and distributed computing platforms. Narrower analytics technologies such as standard reports provided in a cloud service will never fully satisfy development and management objectives for corporations.

Existing analytics solutions have almost exclusively resided in the cloud. This makes perfect sense from a technological implementation standpoint for the analytics vendor. However, for companies with sensitive data, or those constrained by government regulation, storing data “in the cloud” is simply not an option. The only appropriate application analytics solution is one where data can be surfaced on a variety of endpoints (on-premises and/or off-premises) according to client-specific rules for compliance with relevant industry standards and regulations.

Comprehensive application analytics must support enterprise, B2B and B2C use cases including cloud, servers, web-based, traditional PC and mobile apps – and the data should stream within a private network or across public networks as well.

Our application analytics solutions achieve that objective. Let’s look at the pieces:

  • PreEmptive Analytics for TFS is a “Client-premises” or on-premises incident response solution that connects production incidents to development and operations via automated, intelligent, rule-driven creation and management of work items to decrease the mean time to fix an application.
  • PreEmptive Analytics Runtime Intelligence Service is a managed, multi-tenant service providing broad analytics and archival services – it’s a hassle-free, always up, analytics platform ideally suited to measure the most common metrics and KPIs.
  • PreEmptive Application Analytics Workbench is an on-premises solution that provides critical insight into the adoption, usage, performance, and impact of production applications to facilitate feedback-driven development, enhance software quality and user experience, and decrease the mean time to improve an application.

At this point you might be wondering which of these tools might be most useful to you now. That is where the Data Hub shines brightly.

The PreEmptive Analytics Data Hub is a client-premises endpoint that can be installed internally, on a “DMZ” server, or in the cloud - and it serves as the “one endpoint” for all of your applications, across all of our services – even as you expand and adjust your analytics strategies and implementations. The Data Hub monitors runtime data and routes that data to any/all other PreEmptive Analytics software and services (including other Data Hubs). The Data Hub is an enterprise-scale runtime data management and distribution service providing resilience (caching, retry and commit) and flexibility across architectures and platforms.

So you can instrument your apps, send their telemetry to the one endpoint you need – the Data Hub – and then slide in one or more of the available analytics solutions (including 3rd party solutions) that best meet your requirements. If your analytics toolset changes, you can make any necessary adjustment without having to re-instrument or redistribute your applications. Applications that do not have privacy or regulatory concerns could have runtime data forwarded to the cloud, while analytics for applications that touch more sensitive data can be kept internal. Runtime data can be sent to more than one place, providing a set of checks and balances. Flexible, powerful, secure, actionable… You can have your cake and eat it too.

Marketplaces Matter and I’ve Got the Analytics to Prove It

February 4th, 2013 by Sebastian Holst

As I’ve covered many times in earlier posts, I’ve used PreEmptive Analytics to instrument a family of mobile yoga apps from TheMobileYogi. These apps are deployed across iOS, Android, and Windows, and are packaged in a variety of ways. Two apps – Yoga-pedia (free) and A Pose for That (premium) – are direct-to-consumer using a “freemium” model that includes embedded ads inside Yoga-pedia. There is also a white-labeled app platform that can quickly generate a “re-skinned” app personalized for yoga studios, retailers, and other “wellness-centered” businesses. With all of these combined, I’m happy to report that we’ve passed the 110K download mark and are still growing by the thousands each week.

The Issue at Hand
One adoption/monetization “variable” that is rarely measured in a clean way is the impact/influence that an app’s marketplace can have on the success of the app itself. This is in large part a practical issue – it’s not easy to compare, for example, Apple’s App Store with Google Play because the apps themselves are often quite distinct from one another – and so isolating the marketplace influence from the apps themselves can be tricky. However, with Android, we publish identical apps through two very different marketplaces: Amazon’s Android App Store and Google’s Google Play marketplace. By focusing on apps that are identical in every way BUT the API calls to their respective marketplaces, we can start to drill into the direct and indirect consequences of marketplace selection.

Android makes up roughly 51% of TheMobileYogi downloads.
Android Downloads Graph
Android downloads combine both Amazon and Google Play adoption.

Android Downloads of Yoga-pedia
As of January 29, 2013, the total downloads of Yoga-pedia were:

  • 21,109 Amazon (36% of the total)
  • 36,981 Google Play (64% of the total) or said another way,
  • Google Play downloads were 75% greater than from Amazon.

…But downloads only tell a very small part of the story. What are users doing AFTER they download the app? How often do they use the app, for how long, and what exactly are they doing when they are inside?

Yoga-pedia Sessions
Using PreEmptive Analytics Runtime Intelligence, we see that there are in fact striking differences between the Google Play user population and the Amazon user population.
Amazon v Google Play Statistics
One glaring difference is the total number of users in each community.

The total number of unique users from Google Play is 208% higher than that of Amazon.

If we were to stop here, I think our conclusion would be obvious – Google Play delivers more downloads and more unique users than Amazon – and that has to make it a clear winner, right? (Note: there has been no difference in marketing, advertising, etc. between the two marketplaces – specifically, we have done none.)

…but if we were to stop here, we would be making a very big mistake!

How much time is spent inside the app?
Another glaring difference that our analytics reveal is the difference in the average session length of our users – Amazon users tend to stay inside the app almost 3 times longer!

So – if we multiply the total number of sessions by the average session length, we can calculate how many hours were spent inside Yoga-pedia.

  • Amazon: (41,937 sessions) X (13.88 minutes per session) = 9,701 hours
  • Google Play: (75,346 sessions) X (5.5 minutes per session) = 6,907 hours
  • Total time spent inside the app distributed through the Amazon marketplace is 40% higher than from Google Play.

If I am trying to maximize ad impressions, establish a brand, or hold my users’ attention toward some other objective, Amazon now looks significantly more attractive to me than Google Play.

User Behavior
Since Amazon users spend so much more time inside Yoga-pedia – how is their behavior different, and how does that translate into measurable value?

Returning Users
Returning Users Graph

Returning users (in orange) form the majority of the Amazon session activity – Google Play users are less likely to use the app multiple times – they are ‘tire kickers’ for the most part. The number of returning users is roughly equivalent across the two marketplaces even though there are many more Google Play users overall.

Returning users are loyal, and a lasting “relationship” can be established – whether you’re selling something, hoping to influence their behavior, or tapping their expertise – returning users are always “premium.”

Ad Click Through Rate (CTR)

Moving to a more concrete metric – we can compare total impressions, Ad Click Through Rates (CTR), and Ad Server Errors – for this analysis, we’re just looking at 30 days. Note: in both cases, the apps use AdMob.

[Table: ad impressions, ad delivery failures, ad failure rate, and click-through count for Google Play vs. Amazon]

Amazon CTR is 164% higher than the Google Play CTR.

Google Play Ad Delivery Failure rate (ADFR) is 199% higher than the Amazon ADFR.

Now, it’s not really possible to isolate WHY these differences exist – but we can make some educated guesses. For CTR percentages – are Amazon users simply more conditioned or more likely to buy stuff than the typical Google Play user?

For ADFR percentages, we’re using the same ad service API, so the ad service itself is not to blame. Are the devices being used by Google Play users (as a total population) of lower quality, or are they connecting through networks that are not as reliable?

Regardless, that kind of conversion delta is nothing to ignore.

As I’ve already mentioned, in addition to pushing ads, Yoga-pedia is one half of a freemium model where we hope to get these users to upgrade to our commercial version, A Pose for That.

With PreEmptive Analytics, I’ve instrumented the app to track the feature that takes a user back to their respective marketplace (positioned on the app upgrade page). The ratio of unique users (not sessions) to upgrade clicks tells another important story: how likely is an Amazon user versus a Google Play user to upgrade to our paid app?

[Table: upgrade clicks by marketplace, unique users, and conversion rate for Google Play vs. Amazon]

Amazon user conversion rate is 54% higher than the Google Play conversion rate.

User Behavior Within My App

Yoga-pedia offers its users two locations where a user can click to upgrade: in a “tell me more” page about the premium app and at the end of an “Intro” to the current Yoga-pedia app.

By looking at the split of where users are more likely to “convert,” we can learn something important about the app’s design in general AND about the differences in user patterns across marketplaces in particular. As a proportion, Amazon users are more likely to convert from the Intro page than their Google Play counterparts. The Intro page is “deeper” in the app (harder to find), so this difference in usage pattern may imply a more thorough reading of embedded pages by Amazon users (and this would be supported by the much longer session times).

Feature Upgrade Table

Exceptions not only interrupt a user’s experience (with all of the bad things that flow from that), they are also a material expense (support, development, etc.). Given that we are talking about two virtually identical apps – would we expect one version to be more unstable (and therefore more expensive) than the other?

[Table: errors per session for Google Play vs. Amazon]

Whether or not we expected it, the Google Play version of Yoga-pedia has an error rate per session that is 15% higher than its Amazon equivalent.

Again – the analytics at this level can’t tell us why – but we can still make an educated guess regarding the differences in phone type and network stability across the two populations.

Of course, if you want to drill down into the specific exceptions (and examine stack traces, device types, carriers, etc.) – all of that is available through analytics as well.

Here are exception details for the error rates described above. Anyone want to help me debug these?

Top Exceptions Table

Do Marketplaces Matter? Of Course They Do.
Of course, different apps will yield different results – but I don’t think there can be any question that each marketplace comes with its own unique bundle of user experience, service level, and general appeal – and that, taken together, these attract their own distinct constituencies (communities) with their own behaviors, likes, dislikes, and demographics.

App developers who choose to ignore the market, commerce, and security characteristics that come with each marketplace do so at their peril – the differences are real, they should influence your design and marketing requirements, and they will undoubtedly impact your bottom line and your chances of delivering a truly successful app.