Defining the Edge of Responsibility in Mobile Applications

Thursday, September 15, 2011

Rafal Los


If you work in banking, you've probably already had several of these discussions with your risk or fraud teams over the years. 

At what point does the responsibility of the vendor (you) stop, and the responsibility of the customer begin? 

This is a particularly difficult question when things like banking fraud come up, because we know through tough experience that even out-of-band PIN codes are relatively meaningless when the malware is living in your browser or on your computer and manipulating your transactions.

There are companies like Bank of America that offer, free of charge mind you, protective technologies from their business partners (Bank of America page here: https://www.bankofamerica.com/privacy/Control.do?body=privacysecur_sec_solutions), but typically these come with no warranties, little support, and are probably buried deep in the site somewhere. 

Here's the fundamental question though... where does your organization transfer responsibility to the customer?  More importantly, how do you define and communicate that?

As more and more functionality is moved to a browser or mobile device that isn't under the vendor or enterprise's control, we have to start to draw very clear lines between what I'm responsible for as the supplier of a service, and what you're responsible for as a consumer.  This carries over nicely into the Cloud Computing world, but today I want to address it from an application security perspective.

Think about when you write software, and specifications around input and output (data handling) and authentication/authorization are discussed as requirements.  For many years developers have been told not to trust anything not coming from their system...

But what if their system (say, a back-end web service) is communicating with a mobile platform (say, an application on an iPhone or Android or... whatever)?  Logically, the same rules apply, right?  It depends on who you ask, apparently.
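One way to make that rule concrete: treat anything a mobile client sends to your back-end web service exactly like any other untrusted input, and validate it server-side before acting on it. Here's a minimal sketch; the field names and limits are hypothetical, purely for illustration:

```python
# Sketch: server-side validation of a payload from a mobile client.
# Field names ("account_id", "amount") and limits are hypothetical.
from decimal import Decimal, InvalidOperation

def validate_transfer_request(payload: dict) -> list:
    """Return a list of validation errors; an empty list means acceptable."""
    errors = []

    account = payload.get("account_id", "")
    if not (isinstance(account, str) and account.isdigit() and len(account) == 10):
        errors.append("account_id must be a 10-digit string")

    try:
        amount = Decimal(str(payload.get("amount", "")))
        if amount <= 0 or amount > Decimal("10000"):
            errors.append("amount must be between 0 and 10000")
    except InvalidOperation:
        errors.append("amount is not a valid number")

    return errors
```

The point isn't the specific checks - it's that the checks live on the server, where the mobile client can't tamper with them.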

To see it slightly differently, an application that uses some home-built code on a mobile platform to authenticate a user can do it the easy way (verify the identity on the end client) or do it the hard way (verify the identity through multiple client-to-server transactions). 

Given the speed at which apps are expected to perform, it's easier to either cache or simply write the routine on the client side - and then trust it.  The perils of this are obvious if you're a security professional, and even to some developers, yet many applications are found trusting the platform they're installed on simply because developers don't know how easy these platforms are to tamper with.
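The "hard way" doesn't have to be slow. One common pattern is for the server to issue a signed token at login and re-verify it on every subsequent request, so the client never gets to simply assert "I'm authenticated." A rough sketch, with key handling simplified for illustration (the secret and lifetimes here are made up):

```python
# Sketch: the server re-verifies identity on every request via an
# HMAC-signed token, rather than trusting client-side auth state.
import hmac
import hashlib
import time

SECRET_KEY = b"server-side-secret"  # hypothetical; keep in a real secret store

def issue_token(user_id, now=None):
    """Issue a token of the form user:timestamp:signature."""
    ts = str(int(now if now is not None else time.time()))
    msg = ("%s:%s" % (user_id, ts)).encode()
    sig = hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()
    return "%s:%s:%s" % (user_id, ts, sig)

def verify_token(token, max_age=900, now=None):
    """Reject tokens that are malformed, tampered with, or stale."""
    try:
        user_id, ts, sig = token.rsplit(":", 2)
    except ValueError:
        return False
    expected = hmac.new(SECRET_KEY, ("%s:%s" % (user_id, ts)).encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # signature doesn't match: token was tampered with
    current = now if now is not None else time.time()
    return current - int(ts) <= max_age  # expire old tokens
```

The client merely carries the token; all the trust decisions happen on the server side.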

So at what point does liability for misuse transfer from vendor to user?  When you develop your applications for the mobile market, do you consider these types of issues?  Do you spell out the limitations of liability in clear, human-readable terms and help users protect themselves - or do you hope that the user's machine isn't one of the vast number of compromised and trojaned hosts out there?  More importantly even... how do you deal with hostile hosts like this?

How does a developer write code that maintains functionality in a hostile environment like a mobile platform?  There is a clear shift that must happen in trust - a shift toward prioritizing security over functionality... but how does one accomplish that in today's business climate?  Are there any quick tips for making as few mistakes as possible?

  • Use built-in cryptographic functionality whenever possible.  Generally, the crypto functions on the mobile platform or remote system are better than anything you can build yourself
  • Require step-validation for each critical transaction with a nonce, to ensure not only that each transaction is the intended transaction, but that it actually comes from the user, using your application as intended
  • Assume zero trust in the system - whether it's a desktop browser, a kiosk computer, or a mobile phone/tablet; this is the only way to ensure you've accounted for all the failure modes
  • Use the mobile device or browser only as a presentation layer rather than attempting to do any business logic on the remote device; this ensures that critical business logic runs on the server side, where you can monitor and inspect it
  • Use the appropriate level of security for data in motion - if you're sending credentials or other critical data to web services, it may be worth adding protection beyond plain RESTful calls, while a news feed may be just fine over a simple JSON request
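The step-validation tip above can be sketched in a few lines: the server issues a one-time random nonce for each pending transaction, and will only commit if that exact nonce comes back - exactly once. This is a minimal illustration; the in-memory dictionary stands in for a real server-side data store:

```python
# Sketch: per-transaction step-validation with a one-time nonce.
# The server issues the nonce, and a replayed or forged nonce is rejected.
import secrets

_pending = {}  # nonce -> transaction id (stand-in for a real datastore)

def begin_transaction(txn_id):
    """Issue a fresh nonce the client must echo back to confirm."""
    nonce = secrets.token_hex(16)
    _pending[nonce] = txn_id
    return nonce

def confirm_transaction(txn_id, nonce):
    """Commit only if the nonce matches this transaction and is unused."""
    if _pending.get(nonce) != txn_id:
        return False  # unknown, mismatched, or already-consumed nonce
    del _pending[nonce]  # consume the nonce so any replay fails
    return True
```

Because the nonce is generated server-side and consumed on first use, a piece of malware replaying or rewriting a captured transaction gets rejected, even if it fully controls the client.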

In the end, it is critical to let your customers know where your responsibility ends - and to understand this yourself, too.  One of the most dangerous things an organization can do is try to push that perimeter too far and protect every client... that can become not only incredibly costly, but also incredibly difficult to defend in court! 

Make sure your developers, program and project managers, and security staff know where your line of responsibility sits, and communicate it to your customers.

Cross-posted from Following the White Rabbit
