When it comes to security verification, the first question that usually comes up is: can you break it? And if so, how difficult is it?
This question is valuable, as it relates directly to the risk of fraud. Indeed, if I know how easily an adversary can break a system, it becomes possible to understand the threat, perform a risk analysis and define a strategy to remediate or respond to the risk.
Looking for vulnerabilities requires bypassing several protections implemented in the mobile application, one after the other. The work mostly consists of stressing the solution through penetration testing, with the aim of exploring potential vulnerabilities. This requires a threat analysis, in other words defining at a preliminary stage what is at stake for the application: which sensitive assets an adversary may look for, and the related attack scenarios.
The outcome of such an analysis is a vulnerability analysis report whose conclusions highlight security findings. A finding typically describes an attack scenario compromising an asset. The report quantifies the effort needed to achieve this in terms of time, expertise or tooling, which is supposed to reflect the risk in the field. If the report is well written, it is also possible to get a good idea of how strong the protections in the mobile application were, at least along the attack paths tested.
The quality of the testing depends heavily on the expert and their ability to create or customize software tools to find a way through the protections. Giving the same code to two experts working in parallel will likely lead to different results, even if the scope of work is narrowly framed. The outcome of such a process is neither reproducible nor deterministic.
The cost of such an analysis is not negligible. It also requires finding a “good” expert to perform the job. Depending on the quality of the protections, the job can take from a few days to several weeks, particularly if it involves an in-depth vulnerability analysis with a code review. As a result, such verification does not scale.
Another valuable question about security is: am I confident that the protections are in place and effective? This question often comes second, although it is of primary importance. It is a matter of quality, yet it may be neglected because it cannot be tested like a normal feature. In security, the human is often the weakest point of a system. This is particularly true when complexity is involved, and adding security to a mobile application is complex.
This explains why performing verification on the final binary appears to be the right thing to do. Indeed, human mistakes may happen at different stages of the development process: a wrong file committed by a developer to the dependency repository, a poorly chosen parameter. There are many scenarios. But what process will detect the anomaly? And therefore, how can you be confident that all security mechanisms are in place?
This becomes particularly critical now that Continuous Integration / Continuous Deployment (CI/CD) processes are in use. Tool chains are automated to speed up the development cycle. As a consequence, the test phase and the quality assurance process must be in place to prevent the automatic propagation of an issue. The CI/CD process is a blessing when it takes the right input, but it can be damaging when it is fed corrupted input.
A security mechanism cannot be tested as easily as a functional feature. This assertion sums up a common problem in security: protections are designed to make attacks difficult, so they are not tied to a functional behaviour. To test a functional feature, one executes a sequence of operations to reach the specific feature and validates that it works. Testing a security feature, on the other hand, does not correspond to normal behaviour, and therefore requires a specific mode of operation.
This is a bit like the airbag system embedded in a car. Airbags are designed to mitigate the impact of an accident; during normal driving, there is no reason for them to be triggered. To test their effectiveness, there is no other way than to recreate the conditions of a crash and observe whether they behave as expected.
Security measures in a mobile application need to be tested in a similar way. It is necessary to provoke the conditions that trigger these measures. These conditions are supposed to reflect an adversary attempting to compromise the mobile application. Most of these security measures take effect while the application is running.
Let’s take the example of root detection, classically implemented in secure mobile applications. Rooting a device grants extra privileges to the user, which is often a prerequisite for maliciously instrumenting a mobile application. Strong root detection consequently makes attacks painful and harder to conduct. Beyond making the attacker work harder, it can also reveal whether the device of a legitimate user has been infected by malware. There are several techniques to perform such detection, and they must be enabled at the right time in the transaction. Taking the example of a mobile payment application: it should not be possible to perform a payment on a rooted device. But how do you check that these detections have been implemented properly? The most effective strategy is to create the conditions of someone rooting the device before the application starts, or while it is running. If the application detects the anomaly, the protection is effective.
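As an illustration, one classic root-detection technique probes the filesystem for artefacts that rooting tools leave behind. The minimal Java sketch below uses a few commonly cited Android paths as examples; the path list is illustrative, not exhaustive or authoritative, and real products combine several independent detection techniques and run them at sensitive moments of the transaction.

```java
import java.io.File;

// Minimal sketch of one root-detection technique: probing for
// well-known artefacts that rooting tools commonly leave on the device.
// The paths below are illustrative examples only; production-grade
// detection layers several independent techniques.
public class RootDetector {

    // Locations where the `su` binary or root-manager packages often appear
    private static final String[] SUSPICIOUS_PATHS = {
        "/system/bin/su",
        "/system/xbin/su",
        "/system/app/Superuser.apk"
    };

    // Returns true if any known rooting artefact is present on the filesystem
    public static boolean deviceLooksRooted() {
        for (String path : SUSPICIOUS_PATHS) {
            if (new File(path).exists()) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // Would typically be called just before a sensitive operation,
        // e.g. authorising a payment
        System.out.println("rooted=" + deviceLooksRooted());
    }
}
```

Verifying such a protection then means recreating the attacker’s conditions, for example placing a fake `su` binary on a test device, and observing whether the application reacts.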
As with all features in a piece of software, security protections have to be tested. This is necessary to gain assurance that everything is in order. For security protections, it requires handling things differently from verifying functional behaviours.
How to verify the effectiveness of the protections?
To achieve this, it is necessary to create the testing conditions for each individual protection, and these conditions differ from one protection to another. For example, testing root detection requires rooting the device at different moments of a transaction. Testing that code instrumentation is not possible requires installing an instrumentation framework on the device. These conditions do not belong to normal usage, which means that such testing cannot be performed on an unmodified device running the original software.
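To make the second example concrete, one common heuristic against code instrumentation is to probe for the Frida framework, whose server listens by default on TCP port 27042. The sketch below shows only this single heuristic, which is trivially defeated by changing the port; a real protection stacks several independent checks (scanning loaded libraries, named pipes, memory signatures, and so on).

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Minimal sketch of one anti-instrumentation heuristic: the Frida
// instrumentation framework's server listens on TCP port 27042 by default,
// so a quick local probe of that port can reveal its presence.
// This single check is easily bypassed; real protections combine many.
public class InstrumentationDetector {

    private static final int FRIDA_DEFAULT_PORT = 27042;

    // Returns true if something is listening on Frida's default port locally
    public static boolean fridaServerDetected() {
        try (Socket socket = new Socket()) {
            socket.connect(
                new InetSocketAddress("127.0.0.1", FRIDA_DEFAULT_PORT), 200);
            return true;  // connection succeeded: a server is listening
        } catch (IOException e) {
            return false; // nothing listening on the default port
        }
    }

    public static void main(String[] args) {
        System.out.println("frida=" + fridaServerDetected());
    }
}
```

Testing this protection therefore requires actually installing and starting the instrumentation framework on a device, which is precisely the kind of condition that lies outside normal usage.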
Modifying the runtime execution is common when conducting vulnerability analyses. At the end of a vulnerability analysis that includes penetration testing, the report should provide information about the strength of the security protections encountered, as they have been stressed during the testing. However, only the protections present on the attack path are evaluated. Another attacker with a different background may use a different attack path and never face a given protection. This means that the answer about the protections is not systematic either.
With regard to cost, time and coverage, a vulnerability analysis does not look like an optimal option: the effort is too costly for routine verification, which discourages running it systematically. A task dedicated to reviewing the presence of each protection could instead be designated as a security verification. The aim would be to perform this verification systematically on each update of the application.
It is important to understand that security verification and vulnerability analysis serve different purposes. Typically, if the security verification concludes that a protection is missing, this does not mean that the solution is vulnerable. It means that the application does not reach the expected quality of coding, and typically does not meet a defined best practice. Indeed, security protections are not directly related to a vulnerability, and overcoming one does not lead to direct exploitation. However, protections make the application harder to attack. Does a missing protection mean that the application’s vulnerabilities are more exposed? Yes, certainly. And this is why checking the presence of protections is necessary.
Take again the example of root detection. Root detection increases the difficulty for hackers to find their way into a mobile application. If root detection has not been enabled properly and turns out to be ineffective, does that mean the application is vulnerable? Not necessarily. However, we know that the application is more prone to attacks and its exposure to fraud is larger.
A naive way to undertake a security verification is to do it manually. Ideally, it should be performed on the mobile application binary, to target the solution as released in the field. Doing so requires a blend of knowledge, expertise and a minimum set of tools. The expert would have to create the conditions one after the other and observe whether the mobile application shows the right behaviour.
The most efficient way to perform a security verification is to use a dedicated automated tool. It is for this purpose that eShard developed the esChecker solution: from the mobile application binary, esChecker performs a set of security verifications and provides a simple way to review the results. It offers the industry the opportunity to perform systematic and deterministic verifications on their mobile applications.