Vulnerability Analysis:
Over the past few years, our team has manually analyzed numerous vulnerabilities in open-source software: a critical design flaw that eventually led to the retrieval of encrypted mails in a widely used mail client's add-on that sends and receives digitally signed and encrypted messages using the OpenPGP standard; a simple implementation flaw concerning the improper neutralization of script-related HTML tags in a popular web framework; and a memory corruption issue caused by miscalculating a buffer's length by just one element (an off-by-one error, or OBOE) in an image processing system's library (though successful exploitation of memory corruption issues depends on multiple factors).
A few consolidated observations on how software development addresses security:
- The fundamental idea of `trust`, which involves enforcing internal and external trust boundaries during the design phase, was not given enough importance; the focus was predominantly on software usability.
- Unrealistic assumptions were made about the source of a communication.
- Exceptional conditions were handled improperly.
(In)Security Policy:
Most fundamental security issues could have been handled early, if a decent security policy had been drafted: a set of security considerations on what actions can and cannot be permitted, which is then reflected in the code implementation. A lack of adequate trust boundaries can lead to a number of vulnerabilities.
For example, consider a web application's role-based authorization flaw where proper authorization checks against a critical resource are not in place, allowing a non-privileged user to view or modify sensitive information they were never intended to access. A rule forbidding non-privileged users from accessing a critical resource, established by forming a trust boundary between privileged and non-privileged users in the software system, must be addressed in the security policies. Consider a non-trivial improper-authorization scenario in a popular object-relational database system with a huge code base: while `upserting`, there were no checks in place against the table permissions and row-level security policies to verify whether the executing user had permission to perform a `SELECT` operation. Identifying authorization flaws in large and complex code bases is far from straightforward.
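As an illustration, here is a minimal sketch of such a trust boundary enforced in code; all names (`Role`, `require_privilege`, `view_sensitive_record`) are hypothetical and not taken from any of the referenced projects:

```python
from enum import Enum, auto

class Role(Enum):
    NON_PRIVILEGED = auto()
    PRIVILEGED = auto()

class User:
    def __init__(self, name: str, role: Role):
        self.name = name
        self.role = role

def require_privilege(user: User) -> None:
    # The trust boundary: deny by default unless the caller is privileged.
    if user.role is not Role.PRIVILEGED:
        raise PermissionError(f"{user.name} may not access this resource")

def view_sensitive_record(user: User, record_id: int) -> str:
    require_privilege(user)  # the authorization check precedes the action
    return f"contents of sensitive record {record_id}"

view_sensitive_record(User("alice", Role.PRIVILEGED), 42)       # allowed
# view_sensitive_record(User("mallory", Role.NON_PRIVILEGED), 42)  -> PermissionError
```

The value of stating the deny-by-default rule in the security policy is that a missing `require_privilege` call becomes a detectable policy violation rather than an unnoticed omission.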
The severity of a security vulnerability is usually determined by a numerical score (such as a CVSS score) calculated from the vulnerability's characteristics. However, the consequences of a successful exploit, i.e., the potential impact of a vulnerability, are expressed in terms of Confidentiality, Integrity, and Availability. Hence, all scenarios that have the potential to impact the confidentiality, integrity, or availability of the system should ideally be addressed in the security policies.
It's 2018; most security-conscious developers will not attempt the feat of designing and implementing a custom encryption or hashing algorithm. Apart from preferring standard, strong cryptographic algorithms for encryption or hashing purposes, the next considerations in the security policy must be (a sketch follows this list):
- Whether the sensitive information is encrypted before storage or transmission.
- Whether the encryption keys used are managed in a secure manner.
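A minimal sketch of both points, assuming the third-party `cryptography` package; the environment-variable name and the stored field are illustrative assumptions:

```python
# pip install cryptography
import os
from cryptography.fernet import Fernet

# Key management: the key is injected from the environment (in practice,
# from a key-management service), never hard-coded or stored with the data.
_key = os.environ.get("APP_ENC_KEY")
_fernet = Fernet(_key.encode() if _key else Fernet.generate_key())

def store_ssn(ssn: str) -> bytes:
    """Encrypt sensitive data before it is persisted or transmitted."""
    return _fernet.encrypt(ssn.encode())  # only the ciphertext is stored

def load_ssn(token: bytes) -> str:
    return _fernet.decrypt(token).decode()
```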
Implementing cryptographic best practices could address a majority of fundamental confidentiality and integrity concerns, such as:
- Whether the chosen cryptographic algorithm is implemented properly, as per its standard (e.g., the relevant RFC).
- Whether the Initialization Vector (IV) is being reused.
- Whether the chosen cryptographic hash function is collision-resistant and whether random salt values are used (if no salt is used, rainbow-table attacks could succeed); see the sketch after the note below.
Note: The above-mentioned scenarios are prevalent and get flagged under the `Insecure cryptographic implementations` category during most penetration testing engagements.
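A minimal sketch of the last two considerations, again assuming the `cryptography` package; the nonce size, salt size, and iteration count are illustrative assumptions:

```python
import hashlib
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # A fresh random nonce (IV) per message: reusing a nonce under the
    # same AES-GCM key destroys both confidentiality and integrity.
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def hash_password(password: str) -> tuple[bytes, bytes]:
    # A random per-password salt defeats precomputed rainbow tables.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

key = AESGCM.generate_key(bit_length=256)
ciphertext = encrypt(key, b"top secret")
```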
A Few Approaches:
Approaches vary according to the target application and the type of vulnerability being analyzed. Analyzing a simple web-application business-logic flaw or an improper user-input sanitization issue consumes less time and effort, usually by forward-tracing the data flow. That is not the case when analyzing memory corruption issues. For example, while analyzing a simple memory corruption issue (sometimes one is fortunate enough to obtain the respective crash's ASAN report) in a library of a few KLOC, tracing the code backwards, observing the control flow across multiple functions in the call hierarchy, and determining the vectors causing the issue is a productive, straightforward approach. When analyzing a harder memory corruption issue in an application with a large code base and custom memory allocators (especially in C++, e.g., OpenCV), a working crash PoC might be required to debug the application binary and analyze the flow while observing corrupted structure members. In the most complex cases, malformed input from an untrusted source disrupts adjacent data structures at the location where an affected structure member is populated or used in a logical operation, and the crash then triggers at a different location, where the disrupted structure member is actually consumed.
However, the application's operational environment does not come under the purview of the analysis; instead, a general assumption is made that the environment follows best practices. Hence, the following scenarios are not usually considered (these are general, not platform-specific):
- Whether the host on which the application is installed is deployed in a properly segmented network.
- Whether the deployed host is configured with EMET or additional security enhancements for its kernel (e.g., PaX, SELinux).
- Whether the application is configured with non-default settings.
- Whether the application's binary is compiled with stack-smashing protection, ASLR, DEP/NX, heap protection mechanisms, etc.
Conclusion:
There are lofty expectations of uncovering most of an application's security loopholes while depending heavily on black-box penetration testing methodologies or COTS source-code analysis tools. Such assessments are not ideally complete, and such expectations tend not to be met, because the fundamental notion of a black-box assessment depends entirely on the auditor's `understanding` of the application. Furthermore, most source-code analysis tools (whether COTS or open source) simply identify vulnerable code patterns by performing lexical analysis: using custom lexical specifications, they perform a set of NFA and DFA transitions, match the input code against designated rule sets (e.g., empty try-catch blocks, variables that are never used), and then report whether a particular piece of code is flagged. Thus, the importance of a manual audit is immense.
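To make that limitation concrete, here is a toy sketch, of our own construction and not the implementation of any particular tool, of the kind of lexical pattern matching described above; a rule this shallow can flag an empty exception handler but knows nothing about trust boundaries or data flow:

```python
import re

# A tiny "rule set" of lexical patterns, in the spirit of pattern-matching
# scanners (real tools compile such rules into NFA/DFA machinery).
RULES = {
    "empty-except-block": re.compile(r"except[^\n:]*:\s*\n\s*pass\b"),
    "bare-except": re.compile(r"except\s*:"),
}

def scan(source: str) -> list[str]:
    # Flag every rule whose pattern occurs anywhere in the source text.
    return [name for name, pattern in RULES.items() if pattern.search(source)]

print(scan("try:\n    risky()\nexcept Exception:\n    pass\n"))
# -> ['empty-except-block']
```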
The people who developed a piece of software have an obvious advantage in knowing its internals. Therefore, the effort of enforcing a proper security policy (without deviating toward prioritizing usability concerns), properly identifying the trust boundaries, and performing basic threat-modelling activities before the implementation phase will definitely not be futile: the software is hardened in a meaningful way even before the implementation phase.
References:
- CVE-2017-17844. Fix commit: https://sourceforge.net/p/enigmail/source/ci/9cd82c5bd7b816525a85eb0d8ddf3accd96097f9
- CVE-2008-2302. Fix commit: https://github.com/django/django/commit/7791e5c050cebf86d868c5dab7092185b125fdc9
- CVE-2017-14314. Fix commit: http://hg.code.sf.net/p/graphicsmagick/code/rev/2835184bfb78
- CVE-2017-15099. Fix commit: https://github.com/postgres/postgres/commit/3f80895723037c0d1c684dbdd50b7e03453df90f
Credit: ACE Team - Loginsoft