The best way to learn about logic flaws is not by theorizing, but through acquaintance with some actual examples. Although individual instances of logic flaws differ hugely, they share many common themes, and they demonstrate the kinds of mistake that human developers will always be prone to making. Hence, insights gathered from studying a sample of logic flaws should help you to uncover new flaws in entirely different situations.
Example 1: Fooling a Password Change Function
The authors have encountered this logic flaw in a web application implemented by a financial services company and also in the AOL AIM Enterprise Gateway application.
The application implemented a password change function for end users. It required the user to fill out fields for username, existing password, new password, and confirm new password.
There was also a password change function for use by administrators. This allowed them to change the password of any user without the need to supply the existing password. The two functions were implemented within the same server-side script.
The client-side interface presented to users and administrators differed in one respect — the administrator’s interface did not contain a field for an existing password. When the server-side application processed a password change request, it used the presence or absence of the existing password parameter to indicate whether the request was from an administrator or an ordinary user.
In other words, it assumed that ordinary users would always supply an existing password parameter.
The code responsible looked something like this:
String existingPassword = request.getParameter("existingPassword");
if (null == existingPassword)
{
    trace("Old password not supplied, must be an administrator");
}
else
{
    trace("Verifying user's old password");
}
Once the assumption has been explicitly stated in this way, the logic flaw becomes obvious. Of course, an ordinary user can issue a request that does not contain an existing password parameter, because users control every aspect of the requests they issue.
This logic flaw was devastating for the application. It enabled an attacker to reset the password of any other user and so take full control of their account.
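A safer implementation decides which code path to take from the role recorded in the authenticated session, never from which parameters the client chose to send. A minimal sketch of that idea (the role names and the parameter map are illustrative assumptions, not the application's actual API):

```java
import java.util.Map;

public class PasswordChange {
    // Determine the code path from the authenticated session's role,
    // never from the presence or absence of a request parameter.
    static boolean requireOldPasswordCheck(String sessionRole,
                                           Map<String, String> params) {
        if ("ADMIN".equals(sessionRole)) {
            return false; // admins may reset a password without the old one
        }
        // Ordinary users must always present the existing password,
        // even if the client omitted the parameter entirely.
        String existing = params.get("existingPassword");
        if (existing == null || existing.isEmpty()) {
            throw new SecurityException("existing password required");
        }
        return true;
    }
}
```

With this structure, an ordinary user who strips the existingPassword parameter from the request is rejected rather than silently promoted to the administrative path.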
Example 2: Proceeding to Checkout
The authors encountered this logic flaw in the web application employed by an online retailer.
The process of placing an order involved the following stages:
1. Browse the product catalog and add items to the shopping basket.
2. Return to the shopping basket and finalize the order.
3. Enter payment information.
4. Enter delivery information.
The developers assumed that users would always access the stages in the intended sequence, because this was the order in which the stages were delivered to the user by the navigational links and forms presented to their browser. Hence, any user who completed the order process must have submitted satisfactory payment details along the way.
The developers’ assumption was flawed for fairly obvious reasons. Users control every request that they make to the application and so can access any stage of the ordering process in any sequence. By proceeding directly from stage 2 to stage 4, an attacker could generate an order that was finalized for delivery but that had not actually been paid for.
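The defense against this kind of attack is for the server to track, per session, the furthest stage the user has legitimately completed, and to reject any request that skips ahead. A minimal sketch of such state tracking (class and method names are hypothetical):

```java
public class CheckoutFlow {
    // The server records the furthest stage the user has legitimately
    // completed; a request may only enter the next stage in sequence.
    private int completedStage = 0;

    public void enterStage(int requestedStage) {
        if (requestedStage > completedStage + 1) {
            throw new IllegalStateException(
                "stage " + requestedStage + " requested before stage "
                + (completedStage + 1) + " was completed");
        }
        completedStage = Math.max(completedStage, requestedStage);
    }
}
```

Under this scheme, jumping from stage 2 (finalize order) straight to stage 4 (delivery) fails because stage 3 (payment) was never completed.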
Example 3: Rolling Your Own Insurance
The authors encountered this logic flaw in a web application deployed by a financial services company.
The application enabled users to obtain quotations for insurance, and if desired, complete and submit an insurance application online. The process was spread across a dozen stages, as follows:
■ At the first stage, the applicant submits some basic information, and specifies either a preferred monthly premium or the value the applicant wishes to insure. The application offers a quotation, computing whichever value the applicant did not specify.
■ Across several stages, the applicant supplies various other personal details, including health, occupation, and pastimes.
■ Finally, the application is transmitted to an underwriter working for the insurance company. Using the same web application, the underwriter reviews the details and decides whether to accept the application as is, or modify the initial quotation to reflect any additional risks.
Throughout the stages described, the application employed a shared component to process each parameter of user data submitted to it. This component parsed out all of the data in each POST request into name/value pairs, and updated its state information with each item of data received.
The component which processed user-supplied data assumed that each request would contain only the parameters that had been requested from the user in the relevant HTML form. Developers did not consider what would happen if a user submitted parameters that they had not been asked to supply.
Of course, the assumption was flawed, because users can submit arbitrary parameter names and values with every request. As a result, the core functionality of the application was broken in various ways:
■ An attacker could exploit the shared component to bypass all server-side input validation. At each stage of the quotation process, the application performed strict validation of the data expected at that stage, and rejected any data that failed this validation. But the shared component updated the application’s state with every parameter supplied by the user. Hence, if an attacker submitted data out of sequence, by supplying a name/value pair which the application expected at an earlier stage, then that data would be accepted and processed, with no validation having been performed. As it happened, this possibility paved the way for a stored cross-site scripting attack targeting the underwriter, which allowed a malicious user to access the personal information belonging to other applicants.
■ An attacker could buy insurance at an arbitrary price. At the first stage of the quotation process, the applicant specified either their preferred monthly premium or the value they wished to insure, and the application computed the other item accordingly. However, if a user supplied new values for either or both of these items at a later stage, then the application’s state was updated with these values. By submitting these parameters out of sequence, an attacker could obtain a quotation for insurance at an arbitrary value and arbitrary monthly premium.
■ There were no access controls regarding which parameters a given type of user could supply. When an underwriter reviewed a completed application, they updated various items of data, including the acceptance decision. This data was processed by the shared component in the same way as for data supplied by an ordinary user. If an attacker knew or guessed the parameter names used when the underwriter reviewed an application, then the attacker could simply submit these, thereby accepting their own application without any actual underwriting.
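All three of these attacks stem from the shared component accepting any parameter at any stage. One common remedy, sketched here under assumed names, is for each stage to declare a whitelist of the parameter names it may accept, dropping (or rejecting) everything else:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class QuoteState {
    // Hypothetical fix: each stage declares exactly which parameter names
    // it accepts; everything else in the request is dropped, so a value
    // such as the computed premium, or an underwriter-only field like the
    // acceptance decision, cannot be set by a parameter submitted out of
    // sequence or out of role.
    static Map<String, String> filterForStage(Set<String> allowed,
                                              Map<String, String> submitted) {
        Map<String, String> accepted = new HashMap<>();
        for (Map.Entry<String, String> e : submitted.entrySet()) {
            if (allowed.contains(e.getKey())) {
                accepted.put(e.getKey(), e.getValue());
            }
        }
        return accepted;
    }
}
```

The same whitelist mechanism, keyed additionally on the user's role, would also have closed the underwriter-parameter hole described in the third bullet.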
Example 4: Breaking the Bank
The authors encountered this logic flaw in the web application deployed by a major financial services company.
The application enabled existing customers who did not already use the online application to register to do so. New users were required to supply some basic personal information, to provide a degree of assurance of their identity. This information included name, address, and date of birth, but did not include anything secret such as an existing password or PIN number.
When this information had been correctly entered, the application forwarded the registration request to back-end systems for processing. An information pack was mailed to the user’s registered home address. This pack included instructions for activating their online access via a telephone call to the company’s call center and also a one-time password to use when first logging in to the application.
The application’s designers believed that this mechanism provided a very robust defense against unauthorized access to the application. The mechanism implemented three layers of protection:
■ A modest amount of personal data was required up front, to deter a malicious attacker or mischievous user from attempting to initiate the registration process on other users’ behalf.
■ The process involved transmitting a key secret out-of-band to the customer’s registered home address. Any attacker would need to have access to the victim’s personal mail.
■ The customer was required to telephone the call center and authenticate himself there in the usual way, based on personal information and selected digits from a PIN number.
This design was indeed robust. The logic flaw lay in the actual implementation of the mechanism.
The developers implementing the registration mechanism needed a way to store the personal data submitted by the user and correlate this with a unique customer identity within the company’s database. Keen to reuse existing code, they came across the following class, which appeared to serve their purposes:
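A minimal sketch of such a class, assuming the CCustomer name that appears later in this example (the fields shown are illustrative assumptions, not the company's actual code):

```java
// Sketch of the reused class: a plain holder for a customer's identity
// details. Only the class name is given in this example; the fields are
// hypothetical.
class CCustomer {
    String forename;
    String surname;
    String custNumber; // the company-wide unique customer number
}
```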
After the user’s information was captured, this object was instantiated, populated with the supplied information, and stored in the user’s session. The application then verified the user’s details, and if they were valid, retrieved that user’s unique customer number, which was used in all of the company’s systems. This number was added to the object, together with some other useful information about the user. The object was then transmitted to the relevant back-end system for the registration request to be processed.
The developers assumed that making use of this code component was harmless and would not lead to any security problem. However, the assumption was flawed, with serious consequences.
The same code component that was incorporated into the registration functionality was also used elsewhere within the application, including within the core functionality, which gave authenticated users access to account details, statements, funds transfers, and other information. When a registered user successfully authenticated herself to the application, this same object was instantiated and saved in her session to store key information about her identity. The majority of the functionality within the application referenced the information within this object in order to carry out its actions — for example, the account details presented to the user on her main page were generated on the basis of the unique customer number contained within this object.
The way in which the code component was already being employed within the application meant that the developers' assumption was flawed, and the manner in which they reused it did indeed open up a significant vulnerability.
Although the vulnerability was serious, it was in fact relatively subtle to detect and exploit. Access to the main application functionality was protected by access controls at several layers, and a user needed to have a fully authenticated session to pass these controls. To exploit the logic flaw, therefore, an attacker needed to perform the following steps:
■ Log in to the application using his own valid account credentials.
■ Using the resulting authenticated session, access the registration functionality and submit a different customer’s personal information. This causes the application to overwrite the original CCustomer object in the attacker’s session with a new object relating to the targeted customer.
■ Return to the main application functionality and access the other customer’s account.
A vulnerability of this kind is not straightforward to detect when probing the application from a black-box perspective. However, it is also hard to identify when reviewing or writing the actual source code. Without a clear understanding of the application as a whole and the use made of different components in different areas, the flawed assumption made by developers may not be evident. Of course, clearly commented source code and design documentation would reduce the likelihood of such a defect being introduced or remaining undetected.
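A simple design fix, sketched here with assumed names, is to keep the registration applicant's data and the authenticated user's identity under different session keys, so that the registration workflow can never overwrite the object that the access controls and account functions rely on:

```java
import java.util.HashMap;
import java.util.Map;

public class Session {
    // Hypothetical fix: registration state and authenticated identity are
    // stored under distinct keys, so submitting another customer's details
    // to the registration function cannot replace the identity object
    // that the rest of the application trusts.
    private final Map<String, Object> attrs = new HashMap<>();

    public void setAuthenticatedCustomer(Object c) { attrs.put("auth.customer", c); }
    public void setRegistrationApplicant(Object c) { attrs.put("reg.applicant", c); }
    public Object authenticatedCustomer() { return attrs.get("auth.customer"); }
}
```

With this separation, the attack in the steps above fails at the third step: the main application functionality still sees the attacker's own identity, not the targeted customer's.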
Example 5: Erasing an Audit Trail
The authors encountered this logic flaw in a web application used in a call center.
The application implemented various functions enabling helpdesk personnel and administrators to support and manage a large user base. Many of these functions were security-sensitive, including the creation of accounts and the resetting of passwords. Hence, the application maintained a full audit trail, recording every action performed and the identity of the user responsible. The application included a function allowing administrators to delete audit trail entries. However, to protect this function from being maliciously exploited, any use of the function was itself recorded, so the audit trail would indicate the identity of the user responsible.
The designers of the application believed that it would be impossible for a malicious user to perform an undesirable action without leaving some evidence in the audit trail that would link them to the action. An attempt by an administrator to cleanse the audit logs altogether would always leave one last entry that would point the finger of suspicion at them.
The designers’ assumption was flawed, and it was possible for a malicious administrative user to carry out arbitrary actions without leaving any evidence within the audit trail that could identify them as responsible. The steps required are:
1. Log in using your own account, and create a second user account.
2. Assign all of your privileges to the new account.
3. Use the new account to perform a malicious action of your choice.
4. Use the new account to delete all of the audit log entries generated by the first three steps.
Each of these actions generates entries in the audit log. However, in the last step, the attacker deletes all of the entries created by the preceding actions. The audit log now contains a single suspicious entry, indicating that some log entries were deleted by a specific user — that is, by the new user account that was created by the attacker. However, because the previous log entries have been deleted, there is nothing in the logs to link the attacker to anything suspicious. The perfect crime.
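One way to defeat this attack, sketched below under assumed names, is to write security-critical events (account creation, privilege grants, and log deletion itself) to a separate append-only log that no application function can erase. The chain linking the attacker's original account to the newly created one then always survives:

```java
import java.util.ArrayList;
import java.util.List;

public class AuditLog {
    private final List<String> entries = new ArrayList<>();
    // Security-critical events go to a protected, append-only log that
    // the delete function cannot touch, so deleting the ordinary entries
    // never severs the link between the attacker and the puppet account.
    private final List<String> protectedLog = new ArrayList<>();

    public void record(String actor, String action) {
        entries.add(actor + ": " + action);
    }

    public void recordCritical(String actor, String action) {
        protectedLog.add(actor + ": " + action);
    }

    public void deleteEntries(String actor) {
        recordCritical(actor, "deleted " + entries.size() + " log entries");
        entries.clear(); // ordinary entries can go; the protected log cannot
    }

    public List<String> protectedLog() { return List.copyOf(protectedLog); }
}
```

Applied to the attack above, the protected log would retain both the record of the attacker creating the second account and the record of that account deleting the logs, so the "perfect crime" leaves a trail after all.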