The big software programs that run on PCs today consist of thousands — and sometimes even millions — of lines of source code (the step-by-step instructions that give the program its functionality and personality). That’s a lot to keep track of — and (as with anything made by people) flaws can creep in.
Software can be as complex as any mechanical orchestration of gears, shafts, pulleys, bearings, levers, switches, and so on. But unlike physical machinery, software is almost entirely abstract — it’s a set of electronic instructions. You can’t just hold it in your hand or put a wrench on it to tighten it up; that makes it harder to examine for strength, robustness, resiliency, and integrity.
It’s difficult to watch software’s inner workings in action: there’s rarely anything you can actually see. Even a word-processing program consists mostly of mathematical calculations, data buffers, table lookups, device management, and other steps that push electrons around, far removed from what we see on the screen.
Challenges like this make it difficult to know whether a complex software program is error-free and whether it does exactly what it’s supposed to do — and nothing else.
The problem of ensuring that a single software program is error-free is compounded by the way that computers and networks interact today. Within a single computer, dozens of different programs may be running at once, exchanging information with one another on a variety of topics.
But some people work day and night looking for just such obscure situations. Their very occupation is driven by the need to find flaws, especially those that can be exploited.
Errors can make software programs function in ways other than those the makers intended. Many errors in common programs (such as word processors) aren’t even observable; they happen somewhere inside the program, where they’re sometimes hard to detect. Those that are observable are commonly called “bugs.”
There is one type of bug in a software program that is called a vulnerability. The word vulnerability implies some type of weakness. When a person is vulnerable, he or she can be hurt more easily. Likewise, a vulnerability — also known as a security flaw or security hole — means a program is (in effect) gullible; certain conditions or instructions can make it perform some function that it should not be allowed to do. This would be kind of like the not-so-diligent security guard who stops watching the building’s entrance when a certain pretty woman walks in and starts sweet-talking him, permitting the crooks with the loot to stroll right by unnoticed.
Serious vulnerabilities permit a program to perform functions that corrupt or damage software or information on the computer. Some vulnerabilities are so critical that they are easily exploited by persons with advanced knowledge and ill intent. The result? Reprehensible: the release of damaging viruses, Trojan horses, and Internet worms, causing banks to close, airlines to cancel flights, and e-voting machines to elect robots to public office (okay, so maybe not all vulnerabilities are bad).
Closing the holes
A natural response to the threat of viruses is to fix the vulnerabilities. When a software company such as Microsoft discovers a vulnerability in one of its software programs, the response is to devise some sort of change to the program. The change must permit the program to continue functioning correctly, but eliminate the vulnerability. Not always easy.
When a vulnerability is discovered, the software company assigns the task of designing a patch to one or more junior programmers (think I’m kidding?) — a patch is nothing more than a correctly rewritten part of the existing computer program. The assignment is to change a bit of the flawed program in a way that — with any luck — permits it to retain all its intended functionality, eliminating only the security hole.
To simplify the patching procedure, software companies rarely ship just a newer version of a program file; instead, they package the corrected file within another program that installs the patch for you. So rather than having to wade through instructions that tell you (among other things) to replace the file winnt/system32/dcom3.dll dated February 3, 2004, with the file by the same name dated April 4, 2004, all you need to do is double-click the installer program. (Whew! That was a close one.)
The installer program can also do a lot of checking and testing. Most patches only work with one particular version of a program, so the installer program must perform some other checks:
- It checks the version of the program being patched.
- It checks to see whether the patch has already been installed.
- It determines whether there’s enough disk space on the computer for the patch to be installed without causing errors.
- It creates some log-file entries to document when this patch was installed on the computer.
- It creates entries in a special “uninstall” directory for later use if you decide to remove the patch.
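The checks above can be sketched in a few lines of Python. This is only an illustration of the idea, not any real installer’s logic: the version number, patch identifier, and sizes below are all invented for the example.

```python
# Hypothetical sketch of a patch installer's pre-install checks and
# bookkeeping. All names and numbers here are invented for illustration;
# real installers are far more elaborate.

REQUIRED_VERSION = "5.1"      # the one program version this patch fits
PATCH_ID = "KB-0001"          # made-up identifier for this patch
PATCH_SIZE_BYTES = 2_000_000  # disk space the patch needs

def can_install(installed_version, installed_patches, free_bytes):
    """Run the pre-install checks; return (ok, reason)."""
    if installed_version != REQUIRED_VERSION:
        return False, "wrong program version"
    if PATCH_ID in installed_patches:
        return False, "patch already installed"
    if free_bytes < PATCH_SIZE_BYTES:
        return False, "not enough disk space"
    return True, "ok"

def record_install(log, uninstall_info, backup_of_old_file):
    """After a successful install: write a log entry, and stash what
    the uninstaller would need to put the old file back later."""
    log.append(f"{PATCH_ID} installed")
    uninstall_info[PATCH_ID] = backup_of_old_file
```

If any check fails, a well-behaved installer stops and tells you why, rather than quietly overwriting files it shouldn’t.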