
The Problems with Patching Software (2003)


Early one Saturday morning last January, from a computer located somewhere within the seven continents, or possibly on the four oceans, someone sent 376 bytes of code inside a single data packet to a SQL Server. That packet―which would come to be known as the Slammer worm―infected the server by sneaking in through UDP port 1434. From there it generated a set of random IP addresses and scanned them. When it found a vulnerable host, Slammer infected it, and from its new host generated still more random addresses to hungrily scan for vulnerable hosts.

Slammer was a nasty bugger. In the first minute of its life, it doubled the number of machines it infected every 8.5 seconds. (Just to put that in perspective, in July 2001 the famous Code Red virus doubled its infections every 37 minutes. Slammer peaked in just three minutes, at which point it was scanning 55 million targets per second.)
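
To get a feel for what a doubling time of 8.5 seconds means, consider a back-of-the-envelope model. The sketch below (in Python, with illustrative numbers) assumes nothing but unconstrained doubling from a single infected machine; it ignores the bandwidth self-congestion that actually throttled Slammer, which is exactly the point: an unconstrained curve like this blows past the tens of thousands of vulnerable servers that existed within a few minutes, while Code Red’s curve barely moves.

    # Idealized growth model: infected hosts doubling at a fixed interval,
    # starting from one machine. Real outbreaks saturate; this one does not.
    def infected_after(seconds, doubling_time, initial=1):
        """Hosts infected after `seconds`, doubling every `doubling_time` seconds."""
        return initial * 2 ** (seconds / doubling_time)

    SLAMMER_DOUBLING = 8.5        # seconds, during Slammer's first minute
    CODE_RED_DOUBLING = 37 * 60   # Code Red's 37 minutes, in seconds

    for label, d in [("Slammer", SLAMMER_DOUBLING), ("Code Red", CODE_RED_DOUBLING)]:
        print(f"{label}: ~{infected_after(60, d):,.0f} hosts at 1 minute, "
              f"~{infected_after(600, d):,.0f} at 10 minutes")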

Then, Slammer started to decelerate, a victim of its own startling efficiency as it bumped into its own scanning traffic. Still, by the 10-minute mark, 90 percent of all vulnerable machines on the planet were infected. But when Slammer subsided, talk focused on how much worse it would have been had Slammer hit on a weekday or, worse, carried a destructive payload.

Slammer’s maniacal binge occurred a full six months after Microsoft had released a patch to prevent it. Those looking to cast blame―and there were many―cried a familiar refrain: If everyone had just patched his system in the first place, Slammer wouldn’t have happened.

But that’s not true. And therein lies our story.

Slammer was unstoppable. Which points to a bigger issue: Patching no longer works.

Partly, it’s a volume problem. There are simply too many vulnerabilities requiring too many combinations of patches coming too fast. Picture Lucy and Ethel in the chocolate factory―just take out the humor.

But perhaps more important and less well understood, it’s a process problem. The current manufacturing process for patches―from disclosure of a vulnerability to the creation and distribution of the updated code―makes patching untenable. At the same time, the only way to fix insecure post-release software (in other words, all software) is with patches.

This Hobson’s choice has taken patching and the newly minted discipline associated with it, patch management, into the realm of the absurd.

Hardly surprising, then, that philosophies on what to do next have bifurcated. Depending on whom you ask, it’s either time to patch less―replacing the process with vigorous best practices and a little bit of risk analysis―or it’s time to patch more―by automating the process with, yes, more software.

"We’re between a rock and a hard place," says Bob Wynn, former CISO of the state of Georgia. "No one can manage this effectively. I can’t just automatically deploy a patch. And because the time it takes for a virus to spread is so compressed now, I don’t have time to test them before I patch either."

How to Build a Monster

Patching is, by most accounts, as old as software itself. Unique among engineered artifacts, software is not beholden to the laws of physics; it can endure fundamental change relatively easily even after it’s been "built." Automobile engines, by contrast, don’t take to piston redesigns once they roll off the assembly line nearly so well.

This unique characteristic of software has contributed to a software engineering culture that generally regards quality and security as obstacles. An adage among programmers suggests that when it comes to software, you can pick only two of three: speed to market, number of features, level of quality. Programmers’ egos are wrapped up in the first two; rarely do they pick the third (since, of course, software is so easily repaired later, by someone else).

Such an approach has never been more dangerous. Software today is massive (Windows XP contains 45 million lines of code), and the rate of sloppy coding (10 to 20 errors per 1,000 lines of code) has led to thousands of vulnerabilities. CERT published 4,200 new vulnerabilities last year―that’s 3,000 more than it published three years ago. Meanwhile, software continues to find itself running ever more critical business functions, where its failure carries profound implications. In other words, right when quality should be getting better, it’s getting exponentially worse.
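
The defect arithmetic alone shows where those thousands of vulnerabilities come from. A rough calculation using the figures above (a sketch, not a measurement):

    # Rough defect arithmetic using the figures cited above:
    # ~45 million lines of code at 10 to 20 errors per 1,000 lines.
    lines_of_code = 45_000_000
    for defects_per_kloc in (10, 20):
        total = lines_of_code // 1000 * defects_per_kloc
        print(f"{defects_per_kloc} errors/KLOC -> roughly {total:,} latent errors")
    # Roughly 450,000 to 900,000 latent errors. Even if only a fraction of a
    # percent of those are security-relevant, that is thousands of potential holes.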

Patch and Pray

Stitching patches into these complex systems, which sit within labyrinthine networks of similarly complex systems, makes it impossible to know if a patch will solve the problem it’s meant to without creating unintended consequences. One patch, for example, worked fine for everyone―except those unlucky users who happened to have a certain Compaq system connected to a certain RAID array without certain updated drivers. In which case the patch knocked out the storage array.

Tim Rice, network systems analyst at Duke University, was one of the unlucky ones. "If you just jump in and apply patches, you get nailed," he says. "You can set up six different systems the same way, apply the same patch to each, and get one system behaving differently."

Raleigh Burns, former security administrator at St. Elizabeth’s Medical Center, agrees. "Executives think this stuff has a Mickey Mouse GUI, but even chintzy patches are complicated."

The conventional wisdom is that when you implement a patch, you improve things. But Wynn isn’t convinced. "We’ve all applied patches that put us out of service. Plenty of patches actually create more problems―they just shift you from one vulnerability cycle to another," Wynn says. "It’s still consumer beware."

Yet for many who haven’t dealt directly with patches, there’s a sense that patches are simply click-and-fix. In reality, they’re often patch-and-pray. At the very least, they require testing. Some financial institutions, says Shawn Hernan, team leader for vulnerability handling in the CERT Coordination Center at the Software Engineering Institute (SEI), mandate six weeks of regression testing before a patch goes live. Third-party vendors often take months after a patch is released to certify that it won’t break their applications.

All of which makes the post-outbreak admonition to "Patch more vigilantly" farcical and, probably to some, offensive. It’s the complexity and fragility―not some inherent laziness or sloppy management―that explains why Slammer could wreak such havoc 185 days after Microsoft released a patch for it.

"We get hot fixes everyday, and we’re loath to put them in," says Frank Clark, former senior vice president and CIO of Covenant Health, whose six-hospital network was knocked out when Slammer hit, causing doctors to revert to paper-based care. "We believe it’s safer to wait until the vendor certifies the hot fixes in a service pack."

On the other hand, if Clark had deployed every patch he was supposed to, nothing would have been different. He would have been knocked out just the same.

Attention Hackers: Weakness Here

Slammer neatly demonstrates everything that’s wrong with manufacturing software patches. It begins with disclosure of the vulnerability, which happened in the case of Slammer in July 2002, when Microsoft issued patch MS02-039. The patch steeled a file called ssnetlib.dll against buffer overflows, the class of flaw in which overlong input overruns a fixed-size memory buffer and can hand control of the machine to attacker-supplied code.

"Disclosure basically gives hackers an attack map," says Gary McGraw, CTO of Cigital and the author of Building Secure Software. "Suddenly they know exactly where to go. If it’s true that people don’t patch―and they don’t―disclosure helps mostly the hackers."

Essentially, disclosure’s a starter’s gun. Once it goes off, it’s a footrace between hackers (who now know what file to exploit) and everyone else (who must all patch their systems successfully). And the good guys never win. Someone probably started working on a worm to attack ssnetlib.dll as soon as Microsoft released MS02-039.

In the case of Slammer, Microsoft built three more patches in 2002―MS02-043 in August, MS02-056 in early October and MS02-061 in mid-October―for related SQL Server vulnerabilities. MS02-056 updated ssnetlib.dll to a newer version; otherwise, all of the patches played together nicely.

Then, on October 30, Microsoft released Q317748, a nonsecurity hot fix for SQL Server.

Danger: Patch Under Construction

Q317748 repaired a performance-degrading memory leak. But the team that built it had used an old, vulnerable version of ssnetlib.dll. When Q317748 was installed, it could overwrite the secure version of the file and thus make that server as vulnerable to a worm like Slammer as one that had never been patched.
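
One defense against that kind of silent downgrade is simply to fingerprint critical binaries before and after every hot fix and flag anything that changes. The Python sketch below is hypothetical: the watched path is illustrative (actual SQL Server install locations vary), and a real deployment would compare file version resources against the latest security bulletin rather than just hashes.

    import hashlib
    import json
    from pathlib import Path

    # Hypothetical watch list; ssnetlib.dll is the file at issue in the text,
    # but the path shown here is illustrative, not a guaranteed location.
    WATCHED_FILES = [r"C:\Program Files\Microsoft SQL Server\MSSQL\Binn\ssnetlib.dll"]
    SNAPSHOT = Path("file_snapshot.json")

    def fingerprint(path):
        """Return the SHA-256 hash of a file, or None if it does not exist."""
        p = Path(path)
        return hashlib.sha256(p.read_bytes()).hexdigest() if p.exists() else None

    def take_snapshot():
        """Record hashes of the watched files (run before applying a hot fix)."""
        SNAPSHOT.write_text(json.dumps({f: fingerprint(f) for f in WATCHED_FILES}, indent=2))

    def compare_snapshot():
        """Report any watched file that changed (run after applying a hot fix)."""
        before = json.loads(SNAPSHOT.read_text())
        for f, old in before.items():
            new = fingerprint(f)
            if new != old:
                print(f"CHANGED: {f}\n  before: {old}\n  after:  {new}")

A changed hash does not say whether the new file is older or newer, only that the hot fix touched something a security patch had already touched; that is the signal to verify against the security bulletin before putting the box back into service.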

"As bad as software can be, at least when a company develops a product, it looks at it holistically," says SEI’s Hernan. "It’s given the attention of senior developers and architects, and if quality metrics exist, that’s when they’re used."

Which is not the case with patches.

Patch writing is usually assigned to entry-level maintenance programmers, says Hernan. They fix problems where they’re found. They have no authority to look for recurrences or to audit code. And the patch coders face severe time constraints―remember there’s a footrace on. They don’t have time to communicate with other groups writing other patches that might conflict with theirs. (Not that they’re set up to communicate. Russ Cooper, who manages NTBugtraq, the Windows vulnerability mailing list, says companies often divide maintenance by product group and let them develop their own tools and strategies for patching.) There’s little, if any, testing of patches by the vendors that create them.

Ironically, maintenance programmers write patches using the same software development methodologies employed to create the insecure, buggy code that they are supposed to be fixing. It’s no surprise, then, that these Dr. FrankenPatches are often poorly written and can break as much as they fix. For example, an esoteric flaw found last summer in an encryption program―one so arcane it might never have been exploited―was patched. The patch itself had a gaping buffer overflow written into it, and that was quickly exploited, says Hernan. In another case last April, Microsoft released patch MS03-013 to fix a serious vulnerability in Windows XP. On some systems, it also degraded performance by roughly 90 percent. The performance degradation required another patch, which wasn’t released for a month.

Slammer feasted on such methodological deficiencies. It infected both servers made vulnerable by conflicting patches and servers that were never patched at all because the SQL patching scheme was kludgy. These particular patches required scripting, file moves, and registry and permission changes to install. (After the Slammer outbreak, even Microsoft engineers struggled with the patches.) Many avoided the patch because they feared breaking SQL Server, one of their critical platforms. It was as if their car had been recalled and the automaker mailed them a transmission with installation instructions.

Background Vulnerabilities Come to the Fore

The initial reaction to Slammer was confusion on a Keystone Kops scale. "It was difficult to know just what patch applied to what and where," says NTBugtraq’s Cooper, who’s also the "surgeon general" at vendor TruSecure.

Slammer hit at a particularly dynamic moment: Microsoft had released Service Pack 3 for SQL Server days earlier. It wasn’t immediately clear if SP3 would need to be patched (it wouldn’t), and Microsoft early on told customers to upgrade their SQL Server to SP3 to escape the mess.

Meanwhile, those trying to use MS02-061 were struggling mightily with its kludginess, and those who had patched―but got infected and watched their bandwidth sucked down to nothing―were baffled. At the same time, a derivative SQL application called MSDE (Microsoft Desktop Engine) was causing significant consternation. MSDE runs inside client applications and connects them back to SQL Server. Experts assumed MSDE would be vulnerable to Slammer, since all of the earlier patches had applied to both SQL Server and MSDE.

That turned out to be true, and Cooper remembers a sense of dread as he realized MSDE could be found in about 130 third-party applications. It runs in the background; many corporate administrators wouldn’t even know it’s there. Cooper estimated it could be found in half of all corporate desktop clients. In fact, at Beth Israel Deaconess Hospital in Boston, MSDE had caused an infestation although the network SQL Servers had been patched.
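
Finding those hidden MSDE instances is its own small project. The resolution service that Slammer abused listens on UDP port 1434 and answers a one-byte enumeration request with a description of the instances on the box, which is enough for a crude inventory sweep. A minimal sketch, with example addresses and simplified response parsing (real responses vary by version, and a firewall or a stopped service will make a vulnerable host look silent):

    import socket

    def probe_sql_instances(host, timeout=2.0):
        """Send an SSRP enumeration request to UDP 1434 and return the reply text, if any."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(timeout)
        try:
            sock.sendto(b"\x03", (host, 1434))   # one-byte client enumeration request
            data, _ = sock.recvfrom(65535)
            return data[3:].decode("ascii", errors="replace")  # skip the 3-byte response header
        except socket.timeout:
            return None
        finally:
            sock.close()

    for host in ["192.0.2.10", "192.0.2.11"]:    # illustrative addresses only
        info = probe_sql_instances(host)
        print(f"{host}: {'SQL/MSDE listening on 1434' if info else 'no answer'}")
        if info:
            print("   ", info)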

When customers arrived at work on Monday and booted up their clients, which in turn loaded MSDE, Cooper worried that Slammer would start a reinfestation, or maybe it would spawn a variant. No one knew what would happen. And while patching thousands of SQL Servers is one thing, finding and patching millions of clients with MSDE running is another entirely. Still, Microsoft insisted, if you installed SQL Server SP3, your MSDE applications would be protected.

It seemed like reasonable advice.

