Monday, March 30, 2009

Antivirus: Can't Live With It, Can't Live Without It

Like you and many other responsible Internet users, I run antivirus software on my desktop. Even leaving the performance issues out of it, I find it very frustrating to configure and use. I get that it's complicated behind the scenes, but how many knobs do you really need to say that you'd like your computer protected from lurking evil? As a security geek, I'd like a few more knobs than most people, and a little more explanation, but somehow what I want isn't available and what I don't want is slathered all over my screen in popups and screen after screen of nigh-useless configuration options. Here's my take on what antivirus software does:

  • Detect viruses (and spyware or whatever else)
  • Respond to viruses
  • Update detection functionality

That's it. Anything else is just variations on a theme, or possibly a smoke screen of some sort.

Detect viruses

Though you wouldn't know it to look at any AV I've tried, this should be simple to configure. There are only 3 questions:
  • When to look? E.g. continuously [starting in 10 minutes], once right now, regularly in the future
  • Where to look? E.g. in memory, on disk, in traffic coming from outside (email, removable media, downloads)
  • What to look for? E.g. viruses, spyware, tracking cookies, root kits
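As a sketch of how little configuration those three questions actually require, here is a hypothetical settings structure in Python. Every name here is my own invention for illustration, not any vendor's actual API:

```python
from dataclasses import dataclass, field
from enum import Enum

class When(Enum):
    CONTINUOUS = "continuous"   # real-time / on-access scanning
    ONCE_NOW = "once_now"       # one-off on-demand scan
    SCHEDULED = "scheduled"     # recurring scan in the future

@dataclass
class ScanConfig:
    when: When
    # Where to look: memory, disk, and inbound traffic sources.
    where: list = field(default_factory=lambda: [
        "memory", "disk", "email", "removable_media", "downloads"])
    # What to look for.
    what: list = field(default_factory=lambda: [
        "viruses", "spyware", "tracking_cookies", "rootkits"])
    # One whitelist, honored no matter when the scan runs.
    whitelist: list = field(default_factory=list)

config = ScanConfig(when=When.CONTINUOUS,
                    whitelist=["/home/me/malware-research"])
```

The point of the sketch is that a single whitelist should apply to every scan mode, instead of being tied to when the scan happens.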

For some reason AV vendors seem to scatter these questions over multiple screens with no rhyme or reason (at least from the user's point of view). In the AV I'm running on one of my machines, where to look is configured differently depending on when to look, so if you scan continuously, you can white-list a directory, but if you scan regularly, no white-list applies. Of course it's not all organized by when to look; there's one section organized purely by where to look, located in parallel to the when-to-look sections.

Respond to viruses

Sooner or later, your software will find a virus. If you want your users to think, "wow! good thing I'm protected" rather than "yecch! I must run straight to my blog and post about how awful AV software is", help your users respond to the situation constructively. Here are their likely questions:

What's going on?

Hopefully, your users are reasonably cautious and therefore don't see your "I found a virus" dialog regularly. Certainly last week was the first time I've seen this dialog in the course of my regular-user computing (I've seen it plenty when it involves my valiant attempts to protect my malware cache). Most such dialogs try to answer this question with a link to more information, which seems like a fine idea to me.

Ladies and gentlemen, such a link had better work. At the moment your user sees this link, he has been startled out of his task to respond to what he is likely to see as an emergency. He is likely to have a negative emotional response to being startled, the delay in his work, the nature of the emergency (a virus? ewww), et cetera. Even if your target user won't understand anything technical you might have to say about the virus, this is the moment you need to convince your wary and probably already irritated user that there is something bad going on, but you, the AV vendor, know all about it. In my case, it took me to a generic search page on my vendor's web site (don't do this), which claimed it didn't know of such a virus when I used the exact name shown on the dialog (good grief, people). When I tried again with a substring of the name given in the dialog, it found an entry with the exact same title it claimed it couldn't find only seconds previously (don't do this). This essentially blank entry contained no information whatsoever about when the virus was discovered, its typical effects, how it spreads, related viruses, how to get rid of an infection, or anything else I (or a normal user) might want to know. In short, information in this entry was thin enough that even non-security folks would be wondering whether this was real. Which brings us to the next question.

Are you sure?

It's possible that if your users are not security geeks, and you handled the previous question well enough, you wouldn't need to face this question. Perhaps some kind of certainty meter would be enough in some cases (but keep it hooked up to something real, please).

My first instinct was to hand a copy of this supposed virus off to our in-house equivalent of Virus Total for a second (and following) opinion. Unfortunately, the AV was now making the files in question very difficult to access (it's supposed to, after all), and after an hour or so, I decided it didn't matter if it were a false positive, because there was no way I was going to be able to use or copy the files anyway, unless I switched AV (which was tempting at that point). As an AV vendor, if you are confident in your diagnosis, I suggest you provide a button to consult Virus Total or similar. I'd certainly trust you more, and your less-informed users probably would too.

What can I do about it?

Users are hoping for options like "remove", "repair damage" or possibly "quarantine". If you present options that will not work (e.g. for consistency, so the same options always show up), grey them out and provide some kind of indicator why they won't work, hopefully including a way to make it work. In my case, both remove and the equivalent of "remove as superuser" landed me in an endless loop of dialog boxes involving UAC and the AV software (but not, interestingly, ever actually attempting to become Administrator as one would expect). The equivalent of "quarantine" also failed. In the end, I had to empty the quarantine area of an old copy of my previously quarantined-against-my-will malware stash (from before I moved it to a machine with a decent OS), quarantine, and then empty quarantine again. This solution required a fair amount of hunting around and more information about how AV usually works than was readily apparent from the dialogs. I don't think most AV users would be able to do it; consider an acquaintance whose AV (a different brand) kept alerting about viruses but wouldn't let her get rid of them, and who ended up having to send her machine in for service last month.

Another option I'd like to see is "submit for re-analysis" or similar. Users could do this if they think it is a false positive. With an automated service to see the results from files that have already been re-analyzed, this could add an option like "ignore" to the available options (better put the file on a white list so you don't bother the user again, though).

Update detection functionality

Trust me, the user does not want to hear about this in real-time. By all means, log plenty and request assistance from the user if something you can't fix automatically goes wrong with your attempt. I used to use some AV that assured me it was updating successfully even though my network was quiescent and the only file changed on disk was the log file -- quiet failure of protective equipment is never desirable. But as soon as it's going right again, shut up and go away! Do not make your innocent user click through dialog after dialog if she happens to disconnect from the net for a week.

When to look, and maybe where to look if there are choices, is the only thing you need to configure. Snag information about proxy settings and the like from the OS; don't make us configure them again.

Yes, there are some other aspects of operation some of us might want to control. But chances are good that from the user's perspective, most of them are about one of the above functions, so do your users a favor and make them accessible from the same place as the related settings. Your users' lives would be better if they could configure your software to do what they want, and organizing settings in a way that makes sense to the user is a good starting point.


Thursday, March 26, 2009

Changing a tire while going 60mph

In my previous post Big, white, puffy clouds can still evaporate, I mentioned the Google Gmail outage from February and pointed out the implied resiliency (or lack thereof) of cloud-based platforms. Rich Miller just posted an interview with Urs Holzle from Google entitled "How Google Routes Around Outages." The title of this post is a quote from the referenced article, and it seems rather fitting.

On the one hand, I was a bit surprised to read Holzle admit a preference for human involvement in remediating certain types of outages. It seems Google has automated technological controls to handle single-system and small-scale failures (a whole equipment rack, etc.) within a single data center, but larger-scale outages (such as a whole data center) are mitigated manually. On the other hand, I can understand the desire to have explicit human oversight; large-scale failures tend to have unpredictable cascading effects that are hard to account for with automated systems. In fact, Holzle mentions that the very systems designed to handle larger-scale outages tend to be the same systems responsible for the outages; the February Gmail outage is an explicit example.

Having said that, I still believe clouds need to be as self-sustaining as possible (preferably in automated fashion) despite the current limits of technology and engineering. That implied resiliency and redundancy is one of the core value propositions of clouds. Without it, clouds get devalued into not much more than a traditional managed hosting platform/service.

My suggestion to organizations considering a purchase from a cloud-based vendor is to have a due-diligence discovery session regarding the cloud architecture that vendor has engineered. You want to see something akin to a fairly detailed Visio diagram depicting multiple datacenter/regional locations, with appropriate systems in each location and the roles they provide to the overall cloud platform and offered services. You want to see redundancy in systems and roles, with distribution and backups across disparate regions. Look for points of failure, and ask them how the entire cloud handles an outage and thwarts a service interruption during the failure for any given node you happen to point to. Especially point to an entire datacenter/region and ask what happens if that entire area of the cloud system goes down due to a large-scale power outage, natural disaster, etc.

Also be wary of how cloud entrance points could become affected during an outage. It is great if the overall service will naturally shift processing to a secondary region if the entire first region experiences an outage, but it is not so great if you have to explicitly go back and reconfigure your organization to point to the secondary region. Even though the processing survived, your point of entrance changed and thus you still had an outage. You definitely want to ensure all points of presence/entrance are fault tolerant.

Overall, going with a cloud vendor still requires you to think about redundancy, fault tolerance, business continuity, etc. And it might require a little extra effort up front in order to ferret out the appropriate details from your cloud vendor during the initial evaluation. Fortunately, once you are satisfied with the resiliency offered by the vendor's cloud architecture, you can then focus your attention elsewhere--the vendor is the one that has to deal with the implementation and ongoing maintenance of the necessary architecture redundancy. And that is one headache that is definitely nice to outsource.

Until next time,
- Jeff

Wednesday, March 25, 2009

Hugs Not Bugs

At CanSecWest last week, a group of researchers (Charlie Miller, Alex Sotirov and Dino Dai Zovi) stirred up a fair bit of controversy by pushing a "No More Free Bugs" campaign, the premise being that they are unwilling to put in the time and effort to research software vulnerabilities and then simply hand them over to software vendors free of charge. This concerned those in the 'Hugs Not Bugs' camp, who feel that researchers have a moral obligation to turn over vulnerability details to software vendors free of charge, as such an approach is in the best interest of all involved.

Full Disclosure

Before diving into my personal views on the subject of paid vulnerability research, let me first provide full disclosure (blogger style). I know and respect the researchers that stood up at CanSecWest. Charlie Miller in particular served as a technical reviewer for the Fuzzing book that I co-wrote along with Pedram Amini and Adam Greene. Charlie is a great researcher and a stand-up guy.

My stance on paying for vulnerability research certainly isn't much of a secret. I ran the iDefense/VeriSign Vulnerability Contributor Program (VCP) for several years. We secured the intellectual property rights to hundreds of vulnerabilities during my tenure and worked with affected vendors to ensure that patches were ultimately produced for all issues. Our motivation was certainly financial, driven by acquiring early access to vulnerability information and providing workarounds to clients until official patches became available. TippingPoint/3Com went on to launch a similar program with their Zero Day Initiative (ZDI), which was actually launched by former iDefense employee David Endler, who was instrumental in launching the VCP as well.

During my time at iDefense I came to realize the value of vulnerabilities. I also came to understand just how much software vendors benefited from such information. Bugs are not good for vendors as they result in negative publicity and reduce consumer confidence in a given product or vendor. Therefore, vendors have a very real incentive to ensure that they receive vulnerability information as quickly as possible and that such information does not become public knowledge. For a long time, vendors have promoted the hugs not bugs philosophy. Why wouldn't they? When researchers freely hand over vulnerability details, vendors receive monetary value at no cost and therein lies the failure. In a free market economy, value is not given away for very long before a market evolves. This led to the creation of programs such as the VCP and ZDI. Vendors have also heavily benefited from these programs as they still receive vulnerability details free of charge. The fee to the researcher is paid by a middle man.

Current State

It still amazes me that the VCP and ZDI remain the only open, mainstream programs paying for vulnerabilities. Believe me, there is plenty of room at the party for others. Bugs are not going away. That does not, however, mean that there aren't other entities willing to pay for vulnerabilities. Unfortunately, those other entities have no interest in sharing vulnerability information with software vendors. Government entities and criminal organizations are very willing to pay for vulnerabilities, but they have very different motivations and seek to use the information for offensive purposes, meaning that they have little interest in seeing patches emerge.

Despite understanding the value of vulnerabilities, vendors have been unwilling (at least openly) to pay for such information. The argument given is typically that they fear they will be held hostage by researchers demanding increasingly outrageous sums of money and threatening to publicly disclose the bug if they aren't paid. This argument just doesn't fly. Bug hunters already have numerous avenues to profit from their research should they choose to. If they seek the highest bidder, they will find one, and vendors will never be in the bidding process to begin with.

My Solution

My stance is simple. It's time for vendors to start paying for the value that they receive. Mozilla has for years had a program known as the Mozilla Security Bug Bounty Program. The program is quite basic: contributors receive $500 (and a t-shirt!) for critical security bugs which meet specific criteria. Now $500 pales in comparison to the $50K+ that may be available for the juiciest bugs, but in my opinion the total value isn't the point. Those motivated solely by cash will go elsewhere, but I believe that the majority of researchers have little interest in negotiating back-room deals to get full value. Rather, they are motivated by doing good, but want to be compensated to at least some degree for their hard work.

What's the appropriate price point for vendors? I'll leave that to the free market to decide. After all, that's what the invisible hand is for. Let's, however, consider the $5K value that was used by the Pwn2Own contest. By my count, during 2008, Microsoft issued 46 critical advisories. That amounts to $230K in payouts for such information. Now let's double that number to convert from advisories to actual vulnerabilities. We're still at less than $500K, which amounts to less than the salaries of three to four experienced bug hunters on staff, and even if Microsoft hired a hundred bug hunters they'd fail to identify all of the bugs found by outside researchers. It's also roughly 2x the value that Microsoft is willing to pay as a bounty to catch those behind the Conficker worm, a worm that wouldn't have seen the light of day if critical Microsoft vulnerabilities had not existed in the first place.
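The back-of-the-envelope math above, spelled out (the figures are the post's own estimates, not official numbers):

```python
critical_advisories_2008 = 46   # the post's count of 2008 critical MS advisories
payout_per_bug = 5_000          # the Pwn2Own-style $5K figure

advisory_total = critical_advisories_2008 * payout_per_bug
# Roughly double to convert advisories into individual vulnerabilities,
# since one advisory often covers several bugs.
vulnerability_total = advisory_total * 2

print(advisory_total)       # 230000
print(vulnerability_total)  # 460000 -- still under $500K
```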

Let's be honest, vendors have been paying for vulnerability knowledge for some time through employee salaries, lavish parties, recruiting efforts, etc., so why has it always been considered taboo to pay for the information directly? It's time for a leading vendor (I'm talking to you, Microsoft) to step up and establish a program to pay third-party researchers for the work that they do. Establish fixed, non-negotiable prices for vulnerabilities that fit into your already well-established severity rating system. Don't worry about paying more than everyone else, but pay enough to compensate researchers for their time. I suspect that you'll be pleasantly surprised by the results.

Hugs Not Bugs was dead a long time ago...Charlie, Dino and Alex just made it official.

- michael

Monday, March 23, 2009

New browser security model needed for SaaS?

As the age of Software as a Service (SaaS) becomes realized, we are finding that this new way of doing business can sometimes be at odds with previous methodologies and processes. Note that this is not particularly insightful as stated since it is derivative of the fundamental definition of disruptive innovation. But the trick with SaaS is that, on the surface, we have the notion of enacting the same standard processes we enact on any other given day (i.e. running software); the only perceived difference is that the software is running external to the enterprise instead of internal to it. Most people do not see that as a significantly different context—running software is running software, whether inside or outside the organizational boundaries.

What people often fail to immediately see is that there is more at play here. Namely, history has given us operational models that are sensitive to the location of where the software is running. The web browser, exemplified by Internet Explorer, is a great example. IE's security model segregates different destinations into appropriate security zones. Each security zone is configured to a different risk profile. The traditional zones include explicitly trusted destinations, intranet destinations, and the rest of the Internet. Under this model, the Internet security zone is risk-averse while the intranet and trusted security zones are more lax.

SaaS challenges this established model of security zones since it involves taking software functionality typically operating internal to the organization (in the intranet security zone) and moving it external to the organization (in the Internet security zone). The same software functionality is now acting under an entirely different risk profile since it is located in a different zone, and that can impact the functionality and/or the user experience.

Let us look at a particular situation applicable to the Zscaler offering. Organizations traditionally host their HTTP proxies within their organizational boundaries. Those proxies operate in the intranet security zone; within this zone, the browser (IE) is willing to automatically authenticate to the proxies using native Windows authentication. The result is that the authentication process is transparent to the user, and everything just works. Now consider what happens when those proxies are moved outside the organization as part of a SaaS: the proxies fall into the Internet zone—and the browser does not natively perform transparent authentication in that zone. In fact, it requires significant browser re-configuration (and arguably some non-ideal loosening of security restrictions) before the browser will once again exhibit this behavior outside the intranet zone. Administrators have grown complacent with automatic HTTP proxy authentication within the organization; we would even venture to call the feature ubiquitous. So imagine the administrators' surprise when this seemingly fundamental HTTP proxy behavior is upset when shifting to a SaaS-provided HTTP proxy. Absolutely nothing about the technology has changed—the only difference is the browser-enforced security model relating to where the resource is located.

So what needs to happen? Vendors need to review and refresh their security models to ensure they are compatible with the SaaS paradigm of transferring once-internal resources out onto the Internet and into the cloud. Fortunately most SaaS services acting as a normal HTTP web site destination can be accommodated by explicit inclusion on the trusted destinations list; SaaS-provided HTTP proxy services are at a much larger loss because that mechanism does not directly apply to proxies. In fact, the proxy configurability and support offered by most web browsers often suffers from legacy networking assumptions and feature atrophy. Internet Explorer in particular introduced the (now) standard crop of proxy capabilities in IE 5 (WPAD and PAC support, circa 1999); we are now one decade later and there have been no additions to IE's proxy configurability or feature support since then. Other browsers have normalized to the same feature set as well, offering no new innovation in that area. I believe that the last ten years have brought about many process changes that warrant revisiting the role that HTTP proxies play in the enterprise, how the web browser should relate to those roles, and how SaaS can affect legacy assumptions of those roles.

So web browser vendors, please hear my plea: after ten years, it is time to blow the dust off of your security models and proxy configurability to ensure they are in-line with the latest SaaS and technology offerings and expectations.

Until next time,
- Jeff

Wednesday, March 18, 2009

Mobile Browser (In)security

For the past couple of days, the Interwebs have been buzzing about pending new features in iPhone OS 3.0. One item that barely received any mention whatsoever, but was pleasantly jaw-dropping for me, can be seen in the image below, in the lower right hand corner: the addition of anti-phishing capabilities for Mobile Safari.

Why should this be impressive? Firefox 2 and Internet Explorer 7 both added phishing filters back in 2006, with betas available as early as 2005. It's impressive because it's the first significant security feature of any kind in a mobile browser. Today, desktop browsers have a number of important security features, yet surprisingly, while mobile browsers have fancy features like touch screens and auto-zoom, security remains elusive. Let's compare the security features in major desktop/mobile web browsers:


As mentioned, Firefox and Internet Explorer first added support for phishing blacklists over two years ago, and since then it has become standard functionality in desktop web browsers. Firefox and Safari leverage the Google SafeBrowsing initiative, while Microsoft follows a proprietary path. Regardless, phishing protection, despite being standard issue on the desktop, is a no-go on mobile devices, at least until iPhone OS 3.0 is released this summer.

Malicious URLs

Like phishing, malicious URL protection takes advantage of blacklists to prevent users from visiting a site that is known to host malicious content. Malicious URL protection was added after phishing protection but has now also become a standard feature. Once again, Firefox and Safari leverage Google SafeBrowsing, while other vendors go it alone or rely on partnerships.

Extended Validation SSL Certificates

I question the true value of EV SSL Certificates, and their adoption has been slow at best. Regardless, if they are to have any hope of better protecting end users, they must be supported by web browsers. It is therefore encouraging to see that they are supported by all major desktop browsers (but no mobile browsers).

Cross-Site Scripting (XSS)

With the release of IE 8, Microsoft will become the first major browser vendor to provide built-in protection against XSS attacks. Early reviews of the XSS inspection engine included in IE 8 look promising. This, in my opinion, is the single most important step in finally reducing the risk posed by what has long been the single most prevalent web application vulnerability.


Clickjacking

Microsoft went for the full sweep by also being the first vendor to introduce protections against clickjacking. However, their proposed protections also require special server-side code. While they should be commended for their efforts, this is one control that is destined for failure.

Mobile Browsers

So why have mobile browsers not yet included security features? Let's look at the possibilities.

1.) Mobile browsers do not have the storage capacity or processing power to accommodate security functionality.

Comment: My iPhone has 16GB of storage and better graphics than last-gen gaming consoles.

Verdict: Busted!

2.) Mobile browsers are not commonly subjected to attacks due to limited capabilities/use, and security controls are therefore not necessary.

Comment: Mobile browsers have nearly equivalent functionality to their desktop counterparts. They are fully capable of handling JavaScript, AJAX and, if you're not an Apple fanboy, even Flash. Mobile browsers are also starting to constitute a meaningful percentage of overall web traffic. I personally prefer using my mobile browser for certain tasks, such as reviewing blog headlines, checking sports scores and scanning tweets; its simplicity allows me to quickly review content.

Verdict: Busted!

3.) We will never learn from our mistakes.

Comment: We have said for years (decades) that security must be baked in, as opposed to being brushed on. Yet, when it comes to quickly getting a product to market in order to win market share, security is consistently thrown out the window.

Verdict: Confirmed!

- michael

Sunday, March 15, 2009

The World According to TinyURL

According to Wikipedia, TinyURLs have been banished from the likes of MySpace, Yahoo! Answers, Orkut and Wikipedia itself due to concerns that the service is a favored tool of spammers. It's not entirely surprising, given that there is no shortage of stories relating to spammers' use of the URL shortening service. TinyURL is clearly concerned by this bad rap. They have added a preview feature, allowing users to get a sneak peek at the site before being redirected, and they specifically call out spammers in their terms of service:

Using [TinyURL] for spamming or illegal purposes is forbidden and any such use will result in the TinyURL being disabled and you may be reported to all ISPs involved and to the proper governmental agencies.

For those not familiar with TinyURL, it's a simple and popular service which allows a user to submit any lengthy URL and in return receive a URL of the format http://tinyurl.com/xxxxxx, where each 'x' represents a base 36 (a-z + 0-9) character. While it may at first glance appear that they'd quickly run out of addresses, such an approach actually yields 2+ billion unique URLs. When a user browses to a TinyURL, the service returns a 301 'Moved Permanently' status code along with a Location header containing the true URL, which the browser then requests. When TinyURL was first launched in 2002, the primary motivation was to enable linking to newsgroup postings, which tend to have lengthy, ugly URLs. Today, however, it's more popular than ever given the increasing use of mobile applications and Twitter, which limits posts to 140 characters.
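The keyspace math is easy to check. Here is a minimal base-36 encoder in Python, mirroring what Perl's Math::Base36 provides (that module may differ in details such as letter case):

```python
import string

# Base 36 alphabet: digits 0-9 followed by letters a-z.
ALPHABET = string.digits + string.ascii_lowercase

def encode_base36(n):
    """Encode a non-negative integer as a lowercase base-36 string."""
    if n == 0:
        return "0"
    out = []
    while n:
        n, r = divmod(n, 36)
        out.append(ALPHABET[r])
    return "".join(reversed(out))

# Six base-36 characters give 36**6 distinct keys:
print(36 ** 6)                # 2176782336 -- the "2+ billion" figure
print(encode_base36(100000))  # 255s
```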

I personally have always wondered if TinyURL was getting a bad rap. While I agree that it provides a basic form of URL obfuscation by hiding the true destination from an end user, there is certainly no shortage of such techniques, especially when dealing with naive victims who would be fooled by a TinyURL. Moreover, the browser is simply redirected: it does ultimately make a separate request for the true destination URL. As a result, browser controls such as blacklists are still effective, as they would catch the 'evil' site on the second request, following the redirection. I've also heard rumor that TinyURL now actively filters for bad content; if they aren't, they certainly should be.
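To see the redirect mechanics for yourself, you can issue a single request for a short URL without following the redirect and read the Location header straight off the 301. A minimal Python sketch (the function name is my own):

```python
import http.client
from urllib.parse import urlparse

def resolve_short_url(short_url):
    """Issue one GET and return (status, Location header) without
    following the redirect, so the 301 itself is what we inspect."""
    parts = urlparse(short_url)
    conn = http.client.HTTPConnection(parts.netloc, timeout=10)
    try:
        conn.request("GET", parts.path or "/")
        resp = conn.getresponse()
        return resp.status, resp.getheader("Location")
    finally:
        conn.close()
```

A blacklist-aware browser gets two chances here: once at the short URL and once at the destination named in Location.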

In an effort to satisfy my curiosity, I set out to see the world according to TinyURL. Fortunately, it's not hard at all to determine which pages TinyURL is redirecting users to. TinyURL is open by design, so authentication is not required to use the service. I could therefore automate the process of looping through a subset of TinyURLs, requesting each URL and recording the Location header returned in the 301 response. I came up with the following Perl code:


use LWP 5.64;
use LWP::UserAgent;
use Math::Base36 ':all';

my $browser = LWP::UserAgent->new;
# Don't follow redirects; we want the 301 response itself.
$browser->max_redirect(0);

for (my $count = 1; $count <= 100000; $count++) {
    my $tiny = encode_base36($count);
    my $url = "http://tinyurl.com/$tiny";
    my $response = $browser->get($url);

    print $url, " --> ";
    print $response->header("Location"), "\n";
}

The code is pretty basic: it simply loops 100K times, converts the count to a base 36 number, requests the associated TinyURL and then records the Location header in the response. For some reason, TinyURL does not implement any verification to ensure that the submitted URL adheres to standards or even exists. Therefore, the resulting list of URLs, not surprisingly, included close to 10K entries that did not represent legitimate URLs. The legitimately formed URLs that were found could be divided into the following protocols:
  • HTTP - 90,541
  • HTTPS - 1,036
  • FTP - 339
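A protocol tally like the one above can be reproduced by parsing each resolved destination and counting schemes; a quick Python sketch:

```python
from collections import Counter
from urllib.parse import urlparse

def tally_schemes(urls):
    """Count destination URLs by protocol, discarding anything that
    does not parse as a well-formed absolute URL."""
    counts = Counter()
    for u in urls:
        parts = urlparse(u.strip())
        if parts.scheme in ("http", "https", "ftp") and parts.netloc:
            counts[parts.scheme.upper()] += 1
    return counts

sample = [
    "http://example.com/a",
    "http://example.org/b",
    "https://secure.example.com/c",
    "ftp://ftp.example.com/d",
    "not a url at all",
]
print(tally_schemes(sample))
```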
Now, it wasn't hard to spot URLs that suggested a malicious purpose, such as those which attempted to further obfuscate the true domain, redirect to LAN-based resources or even resources on a local machine. While such TinyURLs may have been registered for use in a planned attack, they also suggest a lack of understanding as to how browsers and the web itself actually function. Most posed no threat whatsoever.


I leveraged the IP::Country::Fast Perl module and ran through the cleaned-up URLs to determine where the destination pages were hosted. The United States was by far the most common hosting location, accounting for 98.93% of the TinyURLs investigated. After the US, the following countries made up the rest of the top 20:

What does this tell us? Very little...but it is a pretty graph.

URL Categories

URL categories are a little more interesting. I ran the clean TinyURLs collected through our classification engine to determine the overall 'type' of content that was showing up. If the service was being abused, I would expect to see 'malicious' categories dominating the list.

Definitely nothing too scandalous here. While there was some volume for 'questionable' categories such as Nudity (1,047), Pornography (964) and Anonymizers (151), clearly malicious categories such as Malware (126) and Phishing (14) had minimal volume given the overall population.


I would assume that if TinyURL were heavily leveraged in attacks, a sizable portion of content would point to executable files that attackers are trying to social engineer victims into downloading. 465 TinyURLs did directly reference *.exe files. However, understandably, not all URLs were available for download at the time of the test. Of the 197 executables that I was able to retrieve, all were run through AV, and a grand total of zero were reported to be infected.


This is an area where I fully expected to get some solid results. To identify potential phishing URLs, I leveraged PhishTank, a great open collection of human-verified phishing data. Specifically, I took advantage of their check-URL functionality, which permits you to submit an individual URL and receive XML data detailing whether or not the page represents a verified phishing site. All of the cleaned-up TinyURL data was submitted (~90K URLs), and none represented confirmed phishing URLs.
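For reference, a minimal Python sketch of what such a check might look like. The endpoint, parameter names, and XML field are based on PhishTank's public check-URL API as I understand it; treat them as assumptions and consult the current documentation before relying on them:

```python
import urllib.parse

# PhishTank check-URL endpoint (assumed; verify against current docs).
CHECKURL_ENDPOINT = "http://checkurl.phishtank.com/checkurl/"

def build_check_request(url):
    """Form-encode the POST body for a single URL lookup."""
    return urllib.parse.urlencode({"url": url, "format": "xml"}).encode()

def looks_like_phish(xml_response):
    """Crude test: the XML reply flags known phishing URLs via an
    in_database element (field name assumed from the API docs)."""
    return "<in_database>true</in_database>" in xml_response
```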

Malicious URLs

This time around I took advantage of the Google SafeBrowsing Diagnostic page and once again automated the process. I was only able to get through about half of the full list of URLs, as my script needed to incorporate a delay to ensure that it didn't get blocked; however, the results seem fairly conclusive. While malicious URLs were identified, of the ~50K URLs checked, only the following five were found to be blacklisted:
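The throttling itself is simple; a sketch of the delayed loop (the names and default delay are mine):

```python
import time

def check_with_delay(urls, check, delay=2.0):
    """Run `check` over each URL, sleeping between requests so the
    diagnostic service doesn't block the client for hammering it."""
    results = {}
    for u in urls:
        results[u] = check(u)
        time.sleep(delay)
    return results
```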

Poor TinyURL...it would appear that you're getting a bum rap after all. Yes, we did identify some malicious content, but certainly not enough to justify your lifetime ban from some of the cool web 2.0 parties.

- michael

Friday, March 13, 2009

Patch to Get Some New Holes

According to many news outlets (but interestingly, not Panda Labs' blog, which most of the reports cite), a couple of days ago, Panda Labs said MS09-008 doesn't fix one of the vulnerabilities it set out to fix. This complaint may or may not be the same as the one from nCircle, in which the patch installs differently and does not provide future protection if the vulnerability has already been exploited (calling this behavior an incomplete fix seems pretty reasonable to me). There's a whoops or two in here somewhere, be it an unsuccessful fix from Microsoft, Panda publicly disclosing a vulnerability without giving the vendor time to provide a patch, news outlets propagating a bogus story, Panda not posting anything to back up their claim, or my inability to find the actual data Panda posted. Conveniently, I don't need the details to ramble on about patches & security in general: whether this patch in particular had problems or not, security patches have holes, too.

Why is that?

To start with, many security issues are correctness issues. What applies to software in general also applies to security patches: writing bug-free code is hard to the point of impossible (depending on whether you're talking to a formal methods geek). To approximate it, one typically needs extreme attention to detail; very good knowledge of the surrounding code, intended architecture, and behavior of surrounding systems; a full coverage test suite; a good development process; and enough time to carefully design, code and test the software in question. As you can see from this simplified view, there will be quite a lot of work involved. There will be correctness issues in software, including security patches, and some of these correctness issues will cause security problems.

For security issues that go beyond correctness issues, you can take the above list of stuff you need to write high-quality code and increase the existing knowledge, test & time requirements substantially, and add knowledge about the specific security issues; security issues that go beyond correctness are usually more complicated to understand and reproduce than "simple" correctness issues. Complicated tends to mean more bugs, and some of the bugs will be security issues.

Now let's add in the fact that it's a patch.

Even if you only maintain one supported version (i.e. you work for a SaaS ;) ), you're not going to work on that supported version every day. You're mostly going to work on some future release. Chances are good that you have unconscious expectations about what features are available to you, and how the surrounding code acts, based on the code you work with regularly. Quite a lot of this may be new since the supported release, but since your assumptions about its presence are unconscious, you probably don't have a complete list of new features in mind and you may not think to ask yourself whether what you are doing will always work safely given that X is not present. The wider the gap between the supported version & what you're working on now, the more likely you are to slip up in this way.

If you have multiple supported versions, you get to repeat the above for every supported version. And it gets worse: again, unless you work for a SaaS, you have to cover the possibility that not all your previously published patches (security or not) have been installed. Your fix needs to work whatever the patch state, for all your supported versions, which means you have to think it through and test for each supported version & possible patch state.

For most developers, adding new features is way more fun than writing patches. You probably can't wait to get back to whatever you were working on when somebody reported this bug.

And for a security patch?

Security is a specialty & mindset of its own, and it can be hard for non-security developers to understand the issue they're fixing as deeply as they would understand a correctness issue.

There is extra pressure to get a security patch out fast. Distracted people in a hurry make more mistakes.

Typically, a security patch will reduce functionality. Backwards compatibility is a big issue: customers hardly ever want you to take something away. Ideally, you would remove the functionality the attacker can use without affecting the functionality legitimate users use, but this isn't always possible, and then you have some really tricky decisions to make.

Because of all these factors, you can expect security patches to have holes now & then. Sometimes they'll be the same holes, not yet fixed, and sometimes they'll be shiny and different. Personally, I feel the same way as most of the security administrators I know: I'll take the new hole over the old hole every time, and keep on patching as quick as I can.


(Yes, this counts as more fuel for the give-me-an-interim-patch-while-you-do-all-that-work fire.)

Thursday, March 12, 2009

Botnets for everybody!

BBC's Click technology program decided to demonstrate the SPAM power of botnets by directing 22,000 zombies in their own personal botnet. Sure, a lot of people are questioning the legality of this stunt, but that's not what really caught my attention. Most live hacking demonstrations involving real targets are legally questionable anyways, and despite the laws many people feel entitled (and sometimes even obligated) to do XSS and SQLi testing against arbitrary web applications on the Internet.

What caught my eye were a few interesting remarks made in the article. First, they called their 22,000 node botnet "low-value." What, pray tell, makes this botnet particularly low value? Is it what hackers would charge to rent/sell it? Is it the number of nodes (a mere 22,000)? I think this is a great illustration of the inflated grandeur the media has attached to botnet stories...apparently botnets under a quarter-million nodes are worth less consideration. Yet by the article's own admission, it took a scant 60 nodes to DDoS their target website off the Internet. Make no mistake, 22,000 nodes at an attacker's command can do a considerable amount of damage to just about any target. There are even supercomputers on the world's top 500 supercomputers list that leverage far fewer than 22,000 nodes. I would hardly trivialize a 22k node botnet with the label "low-value," as it desensitizes everyone to the overall threat that any sized botnet can represent.

Second, the article mentions they "acquired" their own botnet "after visiting some chatrooms" on the Internet. I wish they had provided a bit more detail here...did they troll chat rooms until they found a botnet for sale, and purchase it? Or did they intercept an IRC-based command and control channel, thus hijacking the botnet to do their bidding? Either way, their candor regarding the ease of acquiring a botnet seems strange. I would think the story of how anyone can "visit some chatrooms" and walk away with a botnet would be more sensational than filling some demo inboxes with spam.

As an aside, the "how a botnet works" graphic they include in the article was a bit weird as well; the truncated version you see in the article leaves a lot to be desired ("Hacker -> virus"?). You have to click on the image to get the full chart, and then things become clear.

Until next time,
- Jeff

Friday, March 6, 2009

Vulnerable by Design, Really

Part of my responsibilities at Zscaler is to look through our log files in order to spot strange and unusual requests (new malware, botnets, etc.), questionable surfing trends, and other sorts of data-mining security goodness. And unfortunately, I routinely come across requests such as these:

;&clickTag2=JAVASCRIPT:DL_Close();<script%20language='javascript'%20src='/js/bannerscriptmp3internal2.js'></script><script%20type="text/javascript"%20src=""%20/>&rndNum=99812610
Anyone familiar with web security will likely see immediately that these requests essentially carry cross-site scripting payloads. But these are not an XSS attack against a user; I’ve traced all of these (and many, many more), and they are, in fact, required to happen that way by a legitimate web site. That’s right folks: there are sites passing Javascript in URL parameter fields on purpose. Most of the URLs I've discovered that have XSS by design typically fall into one of two types: advertising syndication, or passing HTML into a SWF. All of the above URLs exhibit one of those two types. The last listed URL probably gets the 'Hall of Shame' award, since the ifr.php was designed to return arbitrary content that is meant to be used in an Iframe.
But XSS is just the tip of the iceberg; check out these requests:…
Are those full and partial SQL queries/clauses in the URL parameter fields? Why, yes they are! These sites actually pass SQL query strings in as request parameters. Now, perhaps these sites have absolutely perfect database security, the web scripts use a read-only DB account, and SQL access is restricted to a limited view of the table...meaning the web script isn't exploitable to do much beyond reading the already-public, read-only data from a single table. But my bet is that isn't the case.
There are lots of other pretty scary requests out there, but it's hard to tell whether they are really exploitable or not by just looking at the URL (and I'm not about to go and perform an unauthorized security assessment on these public web sites). Here are some of the suspicious ones, for your entertainment:

…<?php%20print%20(rand());?>
I'm sure I'll be posting more in the weeks to come. There doesn't appear to be a shortage of new examples...
Until next time,
- Jeff

Thursday, March 5, 2009

Interim Patches

On September 16, 1997, Apple named Steve Jobs interim CEO, a year after he had returned to the company he founded. Bud Selig served as interim commissioner for six years before Major League Baseball officially handed him the reins in 1998. When businesses face big challenges and a perfect solution isn't imminent, they implement an interim solution. It may be temporary, it may be imperfect, but something is better than nothing. Why, then, is the software industry so opposed to interim patches?

Two weeks ago, Adobe acknowledged a vulnerability in most versions of Adobe Reader that could lead to remote code execution. In that same advisory, they announced that a patch would be issued by March 11, leaving arguably the majority of computers worldwide exposed to attack in the interim. To make matters worse, security firms have reported seeing attacks leveraging this vector as early as the beginning of 2009. One would therefore presume that Adobe has known about this issue for some time.

Adobe went on to suggest that users disable JavaScript in order to protect themselves. That advice was short-lived: on March 3rd, Dave Aitel announced that Immunity had released a working exploit which did not require JavaScript at all. Not to worry though, as Adobe also assured customers that it was working with anti-virus vendors to ensure that signature-based detection would be available to protect against potential attacks. How's that working out? A quick check with VirusTotal shows that as of this evening, only 5 of 39 AV vendors have protection in place for the proof-of-concept exploit released on February 22nd. That's less than 13% of vendors, for an exploit whose source code has been available for nearly two weeks! It's tough to argue with Damballa's recent bashing of the AV industry.

The security industry, on the other hand, had a very different reaction. Lurene Grenier of SourceFire released a homebrew patch just three days after the Adobe advisory. Now I find it hard to believe that if a sole researcher, with no access to source code or exclusive knowledge of the product, can implement basic protection within days, Adobe cannot do the same, with better quality, in a similarly short time frame. I'm not asking for a perfect solution - I can wait until March 11 for that. I'm asking for an interim patch - a quick fix to a big problem. Yes, I'm willing to accept the risk that it will break something and hinder my ability to view PDF documents. Heck, I'm even willing to accept that it might erase a few. I'm willing to accept that risk because it is far less damaging than the prospect of unwittingly joining the next botnet army. Sadly, while the recent Adobe debacle has become a poster child for the case for interim patches, it's only the latest high-profile vulnerability representing a problem that won't go away on its own.

Our industry loves to fuss and debate over formulas and approaches to determine risk. Rather than kick off a study group to study the work of another group, let me cut to the chase and propose a simple questionnaire for the entire software industry. The next time an evil-doer exposes a vulnerability in one of your software products, ask yourself three simple questions:

- Are more than 10% of Internet users affected?
- Is an exploit in the wild?
- Will it take more than 7 days to release a permanent fix?

If you answered yes to all of the above questions - stop reading and start writing - a quick and dirty patch that we can all use to protect ourselves.
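In code, the questionnaire amounts to a three-way AND (a toy illustration; the names and thresholds simply restate the three questions above):

```python
def needs_interim_patch(pct_users_affected, exploit_in_wild, days_to_fix):
    """Return True when all three questions are answered 'yes'."""
    return pct_users_affected > 10 and exploit_in_wild and days_to_fix > 7
```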

- michael

Sunday, March 1, 2009

The Horror Browser

I was thinking about security usability (as usual) when I stumbled across Nayan Ramachandran's article about what makes a horror game scary. Hmm, some of these scare tactics sound familiar. For your entertainment if not edification, I present the horror browser.

On your first visit to a site you have never visited before, the browser darkens and plays the sound of a door creaking slowly open.

Suddenly, a zombie jumps out of the window to inform you that there is a problem with the site's SSL certificate. You are jaded, so you click through, finding the popup more and more irritating and less scary each time you see it.

Sometimes, when you submit a non-SSL form, an animated gremlin runs off the screen with your data.

Whenever you send cookies, you can hear your browser breathing, and the occasional choked sob indicates it encountered a web bug.

The browser makes periodic eerie noises whenever one of its plugins is vulnerable. Will your machine be infected?

Whenever you encounter an HTTP redirect to an explicit IP address, the browser shows sinister-looking spinning things in an attempt to distract and confuse you.

It's been weeks, and you and your vulnerable plugin are doing just fine, when you hear the eerie noises again, only this time, you see the flicker of a monster running across the browser window. As the monster closes the mini-blinds over the page you were viewing, you realize your home page has been hijacked.

You get email from your bank. Your webmail application shows a harmless looking child who says your account will be cancelled if you don't re-enter your personal information. Just as you enter your social security number, you realize: it's not your bank at all. Too late!

On the one hand, this is pretty silly, both for itself and because most of the Web is not out to get you. On the other hand, users make security decisions based on how secure they feel, not how secure the situation actually is. So, tying the language of fear to risky online security decisions (provided the browser actually knows which decisions are risky) could help users make better decisions in these cases. As Matthew Gallant comments,
Enemy aesthetics should repulse the player before the actual "threat" can be evaluated; ideally the player should feel a sense of "I don't want that thing near me" before it gets close enough to prove harmful.

Sleep well, don't let the zombies get you, and send me some screenshots if you see the gremlins.


Demystifying/Abusing Flash Cookies

Flash Cookies (aka Local Shared Objects) are increasingly being leveraged by developers as a powerful alternative to HTTP cookies, yet there seems to be significant confusion as to what they are capable of and what security/privacy risks they pose. Most users are at least familiar with cookies in general, and to some extent the technology has gotten a bad rap as a privacy concern. While cookies certainly can raise privacy concerns, we should keep in mind that they have become a key component of today's rich web applications, enabling persistence which is leveraged for authentication, user preferences, etc. As end users have become increasingly uncomfortable with the existence of data that permits sites to track their web clicks, they have begun disabling cookies in their browsers or adopting applications that block them. Yet what many don't realize is that the same tracking features they are trying to avoid are not only alive and well, but exist in a much more powerful form, thanks to Flash cookies.

Local SharedObjects were first introduced in Flash Player 6.0 in March 2002, so they've been around for a while. Combine this with the fact that (thanks largely to YouTube) Adobe is able to claim being present on 99% of Internet enabled desktops and Flash becomes a de-facto standard client side platform. Flash Cookies offer developers more flexibility than their HTTP based counterparts in virtually every category:
  • Storage Capacity - By default, sites can store up to 100K of data on the local file system, and no explicit consent is required from the end user. HTTP cookies can likewise store data without asking permission, but are limited to 4K in most browser implementations.
  • Expiry - HTTP cookies are either session based, meaning that they are wiped upon exiting the application/browser, or have an explicit expiration date. Flash cookies, on the other hand, have no expiration by default.
  • File Format - Flash cookies are stored in a binary format but contain text-based data, so they're easy to read directly from the file system, so long as you know where they're stored (see below). There are also a variety of LSO readers that can be used.
LSO Storage Locations
  • Windows XP
    • $user\Application Data\Macromedia\Flash Player\#SharedObjects
  • Windows Vista
    • $user\AppData\Roaming\Macromedia\Flash Player\#SharedObjects
  • Mac OS X
    • ~/Library/Preferences/Macromedia/Flash Player/#SharedObjects
  • Linux
    • /home/$user/.macromedia/Flash_Player/#SharedObjects
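Given those locations, enumerating a machine's Flash cookies takes only a few lines of scripting. Here is a hypothetical Python sketch (my own, not from the post) that walks a #SharedObjects directory looking for *.sol files:

```python
import os

def find_flash_cookies(root):
    """Return the paths of all *.sol files (Local Shared Objects)
    found anywhere under the given #SharedObjects directory."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if name.endswith(".sol"):
                hits.append(os.path.join(dirpath, name))
    return sorted(hits)
```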
From a developer's perspective, the greatest advantage of Flash cookies appears to be the fact that users aren't overly familiar with them and wouldn't know how to delete them even if they were. In fact, it wasn't until Flash Player 8.0 was released that Adobe added a Settings Manager to permit users to see and manage the Flash cookies on their system. This is the same control panel which was previously vulnerable to clickjacking. Browser privacy features which permit users to wipe local content, including HTTP cookies, do not presently touch Flash cookies, although I wouldn't be surprised to see this change over time.

Common security concerns for HTTP cookies outside of privacy involve cookie hijacking or injection or more generically, the ability to read from or write to a user's cookies. This is a significant threat given that cookies maintain persistence for authentication purposes and it is especially concerning given the prevalence of cross-site scripting vulnerabilities which facilitate such attacks.

Is cookie hijacking/injection possible with Flash cookies as it is with HTTP cookies? The answer is yes, so long as the attacker has the ability to upload Flash content to the target domain. While this is a significant hurdle, the prevalence of Flash on the web and the acceptance of user-supplied content on web sites mean that this isn't as difficult a challenge as you might think. As with HTTP cookies, Flash cookies enforce a same origin policy. Flash cookies can also restrict access based on the path where they are deployed. For example, a site where users have personal content (e.g., which stored all file uploads in one place (e.g. instead of user-specific locations (e.g. could expose itself to attack. If Flash content can be uploaded, an attacker simply needs to build a Flash file which can read from/write to an existing LSO. While the name of the LSO is required, discovering it is a trivial challenge, as the data is already stored on the client side. The following ActionScript code sample permits writing to a Flash cookie:

package {

    import flash.display.Sprite;
    import flash.net.SharedObject;

    public class zscaler extends Sprite {

        private var user:SharedObject;
        private var firstname:String;
        private var lastname:String;

        public function zscaler() {
            user = SharedObject.getLocal("zscaler");
            firstname = "Michael";
            lastname = "Sutton";
            user.data.firstname = firstname;
            user.data.lastname = lastname;
            user.flush(); // write the LSO to disk immediately
        }
    }
}


Once the code is written, it must be compiled into a *.swf file, which can be done using freely available tools such as the Adobe Flex 3 SDK, which includes a Flash compiler. In the above example, an LSO named zscaler is created which holds two variables (firstname and lastname). Reading from an existing Flash cookie is equally easy:

public function zscaler() {
    var label:TextField;

    user = SharedObject.getLocal("zscaler");

    firstname = user.data.firstname;
    lastname = user.data.lastname;

    label = new TextField();
    label.autoSize = TextFieldAutoSize.LEFT;
    label.background = true;
    label.border = true;
    label.text = "Firstname: " + firstname + "\nLastname: " + lastname;
    addChild(label); // display the values read from the LSO
}



The code has been abbreviated, as it's largely identical to the previous sample. In this case, we once again call getLocal() using the name of an existing LSO and then simply read the existing data (e.g. user.data.firstname). The example will display the contents of these variables in a text box.

If you're curious to know how many sites are leveraging Flash cookies, simply check the Website Storage Settings panel in your Adobe Flash Player Settings Manager. I suspect you'll be surprised to see how many sites are leveraging this technology. Also take time to browse through the content of these LSOs. Given the larger storage capacities available, you may find some interesting and perhaps insecure content.

Take care.

- michael