How I Would Find the Next Heartbleed Exploit


Heartbleed is yet another example of how a simple programming error can cause enormous damage. Not too long ago, Apple ran into its own trouble with the bug that was later dubbed “Goto Fail.” About a month ago, GnuTLS revealed the presence of a bug that, similarly to Apple’s, skipped over a series of validity checks (read: conditional statements), making it simple for an attacker to get a user to accept an invalid X.509 certificate. This week, it was revealed that OpenSSL had a simple bug where a network-supplied value didn’t have a bounds check, allowing an attacker to dump up to 64 KB of memory beyond the bounds of a buffer each time they sent a “heartbeat” request to the server. This exploit makes it easy for an attacker to steal sensitive data from a server running OpenSSL.

What is more amazing than the widespread effects of these bugs is their sheer obviousness. The Goto Fail bug should have been picked up by the IDE that the Apple developer who introduced it was using; it is very easy to test for and even easier to spot as an attacker.
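To make that concrete, here is a minimal, self-contained C program (my own condensed imitation, not Apple’s actual code) that reproduces the shape of the bug: an accidentally duplicated goto runs unconditionally, the real check becomes unreachable, and the function “succeeds” anyway. Any compiler or IDE that warns about unreachable code will flag it.

    #include <stdio.h>

    /* Stand-in for the real signature check; it would reject this input. */
    static int verify_signature(void) { return -1; }

    int main(void)
    {
        int err = 0;

        if ((err = 0 /* earlier checks pass */) != 0)
            goto fail;
            goto fail;                        /* duplicated line: always runs */
        if ((err = verify_signature()) != 0)  /* never reached */
            goto fail;

    fail:
        /* err is still 0, so the bogus input is treated as verified. */
        printf("err = %d (%s)\n", err, err == 0 ? "accepted" : "rejected");
        return err;
    }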

This week’s OpenSSL bug may not be obvious to anybody who hasn’t worked with programs written in C or C++ (thankfully, many higher-level languages offer some form of buffer-overflow protection), but it’s obvious to anybody who has had experience with arrays or buffers in C/C++; in short, you never read into an array or buffer without checking the input against a maximum length before accepting it.
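For illustration only, here is a minimal sketch in the spirit of the bug (not OpenSSL’s actual code; the message layout and names are simplified). The first two bytes of the request claim a payload length that the peer fully controls; the fix is to drop any request whose claim exceeds what actually arrived, a silent discard as RFC 6520 prescribes.

    #include <stdio.h>
    #include <string.h>

    static int heartbeat_response(unsigned char *reply,
                                  const unsigned char *req, size_t req_len)
    {
        unsigned short claimed;

        if (req_len < 2)
            return -1;

        /* Peer-supplied length field: entirely attacker-controlled. */
        claimed = (unsigned short)((req[0] << 8) | req[1]);

        /* The missing bounds check: the vulnerable code copied `claimed`
         * bytes regardless of how short the request really was, echoing
         * back up to 64 KB of adjacent memory. */
        if (claimed > req_len - 2)
            return -1; /* silently discard, per RFC 6520 */

        memcpy(reply, req + 2, claimed);
        return (int)claimed;
    }

    int main(void)
    {
        unsigned char evil[] = { 0xFF, 0xFF, 'h', 'i' }; /* claims 65535 bytes, sends 2 */
        unsigned char reply[65536];

        if (heartbeat_response(reply, evil, sizeof evil) < 0)
            printf("malformed heartbeat dropped\n");
        return 0;
    }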

The simplicity of these errors makes these types of exploits frightening, as it’s possible for even an amateur attacker to find a 0-day without much difficulty.

Which leads me to the crux of this post: how would one find the next Heartbleed exploit? It’s as simple as 1-2-3.

  1. Subscribe to repository commits for a few select open-source security libraries.
  2. Wait until someone commits something stupid that gets approved; then wait until it’s included in a release of the security library.
  3. Start taking advantage of the exploit without telling anybody else.

The fact that these bugs are easy to spot after the fact means it’s also easy for an attacker to spot them preemptively. It may seem as though this is a failing of open-source software, but I’d actually argue the reverse is true. With open-source software, it’s possible for white hats to take the same approach an attacker would and monitor commits for the introduction of security bugs. It’s also substantially easier to deal with these issues when they arise, since you can make a few changes to the code yourself to mitigate the vulnerability.
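To make the monitoring step concrete: all it takes is a clone of the repository and the habit of reading recent diffs to security-critical paths. OpenSSL’s public GitHub mirror is shown here as an example; the path and time window are just illustrative.

    $ git clone https://github.com/openssl/openssl.git
    $ cd openssl
    $ git log -p --since="2 weeks ago" -- ssl/

An attacker and a white hat run exactly the same loop; the difference is what they do with what they find.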

Now, you’re probably thinking to yourself, wouldn’t it be harder to accomplish the same task with closed-source software?

The honest answer is yes, but it’s not overly difficult to find exploits in proprietary software. It may be harder to find obscure vulnerabilities, but it’s still fairly simple to test each feature for proper behavior. Wherever you find unexpected behavior, a vulnerability may be lurking; if the unexpected behavior is in a security feature, it is almost always exploitable. This doesn’t require much, if any, decompiling or difficult assembly analysis; it just takes a bit of persistence and an understanding of what’s expected and what isn’t.

What makes the presence of these types of security flaws in closed-source software concerning is that an honest party who finds one is not necessarily able to do anything about it. Users of closed-source security tools are dependent upon the maintainer of the program to patch the flaw. The same is true of locked-down hardware, such as many embedded devices, even when it uses open-source libraries.

Guide: Get Fedora 20 on Digital Ocean


Digital Ocean doesn’t currently offer Fedora 20 as an option when creating a Droplet. While there may be good reasons to stick with Fedora 19 for the time being (most importantly, if you ever plan on moving to RHEL 7: Red Hat announced that RHEL 7 will be based on Fedora 19), users who want the latest release of Fedora will need to do a bit of extra work. Here’s how you can get to Fedora 20 on your Digital Ocean droplet.

How to install:

  1. Log in to your Digital Ocean account and click “Create Droplet” in the upper right-hand corner of your screen. There, you’ll want to set a hostname, choose a plan, select a location, and choose one of the following options:
    1. Fedora 19 x32 (Recommended) – This is the best option unless you know you need to have a 64-bit kernel.
    2. Fedora 19 x64
  2. Once created, check your email for the message containing the login information for your newly created Droplet.
  3. Open up an SSH client, such as OpenSSH or PuTTY. Log in as the root user using the password in the email.
    1. If you’re using OpenSSH (installed by default on most Linux systems and Mac OS X), open a terminal and type the following command:

                   $ ssh root@YOURSERVERIP

  4. Once you’ve logged in, be sure to change your password immediately.

    # passwd

  5. Now, update to the latest version of Fedora 19. Type in “y” when it prompts you to confirm whether you want to install the software.

    # yum update

  6. Once updated, import the Fedora 20 package-signing key.

    # rpm --import https://fedoraproject.org/static/246110C1.txt

  7. Clean out Yum’s caches.

    # yum clean all

  8. Use Yum to do an upgrade. Confirm by typing “y” when prompted.

    # yum --releasever=20 distro-sync

  9. Once the upgrade is complete, reboot your server.

    # reboot

  10. Log back in and confirm that you’re running Fedora 20.

    # cat /etc/fedora-release
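If the upgrade succeeded, the release file should report the new version, along the lines of:

    Fedora release 20 (Heisenbug)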

Getting the latest Kernel on Digital Ocean:

By default, Digital Ocean ignores the kernels that you install when updating or upgrading your server. It’s tricky to get a custom kernel to work on Digital Ocean, but luckily, Digital Ocean provides recent kernels that you can select in the configuration for your Droplet.

  1. Go to your web browser and navigate back to the listing of your droplets. Click on the name of your droplet and go to “Settings.”
  2. Go to “Kernel” and you’ll see a drop-down menu that has a listing of kernel options. Select the one with the highest number. Be sure to pay attention to where it says “x64” or “x32”; you’ll want to select the one that is appropriate for your server. At the time of this writing, the latest that Digital Ocean offers is 3.13.5.
  3. Click on “Change” once you’ve selected the newer kernel. You’ll be redirected to a page that asks if you’d like to “reboot your droplet.” Click on “Power Cycle” and let it reboot.
  4. To confirm that you have the latest kernel, SSH back into your server and check if you’re running the latest kernel with the following command:

    # uname -a

  5. You should see that you’re running a different kernel than before. Don’t worry about the “fc19” in the kernel name – that just means the kernel was compiled for Fedora 19; it works perfectly on Fedora 20.

A Little Discussion:

Some seasoned Fedora users may have noticed that I chose the Yum upgrade path over FedUp, even though Yum is obviously the “riskier” of the two. Unfortunately, FedUp will not work on Digital Ocean, since they don’t give you access to your boot loader (even if you try to catch the boot loader during the boot process through Console access).

In almost every other case, FedUp provides an easier and safer way for users to upgrade to the latest version of Fedora.

Windows RDP – The Only Remote Desktop You Need

One of the major benefits of running Windows is the built-in Remote Desktop Protocol. It is, in fact, the thing I missed most when Linux was my host operating system. When I was forced to install Windows as my host operating system after numerous driver issues while attempting to run two NVIDIA graphics cards at once, I was eager to get RDP re-enabled on my workstation. While it requires a bit more administrative work upfront, particularly due to its numerous security issues (and these aren’t insignificant), it certainly provides better performance than any of its competitors (especially if you use a modern version of the Windows client). Here is what you should know about the protocol and how to implement it properly:

SECURITY
RDP has been a magnet for 0-day exploits, and its users should not ignore the real possibility of eventually falling victim to one if they don’t protect their machines from attackers. The protocol’s attractiveness to attackers is partly due to its power: since it’s designed to give remote users full access to their accounts on a remote host, an attacker who compromises a local account is likely to gain significant privileges on that host. The protocol may be even more attractive simply because most targets with it enabled are high-value (either servers or machines within an enterprise).

Regardless, it is imperative that users of the protocol take precautions to protect themselves from attack. If possible, isolate remote access to approved clients only. This should be done at the network level, possibly using a VPN. In my specific case, I’m limited to restricting access with firewall rules that only allow connections from IP addresses I trust. Assuming the firewall is correctly configured and has no remote-access vulnerabilities of its own (which is not guaranteed), this provides pretty decent protection against 0-days, since an attacker would need to either compromise a trusted IP address or masquerade as one.
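For illustration, on a Windows host this kind of scoping can be done with the built-in firewall from an elevated prompt; the rule name and address below are placeholders, and 3389 is RDP’s default port:

    > netsh advfirewall firewall add rule name="RDP (trusted IPs only)" dir=in action=allow protocol=TCP localport=3389 remoteip=203.0.113.5

You would also want to confirm that no broader rule still allows port 3389 from anywhere.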

The VPN route is a much better option if you’re able to control the machines that act as clients to your server. It provides the best protection, since it requires a potential client to authenticate to the VPN server before connecting to the RDP server. Of course, the security of this option is wholly dependent upon the chosen VPN server, and it requires the administrator to configure it to allow only authorized access (Windows’ built-in PPTP is a cautionary example: the MS-CHAPv2 authentication protocol it depends on is vulnerable to brute-force attacks).

Importantly, securing the network connection to the server is not a substitute for keeping the host properly updated. Although not perfect, Microsoft frequently releases patches for the protocol that improve performance and fix security vulnerabilities. Patching alone should not be trusted, since Microsoft may not be able to update the protocol before it is being actively exploited, but it should be part of an overall protection strategy.

USING LINUX WITH IT (THE BEST WAY)
I should note that RDP has several open-source implementations that provide reasonable performance and compatibility with native clients. Often, however, these implementations are not as fast or efficient as their native counterparts. One common technique is to encapsulate a VNC instance inside an RDP session; this allows the server to accept connections from RDP clients and to offer some compression, but it does not provide very good performance.

Another option is to virtualize your Linux desktop. This is what I’m doing right now and the experience is close to using a native machine (as long as your host for the virtual machine has enough memory and a reasonably modern processor with CPU-based virtualization extensions). I’m having no issues accessing my Fedora 20 VM while I type this article in Emacs inside of the guest OS. RDP actually handles the virtual machine’s screen quite well, although it does suffer from slightly worse performance than an application with native Windows UI elements (see below for information on why). As a side benefit, once you’re using a virtualization environment extensively, adding new VMs to test software or do development work on a target platform becomes a lot easier.

HOW THE PROTOCOL WORKS BETTER THAN ITS COMPETITORS
I’m sure I will get some flak for claiming that RDP provides better performance and usability than its competition, but this is mostly true (at least when connecting to Windows hosts). Since the protocol is deeply integrated into Windows, it benefits from being able to truly emulate the local experience on a remote machine.

Technically, RDP “cheats” when rendering a remote host by drawing common Windows UI elements with the client’s local libraries instead of rendering them remotely. It also keeps track of which portions of the screen have changed during a connection. This contrasts with protocols such as VNC, which send a bitmap “image” of the remote screen: VNC polls the screen several times a second and sends a new version of the bitmap to keep the client in “sync” with the remote host. Compression helps mitigate some of the latency, but performance is still far from ideal, since the whole screen is resent each time a change occurs.
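To put rough numbers on VNC’s approach: an uncompressed 1920×1200 screen at 24 bits per pixel is about 6.9 MB per frame, so polling even a few times per second means tens of megabytes per second before compression.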

RDP, by comparison, sends only the changed portions of the screen, which reduces latency (fewer packets to compress and send per update) and overall network utilization, so it can be fast even on a reasonably slow network. Obviously, actual utilization and latency depend on the workload you’re pushing through the protocol. If you’re attempting to watch full-screen HD video through RDP, you’re likely to be dissatisfied with the choppiness and poor image quality. If, however, you’re working with text and static images (or small videos), your performance will likely be excellent. Right now, I’m accessing my workstation from my older laptop running Windows 8.1 over a fairly slow wireless connection (around 500 kbps), and the performance is quite good.

It also handles client access better than any of its competition. For one, it allows a user to be signed in to their machine remotely without the session appearing on the host’s physical screen. This is important in a business environment, where it is not acceptable to leave an unattended machine logged in to a user account. RDP “locks” the host’s screen and allows the remote user to interact with the system without “broadcasting” every event to people walking past the host’s monitor. Similar features exist elsewhere (an X server, for example, allows multiple simultaneous X sessions, each independent of the others), but those solutions either don’t keep the host “locked” or don’t allow the user to connect to an existing login.

RDP also handles clients with different screen resolutions quite well. My desktop has three monitors attached to it, whereas my laptop has a single 1920×1200 display. When I connect to my workstation, the session is automatically adjusted to fit my client’s screen configuration; there is no need to specify a resolution manually before connecting. As an aside, RDP also supports multiple monitors on the client end, so you can take advantage of all available hardware on your client.

CONCLUSION
Needless to say, I’m a fan of the protocol. While I often despise Windows for its lack of flexibility and control, Remote Desktop Protocol, despite its security problems, is one of the best features of Windows. It’s fast, simple, yet insanely powerful. There are several other features that I did not touch upon in this article (namely remote audio and device sharing, which lets you share a client-side printer, USB device, or drive) that truly make this the best remote desktop option to date. My main gripes are the proprietary nature of the protocol and the lack of a Linux or Mac OS X server that provides similar performance.

Reviving My Blog

… And I’m back!

I’ve been debating over the past few months whether I should bring my blog back online and start posting again. It’s not as if I haven’t had any good ideas or plenty of things to say. In fact, the good ideas I’ve accumulated over the past few months have become motivation enough to bring this site back online. I’m hoping to update it regularly with short posts about things I’ve found useful and insightful, as well as longer-form posts that showcase other areas of interest.

If you’ve never visited my blog before, allow me to introduce myself. I’m Ryan Leaf. I’m a computer science major studying at the Worcester Polytechnic Institute in Worcester, MA. In my free time, I work on starting and building entrepreneurial ventures; I research and experiment with hardware, software, and human security; I create and develop open-source projects and initiatives; and I fervently promote my views on how to make the world of technology more perfect and harmonious.

I’m currently working on several projects, including a CLI implementation of a Kanban/Scrum board for project management. The project is released under the GPLv3 license, and I hope it helps other people in a similar situation to mine (namely, needing a self-hosted solution that doesn’t require a web browser) manage their teams better.

I’m also working on creating an accessible video lecture series that teaches people how to get started with software design and development. These videos are intended to be lighthearted and straightforward, with advice that can easily be followed. My goal with this project is to give fellow students and budding developers the tools they need to properly solve complex software problems, instead of jumping directly into a programming task by writing code. For those unfamiliar: designing a program on paper with good design practices and principles in mind is the key to getting it right the first time. It reduces errors and hassle, helps eliminate major bugs, and protects one’s sanity.

And with that, I’d like to encourage you to share your opinions on this post. What are your thoughts on a CLI-based Kanban program? What do you think about creating a video lecture series that teaches basic and intermediate software design and engineering? Feel free to comment on this post or send me an email directly at the address in my about section.