Install CMUS on Red Hat Enterprise Linux (RHEL) 7 / CentOS 7

CMUS is a fantastic lightweight music manager. In fact, it’s my go-to music player because of its speed and simplicity.

Unfortunately, it’s not yet available in any repository for RHEL 7. However, it’s possible to compile it from source and install it manually. This is very easy to do and gives you the latest version of the program (with the ability to view all albums by an artist!).

(Note: you can also grab a “stable” release from the cmus releases page on GitHub)

Here’s how:

wget https://github.com/cmus/cmus/archive/v2.6.0-rc0.tar.gz
tar xvf v2.6.0-rc0.tar.gz
cd cmus*
sudo yum install ncurses-devel
./configure
make
sudo make install
cmus

That’s it! Enjoy :) !

Install Pianobar on Red Hat Enterprise Linux 7 / CentOS 7

I’ve just migrated all of my systems to CentOS 7, as I want a more stable operating system running on my production machines. Unfortunately, it can sometimes be hard to find packages that are readily available for other distributions.

Pianobar is one such package. Pianobar provides a command-line interface to Pandora, which makes it considerably less cumbersome to work with (especially on a slower machine). Here’s how I installed it on EL7 (thanks to IceMan):

 cd /etc/yum.repos.d/
 wget http://download.opensuse.org/repositories/home:The-IceMan-Blog/Fedora_20/home:The-IceMan-Blog.repo
 yum install pianobar

You can see the original openSUSE repository page here:
http://software.opensuse.org/download.html?project=home%3AThe-IceMan-Blog&package=pianobar
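
Once installed, you can save your Pandora credentials so pianobar doesn’t prompt for them at every launch. A minimal sketch using pianobar’s standard config file (the path and key names are pianobar’s own; the credentials are placeholders):

 mkdir -p ~/.config/pianobar

Then put the following in ~/.config/pianobar/config:

 user = your-pandora-email@example.com
 password = your-password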

How to Get Steam Working on Arch Linux w/ nVidia Graphics (2014)

Getting Steam to work on anything other than Ubuntu is often frustrating. If you’re using the proprietary NVIDIA drivers (as anybody with a Maxwell-based GPU is), you might run into issues getting Steam to start.

Here’s a solution that was mentioned on the Arch forum:

1. Install your NVIDIA drivers normally.

2. Enable multilib support (see the note after these steps).

3. Install steam.

4. (Important) Install the proper libgl package:

sudo pacman -S lib32-nvidia-utils lib32-nvidia-libgl
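
A note on step 2: multilib is enabled by uncommenting two lines in /etc/pacman.conf, then refreshing your package databases:

[multilib]
Include = /etc/pacman.d/mirrorlist

sudo pacman -Syu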

That’s it! You should now be able to play your favorite Steam games on Arch.

Easy way to disable TrackPad in Linux

Let me be honest for a moment: Lenovo took something that was fantastic and broke it for all of us. That’s right, the trackpads on new Lenovo laptops are absolutely unusable. They might be sufficient if you never use your keyboard, but most people who buy ThinkPads are probably not ignoring their keyboards.

In KDE, turning off the trackpad is easy. The same goes for anything GNOME 3 based. But I recently switched to the i3 tiling window manager. Is there an alternative that works just as well?

Luckily, the answer is absolutely yes. It’s a cinch to disable the trackpad entirely while still retaining the ability to use the TrackPoint.

Here’s how:

1. Open your Terminal emulator.

2. Type in the following command:

synclient TouchpadOff=1

3. That’s it!

That’s all you need to do to disable the pesky trackpad on your laptop.
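
One caveat: the synclient setting doesn’t survive a restart of your X session. To turn the trackpad back on, run synclient TouchpadOff=0. If you want it disabled permanently, one simple approach (assuming you start i3 from an ~/.xinitrc and that the synaptics driver is in use) is to run the same command at startup:

# in ~/.xinitrc, before the line that launches i3
synclient TouchpadOff=1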

Using WebRTC to Deanonymize You (and How to Protect Yourself)

TL;DR
WebRTC is a feature in many recent browsers that helps make peer-to-peer connections faster and more efficient. By specification, it reveals the IP addresses of the network devices on the machine. This isn’t a new discovery, but it has a serious impact on machines that have a public IP address assigned to them. Most importantly, it can be used to deanonymize a user who thinks they’re securely accessing websites through a proxy – it makes it possible for the website to learn that user’s real-world IP address.


WebRTC is an open API standard drafted by the W3C to enable efficient peer-to-peer communications through a web browser. It provides a number of tools that make browser-based VoIP and video conferencing applications more practical. One of the key pieces that makes this work is the “RTCPeerConnection” feature, which reveals to the browser all of the internal IP addresses that the host has available.

In practice, this makes it possible for your browser to facilitate peer-to-peer connections to machines within your LAN, because it allows a communications service provider to determine whether a connection should be handled locally. This makes sense, since it would be inefficient to relay all of your traffic over the Internet to talk to someone on a different part of your own network.
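
To make this concrete, here is a minimal sketch of the disclosure technique, written as browser-side TypeScript (the STUN server URL, channel name, and regular expression are illustrative assumptions, not taken from any particular site):

// Coax the browser into gathering ICE candidates, each of which
// embeds one of the host's IP addresses.
const pc = new RTCPeerConnection({
  iceServers: [{ urls: "stun:stun.l.google.com:19302" }], // illustrative server
});
pc.createDataChannel("probe"); // a data channel forces candidate gathering
pc.onicecandidate = (event) => {
  if (event.candidate) {
    // A candidate string looks like:
    // "candidate:... 1 udp ... 203.0.113.7 54321 typ host ..."
    const match = /(\d{1,3}\.){3}\d{1,3}/.exec(event.candidate.candidate);
    if (match) {
      console.log("IP revealed via WebRTC:", match[0]);
    }
  }
};
pc.createOffer().then((offer) => pc.setLocalDescription(offer));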

Background

Many Internet users today are protected from having their real public IP address revealed to a malicious website, since they’re sitting behind a NAT firewall. If you’re connecting through a NAT firewall, all a malicious website can extract this way is a local IP address assigned by your home router (like 192.168.1.2).

But if you’re connecting to the Internet at a corporation or university, or through a server, it’s likely your computer has been assigned a public IP address. In essence, if your IP address does not start with “192.168.x.x” or “10.x.x.x,” and does not fall within “172.16.0.0 to 172.31.255.255,” then it is possible for a malicious website to determine your real public IP address.

Methodology

To test this theory, I created two virtual machines on Digital Ocean. The first, located in New York City, was a Linux machine running Firefox and sshuttle (an SSH tunneling tool). The second, located in Singapore, was an SSH server that I connected to with sshuttle in order to proxy my traffic through it.

First, I wanted to establish a baseline of what my browser reports when I’m not connecting through a proxy. Using a stock Firefox 30 installation on Ubuntu 14.04 LTS, I navigated to WhatIsMyBrowser.com.

This website shows users information about their web browser and operating system, as well as their local and public IP addresses. It makes a call to the WebRTC API to retrieve your local IP address. Note that if your machine has multiple “local” IP addresses, WebRTC is capable of revealing all of them.

Next, I connected to the server in Singapore using the following sshuttle command:

sshuttle --dns -vr root@IP_ADDRESS_OF_SINGAPORE_SERVER 0/0
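
(Here, --dns forwards DNS lookups through the tunnel as well, -v enables verbose output, -r specifies the remote SSH host, and 0/0 tells sshuttle to proxy traffic for all IPv4 subnets.)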

Once connected, I refreshed the page over at WhatIsMyBrowser.com. The page reloaded successfully, with the request routed through the proxy.

Results

Since I was now connecting to the Internet through a computer sitting in Singapore, I expected to see the IP address of the Singapore proxy server. As expected, that is the public address the page displayed.

However, in the field labeled “Your Local IP Address,” I saw the public IP address of the server that was running Firefox: the same address I had seen before connecting to the proxy.

Before – You can see the global IP (the one that is sent in a normal HTTP packet) and the local IP (the one retrieved by a WebRTC request) are present. They are the same here.

Then, I ran the same test through the proxy.

After – You can see that I’m connecting to the web through the proxy server in Singapore, but I’m still sending the IP address of my machine in New York. In other words, I’ve just been duped.

How to Protect Yourself

There are several things you can do to protect yourself from this WebRTC-based attack. These include:

  • Disable JavaScript in your browser. WebRTC depends on JavaScript, so it can’t run without it. Using NoScript or a similar browser extension makes it easy to disable JavaScript for sites you don’t trust. [Enabling JavaScript at all is inadvisable if you’re worried about a man-in-the-middle attack; such an attack could be mounted against your connection to supposedly trustworthy websites.]
  • Use an extension that disables WebRTC. The “Disable WebRTC” add-on seems to be a decent option.
  • Use a NAT router. You might not prevent the browser from revealing your local IP address, but if you’re connecting through a NAT router, you won’t be revealing your public IP address.
  • Use a VM. Virtual machines connecting to the Internet in NAT mode are not susceptible to this type of attack, since the machine has an address assigned by the hypervisor. VirtualBox appears to take this a step further by using the same IP address for each machine configured to use NAT. This makes it a pain to run server applications within a VirtualBox VM without using bridged networking, but it should offer better anonymity. One word of caution: if your local IP address is the same one VirtualBox uses, it’s pretty easy for a website to detect that you’re browsing in a VM, because the address VirtualBox assigns is not common in consumer routers.

What Browser Developers Should Do

Browser developers and vendors are ultimately responsible for this vulnerability. There is no reason for every website to have automatic access to this information, since most websites do not provide any service that requires WebRTC. Instead, browsers should ask users to opt in to this type of information disclosure, much as they ask users for permission before turning on a webcam or microphone.

Getting Started with Vagrant

Vagrant makes it possible for developers to work in a common environment that they can share with other people on their team or project. It helps eliminate the problem of shared code that doesn’t work in every environment, often due to different compilers / interpreters or different library versions.

In essence, Vagrant provides a front-end to traditional virtualization products, like VirtualBox or VMware. It makes it easy to create, share, and work with VM images.

However, Vagrant isn’t as well documented as it really should be. This tutorial explains how to get started with Vagrant and how to create the first image on your own machine. For the purpose of this tutorial, I will assume that you’re running Linux, but the process is similar for Mac or Windows users.

Step 1: Get Vagrant

Head on over to the downloads page and get the appropriate installer for your system. Since I’m running Fedora 20 x64, I need to grab the 64-bit image.

http://www.vagrantup.com/downloads.html

Once you finish downloading it, install it and open a terminal.

Step 2: Create a folder for Vagrant

Vagrant does not store your images in the directory you create them in, but it does store a file there called the “Vagrantfile.” Technically speaking, the Vagrantfile is a Ruby file that contains information about the configuration of your machine.

For each image that you add to Vagrant, you should create a new folder for it. Whenever you run “vagrant up” or other commands that apply to a specific Vagrant image, Vagrant looks for a Vagrantfile in your working directory.

This is a simple detail, but I didn’t see it explained too well elsewhere, and it’s bound to create confusion for new users of Vagrant.

Step 3: Get Your Vagrant Image

There are a number of places that host Vagrant images (often called boxes) that you can download. Two places that I’ve found are Vagrant Cloud and Vagrantbox.es.

For this tutorial, I’m going to grab an image of FreeBSD 9.2, but you can grab any of the images listed on those indexes. Here’s what you should enter into your terminal:

mkdir "FreeBSD_9.2"
cd "FreeBSD_9.2"
vagrant init chef/freebsd-9.2

This will create a new directory, move into that directory, and generate a Vagrantfile pointing at the chef/freebsd-9.2 box. At this point, you do not have your Vagrant image yet – only the configuration file that tells Vagrant where to get it.

Step 4: Start

Next, you need to run the image. The first time you bring the VM up, Vagrant will download the box from Vagrant Cloud and then start it. To get your Vagrant image ready and started, enter the following:

vagrant up

This is the same command you’ll use every time you want to start your Vagrant image.

Step 5: Login to your Vagrant image

At this point, your Vagrant image should be started. Now, you’ll need to SSH into the VM to access it. To do this, enter:

vagrant ssh

Step 6: Enjoy your Vagrant image.

That’s it! You’re done setting up your Vagrant image and can use it as if it were your host machine. To stop running your Vagrant image, all you need to enter is:

vagrant halt
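
Two related subcommands are worth knowing (both are standard Vagrant commands): “vagrant suspend” saves the VM’s state to disk so it resumes quickly, and “vagrant destroy” deletes the VM entirely. Your Vagrantfile stays put, so after a destroy you can always recreate the machine with “vagrant up.”

vagrant suspend
vagrant destroy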

Pro Tip: Deal with Small Screens in Linux

One of the most disappointing issues with modern laptops is the lack of a high resolution display. Even higher end laptops often suffer from the dreaded 1366×768 LCD panels; manufacturers often refuse to spend slightly more to give you a better panel, as most users (who aren’t power users, at least) won’t notice the difference.

Perhaps you’re like me and use a laptop with a small screen (I am a huge fan of my Lenovo ThinkPad X220; it’s a fantastically fast and small powerhouse with superb portability). That makes for a great portable machine, but you’re often left longing for a better screen.

If you’re a Linux user, there’s something that you can do about it! It turns out that XRandr supports screen scaling, which allows you to display more information on your screen without requiring you to get a higher resolution panel.

Here’s how to do it:
1. Open your favorite terminal application.
2. Type:
$ xrandr --output LVDS1 --scale 1.4x1.4 --panning 1912x1075

You can change several of these settings to suit your needs. Your output display most likely will be LVDS1, but you’ll want to check to confirm this (if you try it and don’t receive an error, then this is the correct display).

Your scale is a multiple of the current screen resolution.

Your panning setting should be your current resolution multiplied by the scale. This allows your mouse to move across the entire screen; without it, you’ll notice that you can’t reach every part of the screen. In my case, since my original screen resolution is 1366×768, I multiply each dimension by 1.4: that gives ~1912 and ~1075, respectively.
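
If you’re not sure of your output name, or you want to undo the scaling later, the following covers both (LVDS1 is typical for the integrated panel on these ThinkPads, but check the output list first):

$ xrandr
$ xrandr --output LVDS1 --scale 1x1 --panning 1366x768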

Hope this helps everybody dealing with a low resolution screen!

How I Would Find the Next Heartbleed Exploit


Heartbleed is yet another example of how a simple programming error can cause enormous damage. Not too long ago, Apple ran into trouble with the flaw that was later dubbed the “Goto Fail” bug. About a month ago, GnuTLS revealed the presence of a bug that, similarly to Apple’s, skipped over a series of validity checks (read: conditional statements), making it simple for an attacker to get a user to accept an invalid X.509 certificate. Now, this week it was revealed that OpenSSL had a simple bug in which a network-supplied value was never bounds-checked, allowing an attacker to read up to 64K of memory beyond the buffer each time they sent a “heartbeat” request to the server; this makes it easy for an attacker to steal sensitive data from a server running OpenSSL.

What is more amazing than the widespread effects of these bugs is their sheer obviousness. The Goto Fail bug should have been flagged by the IDE the Apple developer was using: the duplicated “goto fail;” statement left a block of unreachable code, which is something compilers can warn about. This is very easy to test for and even easier to spot as an attacker.

This week’s OpenSSL bug may not be obvious to anybody who hasn’t worked with programs written in C / C++ (thankfully, many higher-level languages offer some form of buffer overflow protection), but it’s obvious to anybody who has had experience with arrays or buffers in C/C++; in short, you never read into an array or buffer without checking the input against a maximum length first.

The simplicity of these errors makes these types of exploits frightening, as it’s possible for even an amateur attacker to find a 0-day without much difficulty.

Which leads me to the crux of this posting: how would one find the next Heartbleed exploit? It’s as simple as 1-2-3.

  1. Subscribe to repository commits for a few select open-source security libraries.
  2. Wait until someone commits something careless that gets approved; then wait until it’s included in a release of the security library.
  3. Start taking advantage of the exploit without telling anybody else.

The fact that these bugs are easy to spot after the fact means it’s also easy for an attacker to spot them preemptively. It may seem as though this is a failing of open-source software, but I’d actually argue the reverse is true. With open-source software, it’s possible for white-hats to take the same approach an attacker would and monitor commits for the introduction of security bugs. It’s also substantially easier to deal with these issues when they arise, since you can make a few changes to the code yourself to mitigate the vulnerability.

Now, you’re probably thinking to yourself, wouldn’t it be harder to accomplish the same task with closed-source software?

The honest answer is yes, but it’s not overly difficult to find exploits in proprietary software either. It may be harder to find obscure vulnerabilities, but it’s still fairly simple to test each feature for proper behavior. Wherever you find unexpected behavior, a vulnerability may be lurking. If the unexpected behavior is in a security feature, it is almost always exploitable. This doesn’t require much, if any, decompiling or difficult assembly analysis; it just takes a bit of persistence and an understanding of what’s expected and what’s unexpected.

What makes the presence of these types of security flaws in closed-source software concerning is that an honest party is not necessarily able to do anything about a hole once they find it. Users of closed-source security tools are dependent upon the maintainer of the program to patch the flaw. The same is true of locked-down hardware, such as the many embedded devices that use open-source libraries.

Guide: Get Fedora 20 on Digital Ocean


Digital Ocean doesn’t currently offer Fedora 20 as an option when creating a Droplet. There may be good reasons to stick with Fedora 19 for the time being (most importantly, if you ever plan on moving to RHEL 7: Red Hat has announced that RHEL 7 will be based on Fedora 19), but if you want the latest release of Fedora, you’ll need to do a bit of extra work. Here’s how you can get to Fedora 20 on your Digital Ocean droplet.

How to install:

  1. Log in to your Digital Ocean account and click “Create Droplet” on the upper-right hand corner of your screen. There, you’ll want to set a hostname, choose a plan, select a location and choose from one of the following options:
    1. Fedora 19 x32 (Recommended) – This is the best option unless you know you need to have a 64-bit kernel.
    2. Fedora 19 x64
  2. Once created, go to your email to retrieve the message that contains information about how to login to your newly created Droplet.
  3. Open up an SSH client, such as OpenSSH or PuTTY. Log in as the root user using the password in the email.
    1. If you’re using OpenSSH (installed by default on most Linux systems and Mac OS X), open a terminal and type the following command:

                   $ ssh root@YOURSERVERIP

  4. Once you’ve logged in, be sure to change your password immediately.

    # passwd

  5. Now, update to the latest version of Fedora 19. Type in “y” when it prompts you to confirm whether you want to install the software.

    # yum update

  6. Once updated, import the list of keys for Fedora 20.

    # rpm --import https://fedoraproject.org/static/246110C1.txt

  7. Clean your Yum configuration.

    # yum clean all

  8. Use Yum to do an upgrade. Confirm by typing “y” when prompted.

    # yum --releasever=20 distro-sync

  9. Once the upgrade is complete, reboot your server.

    # reboot

  10. Log back in and confirm that you’re running Fedora 20.

    # cat /etc/fedora-release

Getting the latest Kernel on Digital Ocean:

By default, Digital Ocean ignores the kernels that you install when updating or upgrading your server. It’s tricky to get a custom kernel to work on Digital Ocean, but luckily, Digital Ocean provides recent kernels that you can select in the configuration for your Droplet.

  1. Go to your web browser and navigate back to the listing of your droplets. Click on the name of your droplet and go to “Settings.”
  2. Go to “Kernel” and you’ll see a drop-down menu that has a listing of kernel options. Select the one with the highest version number. Be sure to pay attention to where it says “x64” or “x32”; you’ll want to select the one that is appropriate for your server. At the time of this writing, the latest that Digital Ocean offers is 3.13.5.
  3. Once you’ve selected the newer kernel, click on “Change”; you’ll be redirected to a page that asks if you’d like to “reboot your droplet.” Click on “Power Cycle” and let it reboot.
  4. To confirm that you have the latest kernel, SSH back into your server and check if you’re running the latest kernel with the following command:

    # uname -a

  5. You should see that you’re running a different kernel than before. Don’t worry about the “fc19” in the kernel name – that just means the kernel was compiled for Fedora 19; it works perfectly on Fedora 20.

A Little Discussion:

Some seasoned Fedora users may have noticed that I chose the Yum upgrade path over FedUp, even though the Yum path is obviously the “riskier” of the two. Unfortunately, FedUp will not work on Digital Ocean, since they don’t give you access to your boot loader (even if you try to catch the boot loader during the boot process through Console access).

In almost every other case, FedUp provides an easier and safer way for users to upgrade to the latest version of Fedora.
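
For reference, on a machine where you do control the boot loader, the FedUp path is just a couple of commands (a sketch, assuming fedup is available in your current Fedora release’s repositories):

    # yum install fedup
    # fedup --network 20
    # reboot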

Windows RDP – The Only Remote Desktop You Need

One of the major benefits of running Windows is the built-in Remote Desktop Protocol. It is, in fact, the #1 thing I missed when Linux was my host operating system. When driver issues with running two nVidia graphics cards at once forced me to install Windows as my host operating system, I was eager to get RDP re-enabled on my workstation. While it requires a bit more administrative work upfront, particularly due to its numerous security issues (and these aren’t insignificant, either), it certainly provides better performance than any of its competitors (especially if you use a modern version of the Windows client). Here is what you should know about the protocol and how to implement it properly:

SECURITY
RDP has been a magnet for 0-day exploits, and its users should not ignore the real possibility that they will eventually fall victim to one if they don’t protect their machines from attackers. The attractiveness of RDP to attackers is partly due to the protocol’s power: since it’s designed to give remote users full access to their accounts on a remote host, an attacker who compromises a local account is likely to gain significant privileges on that host. The protocol may be even more attractive to attackers simply because most targets with it enabled are of high value (either they’re servers or they’re machines within an enterprise).

Regardless, it is imperative that users of the protocol take precautions to protect themselves from attack. If possible, restrict remote access to approved clients only. This should be done at the network level, possibly using a VPN. In my specific case, I’m limited to firewall rules that only allow connections from IP addresses I trust. Assuming the firewall is correctly configured and does not have remote-access vulnerabilities of its own (which is not necessarily guaranteed), this provides pretty decent protection against 0-days, since an attacker would need to either compromise a trusted IP address or masquerade as one.
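
If you take the firewall-rule approach on the Windows host itself, a minimal sketch with the built-in firewall looks like this (the rule name is arbitrary, and 203.0.113.5 is a placeholder for a trusted client’s address; make sure no broader rule still allows port 3389 from anywhere):

netsh advfirewall firewall add rule name="RDP from trusted client" dir=in action=allow protocol=TCP localport=3389 remoteip=203.0.113.5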

The VPN route is a much better option if you’re able to control the machines that act as clients to your server. This provides the best protection, since it requires a potential client to authenticate to the VPN server before connecting to the RDP server. Of course, the security of this option is wholly dependent upon the chosen VPN server and requires its administrator to configure it to only allow authorized access (Windows’ PPTP is a cautionary example, as the MS-CHAPv2 authentication protocol it depends upon is vulnerable to brute-force attacks).

Importantly, configuring the network connection to the server is not a substitute for keeping the host updated. Although not perfect, Microsoft frequently releases updates for the protocol that improve performance and patch security vulnerabilities. Patching alone should not be trusted, since Microsoft may not be able to fix a flaw before it’s being actively exploited, but it should be part of an overall protection strategy.

USING LINUX WITH IT (THE BEST WAY)
I should note that RDP has several open-source implementations that provide reasonable performance and compatibility with native clients. Often, however, these implementations are not as fast or efficient as their native counterparts. One common technique is to encapsulate a VNC instance inside of an RDP session. This allows the server to accept connections from clients over the RDP protocol and to offer some compression, but it does not provide very good performance.

Another option is to virtualize your Linux desktop. This is what I’m doing right now, and the experience is close to using a native machine (as long as the host for the virtual machine has enough memory and a reasonably modern processor with hardware virtualization extensions). I’m having no issues accessing my Fedora 20 VM while I type this article in Emacs inside the guest OS. RDP actually handles the virtual machine’s screen quite well, although it does suffer from slightly worse performance than an application with native Windows UI elements (see below for why). As a side benefit, once you’re using a virtualization environment extensively, adding new VMs to test software or do development work on a target platform becomes a lot easier.

HOW THE PROTOCOL WORKS BETTER THAN ITS COMPETITORS
I’m sure I will get some flack for claiming that RDP provides better performance and usability than its competition, but this is mostly true (at least when connecting to Windows hosts). Since the protocol is deeply integrated into Windows, it benefits from being able to truly emulate the local experience on a remote machine.

Technically, RDP “cheats” in rendering a remote host by calling local versions of common Windows UI library elements instead of trying to render those elements remotely. It also keeps track of which portions of the screen have changed during a connection. This contrasts with other protocols, such as VNC, which send a bitmap “image” of the remote screen. VNC polls the screen several times a second and sends a new version of the bitmap image to the client to keep the client in “sync” with the remote host. Compression helps mitigate some of the latency issues, but it does not provide ideal performance, since the whole screen is still sent each time a change occurs.

RDP, by comparison, sends only the changed portions of the screen, which helps to reduce latency (fewer packets to compress and send on each update) and overall network utilization, so it can be fast even on a reasonably slow network. Obviously, actual utilization and latency depend on the workload that you’re trying to push through the protocol. If you’re attempting to watch a full-screen HD video through RDP, you’re likely to be dissatisfied with the choppiness and poor image quality of the video. If, however, you’re working with text and static images (or small videos), your performance will likely be excellent. Right now, I’m accessing my workstation from my older laptop running Windows 8.1 over a fairly slow wireless connection (~500 kbps), and the performance is quite good.

It also handles client access better than any of its competition. For one, it allows a user to be signed into their machine remotely without giving passers-by direct access to the host’s screen. This is important in a business environment, where it is not acceptable to leave an unattended machine logged into a user account. RDP “locks” the host’s screen and allows the remote user to interact with the system without “broadcasting” every event to people walking by the host’s monitor. This feature is available in other solutions (such as an X server that allows multiple independent X sessions at once), but those solutions either don’t allow users to keep the host “locked” or don’t allow the user to connect to an existing login.

RDP also handles clients with different screen resolutions quite well. My desktop has 3 monitors attached to it, whereas my laptop has a single 1920×1200 display. When I connect to my workstation, my session is adjusted to fit the native resolution of my client. There is no need for me to manually specify my screen resolution before connecting, since my session is automatically adjusted to my local screen configuration. As an aside, RDP also allows for multiple remote monitors on the client end, so you can take advantage of all available hardware on your client.

CONCLUSION
Needless to say, I’m a fan of the protocol. While I often despise Windows for its lack of flexibility and control, Remote Desktop Protocol, despite its security problems, is one of the best features of Windows. It’s fast, simple, yet insanely powerful. There are several other features that I did not touch upon in this article (namely remote audio and device sharing, which lets you share a client-side printer, USB device, or storage with the remote host) that truly make this the best remote desktop option to date. My main gripe is the proprietary nature of the protocol and the lack of a Linux or Mac OS X server that provides similar performance.