Amazon Linux AMI 2012.03

March 28, 2012

As noted on the Amazon Web Services Blog and the EC2 discussion forums, today is the release of the Amazon Linux AMI 2012.03.

You can check out the release notes for more information about new features.

If you are running the command line tools for accessing EC2, you can find the AMI IDs in your region by running:

$ euca-describe-images -o amazon | grep amzn-ami | grep 2012.03.1

Change euca- to ec2- if you are using the Amazon EC2 API Tools.

If you like what you see and want to help us build future versions of the Amazon Linux AMI, you should submit your resume!


The coveted Big Radish status.

October 18, 2011

The best part about living in Seattle is having access to Amazon Fresh for grocery shopping, with delivery right to my apartment. The level of convenience and quality has been great, especially for someone like me who is absolutely terrible at making time for Normal Life Activities, viz. shopping, having a driver’s license that matches the state in which I live, taking clothes to the dry cleaners, and diverse other errands and chores that aren’t related to working.

As of October 1, I am pleased to report that I have the extremely coveted Big Radish status, which is currently my favorite frequent-buyer-program name.

Big Radish status: giving renewed (and voluntary!) meaning to the old phrase, “I owe my soul to the company store”.

EC2 command line tip — terminate all instances in a region.

October 3, 2011

I spend a lot of time kicking off EC2 instances for testing, bug reproduction, general information gathering, etc. These instances don’t have any sort of long-term life. Every so often I simply want to kill them all off, and I want to do so using the command line.

I’ve created ~/terminate-all-instances-in-region with permissions of 700.

$ cat ~/terminate-all-instances-in-region
#!/bin/sh
euca-describe-instances | grep INSTANCE | sed 's/INSTANCE[[:space:]]*\(i-[[:alnum:]]*\).*/\1/' | xargs euca-terminate-instances

Change euca- to ec2- if you are using the Amazon EC2 API Tools.

This assumes that your EC2 region is set via the EC2_URL environment variable. See this post for more details.
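If you're curious what the sed step actually pulls out, here's a quick check against a made-up line in the shape of euca-describe-instances output (the instance and AMI IDs are fabricated for the example):

```shell
# A fabricated line shaped like euca-describe-instances output;
# the sed step keeps only the captured instance ID.
printf 'INSTANCE\ti-0a1b2c3d\tami-12345678\trunning\n' |
    sed 's/INSTANCE[[:space:]]*\(i-[[:alnum:]]*\).*/\1/'
# → i-0a1b2c3d
```

Those IDs are then handed to euca-terminate-instances via xargs, one batch for all of them.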

Configuring your EC2 environment.

October 2, 2011

Let’s talk for a moment about configuring your Linux system to work with Amazon EC2 via the command line.

The command line tools that you’ll want are either the Amazon EC2 API Tools or the euca2ools package, depending on your language, license, and distro preferences.

In the examples below, change euca- to ec2- if you are using the Amazon EC2 API Tools.

Note that everything in this blog post is something that you should only have to do once, and then you’ll be off and running with EC2 for a long time to come.

Setting environment variables

The first thing to do is to configure your system’s environment variables to handle AWS account credentials. Create ~/set-ec2-environment as follows:

$ cat ~/set-ec2-environment
export EC2_ACCESS_KEY=<your access key ID>
export EC2_SECRET_KEY=<your secret access key>
export EC2_PRIVATE_KEY=~/pk-XXXXXXXX.pem
export EC2_CERT=~/cert-XXXXXXXX.pem
The values for these variables are all found or generated via this link. Some really useful docs are here.

Setting your region

EC2 is split into distinct regions. Typically you’ll choose a region based on your geographic location, and you will launch Amazon Machine Images (AMIs) in that region. For the most part, you should be able to do all of your work in one region, unless you make a conscious choice to spread your workload across regions, or if an AMI that you want to run is only available in a specific region.

$ euca-describe-regions
REGION eu-west-1
REGION us-east-1
REGION ap-northeast-1
REGION us-west-1
REGION ap-southeast-1

Create one or more ~/set-region-REGION-NAME as follows:

$ cat ~/set-region-us-east-1
export EC2_URL=https://ec2.us-east-1.amazonaws.com

Tying credential and regional settings together

Edit ~/.bashrc to source the two configuration files on login, or just source the two files from the command line. If you have multiple set-region-REGION-NAME files, changing your region is as easy as running source on a different region file.

source ~/set-ec2-environment
source ~/set-region-us-east-1

We go to all this trouble because everything in EC2 is divided by region: the idea is to separate the global AWS configuration from the region currently in use, and to make it trivial to change that region from the command line.
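If you'd rather not keep one file per region, the same effect can come from a tiny shell function in your ~/.bashrc (this helper is my own sketch, not part of either toolset):

```shell
# Build EC2_URL from a region name instead of sourcing a per-region file.
set_region() {
    export EC2_URL="https://ec2.${1}.amazonaws.com"
}

set_region us-west-1
echo "$EC2_URL"
# → https://ec2.us-west-1.amazonaws.com
```

The per-file approach does have one advantage: tab completion on the filenames reminds you which regions you've set up.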

Setting your EC2 ssh key

Now that you have your region set, it’s time to create your ssh key and upload it to the region to which your environment is pointing.

$ euca-add-keypair amazon-ssh > amazon-ssh
$ chmod 600 amazon-ssh

The default AWS security group in each region doesn’t allow inbound ssh access. Enabling it for all of your instances in that region takes a single command:

$ euca-authorize -p 22 default

Finally, edit ~/.ssh/config to set the proper identity file for EC2 logins:

$ cat ~/.ssh/config
Host *
    User ec2-user
    IdentityFile ~/amazon-ssh
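One caveat: a Host * stanza applies ec2-user and this key to every host you ssh into. If that's too broad for your setup, a narrower pattern keeps the rest of your ssh traffic untouched (the pattern below is my own suggestion, matching the public DNS names EC2 hands out):

```
Host *.amazonaws.com
    User ec2-user
    IdentityFile ~/amazon-ssh
```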

NOTE: It is possible to use a single SSH key for multiple regions, but euca2ools 1.3.1 doesn’t currently support this. You have to generate your own ssh public/private keypair, and then use ec2-import-keypair or the EC2 console in order to upload that public key to multiple regions.
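The workflow from that note can be sketched roughly as follows. The loop only prints the commands (a dry run); drop the leading echo to actually run the imports. The exact ec2-import-keypair flags are from memory, so double-check them against your installed tools:

```shell
# Generate the keypair locally, once:
#   ssh-keygen -t rsa -f ~/amazon-ssh -N ''
# Then import the public half into each region (dry run shown).
for region in us-east-1 us-west-1 eu-west-1; do
    echo ec2-import-keypair amazon-ssh \
        --public-key-file ~/amazon-ssh.pub --region "$region"
done
```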

Congrats! You’ve now finished all the one-time setup that is necessary to use EC2.

Launching your AMI

Launch your instance by running: $ euca-run-instances -k amazon-ssh AMI_ID

I have added alias euca-run-instances="euca-run-instances -k amazon-ssh" to my ~/.bashrc, which allows me to simply run $ euca-run-instances AMI_ID with no additional command line arguments, unless I choose to specify a particular instance type, etc.

Connecting to your AMI

Run $ euca-describe-instances to get a list of all instances you have running in the region. You’ll see the hostname of the instance that you just started, and you can now run $ ssh HOSTNAME to connect. If everything is configured properly, you won’t need any other command line options.


Your home directory should contain:

amazon-ssh
set-ec2-environment
set-region-us-east-1

Your ~/.bashrc should contain:

alias euca-run-instances="euca-run-instances -k amazon-ssh"
source ~/set-ec2-environment
source ~/set-region-us-east-1

Your ~/.ssh/config should contain:

Host *
    User ec2-user
    IdentityFile ~/amazon-ssh

Amazon Linux AMI 2011.09

September 26, 2011

Today is the release of the 2011.09 Amazon Linux AMI.

The AMI IDs are listed near the bottom of the detail page, along with the release notes.

If you are running the command line tools for accessing EC2, you can find the AMI IDs in your region by running:

$ euca-describe-images -o amazon | grep 2011.09.1 | grep amzn.*ami

Change euca- to ec2- if you are using the Amazon EC2 API Tools.

A few Cygwin tips.

September 25, 2011

My primary work laptop these days is a Windows 7 machine.

In an effort to make this a more Linux-friendly environment, the first thing that I installed on it was Cygwin, a collection of tools which provide a Linux look & feel and compatibility layer on Windows.

As an aside, the second thing that I installed was Tomboy, because Gnote is not available for Windows. Over the years, the Tomboy/Gnote application has become essential to my daily workflow.

What am I using Cygwin for? First and foremost, as an SSH client into my Linux desktop and a bunch of other Linux boxen, where all the real work gets done. For me, PuTTY isn’t a good enough SSH client for Windows. Secondly, for text editing with vim and nano. Finally, Cygwin provides the comfortable environment of bash, grep, less, find, and all the other main Linux utilities.

If you are also using Cygwin, here are some of my suggestions for maximizing your user experience:

(1) Install mintty, which is part of the Cygwin package set though not selected by default. It is far superior to the default Cygwin terminal emulator.

(2) Install the ncurses package so that the clear command will exist in your environment.

(3) Remove the bash-completion package, which dramatically speeds up the time between launching a mintty instance and getting a prompt.

(4) Configure vim to remember the last location of your cursor by adding the following to .vimrc:

" Only do this part when compiled with support for autocommands
if has("autocmd")
  augroup redhat
    " When editing a file, always jump to the last cursor position
    autocmd BufReadPost *
      \ if line("'\"") > 0 && line("'\"") <= line("$") |
      \   exe "normal! g'\"" |
      \ endif
  augroup END
endif

(5) Improve bash completion by adding the following to .inputrc:

set show-all-if-ambiguous on
set mark-directories on
set mark-symlinked-directories on

Firewalling SSH brute force attacks.

September 21, 2011

Anyone who runs their own Linux server knows the annoyance of looking through the log files to see automated SSH brute force attacks trying to find a login to the machine. In the past, I’ve avoided this problem simply by running sshd on a non-traditional port, which makes all the automated scripts that attack port 22 fail.

I recently had to move sshd back to port 22, and I quickly tired of seeing 5k failed login attempts every day.

UPDATE: After some Googling, and after taking into account a lot of good advice from the comments, as well as from John and Smooge, here’s how I’ve rewritten my firewall to protect against brute force ssh attacks.

# set default policies
iptables -P INPUT DROP

# all pre-established clients
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# new inbound ssh, protecting against brute-force attacks
iptables -A INPUT -p tcp -m tcp --dport 22 -m state --state NEW -m recent --set --name SSH
iptables -A INPUT -p tcp -m tcp --dport 22 -m state --state NEW -m recent --update --seconds 60 --hitcount 4 --rttl --name SSH -j DROP
iptables -A INPUT -p tcp -m tcp --dport 22 -m state --state NEW -j ACCEPT

The changes improve efficiency by moving all the RELATED and ESTABLISHED filtering to the beginning of the checks. Also, the order of the checks on NEW ssh connections has been fixed based on the suggestions in the comments.

The blocked IPs are stored in /proc/net/ipt_recent/SSH.

Meeting Neal Stephenson.

September 21, 2011

If I were only allowed to read one set of books for the rest of my life, I would choose The Baroque Cycle by Neal Stephenson without a second thought.

I had an opportunity to meet him tonight at a reading and signing in support of his latest novel, which was released today.

Seattle is Neal’s hometown, and there were about 800 people in attendance. He read some excerpts from the book and then did some Q&A.

I asked Neal to talk about the tools and tactics that he uses to keep all of the details of multi-thousand-page epics organized, allowing him to pull it all together in the end and to insert references, callbacks, and foreshadowing that cross over multiple books and multiple years of writing time.

His answer was essentially “there’s really nothing special about it”. He compared himself to pretty much anyone in a job that requires constant attention to many details, and claimed that most people keep a tremendous amount of details straight in their heads alone, and that he is no different.

After the Q&A, he stayed long enough to sign everyone’s books, Kindles, iPads, etc. I got my hardcovers of REAMDE and Quicksilver signed and had a chance to shake Neal’s hand. He was incredibly polite, humble, and nice. Given the insane number of hours that I’ve spent reading and re-reading his works, meeting my favorite author was a great experience.

Governance and scarcity.

August 1, 2011

Most of the time when we see contentious debate come up in the Fedora Project is when the community is trying to create, or agree on, the governance or process by which a scarce resource is used or allocated.

Recall the friction a year or two ago regarding how to advertise different spins of Fedora on the website, and whether or not the layout would recommend a default spin, or promote one spin as a first-among-equals. Real estate on the front page of the website is a scarce resource, which leads to lots of people debating the most efficient way to allocate it.

One of the key responsibilities of Fedora’s leadership is to identify these scarcity points and understand them. It is the job of Fedora’s leaders to understand whether the scarcity in question is real or artificial.

Back to the previous example — Fedora doesn’t have control over the manufacture of computer monitors. The amount of visible space on the main page of Fedora’s website is real scarcity.

I can think of several places where Fedora has taken steps to remove artificial scarcity that could otherwise have caused huge problems.

For instance, if a package needs a review or a maintainer, it is easy for someone to step up and do the work. The process for increasing the total number of packages in Fedora, and the number of folks who can review new packages, is relatively simple. It doesn’t depend on another resource such as “money in a budget” or “open headcount for hiring”. From a governance point of view, this is great. Fedora’s leadership said “we need to make sure that packages in Fedora are high quality”, and the community was left to solve that problem in a scalable way, and did so.

Many years ago, when I hired Mike to lead Fedora’s infrastructure team, I told him that I would never micromanage his work, because he knew better than I what needed to be done. The only time he would see me poke my nose into his business was if he permitted any artificial scarcity to exist within Fedora Infrastructure.

What does that mean? Building the capacity for an ever-growing number of people to participate in Fedora Infrastructure was the primary objective, and figuring that out while not sacrificing security policy or quality was (and is) a non-trivial problem. In a community like Fedora that places value on GETTING STUFF DONE, telling someone “there is no one with time to address your topic and you are not allowed to do it yourself” is unacceptable.

To put it another way: within the context of Fedora, if you are claiming that people are a scarce resource, you are probably wrong, and “people” is simply a scapegoat for a different issue. The rollout of the community credit cards is a good example of this point.

Removing scarcity is not the same as removing guidelines or rules. Fedora has very well-written trademark guidelines. These guidelines help us not only build, but also protect, and scale, Fedora’s brand.

Just don’t let the implementation or the following of those guidelines and rules create artificial scarcity.

Hello again.

July 28, 2011

That didn’t take long, did it?  Welcome to the first post of my new blog, which is a continuation of the Fedora, Linux, and Open Source blog that I’ve been writing since 2006.  While I’m busy moving across country, it seems as good a time as any to move my blog to WordPress.

On August 22nd, I begin work at Amazon Web Services as a manager in their Kernel & Operating Systems group. My team’s focus is on the overall customer experience of using Amazon Machine Images (AMIs). This includes accountability for the Amazon Linux AMI, working with external AMI vendors, and handling premium support for customers running AMIs.

Thus, one of the places where my new job will continue to intersect (at least a little bit) with the Fedora community is in the Cloud SIG, where the building, testing, and deployment of Fedora on EC2 is a key goal of that group of contributors.

I’ve subscribed to the cloud mailing list and sent a few notes, but things will probably be a bit quiet from me until I actually start with Amazon.  The next few weeks are devoted to seeing some family on the east coast followed by the logistics around leaving Raleigh and relocating to Seattle.