Wednesday, January 29, 2014

Getting Size and File Count of a 25 Million Object S3 Bucket

Amazon S3 is a highly durable storage service offered by AWS. With eleven 9s (99.999999999%) of durability, high bandwidth to EC2 instances, and low cost, it is a popular place to store the input and output files of Grid Engine jobs. (As a side note, we were at Werner Vogels's AWS Summit 2013 NYC keynote, where he disclosed that S3 stores 2 trillion objects and handles 1.1 million requests per second.)

AWS Summit 2013 keynote - Werner Vogels announces over 2 trillion objects stored in S3

On the other hand, S3 presents its own set of challenges. For example, unlike a POSIX filesystem, there are no directories in S3 - S3 buckets have a flat namespace, and the keys of files (AWS refers to them as "objects") can contain delimiters that are used to create pseudo directory structures.

Further, there is no API call that returns the size of an S3 bucket or the total number of objects it holds. The only way to find the bucket size is to iteratively perform LIST API calls, each of which returns information on at most 1,000 objects. The boto library offers a higher-level function that abstracts the iteration for us, so it only takes a few lines of Python:
from boto.s3.connection import S3Connection

s3bucket = S3Connection().get_bucket(<name of bucket>)
size = 0

for key in s3bucket.list():
    size += key.size

print "%.3f GB" % (size * 1.0 / 1024 / 1024 / 1024)

However, when the above code is run against an S3 bucket with 25 million objects, it takes 2 hours to finish. A closer look at the boto network traffic confirms that the high-level list() function is doing all the heavy lifting of calling the lower-level S3 LIST operation - over 25,000 LIST calls for this bucket. With such a chatty protocol, it is no surprise the scan takes 2 hours.
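For reference, the pagination that list() hides looks roughly like the sketch below, built on boto's lower-level get_all_keys call; each call is one S3 LIST request that returns at most 1,000 keys (the bucket name is a placeholder):

from boto.s3.connection import S3Connection

bucket = S3Connection().get_bucket('my-input-bucket')  # placeholder name

size = calls = 0
marker = ''
while True:
    # One S3 LIST request; at most 1,000 keys come back per call.
    keys = bucket.get_all_keys(marker=marker, max_keys=1000)
    calls += 1
    for key in keys:
        size += key.size
    if not keys.is_truncated:
        break
    marker = keys[-1].name    # resume after the last key we have seen

print "%d LIST calls, %.3f GB" % (calls, size * 1.0 / 1024 / 1024 / 1024)

For a 25-million-object bucket this loop runs over 25,000 times, which is exactly the chattiness described above.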

A quick Google search shows that most people work around this by either:
  • storing the size of each object in a database, or
  • extracting the size information from the AWS billing console
mysql> SELECT SUM(size) FROM s3objects;
+------------+
| SUM(size)  |
+------------+
| 8823199678 |
+------------+
1 row in set (20 min 19.83 sec)
Note that neither workaround is ideal. While a DB query is much quicker, we are usually not the creator of the S3 bucket (recall that the bucket, which is owned by another AWS account, stores the job input files). In many cases we also need other information, such as the total number of objects in the bucket, so the total size shown in the billing console doesn't give us the complete picture. And lastly, we need the information about the bucket now - we can't wait until the next day to start the Grid Engine cluster and run jobs.

Running S3 LIST calls in Parallel
Actually, the S3 LIST API takes a prefix parameter, which "limits the response to keys that begin with the specified prefix". Thus we can run concurrent LIST calls - for example, one call handles the first half of the keyspace and another handles the second half. Then it is just a matter of reducing and merging the numbers!

With 2 workers (we use a Python multiprocessing worker pool) running LIST in parallel, we reduced the run time to 59 minutes, and further to 34 minutes with 4 workers. At that point we had maxed out the 2-core c1.medium instance - both processors were 100% busy - so we migrated to a c1.xlarge instance with 8 cores. With 4 times more CPU cores and higher network bandwidth, we started with 16 workers, which took 11 minutes to finish. Then we upped the number of workers to 24, and it took 9 minutes and 30 seconds!
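The approach can be sketched in a few lines of boto plus multiprocessing. This is only a sketch: the bucket name is a placeholder, and the single-character prefixes assume keys starting with 0-9 or a-z (adapt the prefix list and worker count to your bucket and instance):

from multiprocessing import Pool
from boto.s3.connection import S3Connection

BUCKET_NAME = 'my-input-bucket'                  # placeholder bucket name
PREFIXES = list('0123456789abcdefghijklmnopqrstuvwxyz')

def list_prefix(prefix):
    # Each worker opens its own S3 connection; connections should not be
    # shared across processes.
    bucket = S3Connection().get_bucket(BUCKET_NAME)
    size = count = 0
    for key in bucket.list(prefix=prefix):       # boto paginates internally
        size += key.size
        count += 1
    return size, count

if __name__ == '__main__':
    pool = Pool(processes=24)                    # one LIST stream per worker
    results = pool.map(list_prefix, PREFIXES)
    total_size = sum(s for s, c in results)
    total_count = sum(c for s, c in results)
    print "%d objects, %.3f GB" % (total_count,
                                   total_size * 1.0 / 1024 / 1024 / 1024)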

So what if we use even more workers? While our initial goal was just to finish pre-processing the input bucket in under 15 minutes, we believed we could extract more parallelism from S3 - S3 is not a single server but many, designed to scale with a large number of concurrent operations.

Taking Advantage of S3 Key Hashing
So as a final test, we picked a 32-vCPU instance and ran 64 workers. It took only 3 minutes and 37 seconds to finish. Then we noticed that the way we distribute work to the workers doesn't take advantage of S3 key hashing!

As the 25-million-object bucket has keys that start with UUID-like names, the first character of a prefix can be 0-9 or a-z (36 characters in total). So the two-character key prefixes were generated like:
00, 01, ..., 09, 0a, ..., 0z
10, 11, ..., 19, 1a, ..., 1z

With that ordering, the workers running in parallel at any point in time have a higher chance of hitting the same S3 server. We performed a loop interchange, so the prefixes become:
00, 10, ..., 90, a0, ..., z0
01, 11, ..., 91, a1, ..., z1
Now it takes 2 minutes and 55 seconds.
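The change itself is tiny - here is a sketch of the two prefix-generation orders, using the 36-character alphabet described above:

CHARS = '0123456789abcdefghijklmnopqrstuvwxyz'

# Original order: 00, 01, ..., 0z, 10, 11, ...
# Consecutive prefixes share the same first character, so concurrent
# workers tend to hit the same S3 key-hash range.
row_major = [a + b for a in CHARS for b in CHARS]

# After the loop interchange: 00, 10, ..., z0, 01, 11, ...
# The first character varies fastest, spreading concurrent workers
# across different key-hash ranges.
interchanged = [a + b for b in CHARS for a in CHARS]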


Tuesday, January 7, 2014

Enhanced Networking in the AWS Cloud - Part 2

We looked at the AWS Enhanced Networking performance in the previous blog entry, and this week we just finished benchmarking the remaining instance types in the C3 family. C3 instances are extremely popular as they offer the best price-performance for many big data and HPC workloads.

Placement Groups
One detail we didn't mention before: we booted all SUT (System Under Test) pairs in their own AWS Placement Groups. With Placement Groups, instances get full-bisection bandwidth and lower, more predictable network latency for node-to-node communication.
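Creating a placement group and launching a SUT pair into it takes only a couple of boto calls - a minimal sketch, with a hypothetical group name, AMI id, and key pair:

import boto.ec2

conn = boto.ec2.connect_to_region('us-east-1')

# Placement groups use the 'cluster' strategy; each SUT pair gets its
# own group so the two instances are placed close together.
conn.create_placement_group('netperf-sut-pair-1', strategy='cluster')

reservation = conn.run_instances(
    'ami-xxxxxxxx',                        # hypothetical HVM AMI id
    min_count=2, max_count=2,              # the SUT pair
    instance_type='c3.8xlarge',
    placement_group='netperf-sut-pair-1',
    key_name='my-key')                     # hypothetical key pair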

Bandwidth
With c3.8xlarge instances, which have 10Gbit Ethernet, Enhanced Networking offers 44% higher network throughput. The smaller C3 instance types have lower network throughput caps, so while Enhanced Networking still improves their throughput, the difference is not as large.



Round-trip Latency
The c3.4xlarge and c3.8xlarge have network latency similar to the c3.2xlarge. The round-trip latency for these larger instance types is between 92 and 100 microseconds.



Conclusion
With Enhanced Networking enabled, all C3 instance types offer roughly half the latency in many cases, at no additional cost.

On the other hand, without Enhanced Networking, bandwidth-sensitive applications running on c3.8xlarge instances won't be able to fully take advantage of the 10Gbit Ethernet when a single thread handles all the network traffic - a common problem decomposition we see in our users' code: MPI for inter-node communication, and OpenMP or even Pthreads for intra-node communication. In that kind of hybrid HPC code, only one MPI task per node handles network communication. With Enhanced Networking, that task gets over 95% of the 10Gbit Ethernet bandwidth; without it, the MPI task gets only 68% of the available network bandwidth.

Tuesday, December 31, 2013

Enhanced Networking in the AWS Cloud

At re:Invent 2013, Amazon announced the C3 and I2 instance families, which have higher-performance Xeon Ivy Bridge processors and SSD ephemeral drives, together with support for the new Enhanced Networking feature.

Enhanced Networking - SR-IOV in EC2
Traditionally, EC2 instances send network traffic through the Xen hypervisor. With SR-IOV (Single Root I/O Virtualization) support in the C3 and I2 families, each physical Ethernet NIC virtualizes itself as multiple independent PCIe Ethernet NICs, each of which can be assigned to a Xen guest.

Thus, an EC2 instance running on hardware that supports Enhanced Networking can "own" one of the virtualized network interfaces, which means it can send and receive network traffic without involving the Xen hypervisor.

Enabling Enhanced Networking is as simple as (a boto sketch follows the list):
  • Create a VPC and a subnet
  • Pick an HVM AMI with the Intel ixgbevf Virtual Function driver
  • Launch a C3 or I2 instance from that HVM AMI into the VPC subnet
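A rough sketch of the launch plus a sanity check - the AMI, subnet, and key pair ids are placeholders, and the attribute check assumes a boto release recent enough to understand the sriovNetSupport attribute of the DescribeInstanceAttribute API:

import boto.ec2

conn = boto.ec2.connect_to_region('us-east-1')

# Launch an HVM AMI into a VPC subnet (placeholder ids).
reservation = conn.run_instances(
    'ami-xxxxxxxx',               # HVM AMI with the ixgbevf driver
    instance_type='c3.2xlarge',
    subnet_id='subnet-xxxxxxxx',  # subnet inside the VPC
    key_name='my-key')
instance = reservation.instances[0]

# Sanity check: the sriovNetSupport instance attribute reports 'simple'
# when Enhanced Networking is enabled on the instance.
print conn.get_instance_attribute(instance.id, 'sriovNetSupport')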

Benchmarking
We use the Amazon Linux AMI, as it already has the ixgbevf driver installed and is available in all regions. We use netperf to benchmark C3 instances running in a VPC (i.e. Enhanced Networking enabled) against C3 instances running outside a VPC (i.e. Enhanced Networking disabled).


Bandwidth
Enhanced Networking offers up to a 7.3% gain in throughput. Note that with or without Enhanced Networking, both c3.xlarge and c3.2xlarge almost reach 1 Gbps (which we believe is the hard limit set by Amazon for those instance types).

 




Round-trip Latency
Many message-passing MPI and HPC applications are latency sensitive. Here Enhanced Networking really shines, with a maximum speedup of 2.37x over the normal EC2 networking stack.




Conclusion 1
Amazon says that both the c3.large and c3.xlarge instances have "Moderate" network performance, but we found that c3.large peaks at around 415 Mbps, while c3.xlarge almost reaches 1Gbps. We believe the extra bandwidth headroom is for EBS traffic, as c3.xlarge can be configured as "EBS-optimized" while c3.large cannot.

Conclusion 2 
Notice that c3.2xlarge with Enhanced Networking enabled has a round-trip latency of 92 microseconds, which is much higher than that of the smaller instance types in the C3 family. We repeated the test in both the us-east-1 and us-west-2 regions and got identical results.

Currently AWS has a shortage of C3 instances - all c3.4xlarge and c3.8xlarge launch requests we have issued so far resulted in "Insufficient capacity". We are closely monitoring the situation, and we plan to benchmark the c3.4xlarge and c3.8xlarge instance types to see if we can reproduce the increased-latency issue.

Updated Jan 8, 2014: We have published Enhanced Networking in the AWS Cloud (Part 2) that includes the benchmark results for the remaining C3 types.

Wednesday, November 21, 2012

Running a 10,000-node Grid Engine Cluster in Amazon EC2

Recently, we provisioned a 10,000-node Grid Engine cluster in Amazon EC2 to test the scalability of Grid Engine. As the official maintainer of open-source Grid Engine, we have an obligation to make sure that Grid Engine continues to scale in modern datacenters.

Grid Engine Scalability - From 1,000 to 10,000 Nodes
From time to time, we receive questions about Grid Engine scalability, which is not surprising given that modern HPC clusters and compute farms continue to grow in size. Back in 2005, Hess Corp. was having issues when its Grid Engine cluster exceeded 1,000 nodes. We quickly fixed the limitation in the low-level Grid Engine communication library and contributed the code back to the open-source code base. In fact, our code continues to live in every fork of Sun Grid Engine (including Oracle Grid Engine and other commercial versions from smaller players) today.

So Grid Engine can handle thousands of nodes, but can it handle tens of thousands? And how would Grid Engine perform with over 10,000 nodes in a single cluster? Previously, besides simulating virtual hosts, we could only take a reactive approach: customers report issues, we collect the error logs remotely, and then we try to provide workarounds or fixes. In the Cloud age, shouldn't we take a proactive approach now that hardware resources are so much more accessible?

Running Grid Engine in the Cloud
In 2008, my former coworker published a paper about benchmarking Amazon EC2 for HPC workloads (many people have quoted the paper, including me on the Beowulf Cluster mailing list back in 2009), so running Grid Engine in the Cloud is not something unimaginable.

Since we don't want to maintain our own servers, using the Cloud for regression testing makes sense. In 2011, after joining the MIT StarCluster project, we started using MIT StarCluster to help us test new releases, and indeed the Grid Engine 2011.11 release was the first one tested solely in the Cloud. It makes sense to run scalability tests in the Cloud, as we also don't want to maintain a 10,000-node cluster that sits idle most of the time!

Requesting 10,000 nodes in the Cloud
Most of us just request resources in the Cloud and never need to worry about resource contention. But EC2 has default soft limits, set by Amazon to catch runaway resource requests: each account can have only 20 on-demand and 100 spot instances per region. As running 10,000 nodes (instances) exceeds the soft limit many times over, we asked Amazon to raise the limits on our account. Luckily, the process was painless! (Thinking about it, the limit actually makes sense for new EC2 customers - imagine the financial damage an infinite loop of instance requests could do, or, in the extreme case, the damage a hacker with a stolen credit card could do by launching a 10,000-node botnet!)

Launching a 10,000-node Grid Engine Cluster
With the limits raised, we first started a master node on a c1.xlarge (High-CPU Extra Large Instance, a machine with 8 virtual cores and 7GB of memory). We then added instances (nodes) in chunks - always fewer than 1,000 per chunk, so that we could change the instance type and the bid price with each request (see the sketch after this list). We mainly requested spot instances because:
  • Spot instances are usually cheaper than on-demand instances. (As I write this blog, a c1.xlarge spot instance costs only 10.6% of the on-demand price.)
  • It is easier for Amazon to handle our 10,000-node Grid Engine cluster, as Amazon can terminate some of our spot instances if there is a spike in demand for on-demand instances.
  • We can also test the behavior of Grid Engine when some of the nodes go down. With traditional HPC clusters, we needed to kill some of the nodes manually to simulate hardware failure, but spot termination does this for us. We are in fact taking advantage of spot termination!
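For illustration, each request chunk looked roughly like the sketch below (the AMI id, bid price, and chunk size are placeholders rather than the exact values we used):

import boto.ec2

conn = boto.ec2.connect_to_region('us-east-1')

CHUNK = 500      # always below 1,000, so we can adjust per chunk
BID = '0.10'     # spot bid price in USD, revisited for every chunk

# One chunk of spot requests; the returned SpotInstanceRequest objects
# can be polled later to see how many were fulfilled.
requests = conn.request_spot_instances(
    price=BID,
    image_id='ami-xxxxxxxx',     # placeholder AMI id
    count=CHUNK,
    instance_type='c1.xlarge',
    key_name='my-key')
print '%d spot requests submitted' % len(requests)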

All went well until we had around 2,000 instances in one region, when further spot requests all got fulfilled and then the instances were terminated almost instantly. A quick look at the error logs and the AWS Management Console showed that we had exceeded the EBS volume limit, which defaults to 5,000 volumes or a total size of 20 TB. We were using the StarCluster EBS AMI (e.g. ami-899d49e0 in us-east-1), which has a size of 10GB, so 2,000 of those instances running in a region definitely exceeded the 20TB limit!

Luckily, StarCluster also offers S3-backed (instance-store) AMIs, although the OS version is older and ships with the older SGE 6.2u5. As all releases from the Open Grid Scheduler project are wire-compatible with SGE 6.2u5, we quickly launched a few of those S3 AMIs as a test. Not surprisingly, the new nodes joined the cluster without any issues, so we continued to add instance-store instances, soon achieving our goal of a 10,000-node cluster:

10,000-node Grid Engine cluster on Amazon EC2 Cloud
A few more findings:
  • We kept sending spot requests to the us-east-1 region until "capacity-not-able" was returned. We expected the bid price to go sky-high when an instance type ran out, but that did not happen.
  • When a certain instance type is mostly used up, further spot requests for that instance type take longer and longer to be fulfilled.
  • At peak rate, we were able to provision over 2,000 nodes in less than 30 minutes. In total, we spent less than 6 hours constructing, debugging the issue caused by the EBS volume limit, running a small number of Grid Engine tests, and taking down the cluster.
  • Instance boot time was independent of the instance type: EBS-backed c1.xlarge and t1.micro took roughly the same amount of time to boot.

HPC in the Cloud
To put a 10,000-node cluster into perspective, we can take a look at some of the recent TOP500 entries running x64 processors:
  • SuperMUC, #6 in TOP500, has 9,400 compute nodes
  • Stampede, #7 in TOP500, will have 6,000+ nodes when completed
  • Tianhe-1A, #8 in TOP500 (was #1 till June 2011), has 7,168 nodes
In fact, a quick look at the TOP500 list shows that over 90% of the entries have fewer than 10,000 nodes, so in terms of raw processing power, one can easily rent a supercomputer in the Cloud and get comparable compute power. However, some HPC workloads are very sensitive to network (MPI) latency, and we believe dedicated HPC clusters still have an edge when running those workloads (after all, those clusters are designed around a low-latency network). It's also worth mentioning that the Cluster Compute instance types with 10GbE can reduce some of that latency.

Future work
With 10,000 nodes, embarrassingly parallel workloads complete in 1/10 of the time compared to running on a 1,000-node cluster. Also worth mentioning is that the working storage holding the input and output is needed for 1/10 of the time as well (e.g. instead of using 10 TB-Months, only 1 TB-Month is needed). So not only do you get the results faster, it actually costs less due to the reduced storage charges!

With 10,000 nodes and a small number of submitted jobs, the multi-threaded Grid Engine qmaster process constantly uses 1.5 to 2 cores (at most 4 cores at peak) on the master node, and around 500MB of memory. With more tuning or a more powerful instance type, we believe Grid Engine can handle at least 20,000 nodes.



Note: we are already working on enhancements that further reduce the CPU usage!

Summary

Cluster size: 10,000+ slave nodes (the master node did not run jobs)
SGE versions: Grid Engine 6.2u5 (from Sun Microsystems), GE 2011.11 (from the Open Grid Scheduler Project)
Regions: us-east-1 (ran over 75% of the instances), us-west-1, us-west-2
Instance types: On-demand, Spot
Instance sizes: from c1.xlarge to t1.micro
Operating systems: Ubuntu 10, Ubuntu 11, Oracle Linux 6.3 with the Unbreakable Enterprise Kernel
Other software: Python, Boto

Tuesday, July 31, 2012

Using Grid Engine in the Xeon Phi Environment

The Intel Xeon Phi (codename MIC - Many Integrated Core) is an interesting HPC hardware platform. While it is not in production yet, there is already a lot of porting and optimization work being done for Xeon Phi. At the beginning of this year we also started a conversation with Intel - we wanted to make sure that Open Grid Scheduler/Grid Engine works in the Xeon Phi environment, as we had received requests for Xeon Phi support at SC11.

While a lot of the information is still under NDA, many papers have already been published on the Internet, and Intel engineers themselves have disclosed information about the Xeon Phi architecture and programming models. For example, by now it is widely known that Xeon Phi runs an embedded Linux OS image on the board - in fact, users can log onto the board and use it as a multi-core Linux machine.

One Xeon Phi execution model is the more traditional offload model, where applications running on the host processors offload sections of code to the Xeon Phi accelerator; Intel also defines the Language Extensions for Offload (LEO) compiler directives to ease porting. In the standalone execution model, by contrast, users run code directly and natively on the board, and the host processors are not involved in the computation.

Open Grid Scheduler/Grid Engine can easily be configured to handle the offload execution model, as a Xeon Phi used this way is very similar to a GPU device: Grid Engine can schedule jobs to the hosts that have Xeon Phi boards and make sure the hardware resources are not oversubscribed. The standalone execution model is more interesting - it is the Linux OS environment most HPC users are familiar with, but it adds a level of indirection to job execution. We don't have the design finalized yet, as the software environment has not been released to the public, but our plan is to support both execution models in a future version of Open Grid Scheduler/Grid Engine.

Monday, July 9, 2012

Optimizing Grid Engine for AMD Bulldozer Systems

The AMD Bulldozer series (including Piledriver, which was released recently) is very interesting from a micro-architecture point of view. A Bulldozer processor can have as many as 16 cores, and the cores are grouped into modules. In the current generation each module contains 2 cores, so an 8-module processor has 16 cores, a 4-module processor has 8 cores, and so on.

  • Traditional SMT (e.g. Intel Hyper-Threading) duplicates little more than the register file and the processor front-end; since most of the execution pipeline is shared between the pair of SMT threads, performance can be greatly affected when the SMT threads compete for hardware resources.
  • In Bulldozer, only the floating-point unit, the L1 instruction cache, the L2 cache, and a small part of the execution pipeline are shared, making resource contention a much smaller concern.
A lot of HPC clusters turn SMT off completely, as performance is the main objective for those installations. Bulldozer processors, on the other hand, are less affected by a dumb OS scheduler, but it still helps if the system software understands the architecture of the processor. For example, the patched Windows scheduler that understands the AMD hardware can boost performance by 10%.

And what does this mean for Grid Engine? The Open Grid Scheduler project implemented Grid Engine job binding with the hwloc library (initially for the AMD Magny-Cours Opteron 6100 series, because the original PLPA library used by Sun Grid Engine could not handle newer AMD processors), and we also added Linux cpuset support (when the Grid Engine cgroups integration is enabled and the cpuset controller is present). In both cases, the execution daemon essentially becomes a local scheduler that dispatches processes to the execution cores. With a smarter execution daemon (execd), we can speed up job execution with no changes to any application code.

(And yes, this feature will be available in the GE 2011.11 update 1 release.)

Wednesday, June 6, 2012

Grid Engine Cygwin Port

Open Grid Scheduler/Grid Engine will support Windows/Cygwin with the GE 2011.11u1 release. We found that many sites just need to submit jobs to a Unix/Linux compute farm from Windows workstations, so in this release only the client side is supported under our commercial Grid Engine support program. For sites that only need to run client-side Grid Engine commands (e.g. qsub, qstat, qacct, qhost, qdel, etc.), our Cygwin port fits their needs completely. We are satisfied with Cygwin until our true native Windows port is ready...

Running the daemons under Cygwin is currently in technology preview. We've tested a small cluster of Windows execution machines, and the results look promising:



Note that this is not the first attempt to port Grid Engine to the Cygwin environment. In 2003, Andy mentioned in the "Compiling Grid Engine under Cygwin" thread that Sun/Gridware had ported the Grid Engine CLI to Cygwin, but the daemons were not supported in that original port. Ian Chesal (who worked for Altera at the time) ported the Grid Engine daemons as well, but did not release any code. In 2011 we started from scratch again, and we checked in our code changes earlier this year - the majority of the code is already in the GE 2011.11 patch 1 release, with the rest coming in the update 1 release.

So finally, the Cygwin port is in the source tree this time - no more out-of-tree patches.