Full-stack Philosophies

James Morle's Blog


Latency Hiding For Fun and Profit

Posted on 2:27 pm September 28, 2009 by James Morle

Yep, another post with the word 'latency' written all over it.

I've talked a lot about latency, and how it is more often than not completely immutable. So, if the latency cannot be improved upon because of some pesky law of physics, what can be done to reduce that wasted time? Just three things, actually:

  1. Don't do it.
  2. Do it less often.
  3. Be productive with the otherwise wasted time.


The first option is constantly overlooked - do you really need to be doing this task that makes you wait around? The second option is the classic 'do things in bigger lumps between the latency' - making fewer round trips being the classic example. This post is about the third option, which is technically referred to as latency hiding.

Everybody knows what latency hiding is, but most don't realise it. Here's a classic example:

I need some salad to go with the chicken I am about to roast. Do I:

(a) go to the supermarket immediately and buy the salad, then worry about cooking the chicken?

OR

(b) get the chicken in the oven right away, then go to the supermarket?

Unless the time required to buy the salad is much longer than the chicken's cook-time, the answer is always going to be (b), right? That's latency hiding, also known as Asynchronous Processing. Let's look at the numbers:

Variable definitions:

Supermarket Trip=1800s

Chicken Cook-Time=4800s

Calculations:

Option (a)=1800s+4800s=6600s (oh man, nearly two hours until dinner!)

Option (b)=4800s (with 1800s supermarket time hidden within it)
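The same arithmetic, as a runnable sketch in Python (durations scaled down by a factor of 1000 so it finishes in seconds; the thread stands in for the oven):

```python
import threading
import time

# Scaled-down stand-ins for the durations above (1 second here ~ 1000 seconds)
CHICKEN_COOK_TIME = 4.8   # 4800s
SUPERMARKET_TRIP = 1.8    # 1800s

def cook_chicken():
    time.sleep(CHICKEN_COOK_TIME)

def buy_salad():
    time.sleep(SUPERMARKET_TRIP)

# Option (a): strictly sequential - the total is the sum of both latencies
start = time.time()
buy_salad()
cook_chicken()
print(f"Option (a): {time.time() - start:.1f}s")   # ~6.6s

# Option (b): start the chicken first, hide the supermarket trip inside its cook time
start = time.time()
oven = threading.Thread(target=cook_chicken)
oven.start()      # the chicken cooks in the background
buy_salad()       # the trip happens while the oven does its thing
oven.join()       # wait for whichever task finishes last
print(f"Option (b): {time.time() - start:.1f}s")   # ~4.8s
```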

Here's another example: You have a big code compile to do, and an empty stomach to fill. In which order do you execute those tasks? Hit 'make', then grab a sandwich, right?

As a side note, this is one of my classic character flaws - I just live for having tasks running in parallel this way. Not a flaw, I hear you say? Anyone that has tried to get software parallelism working (such as Oracle Parallel Execution) knows the problem - some tasks finish quicker than expected, and then there's a bunch of idle threads. In the real world, this means that my lunch is often very delayed, much to the chagrin of my lunch buddies.

OK, so how does this kind of thing work with software? Let's look at a couple of examples:

  1. Read Ahead
  2. Async Writes

Read ahead from physical disk is the most common example of (1), but it applies equally to result prefetching in, say, AJAX applications. Whatever the specific type, it capitalises on having two resources - the requester and the provider - working in parallel. Let's look at the disk example for clarification.

Disk read ahead is where additional, unrequested, reads are carried out after an initial batch of real requested reads. So, if a batch job makes a read request for blocks 1,2,3 and 4 of a file, "the disk" returns those blocks back and then immediately goes on to read blocks 5,6,7,8, keeping them in cache. If blocks 5,6,7,8 are then subsequently requested by the batch job after the first blocks are processed, they can immediately be returned from cache. This hides the latency from the batch job and increases throughput as a direct result.
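A toy version of that read-ahead loop, sketched in Python (the read_blocks() and process() helpers, the block numbers, and the batch size of four are all made up for illustration; a thread plays the part of "the disk" reading ahead):

```python
import threading

BATCH = 4  # blocks fetched per request, as in the example above

def read_blocks(first, count):
    """Stand-in for a real disk read - returns 'count' blocks starting at 'first'."""
    return [f"block-{first + i}" for i in range(count)]

def process(blocks):
    """Stand-in for the batch job's work on one set of blocks."""
    pass

cache = {}

def read_ahead(first):
    """Background read of the *next* batch, performed before anyone asks for it."""
    cache[first] = read_blocks(first, BATCH)

next_block = 1
while next_block <= 16:
    # Serve the current request from cache if read-ahead already fetched it
    blocks = cache.pop(next_block, None) or read_blocks(next_block, BATCH)

    # Kick off the read-ahead for the following batch *before* processing,
    # so the disk works in parallel with the batch job
    prefetch = threading.Thread(target=read_ahead, args=(next_block + BATCH,))
    prefetch.start()

    process(blocks)      # the latency of the next read is hidden behind this work
    prefetch.join()
    next_block += BATCH

# Note: the final read-ahead fetches blocks nobody ever asks for - speculative
# work that is wasted if the job stops early.
```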

Async writes are essentially the exact opposite of read-ahead. Let's take the well-known Oracle example of async writes, that of the DBWR process flushing out dirty buffers to disk. The synchronous way to do this is to generate a list of dirty buffers and then issue a series of synchronous writes (one after the other) until they are all complete. Then start again by looking for more dirty buffers. The async I/O way to do the same operation is to generate the list, issue an async write request (which returns instantly), and immediately start looking for more dirty buffers. This way, the DBWR process can spend more useful time looking for buffers - the latency is hidden, assuming the storage can keep up.
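The real DBWR lives deep inside Oracle, but the shape of the idea can be sketched in a few lines of Python. Here find_dirty_buffers() and write_buffer() are made-up stand-ins, and a thread pool plays the part of the async I/O layer:

```python
from concurrent.futures import ThreadPoolExecutor
import time

def write_buffer(buf):
    """Stand-in for one write to storage (the slow, high-latency part)."""
    time.sleep(0.01)

def find_dirty_buffers():
    """Stand-in for scanning the buffer cache; returns whatever is dirty right now."""
    return [object() for _ in range(4)]

# Synchronous flavour: each write must complete before the next scan begins
def dbwr_sync(cycles=10):
    for _ in range(cycles):
        for buf in find_dirty_buffers():
            write_buffer(buf)              # blocked here, doing nothing useful

# Async flavour: hand the whole batch to the I/O layer and go straight back to scanning
def dbwr_async(cycles=10):
    with ThreadPoolExecutor(max_workers=8) as io:
        pending = []
        for _ in range(cycles):
            batch = find_dirty_buffers()
            pending += [io.submit(write_buffer, b) for b in batch]  # returns instantly
            pending = [f for f in pending if not f.done()]          # reap completions
        # leaving the 'with' block waits for any writes still in flight
```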

By the way, the other horror of the synchronous write is that there is no way that the I/O queues can be pushed hard enough for efficient I/O when sending out a single buffer at a time. Async writes remedy that problem.

I've left a lot of the technical detail out of that last example, such as the reaping of return results from the async I/O process, but didn't want to cloud the issue. Oops, I guess I just clouded the issue, just ignore that last sentence...


3 comments on “Latency Hiding For Fun and Profit”

  1. So the equivalent of the read-ahead effect in your preparing-a-meal analogy would be to buy an apple pie at the same time as the salad (hey, asynchronous buying) and slap it into the oven as I take the chicken out :-) So it will be ready and waiting once the chicken has been "processed".

    But like the read-ahead of the storage, a waste of time if I decide I don't want the apple pie (too full on chicken and salad).

    Nice post.

  2. You did not cloud the issue, indeed 🙂
    When I think of parallel operations (mostly in the human world), I try to guess if it is worth it or not, based on the tasks. You know, the curse of parallelism is synchronization. Sometimes you'd better do one thing at a time, or you can get burnt chicken with the nice fresh salad.
