Latency is a bit of a weird thing, and quite complicated. In audio processing land, all the plugins get the same buffer to process in the same clock tick, so the worst-case latency is one buffer length. For the playback of, say, a sample, this is the norm. But certain plugins introduce genuine additional latency (say a 'look ahead' compressor that needs to see a few milliseconds into the future), and this is what 'plugin delay compensation' helps with. Some EQs can also introduce latency, due to their signal processing architecture (linear-phase designs, for instance, have to delay the signal). The latency is not about the amount of processing a plugin has to do - after all, that processing has to get done sometime, so why not in the current clock tick? It's a product of other things, like algorithms that need future samples before they can produce the present one.
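To make the 'look ahead' idea concrete, here's a minimal sketch (illustrative C with made-up names and numbers, not anybody's real plugin API): a look-ahead effect can't produce output sample i until it has seen sample i + LOOKAHEAD, so its output is necessarily the input delayed by the look-ahead time. It reports that delay to the host, and plugin delay compensation is the host delaying every other parallel signal path by the same amount so everything stays aligned.

    /* Illustrative look-ahead stage; LOOKAHEAD and the 48 kHz rate are assumptions. */
    #define LOOKAHEAD 256                /* ~5.3 ms at an assumed 48 kHz */

    static float delay_line[LOOKAHEAD];  /* the 'future' we've buffered so far */
    static int   pos = 0;

    /* What the plugin would report to the host; compensation consists of
     * the host delaying all parallel paths by the maximum reported value. */
    int reported_latency_samples(void) { return LOOKAHEAD; }

    void process(const float *in, float *out, int n)
    {
        for (int i = 0; i < n; i++) {
            out[i] = delay_line[pos];    /* emit the sample from LOOKAHEAD ticks ago */
            delay_line[pos] = in[i];     /* stash the newest 'future' sample */
            pos = (pos + 1) % LOOKAHEAD;
            /* a real compressor would compute gain from the newest samples
             * and apply it to the delayed output here */
        }
    }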
In a live situation you cannot do plugin delay compensation - it's a daft idea unless you have a time machine (which is possible with pre-recorded playback, of course). So we process all information in the same clock tick. For a sound that starts up immediately, this leaves you with a latency of somewhere between 6 and 7 milliseconds from keypress to the first sound coming out of the speaker (1-2 milliseconds of which is spent getting from the instrument to the PC). If you are familiar with instrument design, this is very good. It took us a lot of effort to get the USB communication to be low enough latency to make this possible (USB always introduces at least 1 ms of latency; it's just the way the protocol works). The difficulties of USB latency are also the only reason there isn't a Linux EigenD right now - Linux USB isn't that good, and we'd have to write a kernel driver to get the latency low enough, something we haven't got to yet.
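As a back-of-the-envelope check on those numbers: only the 6-7 ms total, the 1-2 ms instrument-to-PC figure, and the 1 ms USB floor come from the above; the split across the other stages here is my assumption.

    #include <stdio.h>

    int main(void)
    {
        double instrument_to_pc = 1.5;                   /* ms, the 1-2 ms quoted above */
        double usb_floor        = 1.0;                   /* ms, USB's minimum */
        double audio_buffer     = 128.0 / 48000.0 * 1e3; /* one 128-sample buffer at an
                                                            assumed 48 kHz, ~2.7 ms */
        double output_side      = 1.5;                   /* ms, driver + DAC, assumed */

        printf("keypress to speaker: %.1f ms\n",
               instrument_to_pc + usb_floor + audio_buffer + output_side);
        return 0;                                        /* prints 6.7 ms, inside the
                                                            6-7 ms quoted above */
    }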
Of course, in the real world all this theory gets much more complicated. Take the example of a physics model, say our Clarinet. That's a whole physical system being modeled, with a large start-up time caused entirely by the physics - a modeled tube requires a finite time for the injected 'energy' to begin to cause oscillation. This, and the quite low speed of sound, mean we are all quite well evolved to deal with straightforward latency - we just learn to play a little in front of the beat (and if you think this is not real, remember that a drummer five metres away is also fifteen milliseconds away for sound). What we don't deal with terribly well is jitter (variation in latency) and weird cues that break our built-in latency tolerance. Jitter intolerance is what makes old-fashioned MIDI hardware feel so latent from time to time (it could be woefully jittery as well as latent), and weird cues are what make latency so intolerable for singers and, as I now know, guitarists playing through AU/VST processing.
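The drummer figure is just the speed of sound at work; a one-liner confirms it:

    #include <stdio.h>

    int main(void)
    {
        double speed_of_sound = 343.0;   /* m/s in air at room temperature */
        double drummer        = 5.0;     /* metres away */
        printf("acoustic delay: %.1f ms\n", drummer / speed_of_sound * 1e3);
        return 0;                        /* prints 14.6 ms - the 'fifteen' above */
    }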
Sorry, slightly obsessive rant - I spent several years worrying a lot about latency while we designed the Eigenharps!
John