Pretty much anything that can be imagined.
Every kind of synthesis technique known: subtractive, additive, distortion synthesis, aggregate synthesis, smooth morphing between sounds (spectral morphing, e.g. changing a dog into a cat), any kind of effects processing you can imagine, granular synthesis, spectral processing, pitch tracking, formant banks, multiwave synthesis; it pretty much goes on and on.
You can create a new sound with a cloud of 1,000 sine waves and 1,000 bandpass filters tracking the spectral content of another sound. Crazy stuff like that.
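Kyma's internals aren't published, so just to give a flavor of what a sine bank tracking another sound's spectrum means, here's a toy numpy sketch. The frame size, oscillator count, and crude peak picking here are all my own simplifications, not how Kyma actually does it:

```python
# Toy additive resynthesis (NOT Kyma's implementation): drive a bank
# of sine oscillators from the strongest spectral peaks of a source.
import numpy as np

SR = 44100      # sample rate
FRAME = 1024    # analysis frame size
N_OSC = 32      # oscillator count (Kyma scales this into the hundreds)

def analyze_peaks(frame, n_peaks):
    """Return (freqs, amps) of the n strongest FFT bins in one frame."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    bins = np.argsort(spectrum)[-n_peaks:]       # strongest bins
    freqs = bins * SR / len(frame)
    amps = spectrum[bins] / max(spectrum.sum(), 1e-9)
    return freqs, amps

def resynthesize(source):
    """One sine oscillator per tracked peak, updated frame by frame."""
    out = np.zeros(len(source))
    phases = np.zeros(N_OSC)
    t = np.arange(FRAME) / SR
    for start in range(0, len(source) - FRAME, FRAME):
        freqs, amps = analyze_peaks(source[start:start + FRAME], N_OSC)
        for i, (f, a) in enumerate(zip(freqs, amps)):
            out[start:start + FRAME] += a * np.sin(2 * np.pi * f * t + phases[i])
            phases[i] = (phases[i] + 2 * np.pi * f * FRAME / SR) % (2 * np.pi)
    return out

# Example: push one second of noise through the tracking sine bank.
hybrid = resynthesize(np.random.randn(SR))
```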
It can also do very complex physical modeling of instruments. That's probably the most interesting part for the Eigenharp, since the combination of a physical model and a high-resolution controller is extremely compelling. Some of the Continuum examples have far-out physical models of fanciful instruments never heard before, as well as real instruments like brass. I'd love to be able to design new models for the Eigenharp.
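I obviously don't know how Symbolic Sound builds their models, but the basic idea behind physical modeling is easy to show. The classic Karplus-Strong plucked string is just a delay line with a damped averaging filter in the feedback path; a toy sketch, nothing like a real Kyma model:

```python
# Karplus-Strong plucked string: the simplest physical model --
# a noise burst circulating in a delay line through a gentle lowpass.
import numpy as np

def pluck(freq, duration, sr=44100, damping=0.996):
    delay = int(sr / freq)                  # delay length sets the pitch
    buf = np.random.uniform(-1, 1, delay)   # burst of noise = the "pluck"
    out = np.empty(int(sr * duration))
    for i in range(len(out)):
        out[i] = buf[i % delay]
        # averaging two neighbors is the lowpass; damping decays the string
        buf[i % delay] = damping * 0.5 * (buf[i % delay] + buf[(i + 1) % delay])
    return out

note = pluck(220.0, 2.0)   # two seconds of a plucked A3
```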
As I said, every calculation is done at the sample rate, which means zero added latency. The only latency is the fixed buffer latency from the audio interface to the DAW, for example.
Since it's realtime, at some point a sound algorithm can be too complex to execute within one sample period, and then you run out of realtime.
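To put a number on that: at 44.1 kHz the whole patch has to finish in about 22.7 microseconds per sample. A quick back-of-the-envelope:

```python
# Per-sample time budget at common sample rates: exceed it and the
# patch can no longer run in realtime.
for sr in (44100, 48000, 96000):
    print(f"{sr} Hz -> {1e6 / sr:.1f} microseconds per sample")
```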
The Pacarana has four very high-speed DSPs, and jobs are distributed between them. Even most complex algorithms barely move the DSP load meter above zero, so I have no idea what kind of sound would pin the DSPs.
I just patched up a didgeridoo on my modular. I'd love to transfer that to Kyma and then create a physical model driven by the breath pipe on the Alpha, then use the Alpha keys to trigger a filterbank on the sound to sculpt it. I'm just really interested in that aspect. I don't need to pretend I'm using this to play gigs, cause I'm not.
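If I ever get there, the shape of that patch would be something like this: breath pressure scales the didgeridoo excitation, and each held key opens one band of a bandpass filterbank. This is purely a hypothetical numpy/scipy sketch; the band centers, Q, and the breath/held-keys inputs are all made up by me:

```python
# Hypothetical Alpha patch sketch: breath scales the excitation,
# each held key gates one bandpass band of a filterbank.
import numpy as np
from scipy.signal import butter, lfilter

SR = 44100

def filterbank(signal, centers, held, sr=SR, q=8.0):
    """Sum of bandpass bands; held[i] gates band i on or off."""
    out = np.zeros_like(signal)
    for f, on in zip(centers, held):
        if not on:
            continue
        lo, hi = f * (1 - 1 / (2 * q)), f * (1 + 1 / (2 * q))
        b, a = butter(2, [lo, hi], btype='bandpass', fs=sr)
        out += lfilter(b, a, signal)
    return out

# Fake inputs: a sawtooth "didgeridoo" excitation swelled by breath,
# with three of six bands opened by held keys.
t = np.arange(SR) / SR
excitation = 0.5 * ((t * 110) % 1.0 - 0.5)      # 110 Hz sawtooth
breath = np.linspace(0, 1, SR)                  # swelling breath pressure
centers = [200, 400, 800, 1600, 3200, 6400]     # band centers in Hz
held = [True, False, True, False, True, False]  # which keys are down
sound = filterbank(excitation * breath, centers, held)
```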