The key group switches an incoming signal to one or more outputs. This includes its strip, breath, and other inputs.
All the inputs are switched. So if you want the strip input to an instrument to turn off when you select away from that instrument, you should route it via the kgroup. Otherwise, imagine you had an instrument with a really long release. You could trigger a note, switch away to a different instrument, and the strip would still affect the releasing note of the old instrument.
Or imagine that you set up a strip to pitch bend all the takes being played by a recorder (which you could do by leading the strip around the recorder instead of through it). Then, when you switched away, the strip would still be live.
Of course, that might be what you want...
It's good practice not to change event IDs unless you really have to. The cfilter model preserves the IDs. The usual reason to change them is that you are synthesising more than one concurrent output event from an input event (a chord generator, perhaps), or merging more than one input event (a fingerer).
If you do mess with the event IDs, it would be a good idea to allow for the routing of other signals via the agent so that (although the data is left alone) their event IDs are changed in a similar way.
This was quite hard in the past, because event IDs were overloaded with the key number. That was the primary motivation for the new key input in 2.0.
I think a good rule of thumb is to think of an event as representing a physical act (like hitting a key). If you change the key stream, I don't think you should change the event ID.
To go back to the chord generator example, a good approach there would be to suffix the primary incoming ID to generate the notes in the chord. So incoming event 1.1 would generate 1.1.1, 1.1.2, and 1.1.3. That way, the pressure, roll and yaw signals from the triggering key would apply to all the notes in the chord that key generates, without the chord generator being involved in the signal flow.
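As a sketch of that suffixing scheme (the function name here is illustrative, not part of any real API):

```python
def chord_event_ids(incoming_id, chord_size):
    """Generate one child event ID per chord note by suffixing the incoming ID."""
    return ["%s.%d" % (incoming_id, i + 1) for i in range(chord_size)]

# Incoming event 1.1 generates 1.1.1, 1.1.2 and 1.1.3.
print(chord_event_ids("1.1", 3))
```

Because every generated ID is a descendant of 1.1, signals correlated with 1.1 correlate with all three chord notes.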
The next change will be to have event IDs composed of two parts, a leading 'channel' part and a trailing 'voice' part. Correlation will be extended to match both parts separately using the current rules, two events correlating if both channel and voice correlate.
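A minimal sketch of the extended matching, assuming the current rule is that two dotted IDs correlate when one is a prefix of the other (the function names are hypothetical):

```python
def part_correlates(a, b):
    """Assumed current rule: two dotted IDs correlate if one is a prefix of the other."""
    pa, pb = a.split('.'), b.split('.')
    n = min(len(pa), len(pb))
    return pa[:n] == pb[:n]

def event_correlates(e1, e2):
    """Extended rule: (channel, voice) events correlate if both parts correlate."""
    return part_correlates(e1[0], e2[0]) and part_correlates(e1[1], e2[1])

# Same channel, and voice 1.1 is a prefix of its child 1.1.2: correlated.
print(event_correlates(("1", "1.1"), ("1", "1.1.2")))
```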
Harmonisers, fingerers and chord generators (agents that deal with more than one key press) can use the channel part to identify key presses that belong together (being in the same channel).
At the moment there are problems if you put such an agent downstream of a recorder, because all the key presses from the different takes can't be sorted out.
Recorders will distinguish takes by adding the channel part. 'Using' will add to the channel part.
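Under that scheme, a recorder might tag events along these lines (a hypothetical sketch; the take numbering is illustrative):

```python
def tag_take(event, take_number):
    """Extend the channel part so key presses from different takes stay distinct."""
    channel, voice = event
    return ("%s.%d" % (channel, take_number), voice)

# The same key press, replayed in takes 1 and 2, gets distinct channel parts.
print(tag_take(("1", "2.3"), 1))
print(tag_take(("1", "2.3"), 2))
```

A fingerer or harmoniser downstream of the recorder can then group presses by channel part, so presses from different takes are never confused with each other.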