Hmm. I didn't think about readbacks...
I've thought a little more, and I think you could implement it by rendering a single point per entity; you could definitely handle more than one entity at once that way.
Maybe if you used the geometry shader and output a pixel-sized point (I know this is getting a little nuts, hehe) for each "actor arm", you could have many arms render into a small texture to read back into the CPU. So say you used a 1x8 texture: you tell it to render "movement distance" and "animation frame", then whatever else you needed, on different pixels in this "GPU output" texture... then read it back.
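Just to make the layout concrete, here's a rough Python sketch of how one entity's outputs could be packed into that 1x8 output texture, one value per pixel. The slot names and helpers (`OUTPUT_SLOTS`, `pack_outputs`, `read_back`) are made up for illustration, not a real API:

```python
# Hypothetical layout for a 1x8 "GPU output" texture: each named
# output gets its own pixel, so the geometry shader would emit one
# point at that pixel's coordinate carrying the value.
OUTPUT_SLOTS = ["movement_distance", "animation_frame"]  # up to 8 slots

def pack_outputs(values):
    """Simulate what the shader writes: a 1x8 row of floats."""
    row = [0.0] * 8
    for i, name in enumerate(OUTPUT_SLOTS):
        row[i] = float(values[name])
    return row

def read_back(row):
    """Simulate the CPU readback: pull named values out of the row."""
    return {name: row[i] for i, name in enumerate(OUTPUT_SLOTS)}

row = pack_outputs({"movement_distance": 2.5, "animation_frame": 7})
print(read_back(row))  # {'movement_distance': 2.5, 'animation_frame': 7.0}
```

The nice property is that the CPU side only ever needs to know the slot order, not anything about how the shader computed the values.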
CPU TO GPU
1 by x variable texture (possibly 32-bit floats)
OPERATE ON GPU (pre-stored in video RAM)
256x256 script texture; you'd have to organize how you'd like the bytecode.
GPU TO CPU (maybe a floating-point or byte texture would do, because reading back is really slow, isn't it...)
1 by x output texture, which is written by the geometry shader as explicit points on different pixels.
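That whole round trip could be prototyped on the CPU first, simulating the three textures as plain arrays. This is only a sketch of the shape of the idea: the two opcodes (`OP_ADD_SPEED`, `OP_NEXT_FRAME`) are invented for illustration, and a real version would be a shader reading the script texture, not a Python loop:

```python
# CPU-side simulation of the CPU -> GPU -> CPU round trip.
# - input_tex:  1 x N floats, per-entity state uploaded each frame
# - script_tex: flat array standing in for the 256x256 bytecode texture
# - output_tex: 1 x N floats, what the geometry shader would write back

# Invented opcodes, purely for illustration.
OP_ADD_SPEED = 0   # out[0] = in[0] + operand   (advance movement distance)
OP_NEXT_FRAME = 1  # out[1] = (in[1] + operand) % 16   (step animation frame)

def run_script(input_tex, script_tex):
    """Interpret the 'script texture' against one entity's input texture."""
    output_tex = list(input_tex)
    # Bytecode is laid out as (opcode, operand) pairs.
    for pc in range(0, len(script_tex), 2):
        op, operand = script_tex[pc], script_tex[pc + 1]
        if op == OP_ADD_SPEED:
            output_tex[0] = input_tex[0] + operand
        elif op == OP_NEXT_FRAME:
            output_tex[1] = (input_tex[1] + operand) % 16
    return output_tex

# One entity: movement distance 10.0, animation frame 15.
state = run_script([10.0, 15.0], [OP_ADD_SPEED, 2.5, OP_NEXT_FRAME, 1.0])
print(state)  # [12.5, 0.0]
```

Batching would then just mean running this per row of a 1 x N input texture, which is exactly what the GPU would give you for free across entities.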
Well, I'm crazy enough to think it might work, but how extensive a scripting language could be carried out in the geometry shader I don't really know... maybe you're right, Reedbeta, and it suits simple AI with lots of entities, but maybe you wouldn't get that many entities because there would be too many readbacks, dunno.
The cool thing is, it would make plugging in AI really easy: just add a shader... that's what really sparked me up about the idea.