Probably I was not clear enough when describing the problem:
The application (sort of a game engine, but different) generates sounds/effects using DirectSound/Direct3DSound and EAX effects for reverberation, echo, etc. The sound files are simple wav, ogg or mp3 files. These sound effects are placed in the 3d scene or linked to moving objects. An EAX environment is generated in a preprocessing step (analyzing the size of the different rooms, etc.), and while the user navigates the 3d scene, the actual environment is determined to give the correct "ambient feedback".
The sounds themselves start/stop/loop according to the proximity of the virtual camera to these objects, by user actions, or by time.
While this all works very well at real-time frame rates (like navigating in the scene), the problem is capturing the sounds (or soundtrack) to an avi file while rendering a lot of still frames. In this case the camera moves along a predefined path and effects are triggered just like in the interactive mode. The camera movement is not real-time because of antialiasing techniques and the high resolution. In my previous post I mentioned a workaround for this problem (sort of ugly, but I can live with it), but it can only record stereo (2 channels).
So my questions are:
(1) Is there a way to capture the samples (like in the sampleGrabber demo) in a one-shot mode (run sounds --> capture --> do_something_that_takes_a_long_time --> get next sample) while still getting EAX effects and more than two channels?
(2) If (1) is impossible, how can I capture the output of just one channel (say back-left)? This way I could write each channel separately to the combined avi soundtrack.
(3) What exactly do you mean by software mixing? I know how to get hardware acceleration in DirectSound, but I thought switching to software mixing would decrease the quality/available effects and of course disable EAX effects. (CPU usage is not important when writing the sounds to disk.)