I think the hardware HRTF support in those sound cards is mostly for rendering multi-channel audio into a proper binaural mix. What I'm talking about is different.
If you think of typical consumer audio standards, you can add dimensions to them.
Let's ignore the cues we process mentally that tell us things like depth (a reverberant recording, for example, gives us a cue that one sound is farther away than another), because technically there is no spatialization there; it's just a mental trick.
Mono would be 0 dimensions. It is just a point: all the sound from one source.
Stereo would be 1 dimension, because you can plot the sound source on one line.
4.1, 5.1, 6.1, 7.1, etc. would be 2-dimensional because you can plot a sound in X,Y, but you have no height. You would have to add an elevated speaker array to get a true 3-dimensional representation of the sound.
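The dimensional ladder above could be written out as a simple lookup. This is just an illustration of the classification, nothing standard; the ".1" LFE channel is ignored since it carries no positional information, and the "7.1.4" entry is my own example of a layout with an elevated speaker array.

```python
# Spatial dimensionality of common consumer speaker layouts,
# following the classification above (the LFE ".1" channel is
# ignored because it carries no positional information).
LAYOUT_DIMENSIONS = {
    "mono": 0,      # a single point
    "stereo": 1,    # sources plotted along one line
    "4.1": 2,       # X,Y plane, no height
    "5.1": 2,
    "6.1": 2,
    "7.1": 2,
    "7.1.4": 3,     # elevated speaker array adds height
}

print(LAYOUT_DIMENSIONS["5.1"])  # → 2
```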
So if you try to render a 5.1 mix as a binaural stereo mix using HRTF, you'd probably get a half-decent illusion of where things are in 2 dimensions, but you still lose that 3rd dimension, which to me would probably ruin the entire effect, because sound doesn't work that way.
What I'm talking about is taking a mono sound and being able to position it somewhere in 3 dimensions while only using a stereo feed, which in this case would have to be headphones.
I suppose if there are sound cards that calculate HRTF in real time, then they would basically already do what I'm talking about. But they would have to be able to read information from the application about the position of the sound, the geometry of its surroundings, the makeup of its surroundings, etc., and simulate that 3D space for your brain.
So I'm talking about doing that ^^^ at a software level, and developing an application that would let you do it in real time, or render it out to a file.
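To make the idea concrete, here's a crude sketch of the kind of processing that software layer would start with. All names here are my own (hypothetical); this only applies interaural time and level differences to a mono signal, which hints at left/right azimuth but does nothing for elevation — a real implementation would convolve with measured HRIR filters to get the spectral cues your brain uses for height and front/back.

```python
# Crude binaural positioning sketch (hypothetical, NOT a real HRTF):
# approximates a horizontal position with an interaural time
# difference (ITD) and an interaural level difference (ILD).
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s
HEAD_RADIUS = 0.0875     # m, rough average head radius
SAMPLE_RATE = 44100

def position_mono(mono, azimuth_rad):
    """Place a mono signal at an azimuth (-pi/2 = left, +pi/2 = right)."""
    # Woodworth's spherical-head ITD approximation
    itd = (HEAD_RADIUS / SPEED_OF_SOUND) * (
        abs(azimuth_rad) + np.sin(abs(azimuth_rad)))
    delay = int(round(itd * SAMPLE_RATE))
    # Simple ILD: attenuate the far ear as the source moves off-center
    near_gain = 1.0
    far_gain = np.cos(abs(azimuth_rad)) * 0.5 + 0.5
    far = np.concatenate([np.zeros(delay), mono]) * far_gain
    near = np.concatenate([mono, np.zeros(delay)]) * near_gain
    if azimuth_rad >= 0:   # source on the right: left ear is far
        left, right = far, near
    else:
        left, right = near, far
    return np.stack([left, right], axis=1)

tone = np.sin(2 * np.pi * 440 * np.arange(SAMPLE_RATE) / SAMPLE_RATE)
stereo = position_mono(tone, np.pi / 4)  # 45 degrees to the right
print(stereo.shape)  # → (44117, 2): mono length plus a 17-sample ITD delay
```

The same function could run per-source in real time (block by block) or offline to render a file, which is exactly the two modes described above — though without real HRIR convolution and room simulation it stays a 1D/2D trick, not the full 3D effect.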