The "footprint" sounds aamesxdavid mentions could actually solve my (rather theoretical) problem - at least for now, since I'm still at the planning stage. I also forgot to mention what I want to achieve, which is:
- the ability to 'create' similar-sounding but still recognizably different sounds
- no need to fill the game itself with huge amounts of near-identical sound effects
- the option to change some sounds on the fly, rather than having to call the audio artist and book a studio for another hour to get it done...
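To make the first two points concrete, here is a minimal sketch of the idea in Python. Everything in it (the parameter ranges, the one-pole filter, the envelope) is my own illustrative assumption, not anyone's actual engine code: one procedure plus a seed replaces a whole folder of near-identical recordings.

```python
import math
import random

def footstep(seed, sample_rate=22050, duration=0.12):
    """Generate one footstep-like burst: low-passed noise under a
    decaying envelope. Parameter ranges are purely illustrative."""
    rng = random.Random(seed)
    # Randomize a few parameters so each seed sounds similar but distinct.
    decay = rng.uniform(25.0, 45.0)   # envelope decay rate (1/s)
    cutoff = rng.uniform(0.2, 0.5)    # one-pole low-pass coefficient
    gain = rng.uniform(0.7, 1.0)      # overall loudness
    n = int(sample_rate * duration)
    samples, prev = [], 0.0
    for i in range(n):
        t = i / sample_rate
        noise = rng.uniform(-1.0, 1.0)
        prev = prev + cutoff * (noise - prev)       # crude low-pass filter
        samples.append(gain * prev * math.exp(-decay * t))
    return samples

# Two seeds -> two similar but recognizably different sounds from the
# same code, instead of two stored audio files.
a = footstep(1)
b = footstep(2)
```

Changing a sound "on the fly" then just means tweaking the parameter ranges, no studio session required.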
Still, while I do understand that the "creation of sounds" is a mathematical thing, the same applies to any 3D engine - yet in most cases we don't constantly rewrite everything from scratch, but use plenty of helper libraries to achieve what we need. In the case of graphics shaders (which is what I'm referring to), we now use tools and HLSL to tell objects how they should look.
As a computer-music hobbyist, seeing how some companies create softsynths and other sound-processing applications - with partly awesome results - I actually think that with the processing power available today, the mathematical calculations should be quite tractable, especially since only a few projects use the full power of all the cores in today's quad-core CPUs.
Taking the audio-vs-graphics comparison a bit further: I keep wondering what happened to all those applications from the early days, where we had fun for hours (okay, maybe minutes) typing in ridiculous sentences and then hearing our computer talk to us. And yet today, I still have to read quests in an RPG rather than listen to them.
In the early days I had to read the whole game; now at least I can see it. (True, there are games where you wish you hadn't seen that one...)
@JarkkoL: Sounds good, and is at least a basis for what I'm aiming for.