I have implemented the following:
I have an audio stream and a video stream recording into a database.
For every piece of audio, I can associate it with the video frames it was recorded alongside.
I can also show the system novel information, like new audio, and it'll find the closest matching stored audio and produce the video associated with it.
I can also show it novel video, and it'll associate it with the closest matching audio.
This way, I can show it the audio of a dog bark, and it'll show me video of dogs barking.
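The lookup described above is basically a nearest-neighbour search over stored audio features. Here's a minimal sketch of that idea, assuming audio clips are reduced to fixed-length feature vectors; all the names and the toy data are hypothetical, not the actual system:

```python
import numpy as np

# Hypothetical toy database: each audio feature vector is paired with
# the video frame indices it was recorded alongside.
audio_db = np.array([
    [0.9, 0.1, 0.0],   # e.g. a dog bark
    [0.1, 0.8, 0.1],   # e.g. speech
    [0.0, 0.2, 0.9],   # e.g. music
])
frame_db = [
    [101, 102, 103],   # frames stored with entry 0
    [210, 211],        # frames stored with entry 1
    [305, 306, 307],   # frames stored with entry 2
]

def closest_frames(novel_audio):
    """Return the stored frames whose audio is nearest the novel clip."""
    dists = np.linalg.norm(audio_db - np.asarray(novel_audio), axis=1)
    return frame_db[int(np.argmin(dists))]

# A novel bark-like clip retrieves the bark entry's frames.
print(closest_frames([0.85, 0.15, 0.05]))  # -> [101, 102, 103]
```

Going the other way (novel video to closest audio) would be the same search with the roles of the two modalities swapped.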
Now... is it possible to use this database to generate a new set of video frames for an audio stream it hasn't heard yet, using the closest matching data as the base point for the generation, plus possibly more of the data stored inside it?
It would be amazing if that could be done, and that's what I'm thinking about right now. I've taken hierarchical temporal memory theory to the point where I know the system needs to generate something as well as take in input from the senses, but what exactly? Then all AI is possible and computers can dream...
That could be kind of trippy if you used it on a piece of music to produce a music video. (Unless it just came up with snippets of other music videos from songs that sounded similar...)
Haha yeah, it could do that. Actually, I shouldn't have posted this; I'm sort of on my own with this crazy thing. If I ever get anywhere with it, who the hell knows.