Getting a computer to read text is supposedly impossible, or at least really tricky, but they got Watson to play Jeopardy. Isn't that a good sign that logical calculation is all that's necessary for, say, a chatbot you can't tell isn't a person?
I've been thinking about it, and all the neural networks and sense symbolization under the sun won't get you the main meat of an intelligent program, which is logical reactions to things.
If you just have a database of thises and thats, then the reaction would be to look up: what do I like to do, how do I solve this problem, and how do I reply to this user speaking to me?
I don't think it's a dead end at all. I think it's damn possible to get a computer to react logically; it's what computers are good at!
Have you seen AIML? It's pretty much all about pairing up categories of inputs with categories of logical responses as described.
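To make that "pairing categories of inputs with categories of responses" concrete, here is a minimal Python sketch of the idea. It is not the real AIML interpreter, and the pattern syntax is simplified (just a trailing `*` wildcard), but it mirrors AIML's category/pattern/template structure in miniature; all the patterns and replies are made up for illustration.

```python
# Toy AIML-style matcher: pairs input patterns with response templates.
# "*" is a trailing wildcard that captures the rest of the sentence.
categories = {
    "HELLO *": "Hi there! You said: {0}",
    "WHAT IS YOUR NAME": "I'm a toy bot.",
    "I LIKE *": "Why do you like {0}?",
}

def respond(user_input: str) -> str:
    words = user_input.upper().split()
    for pattern, template in categories.items():
        p = pattern.split()
        if p[-1] == "*":
            head = p[:-1]
            # Wildcard pattern: match the fixed head, capture the tail.
            if words[:len(head)] == head and len(words) > len(head):
                return template.format(" ".join(words[len(head):]).lower())
        elif words == p:
            return template
    return "I don't have a category for that."

print(respond("I like logic"))  # -> Why do you like logic?
```

Real AIML adds recursion (`<srai>`), variables, and topic context on top of this, but the core loop really is this simple lookup from input category to canned response.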
Also, what does any of this have to do with willpower?
Well, willpower as in: I am free to act in any way possible.
ALICE is a bit of a bimbo; I actually mean something that wouldn't run on your average computer.
Here's some drivel -> (now published)
DIFFERENT IDEAS IVE HAD
WHAT THE HIERARCHICAL DATABASE OF COMMONS GAVE ME, TOY-WISE.
Matching two visions: cause to effect.
Matching two text boxes replying to each other; each is the other's cause and effect.
Matching joystick movements to a vision.
LOGIC FORMATION DATABASE, which I have yet to even visualize properly.
The two can be combined, so the effect is deduced instead of just matched and played back.
And the idea that I can turn this database of commons into a logically formulated database of experiences, granting me the power to make new experiences from old experiences using simple logic.
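One way to read the "database of commons" idea as code, and this is entirely my guess at the intent: store observed cause→effect pairs, and when a new cause arrives, play back the effect of the closest stored match. That is the "matched and played back" half; the "deduced" half would replace the lookup with actual rules. The example experiences below are invented.

```python
# Toy "database of commons": observed cause -> effect pairs.
# Playback = nearest-match lookup; "deduction" would derive unseen effects.
from difflib import SequenceMatcher

experiences = {
    "ball rolls off table": "ball falls",
    "glass falls on floor": "glass breaks",
}

def react(cause: str) -> str:
    # Find the stored cause most similar to the new one; play back its effect.
    best = max(experiences,
               key=lambda c: SequenceMatcher(None, c, cause).ratio())
    return experiences[best]

print(react("cup rolls off table"))  # -> ball falls
```

The limitation is visible right away: the playback answer says "ball falls" even though a cup rolled off. Going from this to "cup falls" is exactly the step from matching experiences to logically formulating them.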
Watson wasn't just about logic though. The trick that really made Watson possible, and got them from 60% accuracy to 90% accuracy, was to combine a bunch of different programs that would each try to evaluate not only the answer to the question, but also how certain the program was of its answer. Jeopardy questions fall into vague, often-repeated categories, and by having the sub-programs identify which categories they were good at and only get used for those categories, they were able to vastly improve the machine's accuracy.
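The mechanism described above, several scorers each proposing an answer with a confidence, weighted by how well each has historically done in the clue's category, can be sketched in a few lines of Python. Everything here (the experts, the skill table, the numbers) is invented for illustration, not Watson's actual architecture.

```python
# Sketch of the Watson-style idea: several "experts" each propose an answer
# with a confidence, and per-category weights (standing in for learned
# accuracy statistics) decide whose vote counts most.

def date_expert(clue):
    return ("1492", 0.9 if "year" in clue.lower() else 0.2)

def name_expert(clue):
    return ("Columbus", 0.9 if "who" in clue.lower() else 0.3)

experts = {"dates": date_expert, "names": name_expert}

# How well each expert has done per category (made-up track records).
category_skill = {
    ("dates", "HISTORY"): 0.8, ("names", "HISTORY"): 0.6,
    ("dates", "EXPLORERS"): 0.3, ("names", "EXPLORERS"): 0.9,
}

def answer(clue: str, category: str) -> str:
    # Weight each expert's confidence by its track record in this category,
    # then return the answer with the highest combined score.
    scored = {}
    for name, expert in experts.items():
        ans, conf = expert(clue)
        weight = category_skill.get((name, category), 0.5)
        scored[ans] = scored.get(ans, 0.0) + conf * weight
    return max(scored, key=scored.get)

print(answer("Who sailed in 1492?", "EXPLORERS"))  # -> Columbus
```

The key point the sketch preserves: the routing is not hand-coded per question, it falls out of the per-category weights, which is what let the sub-programs "identify which categories they were good at."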
Now I do think that principle could be more widely exploited. One of the more successful frameworks for neural networks is something called 'ART' (Adaptive Resonance Theory), where you have a hierarchy of networks all responding to input, and the relative weights of the networks' responses also evolve according to a neural network algorithm based on their success. When none of the networks respond well to a given input, a new sub-network is automatically created, and the process continues - this means that the algorithm dynamically creates a hierarchy of specialists based on categories it observes in the input set. That said, it still has all the classic problems of neural networks: it takes many repetitions to learn, and data representation (as well as eliminating noisy or meaningless inputs) is extremely important. It's also only able to deal with input-to-output mapping problems with fixed input and output sizes, like most other neural networks.
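The core ART loop, compare an input to stored prototypes, resonate with the best one if it passes a "vigilance" threshold, otherwise spawn a new category, fits in a short sketch. This is a bare-bones ART1-style step for binary vectors, not the full theory (no hierarchy, no learned response weights), and the vigilance value and inputs are arbitrary.

```python
# Bare-bones ART1-style categorization for binary vectors.
def overlap(w, x):
    return sum(wi & xi for wi, xi in zip(w, x))

def art1_step(prototypes, x, vigilance=0.6):
    # Try prototypes in order of overlap with the input.
    for i in sorted(range(len(prototypes)),
                    key=lambda i: -overlap(prototypes[i], x)):
        match = overlap(prototypes[i], x) / max(sum(x), 1)
        if match >= vigilance:
            # Resonance: refine this category toward the input.
            prototypes[i] = [wi & xi for wi, xi in zip(prototypes[i], x)]
            return i
    # No prototype matched well enough: create a new specialist.
    prototypes.append(list(x))
    return len(prototypes) - 1

protos = []
print(art1_step(protos, [1, 1, 0, 0]),  # first input founds category 0
      art1_step(protos, [1, 1, 1, 0]),  # close enough: resonates with 0
      art1_step(protos, [0, 0, 1, 1]))  # too different: new category 1
# -> 0 0 1
```

The vigilance parameter is the whole game: set it high and every input gets its own category, set it low and everything collapses into one, which is one concrete face of the "data representation is extremely important" problem mentioned above.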