ned4akpakwu at February 9th, 2005 09:44 — #1
i'm having a little problem with a tic-tac-toe game i wrote using a back-propagation neural network as the opponent (A.I.). The thing is, the network has to train itself with each board state, i.e. whenever i make a move, it trains itself using the current state of the board as inputs (i used the following rules: 1 means the point is occupied by an O, -1 means it's occupied by an X (the human player), while zero stands for an empty cell). My question is: since this training takes time, would it be possible for me to just train the network once using all possible board configurations? Currently i train once AND still train each time i make a move.
b.t.w my current training set is about 500 lines of data like this:
0, 1, -1
1, 0, 0
1, -1, 0
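Just to illustrate the encoding described above (1 = O, -1 = X, 0 = empty), here is a minimal sketch in Python (the original poster doesn't say which language they're using, and the `encode_board` helper and its board representation are my own assumptions) of turning a 3x3 board into the 9-element input vector a network like this would consume:

```python
# Encode a 3x3 tic-tac-toe board as a 9-element input vector:
# 1 = O (the network's mark), -1 = X (the human player), 0 = empty cell.
def encode_board(board):
    """board is a list of 3 rows, each a list of 'O', 'X', or None."""
    mapping = {'O': 1, 'X': -1, None: 0}
    return [mapping[cell] for row in board for cell in row]

board = [['O', None, 'X'],
         [None, 'O', None],
         ['X', None, None]]
print(encode_board(board))  # [1, 0, -1, 0, 1, 0, -1, 0, 0]
```

Each row of the training set shown above would then be one row of such a vector.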
i'll be really grateful if anyone could help out with this problem.
ed_mack at February 9th, 2005 11:11 — #2
why not have code automatically play games against it (random moves)? If you do that enough, surely the network will learn to beat most strategies.
ned4akpakwu at February 10th, 2005 08:59 — #3
ok, i'll try it. I'm not so sure how to go about doing that though (the random moves thing).
ed_mack at February 10th, 2005 11:43 — #4
Have the smart computer take its turn, then have the dumb computer just place its x anywhere.
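A rough sketch of that idea in Python (assuming the board-vector encoding from earlier in the thread; the `pick_network_move` callback standing in for the neural network, and all the helper names, are hypothetical):

```python
import random

def legal_moves(board):
    """Indices of empty cells (board is a flat list of 9 values: 1, -1, or 0)."""
    return [i for i, cell in enumerate(board) if cell == 0]

def random_opponent_move(board):
    """The 'dumb computer': place an X (-1) on any empty cell at random."""
    return random.choice(legal_moves(board))

def winner(board):
    """Return 1 if O has three in a row, -1 if X does, 0 otherwise."""
    lines = [(0,1,2), (3,4,5), (6,7,8),   # rows
             (0,3,6), (1,4,7), (2,5,8),   # columns
             (0,4,8), (2,4,6)]            # diagonals
    for a, b, c in lines:
        if board[a] != 0 and board[a] == board[b] == board[c]:
            return board[a]
    return 0

def play_training_game(pick_network_move):
    """Play one game: the 'smart computer' (O = 1) vs a random mover (X = -1).

    pick_network_move(board) -> cell index is assumed to query the network.
    Returns the sequence of board states seen and the final result,
    which can then be fed to the back-propagation training step.
    """
    board = [0] * 9
    history = []
    player = 1  # O moves first
    while legal_moves(board) and winner(board) == 0:
        if player == 1:
            move = pick_network_move(board)
        else:
            move = random_opponent_move(board)
        board[move] = player
        history.append(board[:])
        player = -player
    return history, winner(board)
```

Run `play_training_game` in a loop many thousands of times, training the network on each game's states and outcome, and it should see far more board configurations than a hand-written 500-line training set.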
rego at February 12th, 2005 08:08 — #5
Have the smart computer take its turn, then have the dumb computer just place its x anywhere.
ed_mack at February 12th, 2005 12:19 — #6