Even though all the other facets of my life seem to be spinning out of control, at least I've still got my wits about me:
The proof above guarantees that certain recurrent neural network models will always settle into a local minimum of an energy function (i.e., a stable solution) as they run.
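To see the guarantee in action, here's a minimal sketch of my own (not part of the original proof): a discrete Hopfield network is one special case covered by the Cohen–Grossberg result, and with symmetric weights and a zero diagonal, each asynchronous update can only lower (or preserve) the network's energy, so the state has to bottom out at a local minimum.

```python
import numpy as np

# Illustration only: a Hopfield network, one special case of the
# Cohen-Grossberg theorem. With symmetric weights and zero diagonal,
# each asynchronous update of the energy
#   E(s) = -1/2 * s^T W s
# is non-increasing, so the state must settle into a local minimum.

rng = np.random.default_rng(0)

n = 16
W = rng.standard_normal((n, n))
W = (W + W.T) / 2          # symmetry is the key hypothesis of the proof
np.fill_diagonal(W, 0.0)   # no self-connections

def energy(s, W):
    return -0.5 * s @ W @ s

s = rng.choice([-1.0, 1.0], size=n)
energies = [energy(s, W)]

for _ in range(10):                  # a few sweeps of asynchronous updates
    for i in rng.permutation(n):
        s[i] = 1.0 if W[i] @ s >= 0 else -1.0
        energies.append(energy(s, W))

# Energy never increases along the trajectory.
assert all(e2 <= e1 + 1e-9 for e1, e2 in zip(energies, energies[1:]))
print("settled; final energy:", energies[-1])
```

The assertion passes because flipping unit `i` toward the sign of its input field changes the energy by a non-positive amount, which is exactly the Lyapunov-function argument at the heart of the proof.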
“What a pointless thing to take pride in; what has that got to do with anything in the real world?” you're probably asking yourself. Answer: nothing. Cohen and Grossberg, brilliant guys, devised this proof 25 years ago; all I've done is work it out for myself. Right now, math seems to be the only stupid human trick I can do. A guy has got to savor what victories he can, however small and inconsequential they might be.