You may have read in the news that computers have just now become able to beat top human players at the ancient Chinese strategy game of Go: https://gogameguru.com/alphago-defeats-lee-sedol-4-1/

Besides faster hardware, the key advance over older Go engines is the use of "neural networks" that must be "trained" in order to perform well. That training involves a lot of statistics, but one of the key steps, "backpropagation," is a simple application of the chain rule for partial derivatives: http://colah.github.io/posts/2015-08-Backprop/

As I mentioned in class this week, gradient descent is also a key tool: http://peterroelants.github.io/posts/neural_network_implementation_part01/#Gradient-descend

See also this nice overview if you're interested: http://highscalability.com/blog/2016/3/16/jeff-dean-on-large-scale-deep-learning-at-google.html
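To see why backpropagation is "just" the chain rule, here is a toy sketch in Python (my own example, not anything from AlphaGo): a single sigmoid neuron y = sigmoid(w*x + b) with squared-error loss L = (y - t)^2. The gradient of the loss with respect to the weight w is a product of three local derivatives, computed factor by factor. All the numbers (x, t, w, b) are arbitrary illustrative choices.

    import math

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    # Forward pass: one neuron, y = sigmoid(w*x + b), loss L = (y - t)^2
    x, t = 0.5, 1.0          # input and target (illustrative values)
    w, b = 0.8, 0.1          # parameters (illustrative starting values)
    z = w * x + b
    y = sigmoid(z)
    L = (y - t) ** 2

    # Backward pass ("backpropagation"): the chain rule, factor by factor
    dL_dy = 2 * (y - t)              # d/dy of (y - t)^2
    dy_dz = y * (1 - y)              # sigmoid'(z) = sigmoid(z) * (1 - sigmoid(z))
    dz_dw = x                        # d/dw of (w*x + b)
    dL_dw = dL_dy * dy_dz * dz_dw    # chain rule: dL/dw = dL/dy * dy/dz * dz/dw
    print(L, dL_dw)

In a real network the same bookkeeping is repeated layer by layer, which is all "backpropagation" means.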
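Gradient descent then just repeats that gradient computation and nudges the parameters a small step downhill each time. A minimal, self-contained sketch for the same toy neuron (the learning rate and iteration count are illustrative choices of mine, not taken from the linked posts):

    import math

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    x, t = 0.5, 1.0                  # input and target, as above
    w, b = 0.8, 0.1                  # initial parameters
    lr = 0.5                         # learning rate (illustrative choice)

    for step in range(200):
        y = sigmoid(w * x + b)
        dL_dz = 2 * (y - t) * y * (1 - y)  # dL/dz, via the chain rule from the sketch above
        w -= lr * dL_dz * x                # dL/dw = dL/dz * dz/dw, where dz/dw = x
        b -= lr * dL_dz                    # dL/db = dL/dz * dz/db, where dz/db = 1

    print(w, b, (sigmoid(w * x + b) - t) ** 2)   # the loss should now be near zero

Training a real network is this same loop with many more parameters (and the statistical machinery mentioned above), which is why the chain rule and gradient descent are the two tools to understand first.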