The interesting wrong truth

On the Ninjas and Robots blog, the post “How to be interesting” intrigued me with this Murray Davis quote:

An audience finds a proposition ‘interesting’ not because it tells them some truth they thought they already knew, but instead because it tells them some truth they thought they already knew was wrong.

That humans find truths that contradict their beliefs interesting is similar to the way neural networks are trained. If the observed truth is not in line with the current beliefs of the network (i.e., the truth deviates from the result of a forward evaluation of the network on the input data), the network's weights are changed in a way that makes the result better match the truth. The usual algorithm for computing these weight changes is called back-propagation.
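As a toy illustration of that idea (my own sketch, not from the post): a single linear neuron trained by gradient descent. The weights only move when the network's "belief" deviates from the observed truth; for one neuron, the back-propagated gradient reduces to error times input. All the numbers here are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=3)            # the network's current "beliefs"
x = np.array([0.5, -1.0, 2.0])    # input data
truth = 1.5                       # the observed truth (target)

for step in range(50):
    belief = w @ x                # forward evaluation of the network
    error = belief - truth        # how far belief deviates from truth
    w -= 0.1 * error * x          # gradient step on 0.5 * error**2

print(w @ x)                      # now very close to 1.5
```

When `error` is zero, nothing in the network changes; the bigger the surprise, the bigger the update.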

I also think that humans find great interest in things that are similar to (or correlate with) other things we know to be true. Like my observation above. But is there a way to train neural networks by focusing on training data that correlate with truths the network has already learned? I guess the difficulty is in how to use the network itself to define the "truth-comparison" for an algorithm that searches the training data for correlating truths.
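One hedged reading of this (my guess at an interpretation, not a known answer): use the current network as the truth-comparison by scoring every candidate example with its present loss, then train preferentially on the examples the network already nearly agrees with. The data pool, targets, and the "top 10" cutoff below are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=3)                    # network weights ("beliefs")
X = rng.normal(size=(100, 3))             # hypothetical pool of training data
y = X @ np.array([1.0, -2.0, 0.5])        # hypothetical ground-truth targets

for epoch in range(20):
    losses = 0.5 * (X @ w - y) ** 2       # truth-comparison: belief vs. truth
    agree = np.argsort(losses)[:10]       # examples that best correlate with
                                          # what the network already believes
    for i in agree:                       # train only on those examples
        error = w @ X[i] - y[i]
        w -= 0.05 * error * X[i]

print(np.mean(0.5 * (X @ w - y) ** 2))    # pool-wide loss after training
```

Whether such agreement-first selection helps learning or merely reinforces what the network already knows is exactly the open question; ordering training examples by the network's current difficulty with them is studied in the literature under names like curriculum learning and self-paced learning.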