# notes-cog-ai-applications-aiForMath

## Automated theorem discovery

Random ideas:

It would be neat if there were tools that let individual mathematicians 'play around with' and easily generate/discover idiosyncratic/novel sets of theorems, the way that lay users can play around with Mandelbrot set programs to explore the Mandelbrot set and discover novel, interesting, beautiful-looking regions of it. Another analogy with Mandelbrot set programs: the theorem-prover subroutine of the automated theorem discoverer could be given a runtime bound beyond which it gives up on trying to prove a conjecture (or a free parameter in some subroutine that sets such a bound, allowing the program to increase the bound for conjectures that it thinks would be especially interesting or important, or whose truth it is especially confident of), similar to the way you can tune a parameter to get your region of the Mandelbrot set calculated to a finer level of detail.
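The budgeted give-up idea can be sketched as a small wrapper around any iterative proof search. Everything here is hypothetical scaffolding: `search_step` stands in for a real prover's inner loop, and the interest multiplier is the free parameter mentioned above.

```python
import time

def prove_with_budget(search_step, time_budget_s, interest=1.0):
    """Run an iterative proof search, giving up when the time budget expires.

    `search_step(i)` is any callable that tries the i-th candidate proof and
    returns it on success or None (a hypothetical prover interface).
    `interest` scales the budget, so the system spends longer on conjectures
    it cares about -- analogous to rendering a Mandelbrot region at finer
    detail.
    """
    deadline = time.monotonic() + time_budget_s * interest
    i = 0
    while time.monotonic() < deadline:
        proof = search_step(i)
        if proof is not None:
            return proof
        i += 1
    return None  # budget exhausted: the conjecture stays an open conjecture

# Toy demonstration: a "proof" that is found at candidate 1000,
# versus a search that never succeeds and times out.
found = prove_with_budget(lambda i: "proof" if i == 1000 else None, 1.0)
hopeless = prove_with_budget(lambda i: None, 0.05)
```

The same wrapper lets the discoverer revisit a failed conjecture later with a larger `interest` value instead of discarding it.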

If the software simply found a bunch of theorems, this would not be of much use, as it would take too long to look through all the theorems that it found. So it should rank-order the theorems it finds by interest (best would be to write an English-language narrative tying together the most important theorems and their applications, i.e. to write a textbook for the stuff it has discovered, but this is probably too hard for now; although I suppose the rank-ordered theorems, with the dependencies between them, could be fed to a simple template-based typesetting algorithm to produce a spartan 'textbook' of sorts).

How to define theorem 'interest'? A friend (ROF) once suggested that a theorem's importance could be measured by how many other theorems it is useful for proving. Of course, if you do that, then shouldn't a theorem get more 'points' for proving other theorems that are themselves important? This suggests an eigenvector approach. Which reminds me of reputation systems. Which in turn suggests that users could 'seed' the system not just with a set of seed theorems (and application domains fed to the model checker, if the Colton/Bundy/Walsh approach is followed), but also with their own assignment of interest values to each seed theorem (and then to the new theorems too). And of course the system could try to 'learn' how interesting its user will consider new theorems, giving the system some attributes of a recommender system.
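The eigenvector idea can be sketched as PageRank-style power iteration over the theorem dependency graph: a theorem earns credit for being used in proofs, weighted by the importance of the theorems it helps prove, and the user's seed interest values act as the prior. The dependency data and theorem names below are made up for illustration.

```python
import numpy as np

# Hypothetical dependency data: theorem -> theorems whose proofs use it.
uses = {
    "lemma_A": ["thm_1", "thm_2"],
    "lemma_B": ["thm_2"],
    "thm_1":   ["thm_2"],
    "thm_2":   [],
}

names = list(uses)
n = len(names)
idx = {t: i for i, t in enumerate(names)}

# M[i, j] = 1 if theorem j's proof uses theorem i; importance flows from
# proved theorems back to the results their proofs depend on.
M = np.zeros((n, n))
for helper, proved in uses.items():
    for t in proved:
        M[idx[helper], idx[t]] = 1.0

# Normalize columns so each theorem distributes its importance over its
# dependencies; theorems with no recorded dependencies spread uniformly.
col_sums = M.sum(axis=0)
M = np.where(col_sums > 0, M / np.where(col_sums == 0, 1, col_sums), 1.0 / n)

# Seed interest values (the user-supplied prior) double as the damping vector;
# a real system would let the user set these per seed theorem.
seed = np.full(n, 1.0 / n)
damping = 0.85
rank = seed.copy()
for _ in range(100):  # power iteration converges quickly on small graphs
    rank = damping * (M @ rank) + (1 - damping) * seed

for t, r in sorted(zip(names, rank), key=lambda p: -p[1]):
    print(f"{t}: {r:.3f}")
```

On this toy graph `lemma_A` comes out on top, since it is used both directly by `thm_1` and by the downstream `thm_2`; a non-uniform `seed` would bias the ranking toward whatever the user declared interesting.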

The idea is that with such software in hand, someone interested in math could play around with the initial parameters, run the system for a while, and see if it comes up with anything interesting. Most of the time it won't, or will come up with the known elementary results of the given domain, but sometimes it will come up with a bunch of elegant novel results about some domain, or maybe a twisted-in-an-interesting-way reconceptualization of the known elementary results.