:idea: hi all, here are some wild half-baked ideas, and slightly more, on using/combining ML (machine learning) with ATP (automated theorem proving). both topics have been big in this blog and lately am trying to hammer out some (new) ideas on how to merge them into a grandiose 21st-century vision. there are also some preliminary experiments here.

how would one apply an ML algorithm to collatz? the whole area of uniting ML algorithms with number theory or TCS proofs is quite unexplored, yet seems open to experimentation. number-theoretic problems are similar to analyzing pseudorandom functions, or functions with “signal mixed with noise,” much as with Big Data. have long had a nagging suspicion there’s something potentially big lurking in this (right now seemingly) exotic marriage, ie the tip of a (very big?) iceberg floating around. ML has been absolutely central to empirical approaches to problems for decades and is now, after years of sometimes painstaking development, on very strong theoretical and practical footing.
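for concreteness, here is the collatz map underlying all this (the standard definition; the helper names are mine, just for illustration):

```python
def collatz_step(n):
    """one step of the collatz map: halve if even, 3n+1 if odd"""
    return n // 2 if n % 2 == 0 else 3 * n + 1

def trajectory(n):
    """full trajectory from n down to 1 (assuming the conjecture holds for n)"""
    traj = [n]
    while n != 1:
        n = collatz_step(n)
        traj.append(n)
    return traj

print(trajectory(6))  # [6, 3, 10, 5, 16, 8, 4, 2, 1]
```

even tiny starting points like 27 (trajectory length 112) show the erratic, pseudorandom-looking behavior that makes this look like an ML problem at all.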

it appears to me the sensible main/natural idea would be to try to create a prediction algorithm for trajectory stopping distance, the key “apparently unpredictable” aspect. stopping distance is not known to be finite for all n, but is finite for all experimentally tested values. suppose one came up with a learning algorithm that predicted the pseudorandom-looking stopping distances. the error scaling would be a key factor: does the error increase, or worse, “blow up” somehow as the model scales to very large numbers? this is a nonobvious & maybe somewhat subtle question that could lead to a lot of study wrt the different ML algorithms.
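here is a minimal baseline sketch of that experiment, not a serious ML model: plain least squares on log n as the single feature (average stopping distance is empirically observed to grow roughly logarithmically), trained on small n and then scored on a held-out range of larger n to probe the error-scaling question. all function names and ranges here are mine, purely illustrative:

```python
import math
import statistics

def stopping_distance(n):
    """total stopping time: number of collatz steps to reach 1"""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

def fit_line(xs, ys):
    """ordinary least squares for y ~ a*x + b"""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# "train" on small n, then probe error on a held-out range of larger n
train, test = range(2, 10_000), range(10_000, 20_000)
a, b = fit_line([math.log(n) for n in train],
                [stopping_distance(n) for n in train])

def mae(rng):
    """mean absolute error of the log-linear predictor over a range"""
    return statistics.mean(abs(stopping_distance(n) - (a * math.log(n) + b))
                           for n in rng)

print(f"slope {a:.2f}, train MAE {mae(train):.1f}, held-out MAE {mae(test):.1f}")
```

the interesting question posed above is whether the held-out error stays flat or grows as the test range moves to larger numbers; swapping in a real learner (random forest, neural net) over richer features, say bit patterns of n or residues mod small primes, would be the natural next experiment.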