How Good Are LLMs at Maths and Physics? Jonathan Oppenheim Ponders

In a recent Substack post (https://superposer.substack.com/p/insufferable-mathematicians), FQxI's Jonathan Oppenheim discusses how far Large Language Models have progressed at solving math problems. His quick summary: "they’re pretty terrible at anything close to research level maths, but the speed at which they’re improving is astounding. But also, they’re being trained to be insufferable and a bit psychopathic."