Mobile phones are incredibly capable -- loads of compute (with neural network acceleration), great connectivity (LTE, BLE, WiFi), front- and rear-facing cameras, LiDAR, microphones, and a built-in screen and speakers for output. AR frameworks provide sophisticated perception capabilities out of the box that are extremely useful for navigation. RoBart uses ARKit's scene meshing to identify navigable areas and communicates them to the LLM by annotating camera images with numbered landmarks, roughly as sketched below.
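To make that concrete, here's a minimal Swift sketch of one way this could work -- not RoBart's actual code. It mines ARKit's scene mesh for faces classified as floor, then projects sampled floor points back into a camera frame as numbered labels. The `NavigableAreaAnnotator` class and its method names are hypothetical; the sketch assumes a LiDAR-capable device, ARKit's `.meshWithClassification` reconstruction mode, and glosses over image orientation and deduplication.

```swift
import ARKit
import UIKit

// Hypothetical sketch: extract floor geometry from ARKit's scene mesh and
// draw numbered landmarks onto a camera frame for an LLM to reference.
final class NavigableAreaAnnotator: NSObject, ARSessionDelegate {
    private var floorPoints: [SIMD3<Float>] = []

    func start(session: ARSession) {
        guard ARWorldTrackingConfiguration.supportsSceneReconstruction(.meshWithClassification) else { return }
        let config = ARWorldTrackingConfiguration()
        config.sceneReconstruction = .meshWithClassification  // per-face semantic labels
        session.delegate = self
        session.run(config)
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for anchor in anchors.compactMap({ $0 as? ARMeshAnchor }) {
            collectFloorCentroids(from: anchor)
        }
    }

    // Walk the mesh faces and keep world-space centroids of faces ARKit
    // classifies as floor. (A real system would deduplicate and grid-sample.)
    private func collectFloorCentroids(from anchor: ARMeshAnchor) {
        let geometry = anchor.geometry
        guard let classification = geometry.classification else { return }
        for faceIndex in 0..<geometry.faces.count {
            // Classification is a per-face byte buffer; read the label for this face.
            let addr = classification.buffer.contents()
                .advanced(by: classification.offset + classification.stride * faceIndex)
            let label = Int(addr.assumingMemoryBound(to: UInt8.self).pointee)
            guard ARMeshClassification(rawValue: label) == .floor else { continue }
            let local = faceCentroid(geometry: geometry, faceIndex: faceIndex)
            let world = anchor.transform * SIMD4<Float>(local, 1)  // anchor space -> world space
            floorPoints.append(SIMD3<Float>(world.x, world.y, world.z))
        }
    }

    // Average the three vertices of a triangle face (assumes 32-bit indices).
    private func faceCentroid(geometry: ARMeshGeometry, faceIndex: Int) -> SIMD3<Float> {
        let indicesPerFace = geometry.faces.indexCountPerPrimitive  // 3 for triangles
        let indexBuffer = geometry.faces.buffer.contents()
        var sum = SIMD3<Float>(repeating: 0)
        for i in 0..<indicesPerFace {
            let byteOffset = (faceIndex * indicesPerFace + i) * MemoryLayout<UInt32>.size
            let vertexIndex = Int(indexBuffer.load(fromByteOffset: byteOffset, as: UInt32.self))
            let v = geometry.vertices.buffer.contents()
                .advanced(by: geometry.vertices.offset + geometry.vertices.stride * vertexIndex)
                .assumingMemoryBound(to: Float.self)
            sum += SIMD3<Float>(v[0], v[1], v[2])
        }
        return sum / Float(indicesPerFace)
    }

    // Project candidate floor points into the current frame and stamp each with
    // a number, producing the annotated image the LLM reasons over.
    func annotatedImage(frame: ARFrame, points: [SIMD3<Float>], viewportSize: CGSize) -> UIImage {
        let base = UIImage(ciImage: CIImage(cvPixelBuffer: frame.capturedImage))
        let renderer = UIGraphicsImageRenderer(size: viewportSize)
        return renderer.image { _ in
            base.draw(in: CGRect(origin: .zero, size: viewportSize))
            for (number, point) in points.enumerated() {
                let screen = frame.camera.projectPoint(point, orientation: .landscapeRight,
                                                       viewportSize: viewportSize)
                ("\(number)" as NSString).draw(at: screen, withAttributes: [
                    .font: UIFont.boldSystemFont(ofSize: 24),
                    .foregroundColor: UIColor.yellow,
                ])
            }
        }
    }
}
```

The appeal of the numbered-landmark scheme is that the LLM never has to produce coordinates: it can just answer "drive to landmark 3," and the robot already knows the world-space point that label corresponds to.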
I'd love to see more projects leveraging phones for robotics prototypes, as I've found the development cycle to be much more pleasant than working with something like ROS and a Jetson board.