Like many other academics, it seems, I spent part of Winter break playing around with ChatGPT, a neural network "which interacts in a conversational way." It has been trained on a vast database to recognize and (thereby) predict patterns, and its output is conversational in character. You can try it by signing up. Somewhat amusingly, you must prove that you, the user, are not a robot. It is also worth noting that ChatGPT remembers/stores your past interactions with it.
It’s uncanny how fluent its dialogic output is. It will also admit ignorance. For example, when I asked it who was “President in 2022,” it responded (inter alia) with “My training data only goes up until 2021, so I am not able to provide information about events that have not yet occurred.”
Notice that its answer goes off the rails: it wrote me that in 2023, so the events of 2022 had hardly "not yet occurred." (It's such a basic mistake that I think claims about it passing, or faking, the Turing test are a bit overblown, although one can see it being within striking distance now.) When I pressed it on this point, it gave me a much better answer: