OpenAI’s latest release, GPT-4, is the most powerful and impressive AI model yet from the company behind ChatGPT and the Dall-E AI artist. The system can pass the bar exam, solve logic puzzles, and even give you a recipe to use up leftovers based on a photo of your fridge – but its creators warn it can also spread disinformation, embed dangerous ideologies, and even trick people into doing tasks on its behalf. Here’s what you need to know about our latest AI overlord.
GPT-4 is, at heart, a machine for creating text. But it is a very good one, and being very good at creating text turns out, in practice, to be much the same as being very good at understanding and reasoning about the world.
And so if you give GPT-4 a question from a US bar exam, it will write an essay that demonstrates legal knowledge; if you give it a medicinal molecule and ask for variations, it will seem to apply biochemical expertise; and if you ask it to tell you a gag about a fish, it will seem to have a sense of humour – or at least a good memory for bad cracker jokes (“what do you get when you cross a fish and an elephant? Swimming trunks!”).
So is this just a new ChatGPT? Not quite. If ChatGPT is the car, then GPT-4 is the engine: a powerful general technology that can be shaped to a number of different uses. You may already have experienced it, because it has been powering Microsoft’s Bing Chat – the one that went a bit mad and threatened to destroy people – for the last five weeks.
But GPT-4 can be used to power more than chatbots. Duolingo has built a version of it into its language-learning app that can explain where learners went wrong, rather than simply telling them the correct thing to say; Stripe is using the tool to monitor its chatroom for scammers; and assistive technology company Be My Eyes is using it to build a tool that can describe the world for blind and partially sighted users and answer questions about it.