
Meta announced on Friday (October 18) the release of a suite of new AI models from its research division. The batch includes “Self-Taught Evaluator,” a model designed to reduce the need for human involvement in developing and checking AI systems.
Meta Self-Taught Evaluator
The new AI tool works by breaking complex problems down into logical steps, improving accuracy on subjects such as science, maths, and coding.
The Facebook parent company first shared details of the evaluator in an August paper, which described its reliance on the “chain of thought” technique used by OpenAI’s models.
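In broad terms, a chain-of-thought evaluator prompts a judge model to reason through a problem step by step before committing to a verdict. The snippet below is a minimal illustrative sketch of that idea, not Meta's implementation; the `call_llm` function and the prompt wording are placeholders for whatever model and template a practitioner actually uses.

```python
# Minimal sketch of a chain-of-thought "LLM as judge" call.
# `call_llm` is a placeholder for any text-generation API, not a real library call.

JUDGE_TEMPLATE = """You are grading two answers to the same question.
Question: {question}
Answer A: {answer_a}
Answer B: {answer_b}
Think through the problem step by step, checking each answer's reasoning.
End with a single line reading exactly "Verdict: A" or "Verdict: B"."""


def call_llm(prompt: str) -> str:
    """Placeholder: send the prompt to an actual model endpoint and return its text."""
    raise NotImplementedError("Connect this to a real model.")


def judge(question: str, answer_a: str, answer_b: str) -> str:
    """Have the judge model reason step by step, then return 'A' or 'B'."""
    reasoning = call_llm(JUDGE_TEMPLATE.format(
        question=question, answer_a=answer_a, answer_b=answer_b))
    # The verdict is read off the final line of the chain-of-thought output.
    return reasoning.strip().splitlines()[-1].removeprefix("Verdict:").strip()
```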
Notably, Meta’s researchers trained the evaluator entirely on AI-generated data, with no human involvement at that stage. This approach can help AI models learn from their own mistakes.
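One way such a loop could look in practice, roughly in line with how the approach has been described publicly, is to create response pairs where the better answer is known by construction and keep only the model's own judgments that agree with that construction. The code below is a hedged sketch under those assumptions; `generate_response` and `corrupt` are hypothetical helpers, and `judge` is the chain-of-thought function sketched earlier.

```python
# Illustrative sketch of building evaluator training data with no human labels.
# The "better" answer is known by construction: a response to the original
# instruction versus a response to a deliberately corrupted version of it.
# All helper names here are placeholders, not Meta's actual code.

def generate_response(instruction: str) -> str:
    """Placeholder: ask a base model to answer the instruction."""
    raise NotImplementedError


def corrupt(instruction: str) -> str:
    """Placeholder: rewrite the instruction so answers to it come out subtly worse."""
    raise NotImplementedError


def build_training_pairs(instructions: list[str]) -> list[dict]:
    """Collect (chosen, rejected) pairs for fine-tuning the evaluator."""
    pairs = []
    for instruction in instructions:
        good = generate_response(instruction)          # answer to the real task
        bad = generate_response(corrupt(instruction))  # answer to the corrupted task
        verdict = judge(instruction, good, bad)        # chain-of-thought judgment
        # Keep only judgments that prefer the answer known to be better by
        # construction; these become synthetic labels for training the evaluator.
        if verdict == "A":
            pairs.append({"question": instruction, "chosen": good, "rejected": bad})
    return pairs
```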
Citing sources, Reuters reported that such self-improving models could streamline Reinforcement Learning from Human Feedback (RLHF), an expensive process that requires specialised human annotators to label data and verify that answers to complex queries are accurate.
"We hope, as AI becomes more and more super-human, that it will get better and better at checking its work so that it will be better than the average human," said Meta researcher Jason Weston.
"The idea of being self-taught and able to self-evaluate is crucial to the idea of getting to this sort of super-human level of AI," he said.
Meta also released other AI tools, including an update to its image-identification Segment Anything model, a tool that speeds up LLM response generation times, and new datasets.