MIT researchers developed DisCIPL, a method that pairs a large "planner" language model (LM) with smaller "follower" models to tackle complex tasks: the planner decomposes a problem, and the followers carry out the pieces. The approach improves accuracy while substantially reducing computational cost, matching or outperforming much larger models on reasoning benchmarks and offering a scalable recipe for efficient language processing.
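The planner/follower division of labor can be illustrated with a minimal sketch. This is not the actual DisCIPL implementation; the `planner` and `follower` functions below are hypothetical stand-ins for a large decomposing model and small executing models, shown only to make the control flow concrete.

```python
from concurrent.futures import ThreadPoolExecutor

def planner(task: str) -> list[str]:
    """Stand-in for a large LM: decompose a task into sub-tasks.

    Here we simply split a comma-separated task description; a real
    planner would generate the decomposition itself.
    """
    return [part.strip() for part in task.split(",")]

def follower(subtask: str) -> str:
    """Stand-in for a small LM: handle one sub-task cheaply."""
    return subtask.upper()  # placeholder for the follower's output

def solve(task: str) -> str:
    """Planner splits the task; followers run the pieces in parallel."""
    subtasks = planner(task)
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(follower, subtasks))
    return " | ".join(results)

print(solve("summarize intro, check citations, draft conclusion"))
```

The design point is that only the decomposition step needs the expensive model; each sub-task is small enough for a cheaper model, which is where the cost savings come from.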













