One of the paper’s authors, Meredith Whittaker, summarizes:
Smaller AI models often perform better than big models in context.
Obsession with bigness has severe collateral consequences, from climate costs, to concentrated power, to more surveillance, to the capture of AI research.
(See the arXiv “Access Paper” sidebar menu to read it.)
Heh. That’s the paper I linked to in the ‘AI - is bigger better?’ topic. This whole, rather mindless and often unethical, world march to planetary-scale computation is an old bugaboo of mine. One of the paper’s authors, Dr. Sasha Luccioni, is of a similar mind.
In Canada, AI research has a long tradition of small, efficient, symbolic architectures. The FET was first patented here in 1925. Prolog’s roots trace to Alain Colmerauer’s work at the Université de Montréal around 1970, though the language itself emerged at Marseille in 1972. I started making my own humble contributions in a canoe many years ago. You can’t get more Canadian than that.
BTW, the hero of the Palm Pilot era, Jeff Hawkins, went on to champion sparse, efficient, brain-inspired AI at Numenta. You could spend a lifetime studying his lifetime.