The problems with AI bigness

Spotted on Mastodon: a scholarly paper (edit: oops) already mentioned in the recent TASAT topic 'A.I. - is bigger better?'

One of the paper’s authors, Meredith Whittaker, summarizes:

  1. Smaller AI models often perform better than big models in context

  2. Obsession with bigness has severe collateral consequences, from climate costs, to concentrated power, to more surveillance, to the capture of AI research.

(See the arXiv “Access Paper” sidebar menu to read it.)

Heh. That’s the paper I linked to in the ‘AI - is bigger better?’ topic. This whole, rather mindless, often unethical world march to planetary-scale computation is an old bugaboo of mine. One of the paper’s authors, Dr. Sasha Luccioni, is of similar mind.

In Canada, AI research has a long tradition of small, efficient, symbolic architectures. The FET was first patented here in 1925. Prolog has roots in Alain Colmerauer’s work at the Université de Montréal around 1970. I started making my own humble contributions in a canoe many years ago. You can’t get more Canadian than that :smiley:

BTW, the hero of the Palm Pilot era, Jeff Hawkins, went on to lead the sparse/efficient AI movement. You could spend a lifetime studying his lifetime.

Augh, embarrassing! :sweat_smile: I remembered your post but didn’t recognize it was the same link.

I’ll edit myself a bit, and leave it here as another entry point.

Yes, please do leave yours up. I’ve sent Sasha Luccioni the link (would be nice if she dropped in to straighten us all out :slight_smile: