“On Tuesday, Meta AI unveiled a demo of Galactica, a large language model designed to ‘store, combine and reason about scientific knowledge.’ While [Galactica was] intended to accelerate writing scientific literature, adversarial users running tests found it could also generate [scientific-sounding but racist] nonsense. After several days of ethical criticism, Meta took the demo offline.”
—Ars Technica article: “New Meta AI demo writes racist and inaccurate scientific literature, gets pulled”
There was an amazing example on Hacker News (Y Combinator’s forum) where someone asked Galactica for an article about “bears living in space,” and Galactica made up a whole account of Korolev having chosen a specific kind of bear for the Soviet space program. Bears…in…spaaaace!
Emily M. Bender commented: “Narrator voice: LMs have no access to ‘truth’, or any kind of ‘information’ beyond information about the distribution of word forms in their training data. And yet, here we are. Again.”