Deflating the AI Bubble
Angèle Christin, Associate Professor of Communication and Stanford HAI Senior Fellow
The billboards in San Francisco say it all: AI everywhere! For everything! All the time! The slightly manic tone of these ads gives a sense of the hopes – and immense investments – placed in generative AI and AI agents.
So far, financial markets and big tech companies have doubled down on AI, spending vast amounts of money and human capital, and building gargantuan computing infrastructures to sustain AI growth and development. Yet already there are signs that AI may not accomplish everything we hope it will. There are also hints that AI, in some cases, can misdirect, deskill, and harm people. And there is data showing that the current buildout of AI comes with massive environmental costs.
I expect that we'll see more realism about what we can expect from AI. AI is a fantastic tool for some tasks and processes; it's a problematic one for others (hello, students producing final essays without doing the readings!). In many cases, the impact of AI is likely to be moderate: some efficiency and creativity gain here, some additional labor and tedium there. I'm particularly excited to see more fine-grained empirical studies of what AI does and what it can't do. This isn't necessarily the bubble popping, but the bubble might not be getting much bigger.
A “ChatGPT Moment” for AI in Medicine
Curtis Langlotz, Professor of Radiology, of Medicine, and of Biomedical Data Science, Senior Associate Vice Provost for Research, and Stanford HAI Senior Fellow
Until recently, developing medical AI models was extremely expensive, requiring training data labeled by well-paid medical experts (for example, labeling a mammogram as either benign or malignant). New self-supervised machine learning methods, now widely used by the developers of commercial chatbots, don't require labels and have dramatically lowered the cost of medical AI model training.
Medical AI researchers have been slower to assemble the massive datasets needed to capitalize on self-supervision because of the need to protect the privacy of patient data. But self-supervised learning from significantly smaller datasets has shown promise in radiology, pathology, ophthalmology, dermatology, oncology, cardiology, and many other areas of biomedicine.
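To make the contrast concrete, here is a minimal sketch (in PyTorch; the toy shapes, networks, and masked-reconstruction objective are illustrative assumptions, not any specific medical model): the supervised loss needs an expert-provided label for every image, while the self-supervised loss uses the image itself as the training signal.

```python
# Illustrative sketch only (assumed PyTorch; toy shapes and networks).
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 256), nn.ReLU())
images = torch.randn(8, 1, 64, 64)  # stand-in batch of grayscale scans

# Supervised: every image needs an expert-provided label
# (e.g., 0 = benign, 1 = malignant), which is what makes the data so costly.
labels = torch.randint(0, 2, (8,))
classifier_head = nn.Linear(256, 2)
supervised_loss = F.cross_entropy(classifier_head(encoder(images)), labels)

# Self-supervised (masked reconstruction): hide part of each image and
# train the model to fill it back in. The image itself is the target,
# so no expert labels are required.
decoder = nn.Linear(256, 64 * 64)
mask = (torch.rand_like(images) > 0.5).float()  # randomly hide half the pixels
reconstruction = decoder(encoder(images * mask)).view_as(images)
self_supervised_loss = F.mse_loss(reconstruction, images)
```

An encoder pretrained this way can later be fine-tuned on whatever small labeled set exists, which is part of why the approach is promising where labeled training data are scarce.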
Many of us will remember the magic moment when we discovered the incredible capabilities of chatbots trained with self-supervision. We'll soon see a similar “ChatGPT moment” for AI in medicine, when AI models are trained on massive high-quality healthcare data rivaling the scale of data used to train chatbots. These new biomedical foundation models will boost the accuracy of medical AI systems and will enable new tools that diagnose rare and uncommon diseases for which training datasets are scarce.
