2024-02-29T03:10:09+00:00 | 🔗
@lpachter In that sense, "Foundation Models" are not a specific technical concept, but more a name we apply to models meeting certain criteria (generalizability). But most papers claiming foundation models in biology don't seem to meet those criteria.
2024-02-29T03:06:59+00:00 | 🔗
@lpachter My take: people generally agree on what "foundation models" for language and images are. Large, general models trained on large bodies of data: transformers for text, autoencoders and latent diffusion models for images. For biology, no one has found the same generalizability...
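[Editor's note: a minimal sketch, not from the thread, of the "generalizability" the post refers to: one pretrained general-purpose language model reused zero-shot on a task it was never fine-tuned for. The model name and candidate labels are illustrative assumptions, not anything the author endorses.]

```python
from transformers import pipeline

# Load a general-purpose pretrained model once...
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

# ...and reuse it on an arbitrary downstream task with no task-specific training.
# This reuse-without-retraining is the "generalizability" criterion the post says
# most claimed biology foundation models do not yet demonstrate.
result = classifier(
    "The gene expression profile suggests an immune response.",
    candidate_labels=["immunology", "neuroscience", "metabolism"],
)
print(result["labels"][0], result["scores"][0])
```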
2024-02-28T23:18:19+00:00 | 🔗
AI is like furniture. Everyone will have it. There will be functional AI and aesthetic AI. There will be Ikea AI, handcarved AI, and dorm room AI. What there won't be is a monopoly on selling tables. What would that even mean -- a monopoly on tables?
2024-02-28T15:29:43+00:00 | 🔗
Robert Caro talks about turning every page. Boxes and boxes in the LBJ library. Carts and carts. I thought, man, he would really benefit from an LLM. Then I thought, those aren't digitized, and he ~could~ hire an assistant or even many, but he doesn't. Why?
2024-02-28T05:13:51+00:00 | 🔗
Why will AI never be a coworker? Coworkers are at a ~similar~ level! If Management has one AI at that level, why would they keep any humans at that level too, when they can just copy-paste the AI? Having an AI coworker makes no sense!
2024-02-28T05:12:37+00:00 | 🔗
I'm sure their tech is going to be incredible. But AI is never going to be a coworker. It is either going to be a tool or, if it gets good enough, a subordinate. And if it gets really good, the author of some godawful emails from Management. Never a coworker though. https://t.co/kmwKzymbaf
2024-02-27T02:57:09+00:00 | 🔗
@timschlomi Most of that is pretty anodyne. But also, ya know, if you’re testing cancer drugs you kinda have to give the animal cancer first, which is the unsafe part. And the drugs are supposed to kill stuff, which is also unsafe. Biology is mysterious, so we are always very far from sure.
2024-02-27T02:11:33+00:00 | 🔗
I don’t understand why it’s such a common trend to try to get AI to generate and replace reality. That seems like a surefire way to get AI out of alignment and into crazy world. We need AI to be MORE attached to reality, not less. No animal testing, only human testing?! What? https://t.co/iMaoeUg5nc
2024-02-27T01:00:59+00:00 | 🔗
Conversational AI will be made by training directly on voice, not some weird hodge-podge of LLMs and TTS. Conversations aren't read off teleprompters. And a conversation is just as bad when one person is making a speech.
2024-02-25T21:56:00+00:00 | 🔗
@tunguz Thank you for your insight