2024-02-07T00:07:10+00:00 | 🔗
@rauchg Python serverless functions include .next, .cache, node_modules, and public by default :( I spent lots of time debugging, still love Next though
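One way to keep those directories out of the function bundle is a `.vercelignore` file at the project root; this is a sketch, assuming a standard Next.js project layout and that the Python function doesn't need any of these paths at runtime:

```text
# .vercelignore — exclude build artifacts and JS deps from the Python function bundle
.next
.cache
node_modules
public
```

Vercel's `vercel.json` also supports per-function `excludeFiles` globs if only some functions should skip these paths.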
2024-02-06T21:02:08+00:00 | 🔗
@yacineMTB fold my laundry and dishes too?
2024-02-05T17:15:00+00:00 | 🔗
Why is this? Maybe self-improvement for LLMs evolves the language as well, out of alignment with our use of language. Language depends on the context and LLMs don’t have context. https://t.co/CMVzINaANB
2024-02-04T00:38:13+00:00 | 🔗
@kendrictonn I thought, "How hard could it be?" And then I tried painting one. It's extraordinarily difficult to get the transitions and the nice variations the way he does it.
2024-02-03T18:12:17+00:00 | 🔗
For companies this means that there is very little differentiation in RAG/LLMs and much more differentiation in the data, e.g. having all the important experts on board. Domain expertise.
2024-02-03T18:12:17+00:00 | 🔗
Corollary: even if you want the LLM to have RAG over all the journals, what you are really valuing is the opinion/expertise of the person who curates the RAG pipeline. Likely that has more to do with expertise in the field than general RAG skills.
2024-02-03T18:12:17+00:00 | 🔗
Say you are looking to talk to a chatbot that’s an expert on a given field. What would you value more: an LLM with information on all the journals in that field, or an LLM with curated information from the top five experts?
2024-02-02T18:29:25+00:00 | 🔗
Instead of raising prices for ChatGPT to see what consumers will pay, they are degrading performance to see what consumers will pay $20 for.
2024-02-02T16:01:32+00:00 | 🔗
RT @WillManidis: within months you will be able to buy genomics data from 14 million americans for +/- $200m? the inevitable fire sale of…
2024-02-02T15:34:37+00:00 | 🔗
@litu_rout_ @alexcarliera Keep up the great work! Looking forward to seeing the code released!