2024-10-05T00:03:49+00:00 | 🔗
@pitdesi It's actually legal to bet on Kalshi dot com now... I'm telling you because I've placed my bets already 🤣🤣🤣
2024-10-04T23:52:57+00:00 | 🔗
New Thai-caver-esque story developing
2024-10-04T23:30:47+00:00 | 🔗
Run far far away! https://t.co/PzrFXZVk0g
2024-10-04T21:45:28+00:00 | 🔗
BTW this isn't a ding on any of the new models... I just also happen to think that AGI was already reached around gpt-3... gpt-4 was another step function above that, and general intelligence in LLMs will plateau (or already has) there https://t.co/AyVtxtLFNu
2024-10-04T21:30:01+00:00 | 🔗
@MickeyShaughnes synthetic data is good in the sense that if your foundation is, say, gpt-4, you can retrain, say, a gpt-o1 model or a gpt-4o-canvas that has performance equivalent to gpt-4 + chain of thought + multi-agent checking + tool use + few-shot + prompt eng, but not exceeding that
2024-10-04T21:13:57+00:00 | 🔗
Good prompting is asking for what you want
Great prompting is middle management
God-tier prompting is hypnotism
2024-10-04T18:56:38+00:00 | 🔗
that it seems more tenable for startups
2024-10-04T18:56:37+00:00 | 🔗
I'm a no on world simulators... most productivity gains have come from taking things into controlled environments, not from working within uncontrolled environments. Automated ports and nuclear fusion >> humanoid laundry gardening robots... the allure of the latter is...
2024-10-04T18:44:10+00:00 | 🔗
I was so tired I thought it was Sam Altman and went back to sleep https://t.co/7EEcSBoknj
2024-10-04T18:31:15+00:00 | 🔗
But it's not about scaling compute… you can't scale data… And BTW scaling synthetic data like what 4o canvas did is more akin to multi-agent design/prompt engineering/fine-tuning than scaling data. Does nothing to affect foundation model capabilities, which plateau at gpt-4 https://t.co/Wvf7wwfmhs