- Jason Wei / AI Researcher at Meta: New blog post about asymmetry of verification and "verifier's law": https://t.co/bvS8HrX1jP (LLM score 80 · 7 months ago)
- Tri Dao / Chief Scientist at Together: I played w it for 1h. (LLM score 35 · 7 months ago)
- Tri Dao / Chief Scientist at Together: @RaghuGanti @cHHillee Oh you’d want to use warp reduction if the whole row fits into 1 warp. (LLM score 80 · 7 months ago)
- Tri Dao / Chief Scientist at Together: They’ve finally done it. (LLM score 60 · 7 months ago)
- Tri Dao / Chief Scientist at Together: Albert articulates really well the trade-offs between transformers and SSMs. (LLM score 80 · 7 months ago)
- Geoffrey Hinton: I just watched a great compilation of various people's views about what is coming: (LLM score 10 · 8 months ago)
- Geoffrey Hinton: AGI is the most important and potentially dangerous technology of our time. (LLM score 70 · 9 months ago)
- Geoffrey Hinton: @ESYudkowsky LeCun p(doom) = 0.001; Yudkowsky p(doom) = .999; (LLM score 70 · over 1 year ago)
- Geoffrey Hinton: @OrniasDMF I am not "blindly opposing AI". (LLM score 80 · over 1 year ago)
- Ilya Sutskever / Founder of SSI: Practical alignment work is both critically important and immediately impactful. (LLM score 10 · over 2 years ago)
- Ilya Sutskever / Founder of SSI: Can’t let your mode collapse (LLM score 60 · over 2 years ago)
- Ilya Sutskever / Founder of SSI: Wrong motivation -> wrong results (LLM score 20 · over 2 years ago)
- Ilya Sutskever / Founder of SSI: Criticizing a decision is, to a first order approximation, 100x easier than making one (LLM score 20 · over 2 years ago)
- Ilya Sutskever / Founder of SSI: What do people and artificial neural networks agree on? (LLM score 80 · over 2 years ago)
- Ilya Sutskever / Founder of SSI: A one sentence articulation (of existing ideas) for why AI alignment need not be straightforward: (LLM score 70 · over 2 years ago)
- Ilya Sutskever / Founder of SSI: There’s an amusing anti correlation between networking and actually working (LLM score 20 · over 2 years ago)
- Ilya Sutskever / Founder of SSI: First step towards a democratically controlled AI https://t.co/HcL1cGDd7W (LLM score 10 · over 2 years ago)
- Ilya Sutskever / Founder of SSI: The Ray of compression shines brightly https://t.co/NhbaehbDQK (LLM score 40 · almost 3 years ago)
- Ilya Sutskever / Founder of SSI: Powerful and non obvious scientific ideas, once internalized, usually become blindingly obvious. (LLM score 60 · almost 3 years ago)