I think there's strong logic to your position. Every day we're getting more data showing open source catching up. Very compelling, and it gives me a sense of relief.
I appreciate this outline of your thesis. The lack of a standard definition for what constitutes ASI/AGI makes it challenging to write about. In a world of true superintelligence, I'm unsure how any human investors can (legally) keep finding alpha.
Love this perspective. I find it interesting that many technologists are so enamored with the idea of transcending physical existence that they forget about physical boundaries. It is as if we need to shed our physical selves, which I'm not that into.
A slight nuance to the main points, and maybe this is why some are so worried about ASI/AGI. Maybe it's not about the race to ubiquitous ASI and first-mover advantage. Maybe it's more about the time until a person tries to use it for nefarious reasons.
I say this pretty openly. As a consumer, researcher, and person who uses technology, I struggle to know what's real in digital spaces. Authentication and security will remain important areas of investment, not to protect us from machines, but to protect us from people who want to use the machines against others.
Interesting thesis. You acknowledge that compute and physical constraints will matter ‘as much or more’ than the algorithm, but then largely drop that thread. Isn’t that the crux? AGI/ASI isn’t just a model artifact. It’s a civilizational-scale infrastructure deployment. Even if open weights leak, inference at that scale is gated by power, memory, cooling, and geopolitical control. Shouldn't we be asking not who builds ASI first, but who can actually build the infrastructure to run it?