maxnajer 4 hours ago

Everyone is obsessed with the 'next' AI revolution, where we'll train and run multimodal models inside physical bodies, like humanoids.

But let's try to imagine the revolution after the revolution: fuse all types of live geospatial data (satellite, drone, IoT, etc.) into a cohesive ontology, have (literal) world models understand that ontology natively, and this system can coordinate other systems and robotics on the ground based on what our 'eyes in the sky' see, in real time. Without human intervention.
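A minimal sketch of what that fusion layer might look like, assuming a hypothetical unified `Observation` schema and a toy grid-based `fuse` function (the field names, cell size, and time window are all illustrative, not any real system's API):

```python
from dataclasses import dataclass
from typing import List

# Hypothetical unified schema: every feed (satellite, drone, IoT) is
# normalized into one observation type, so a downstream world model can
# reason over a single ontology instead of N incompatible formats.
@dataclass(frozen=True)
class Observation:
    source: str        # "satellite" | "drone" | "iot"
    lat: float
    lon: float
    timestamp: float   # unix seconds
    kind: str          # ontology concept, e.g. "vehicle", "fire", "crowd"
    confidence: float

def fuse(observations: List[Observation],
         cell_deg: float = 0.01,
         window_s: float = 60.0) -> List[Observation]:
    """Group observations of the same concept that fall in the same
    spatial cell and time window; keep the highest-confidence report."""
    best = {}
    for ob in observations:
        key = (round(ob.lat / cell_deg), round(ob.lon / cell_deg),
               int(ob.timestamp // window_s), ob.kind)
        if key not in best or ob.confidence > best[key].confidence:
            best[key] = ob
    return list(best.values())

# Example: satellite and drone both spot the same vehicle; the fused
# view keeps one record, preferring the more confident drone sighting.
obs = [
    Observation("satellite", 48.8566, 2.3522, 1000.0, "vehicle", 0.6),
    Observation("drone",     48.8567, 2.3521, 1010.0, "vehicle", 0.9),
    Observation("iot",       48.9000, 2.4000, 1005.0, "fire",    0.8),
]
fused = fuse(obs)
```

The real version would obviously need a proper geospatial index and a learned association model rather than grid rounding, but the point is the shape: many heterogeneous feeds, one ontology, one fused world state for the robots to act on.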

For me this is a perfectly logical next step toward a Kardashev Type I scenario. Thoughts?