It's a question that has troubled me for a long time: what does the future look like for software engineers? For techies? One field I've really wanted to get into, but which feels out of reach right now, is neuroscience and psychology. Understanding the brain feels increasingly important when we are building systems that try to replicate it.

But here's where I stand on AI broadly. Humans are about to be handed a powerful intelligence, what Dario calls "powerful AI". I agree with the name; I hate the term AGI. It doesn't capture what's actually happening. What's happening is that we are building something that can run a million instances and scenarios in parallel, working through every idea, no matter how far out of the box, thoroughly and relentlessly. That is the best thing that could happen for science. Humanity is headed towards a new form of society. Bureaucracy and governments have held back human progress for decades; we've missed out on breakthroughs because a few people wanted money in their pockets. But with AI, I believe humans are going to make extraordinary progress. And the field where this matters most? Biology.
Biology is my second favourite area of science, but the thing that interests me most within it came after watching the development of AlphaFold and AlphaProteo, and hearing John Jumper speak at the YC AI Startup School. Intelligence applied to biology could help humans figure out how to survive: alter DNA to live on other planets, extend our lifespans. Human life expectancy has been our limiting constraint forever. And even if we don't crack extended lifespans in our generation, what about hyperfreezing? Imagine going to sleep and waking up a hundred years later, in a world where compounding intelligence has delivered fifteen centuries' worth of progress. You don't just wake up; you have to recover from the psychological shock of it. So we figure out a way to embed thoughts, knowledge, and memory directly into the human brain. We use our own brains as the supercomputers, because a lot of what humans do is still deeply personal to them. That part cannot be outsourced to a machine.
But my first favourite? Physics. The one field where scientific progress ignites the little kid in me. Living on Mars. Making humans an interplanetary species. Travelling beyond our galaxy. Here's something people don't talk about enough: AI is trained on past data, and there is no denying that ancient scriptures contain science we still cannot figure out to this day. The Vedas hold knowledge that hasn't been fully decoded. If we learn how to translate and interpret what's in them using intelligence, we could enter an age where engines become more efficient, where radiation and energy are controlled, where we figure out faster transportation and fundamentally better ways of living. Biology and physics together, amplified by intelligence.
It worries me sometimes that the field I have gotten into might not even exist in two months. I have no idea what I will end up doing after that. But what I do know is that it will be important to learn the neurological side of computer science: to build computers that go beyond the limits of human intelligence, systems modelled on our own cognitive architecture. We give them memories, experiences, and knowledge, exactly what a human has. We create a kind of consciousness inside AI that lets it harness all the tools we provide and work the way a human does: from experience, memory, and knowledge. I only half understand how that would work. But the possibilities are endless, and so are the associated risks.
Our society has been bound by governments for too long. I feel the vision Thiel and Musk have for Mars is really about setting up a society with libertarian beliefs. No innovation is constrained. You cannot be controlled. Free markets. People are smart. They are competent, unlike many we see in today's world. This would bring about the change we need. The people who can change the world are not the incompetent ones; they are the ones who might look dangerous but are deeply capable. The ones who will become smarter, harness intelligence to advance humanity, and take the human race forward. Forward to the point where, instead of us asking aliens "how did you survive your technological adolescence without destroying yourselves?", we are the ones helping another species figure out the answer. We need a world where only competent people make the decisions that matter. Superiority based not on colour, creed, or caste, but on intelligence and competence. People who are not taking the world from 1 to n, but from 0 to 1. Innovating. Creating something never made before. Thinking about the world ten years into the future and asking one question: what will we need? And building for it right now.
I am curious to see who comes out on top over the next ten years: who will ace this race and build the most powerful artificial intelligence. More than anyone else, my hopes are on the researchers at Anthropic, for their risk analysis, their models, their strategy, and for actually showing how AI can and will be used. OpenAI and Google will keep chasing from behind. But the way Anthropic thinks about this problem feels different. It feels right.