Many of the theories and basic building blocks of today’s artificial intelligence systems have been around for decades, but only in the last few years have they begun to bear fruit.
- Google has incorporated AI into hundreds of its products, perhaps most impressively in Google Translate.
- Tesla’s cars drive themselves on highways today, and soon everywhere, saving lives.
- Humanity lost to DeepMind’s AlphaGo at Go, a game experts claimed would always be too hard for machines.
These are just a tiny sample. With new research breakthroughs weekly, it’s only a matter of time before this technology touches every aspect of life.
Let’s imagine for a moment that we succeed in teaching our AI systems to improve their own designs. This is already happening in a crude way, but eventually the AI will be making research breakthroughs of its own. Even if that AI starts out dumb, it will eventually reach human-level intelligence, and most likely it won’t stop there. In that case, humanity will have made the last invention it ever needs to make.
But like any powerful technology, AI’s promise is as great as its potential for harm if it’s developed carelessly or controlled by bad actors. Nick Bostrom notes in Superintelligence that a lazy programmer could accidentally create an evil AI: it’s easy to teach a hypothetical AI to want to create as many paperclips as possible, and extremely difficult to align an AI with whatever humanity’s goals are. I’m a human and I don’t even know what humanity’s goals are, let alone how to measure progress toward them! And in the paperclip case, the agent’s optimal strategy may be to destroy all humans and cover the surface of the Earth with paperclip factories.
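To make the misspecification concrete, here’s a tiny, entirely hypothetical Python sketch (the reward function, toy state, and actions are my own invention, not anything from Bostrom’s book). A greedy optimizer given a reward that counts only paperclips converts every available resource, because nothing in the objective says not to:

```python
# Toy illustration of a mis-specified objective (hypothetical).
# The reward counts paperclips and nothing else, so the optimizer
# happily consumes every other resource to raise it.

def reward(state):
    return state["paperclips"]  # no term for anything else humans value

def step(state, action):
    new = dict(state)
    if action == "build_paperclips":
        converted = min(new["resources"], 10)
        new["resources"] -= converted   # resources humans might need
        new["paperclips"] += converted
    return new  # "do_nothing" leaves the world as-is

state = {"paperclips": 0, "resources": 100}
for _ in range(10):
    # Greedy agent: always pick the action whose outcome scores highest.
    state = max((step(state, a) for a in ["build_paperclips", "do_nothing"]),
                key=reward)

print(state)  # {'paperclips': 100, 'resources': 0}
```

The hard part isn’t writing `reward`; it’s that any simple `reward` we know how to write leaves out almost everything we actually care about.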
OpenAI
OpenAI’s goal is to develop human-level artificial intelligence in a way that benefits humanity. As a non-profit with $1 billion in funding, it is not constrained by short-term profit motives like a consumer company, nor by grant applications like an academic institution.
In January, I worked there for a week to see if it might be a good fit. At 50 research scientists and engineers, it felt small and agile. Teams work on exciting projects like household robotics, fundamental research, and benchmarking suites for state-of-the-art reinforcement learning algorithms.
In the first two days, I built a new service that notifies researchers when their jobs fail, saving them from digging through logs. I then surfaced recent errors across all of a researcher’s jobs on their dashboard. I was impressed by the thoughtful balance of process versus execution. I wasn’t impressed by the lack of documentation, though in retrospect it didn’t stop me from building, in under a week, a service now used by the entire company.
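For flavor, here’s a minimal sketch of the shape that service took (the real one was internal; `list_jobs` and `notify` below are hypothetical stand-ins, not OpenAI’s actual APIs):

```python
import time

def list_jobs():
    # Hypothetical stand-in for the cluster's job-status API.
    return [
        {"id": "job-1", "owner": "alice", "status": "failed",
         "recent_errors": ["CUDA out of memory"]},
        {"id": "job-2", "owner": "bob", "status": "running",
         "recent_errors": []},
    ]

def notify(owner, job):
    # Hypothetical stand-in for email/chat delivery.
    print(f"to {owner}: job {job['id']} failed: {job['recent_errors']}")

def watch(poll_seconds=60, max_polls=None):
    seen = set()  # only notify once per failed job
    polls = 0
    while max_polls is None or polls < max_polls:
        for job in list_jobs():
            if job["status"] == "failed" and job["id"] not in seen:
                seen.add(job["id"])
                notify(job["owner"], job)
        polls += 1
        time.sleep(poll_seconds)

watch(poll_seconds=0, max_polls=1)  # one pass for demonstration
```

The dashboard piece is the same idea in reverse: instead of pushing failures to owners, collect each owner’s recent errors in one place so they never have to trawl raw logs.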
My coworkers were friendly and driven. During the interviews on my last day, I was pretty sure I had bombed. They really know their stuff! But I guess I ended up doing okay for an interview setting.
One of my biggest concerns was whether now is the right time to try to solve general AI. My takeaway is that the intermediate byproducts of the research alone are so valuable that pushing the frontier is easily worth it. I was also worried that the end goal is so big and daunting that the funders could easily get upset if progress isn’t made quickly. I realized that of course they understand it’s a huge undertaking, and the intermediate milestones they laid out are both tractable and exciting.
Weren’t you planning to start a company??
After deep introspection, my primary motivations for starting a company are:
- create something new in the world
- hands not tied (not to be confused with having control)
- care about the mission
- work with great people
So I decided to pause my startup ambitions to take part in my generation’s Manhattan Project. Human-level artificial intelligence, dreamed of since the 1950s, has never looked so achievable. It promises to transform every industry and every aspect of life.
Working at OpenAI for a trial week solidified my view that we are only just scratching the surface of what we’ll eventually be able to do with these techniques, and only just beginning to understand what kind of infrastructure will get us there. That means my potential marginal value is high.
At Google, there were too many constraints to satisfy: big-company reputation, process, profit requirements, seniority concerns, and so on. Startups have their own issues: distribution, hiring, fundraising. Maybe there will be constraints at OpenAI too, but for now I don’t think they’ll be a problem.
So let’s make it happen! Bring on the future!