How do we define AGI?
Microsoft and OpenAI recently stirred controversy by defining artificial general intelligence not through technical achievement but through profit: specifically, $100 billion in annual revenue. This benchmark appears in their partnership agreement, essentially allowing Microsoft to maintain access to OpenAI's innovations until that financial target is met, likely well into 2029.
It is jarring that such a profound technological milestone should be reduced to a monetary target. Yet while this lawyerly definition protects Microsoft's interests, it raises a deeper question: what exactly is AGI, and how close are we to achieving it?
"As for when AGI arrives and how we define it, I think by any reasonable definition we are still far away," DeepL CEO Jarek Kutylowski tells me during our recent conversation. He warns against early enthusiasm: "We are very impressed by technology at first, but then you have to go into the details to understand what its limitations are."
The definition challenge
While the Oxford English Dictionary offers a broad definition of AGI as "a machine that can exhibit behaviour as intelligent as, or more intelligent than, a human being," industry leaders remain divided. OpenAI's Sam Altman describes AGI as "the equivalent of a median human that you could hire as a co-worker," while AI pioneer Fei-Fei Li, head of the Stanford Institute for Human-Centered AI and CEO of World Labs, admits, "I frankly don't know what AGI means."
In the field of language translation, Kutylowski offers a practical perspective: "If we want translations that match human translations, we must have a machine that can understand the world as well as a human can, which may be the definition of AGI." DeepL is among the most capable translation tools available today. Yet despite DeepL's advances bringing it "an epsilon away" from human-level translation in some contexts, genuine understanding remains elusive.
The timeline debate
The dispute over AGI timelines stems largely from the definitional problem above: how we define AGI. If we focus solely on cognitive abilities, the ambitious predictions from Sam Altman, Elon Musk, and Anthropic CEO Dario Amodei of reaching AGI within two to four years seem more plausible. But if we include physical abilities, even given the skills of robots built by the likes of Boston Dynamics, we remain far away.
However, Kutylowski raises a more fundamental concern: "If AGI can replace anyone's brain, then we must rethink our society."
The human factor
My own experience with AI predictions has taught me humility. In a 2018 book, I predicted that self-driving vehicles would dominate the roads within 5 to 15 years. As the technology progressed, I underestimated human resistance to change. Although autonomous vehicles demonstrate better safety records (except at dawn and dusk), the social impact of displacing 5% of the workforce would be severe, so perhaps slow adoption was a saving grace.
Kutylowski frames this challenge philosophically: "Our current value system is very focused on what we have achieved, what we are doing, what our contribution to society is." As AI expands its abilities, he asks, "How do we feel fulfilled when this falls away?" This question looms large in debates over universal basic income.
While cognitive AGI may arrive within four years, its integration into society is likely, and fortunately so, to proceed gradually. The institutional inertia I once criticized, especially in large companies, may actually serve a vital purpose: allowing society to adapt at a manageable pace.
The reality is that AGI's arrival will not be marked by a revenue milestone or a single technological breakthrough, but by our collective readiness to redefine human potential in an augmented world. Perhaps that is a better measure of progress than any balance sheet.