Will AI surpass humans in a few years?
Elon Musk, the well-known American entrepreneur, recently declared that by 2025 or 2026, AI (artificial intelligence) could be smarter than the smartest human. This prediction has surprised many people: is AGI (artificial general intelligence), AI as intelligent as or even more intelligent than humans, already close at hand?
A few years ago, most scientists expected AGI to be realized only after decades or even centuries. Over the past year and a half, the great success of the American AI company OpenAI with large models has led many optimistic industry insiders to shorten their AGI timelines to 5 to 10 years. Musk’s two-year horizon is clearly even more aggressive. Judging from the current state of AI development, at least one major bottleneck and two major risks stand in the way of AGI.
The major bottleneck is energy. Today’s popular large AI models are still in the “brute force works miracles” stage, relying on the sheer accumulation of computing power to make intelligence emerge. In 2020, the compute demand for training the most powerful large model was on the order of 10²³ floating-point operations; by 2024, the pre-training compute demand of the most powerful models had reached the order of 10²⁶ floating-point operations, and it continues to double every three to four months. Compute intensity means energy intensity. According to foreign media reports, OpenAI’s ChatGPT chatbot consumes more than 500,000 kilowatt-hours of electricity per day to process about 200 million user requests, more than 17,000 times the daily electricity consumption of an average U.S. household. Against this enormous energy consumption, an adult human brain runs on only about 20 watts, a striking advantage in energy efficiency. Measured by energy consumption, AI still has a long way to go to reach human-level intelligence, so much so that some experts pin their hopes on the “energy freedom” that future breakthroughs in controlled nuclear fusion might bring, while others count on rapid advances in chips and algorithms; neither is likely in the short term.
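For a rough sense of scale, the figures above can be combined into a back-of-envelope calculation. The sketch below uses only the numbers cited in this article (500,000 kWh per day, 200 million requests per day, a roughly 20-watt brain); the household figure is implied by the 17,000× comparison, and all results are approximate.

```python
# Back-of-envelope energy comparison using the figures cited above (all approximate).

daily_energy_kwh = 500_000        # reported ChatGPT electricity use per day, kWh
daily_requests = 200_000_000      # reported user requests per day

# Energy per request: 500,000 kWh / 200 million requests = 2.5 Wh per request
wh_per_request = daily_energy_kwh * 1000 / daily_requests
print(f"Energy per request: {wh_per_request:.1f} Wh")                 # ~2.5 Wh

# Implied average U.S. household use: 500,000 kWh / 17,000 ≈ 29 kWh per day
household_kwh_per_day = daily_energy_kwh / 17_000
print(f"Implied household use: {household_kwh_per_day:.1f} kWh/day")  # ~29 kWh

# A ~20 W human brain uses about 0.48 kWh per day, so the reported load
# corresponds to roughly a million brains running around the clock.
brain_kwh_per_day = 20 / 1000 * 24
print(f"Brain-equivalents: {daily_energy_kwh / brain_kwh_per_day:,.0f}")  # ~1,040,000
```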
The two major risks are commercial risk and security risk.
In terms of commercial risk, investment in large AI models faces the challenge of turning losses into profits, and the current pace of spending is hard to sustain, which may slow the iteration of the technology. For example, one U.S. company’s large-model product charges users $10 per month or $100 per year, but the cost of serving each user is so high that the company effectively loses about $20 per user per month. The 2023 State of the Artificial Intelligence (AI) Industry report published by research firm CB Insights shows that total funding for AI startups worldwide was about $42.5 billion in 2023, down 10 percent from $47.3 billion in 2022. The high valuations of large-model startups and the lack of a clear path to commercialization have deterred many venture capitalists. Although large sums of money are still pouring into large models, the cash-burning cost of “refining” them has made the need for commercial returns ever more urgent.
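The financial figures in this paragraph can likewise be checked with simple arithmetic. The sketch below only reworks the numbers already quoted (the $10-per-month fee, the reported $20-per-month per-user loss, and the CB Insights funding totals) and makes no further assumptions.

```python
# Quick arithmetic check of the commercial figures quoted above.

# Per-user economics of the subscription example: $10/month of revenue,
# but roughly $20/month lost per user, implying a ~$30/month serving cost.
monthly_fee = 10
monthly_loss = 20
monthly_cost = monthly_fee + monthly_loss
print(f"Implied serving cost per user: ${monthly_cost}/month, "
      f"loss per user: ${monthly_loss * 12}/year")        # ~$240/year

# CB Insights funding totals: $42.5B in 2023 vs $47.3B in 2022.
funding_2022, funding_2023 = 47.3, 42.5                    # billions of dollars
decline = (funding_2022 - funding_2023) / funding_2022
print(f"Year-over-year decline: {decline:.1%}")            # ~10.1%
```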
In terms of security risk, the rapid development of AI technology has brought great opportunities to the world, but also unpredictable risks and complex challenges. To ensure that AI always remains under human control and to build AI technology that is auditable, supervisable, traceable, and trustworthy, the answer is not to let AI develop as fast as possible without constraint; rather, we must put ethics first, adhere to “technology for good”, and establish and improve ethical norms, regulations, and accountability mechanisms for AI. AI governance concerns the destiny of all humankind and is a common issue facing every country. The formulation of AI rules therefore cannot be left to a handful of developed countries; instead, a fair and just international AI governance system must be built.
Artificial intelligence may one day surpass humans, and whether that day is far or near, mankind must be ready to meet the challenge.