During her visit to Taiwan, Stanford University AI expert Fei-Fei Li was invited by Taiwan's National Science and Technology Council (NSTC) to give a speech and talk with industry representatives. She noted that the wave of generative AI has had a "tsunami-like" impact, and that industries across the board are joining the discussion to keep the AI ecosystem from becoming a monopolized market.
At an AI forum held on March 23, Li gave a speech titled "What We See and What We Value: AI with a Human Perspective." In it, she described the progress the AI field has made over the past several decades, especially in visual recognition applications. She also emphasized that AI is a relatively "young" field whose development is closely connected to brain and cognitive science.
She noted that AI can be used to "see" what humans can see, cannot see, and want to see, and emphasized that object recognition is the cornerstone of visual intelligence. The field has progressed from models hand-designed by early researchers to machine learning (ML) and deep learning.
At the turn of the century, the rise of the Internet and the massive data sets it generated accelerated the development of AI. Within deep learning, the convolutional neural network (CNN) dominated for more than a decade, spreading from academia to industry and commercial applications. CNNs were finally supplanted a few years ago when the transformer architecture was introduced and became the new standard.
As AI visual recognition grows more powerful, the question that follows is what should not be seen or detected. Li pointed out that this issue is especially critical in the medical sector, where patient privacy is at stake.
Many people are concerned that as AI becomes more capable, it will replace human labor. Li said the relationship between AI and the workforce can be framed differently: instead of "replacing," we should be talking about "expanding." For example, the US is currently facing a shortage of medical professionals, and developing medical AI technology can help relieve it. Her research projects at Stanford University also involve many collaborations with medical schools.
iKala CEO Sega Cheng, a forum participant, said this wave of AI has reached a turning point: a "Moore's Law" of AI has emerged, with the cost of deploying AI dropping by 50% every three months. However, no matter how advanced the technology becomes, it is pointless without real-world fields of application.
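Cheng's 50%-every-three-months figure is a forecast quoted at the forum rather than an established law, but a quick back-of-the-envelope calculation shows why such a rate would mark a turning point. The Python sketch below is a hypothetical illustration of that compounding; the function name and sample horizons are assumptions for the example, not anything presented at the forum.

```python
# Illustration only: compound a cost that halves every three months,
# per the "Moore's Law of AI" figure quoted by Cheng.

def projected_cost(initial_cost: float, months: int, halving_period: int = 3) -> float:
    """Return the cost after `months`, assuming it halves every `halving_period` months."""
    return initial_cost * 0.5 ** (months / halving_period)

if __name__ == "__main__":
    for months in (3, 6, 12, 24):
        cost = projected_cost(100.0, months)
        print(f"After {months:2d} months: {cost:6.2f}% of the original cost")
    # Under this assumption, after 12 months the projected cost is ~6.25%
    # of the original, i.e. roughly a 16x reduction in a single year.
```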
Even so, some are concerned that the AI industry is gradually losing its openness. For instance, some have questioned whether OpenAI, the company that rose to prominence with ChatGPT, has become "ClosedAI": when it recently unveiled its GPT-4 model, it disclosed few details, such as the parameter count. Cheng believes that as AI competition between tech giants intensifies, suppliers will become increasingly protective of their AI intellectual property and reluctant to reveal such details in the first place.
In response, Li said that many US startups have recently come to her to "complain" about how AI model development is gradually becoming monopolized, and that they hope the federal government will put regulations in place. She stressed that the AI ecosystem needs balanced development and that a monopolized market is unhealthy.
She believes the influence of generative AI can be significant, especially on the developing minds of adolescents. While tech giants shoulder some of the responsibility, they have their limits, so the academic, intellectual, legal, and other fields should all join the relevant discussions to mitigate the negative effects of AI development in the post-truth era.