According to an annual outlook interview published by investment bank Goldman Sachs featuring Chief Information Officer Marco Argenti, artificial intelligence this year will be dominated by "hybrid" AI and by the rise of applications built on top of large language models.
Referring to "foundation" models such as the one underlying ChatGPT, Argenti said, "Hybrid AI uses these larger models as the brains that interpret prompts and user intent, or as orchestrators that delegate tasks to many worker models that specialize in specific tasks."
Argenti argues that building such large-scale models would be too expensive for all but the world's wealthiest companies. Most companies, therefore, will settle for smaller neural networks trained on their own data, running either in their own data centers or on cloud computing services.
The idea of specialized models fine-tuned on enterprise data aligns well with current trends in stitching capabilities together, exemplified by LangChain, an open-source framework for building applications on top of generative AI models.
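The orchestrator/worker pattern Argenti describes can be sketched in a few lines. This is a toy illustration, not anything from Goldman Sachs or LangChain: the worker functions are hypothetical stand-ins for small fine-tuned models, and a keyword check stands in for the large model's routing decision.

```python
from typing import Callable, Dict

# Hypothetical specialist workers. In a real hybrid-AI system these
# would be small models fine-tuned on a company's own data.
def summarize_contract(text: str) -> str:
    return f"summary of {len(text)} chars of contract text"

def answer_hr_question(text: str) -> str:
    return f"HR policy answer for: {text!r}"

WORKERS: Dict[str, Callable[[str], str]] = {
    "legal": summarize_contract,
    "hr": answer_hr_question,
}

def orchestrate(prompt: str) -> str:
    """Stand-in for the foundation model's role: interpret the user's
    request and delegate it to the matching specialist worker."""
    # A real orchestrator would ask the large model which route to
    # take; a simple keyword check plays that part here.
    route = "legal" if "contract" in prompt.lower() else "hr"
    return WORKERS[route](prompt)

print(orchestrate("Summarize this contract for me"))
print(orchestrate("How many vacation days do I get?"))
```

The design point is the split of responsibilities: the expensive general-purpose model only interprets and routes, while cheap specialized models do the task-specific work.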
Regarding the potential deployment of hybrid AI, Argenti said, “Most companies in 2024 will be focused on proof-of-concept projects that offer the highest returns.”
In addition to hybrid architectures, Argenti expects 2024 to bring a new class of third-party applications built on top of foundation models.
"When you think about those things [foundation models] as operating systems and platforms, there is a whole world of applications that has yet to really emerge around these models," Argenti said.
“There’s a huge opportunity for capital to move into the application layer, into the toolset layer. I think we’ll probably see that change in the next year.”
Argenti stressed that the need to coordinate safety efforts among different actors is a major issue.
"Looking ahead, it will be important to continue fostering an environment that encourages collaboration between players, encourages open-sourcing of models where appropriate, and builds on sound principles to develop rules that help manage potential risks such as bias, discrimination, safety, and privacy," Argenti said. "This will advance the technology and ensure that the United States continues to be a leader in AI development."