
OpenAI focuses on superintelligence in 2025.


OpenAI has announced that its main focus for the coming year will be the development of “superintelligence,” according to a blog post from CEO Sam Altman. Superintelligence is defined as AI with greater-than-human capabilities.

While OpenAI’s current suite of products offers a wide array of capabilities, Altman said that superintelligence would enable users to do “something else.” He highlighted the acceleration of scientific discovery as a prime example, which he believes will lead to the betterment of society.

“It sounds like science fiction right now, and it’s somewhat crazy to even talk about. It’s OK — we’ve been there before and we’re OK with going there again,” he wrote.

The change in direction is fueled by Altman’s confidence that his company now knows “how to build AGI as we have traditionally understood it.” AGI, or artificial general intelligence, is typically defined as a system that matches human abilities, while superintelligence exceeds them.

See: OpenAI’s Sora: Everything You Need to Know

Altman has been eyeing superintelligence for years — but there are concerns.

OpenAI has been referring to superintelligence for years, discussing the risks of AI systems and the need to align them with human values. In July 2023, OpenAI announced it was hiring researchers to work on containing superintelligent AI.

The team would reportedly dedicate 20% of OpenAI’s total computing power to training what it called a human-level automated alignment researcher to keep future AI products in line. Concerns about superintelligent AI stem from the possibility that such systems could prove impossible to control and might not share human values.

“We need scientific and technical breakthroughs to steer and control AI systems much smarter than us,” wrote OpenAI’s then-Head of Alignment Jan Leike and Co-Founder and Chief Scientist Ilya Sutskever in a blog post at the time.

See: OpenAI and Anthropic sign agreements with the US AI Safety Institute

However, four months after forming the team, another company post revealed that they “didn’t yet know how to reliably control superhuman AI systems” and had no way to prevent “a superintelligent AI from going rogue.”

In May 2024, OpenAI’s superalignment team was disbanded, and several senior staffers left over concerns that “safety culture and processes have taken a backseat to shiny products,” including team co-leads Jan Leike and Ilya Sutskever. The team’s work was absorbed into other OpenAI research efforts, according to Wired.

Nevertheless, Altman stressed the importance of safety for OpenAI in his blog post. “We believe that the best way to make an AI system safe is by iteratively and gradually releasing it into the world, giving society time to adapt and co-evolve with the technology, learning from experience, and continuing to make the technology safer,” he wrote.

“We believe in the importance of being a world leader on safety and alignment research, and guiding that research with feedback from real-world applications.”

The path to superintelligence may still be years away.

There is disagreement about how long it will take to achieve superintelligence. A November 2023 blog post said it could be developed within a decade. But nearly a year later, Altman said it might be “a few thousand days” away.

However, Brent Smolinsky, IBM VP and global head of technology and data strategy, said that was “totally exaggerated” in a company post from September 2024. “I don’t think we’re even in the right zip code to reach superintelligence,” he said.

AI still requires far more data than humans do to learn a new ability, is limited in its scope of abilities, and lacks consciousness or self-awareness, which Smolinsky sees as key indicators of superintelligence.

He also claims that quantum computing may be the only way to unlock AI that surpasses human intelligence. At the start of the decade, IBM predicted that quantum computing would begin solving real business problems before 2030.

See: Advances in quantum cloud computing ensure its security and privacy.

Altman predicts that AI agents will enter the workforce in 2025.

AI agents are semi-autonomous generative AI systems that can connect to or interact with applications to execute instructions or make decisions in an unstructured environment. For example, Salesforce uses AI agents to call sales leads.
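The basic loop behind such an agent can be sketched in a few lines. This is a toy illustration, not Salesforce’s or OpenAI’s actual implementation: a real agent would consult an LLM to decide which tool to invoke, whereas here a simple keyword match stands in for that decision, and the single `call_lead` tool is a hypothetical placeholder.

```python
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class Agent:
    """A minimal agent: a registry of tools plus a decision step."""
    tools: Dict[str, Callable[[str], str]]  # actions the agent may take

    def run(self, instruction: str) -> str:
        # A real agent would ask an LLM which tool fits the instruction;
        # here we pick naively by keyword to keep the sketch self-contained.
        for name, tool in self.tools.items():
            if name in instruction.lower():
                return tool(instruction)
        return "no suitable tool found"


def call_lead(instruction: str) -> str:
    # Placeholder action; a production tool would hit a telephony or CRM API.
    return f"calling lead: {instruction}"


agent = Agent(tools={"call": call_lead})
print(agent.run("Call the new sales lead"))  # → calling lead: Call the new sales lead
```

The key design point is the separation between the decision step (choosing an action) and the tools themselves, which is what lets an agent operate semi-autonomously across many applications.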

TechRepublic predicted at the end of last year that the use of AI agents would increase in 2025. “We may see the first AI agents ‘entering the workforce’ and materially changing the output of companies,” Altman echoed in his blog post.

See: IBM: Enterprise IT faces an AI agent revolution

According to a research paper by Gartner, the first industry that agents will dominate is software development. “Current AI coding assistants continue to mature, and AI agents provide the next set of additional benefits,” the authors wrote.

The same Gartner paper projects that by 2028, 33% of enterprise software applications will include agentic AI, up from less than 1% in 2024. It also predicts that a fifth of online store interactions and at least 15% of day-to-day business decisions will be made by agents by that year.

“We are beginning to push our goal beyond that, toward superintelligence in the truest sense of the word,” Altman wrote. “We strongly believe that in the next few years, everyone will see what we see, and that the need to act with great care, along with the vast benefits and empowerment, is critical.”


