Siri's "transformation plan" hits a wall: after internal testing setbacks, Apple (AAPL.US) is forced to deliver the "AI Siri" in phases
Bloomberg News reporter Mark Gurman, who has accurately revealed details of iPhone updates in advance many times, says that Apple's long-running plan for a major AI upgrade to Siri has hit obstacles in recent testing, which could delay several new AI features Apple fans have been eagerly awaiting. In short, the major AI functionality originally slated for iOS 26.4 may be split across multiple releases and rolled out gradually in subsequent versions, with some key features possibly slipping to iOS 26.5 or even iOS 27.
Citing sources familiar with the matter, Gurman reports that these features were initially planned for iOS 26.4, a system update slated for release in March. Apple is now working to spread them across multiple future versions, meaning some may be postponed until at least iOS 26.5 (expected in May) or even iOS 27 (expected in September).
The latest testing delays are only a small part of Apple's long and bumpy AI journey. As early as June 2024, Apple announced a redesign of Siri as an "on-device AI superintelligence" as part of its AI ambitions. That year, the consumer electronics giant behind the world's most popular smart devices, the iPhone and iPad, demonstrated capabilities that would let Siri draw on personal data and on-screen content to fulfill user requests more efficiently and thoughtfully.
As for the AI foundation beneath the new Siri, it will be powered by one of the world's leading large models: a custom version of Google's Gemini. In January, Google confirmed it had entered a multi-year agreement with Apple, the maker of the iPhone and iPad, to provide the core large-model support for Apple's on-device AI technology, including the new AI features coming to Siri.
The partnership is a significant win for both companies. Apple's Siri is finally set for a breakthrough AI transformation, while Alphabet (Google's parent company), buoyed in part by the multi-year agreement, saw its market cap surpass $4 trillion for the first time, making it the second-largest company by market value after Nvidia ($4.5 trillion).
From a strict engineering perspective, integrating Google Gemini into Apple's AI stack amounts to an upgrade of the intelligence layer, an external "more powerful brain" for Siri and Apple Intelligence. According to a joint statement from Google and Apple, Apple's next-generation Apple Foundation Models will be "based on Gemini's custom large model and cloud computing technology" and will power many upcoming Apple AI features, including a more personalized Siri voice assistant. The current Apple Intelligence features will remain available to users.
Pairing large AI models with consumer electronics like PCs and smartphones, so that a capable model can run inference offline on the device while tapping extensive cloud AI resources for deeper personal needs, has become a core part of many consumer electronics companies' AI roadmaps.
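The hybrid pattern described above, small on-device model for simple requests, large cloud model for the rest, can be pictured with a minimal routing sketch. Everything here is illustrative: the function names, the word-count heuristic, and the threshold are assumptions for the example, not Apple's or Google's actual APIs.

```python
# Hypothetical sketch of hybrid on-device/cloud routing.
# All names and the complexity heuristic are illustrative assumptions.

def on_device_model(request: str) -> str:
    # Stand-in for a small local model handling simple intents offline.
    return f"[local] handled: {request}"

def cloud_model(request: str) -> str:
    # Stand-in for a large cloud model (a Gemini-class backend, say).
    return f"[cloud] handled: {request}"

def route_request(request: str, complexity_threshold: int = 6) -> str:
    """Route short, simple requests locally; escalate longer ones to the cloud."""
    is_complex = len(request.split()) > complexity_threshold
    return cloud_model(request) if is_complex else on_device_model(request)
```

In practice a real router would weigh privacy, latency, and connectivity rather than request length, but the shape, one decision point in front of two model backends, is the same.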
For Apple fans envisioning the updated Siri, the combination of cloud and on-device models means Siri may no longer play the role of a clumsy, stilted voice assistant. With cloud AI computing power and on-device generative AI working together, the iPhone could become more of a "personal AI assistant" tailored to each user, akin to the all-capable AI companion in the film Her. Apple has said future iterations of Siri will be able to use personal data to answer questions and perform tasks across apps.
Delay after delay: the upgraded Siri voice assistant keeps being postponed
The upgraded Siri will also let Apple device users precisely control Apple's own and third-party apps by voice. All of these features were originally scheduled to launch by early 2025.
Last spring, Apple delayed the launch, saying the new Siri would arrive in early 2026, though the company never announced a more specific timeline. Gurman says Apple's management had internally targeted March 2026, aligned with iOS 26.4, and that goal held until last month.
However, insiders say recent testing uncovered new software issues, leading to another delay. These sources requested anonymity because the discussions are confidential. They indicated that Siri does not always correctly handle user queries or takes too long to process requests.
This remains a dynamic situation, and Apple’s plans for the new Siri release may further shift. A spokesperson for the Cupertino-based tech giant declined to comment.
After the news broke on Wednesday, Apple’s stock retraced some gains. By the close of U.S. markets on Wednesday, the stock was up 0.67%, closing at $275.50; earlier, it had risen as much as 2.4%. Driven by market risk sentiment and optimistic outlooks on iPhone demand, Apple’s stock outperformed the S&P 500 benchmark.
Gurman said that in recent days Apple has asked engineers to test the new Siri features against the upcoming iOS 26.5, implying the features have slipped by at least one version. The internal build of that update currently includes a note about new Siri enhancements.
One notably delayed feature is the expansion of Siri's access to personal data. This capability would let users ask the assistant to retrieve old messages, or to find a podcast shared by a friend and play it immediately.
The internal version of iOS 26.5 also includes a toggle setting that enables employees to preview this feature. This suggests Apple is weighing whether to warn users that the initial release may be incomplete or unreliable—similar to its beta testing approach for new OS versions.
Other delayed features include the advanced command system for in-app voice control built on App Intents, which would let users issue a single command to find a photo, edit it, and send it to a contact in one step.
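That one-command flow is essentially a chain of discrete intents, each consuming the previous step's output. The sketch below illustrates the idea only; the function names and the dictionary-based state are invented for this example and bear no relation to Apple's actual App Intents framework.

```python
# Hypothetical sketch of chaining intents so one spoken command
# ("find the beach photo, brighten it, send it to Anna") runs as a pipeline.
# Intent names and state shape are illustrative assumptions.

def find_photo(query: str) -> dict:
    # Resolve a description to a concrete photo reference.
    return {"photo": f"photo-matching:{query}"}

def edit_photo(state: dict) -> dict:
    # Apply an edit to the photo found in the previous step.
    state["photo"] = state["photo"] + "+brightened"
    return state

def send_photo(state: dict, contact: str) -> dict:
    # Hand the edited photo to a messaging intent.
    state["sent_to"] = contact
    return state

def run_command(query: str, contact: str) -> dict:
    # Each step consumes the previous step's output, so the user
    # issues a single command instead of three separate ones.
    state = find_photo(query)
    state = edit_photo(state)
    return send_photo(state, contact)
```

The hard engineering problem the article hints at is not the chaining itself but resolving each step reliably from free-form speech.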
Some Apple employees testing iOS 26.5 report that these features have preliminary support but are not yet reliably operational in all cases.
Gurman also notes that testers have reported accuracy issues and bugs, such as Siri interrupting users when they speak too quickly. There are also problems when handling complex queries that require longer processing times.
Another challenge is that the new Siri sometimes falls back to its simpler integration with OpenAI's ChatGPT instead of using Apple's own AI technology, even for requests Siri should be able to handle itself.
As of late 2025, internal builds of the new Siri were still very slow, leading some people involved in development to believe the launch could slip by several months.
Apple executives have long aimed to keep the assistant, first announced in June 2024, from slipping beyond spring 2026, and as recently as a few weeks ago the company still planned to release it this month or next.
Siri is about to get a "Gemini foundation": Apple partners with Google to overhaul on-device AI
However, this has been an extremely complex project. The redesigned Siri is built on a new architecture codenamed Linwood. Its software relies on Apple's large language model platform, Apple Foundation Models, which is currently incorporating cutting-edge AI technology from Alphabet's Gemini team at Google.
Gurman states that the current iOS 26.5 beta also includes two important features not yet announced: a new web search tool and custom image generation. Apple previously tested these capabilities in iOS 26.4, suggesting some new Siri features may still arrive according to the earlier schedule.
The web search function works much like Perplexity or Gemini-powered Google Search: users retrieve information from the web and get an AI-generated answer, a summarized list of key points, and links to relevant websites.
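The three-part response described above, answer, key points, source links, can be modeled as a simple data structure. The class and field names below are assumptions made for illustration, not any shipped API.

```python
# Hypothetical sketch of the three-part search response shape:
# an AI-generated answer, summarized key points, and source links.
from dataclasses import dataclass, field

@dataclass
class SearchResponse:
    answer: str
    key_points: list[str] = field(default_factory=list)
    links: list[str] = field(default_factory=list)

def render(resp: SearchResponse) -> str:
    # Flatten the structured response into display text.
    lines = [resp.answer]
    lines += [f"- {point}" for point in resp.key_points]
    lines += [f"source: {url}" for url in resp.links]
    return "\n".join(lines)
```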
The image generation feature uses the same engine as Apple's Image Playground app, but insiders say it remains unstable in iOS 26.5 testing.
Gurman reports that, besides these upgrades, Apple is also developing a major new AI initiative for iOS 27, iPadOS 27, and macOS 27: a completely redesigned Siri that functions more like a futuristic chatbot. It will be supported by Google servers and a more advanced, customized Gemini AI model.
Codenamed “Campo,” this project aims to deeply integrate AI into Apple’s major operating systems, providing an interface and features aligned with user expectations shaped by ChatGPT-style AI assistants. Apple is also testing this system through a standalone Siri app, allowing users to manage previous chat interactions.
A key part of the next-generation Siri interface will be the ability to control functions across the entire OS and access personal data such as files. Apple also plans to incorporate the new Siri core engine into some of its key proprietary apps, including Mail, Calendar, and Safari.
CEO Tim Cook hinted at more changes during a company-wide meeting last week, stating that Apple is developing new data center AI chips to boost its AI capabilities.
“Apple Silicon is enabling us to build data center solutions tailored for our devices,” Cook said. “Looking ahead, the work we’re doing will make a whole new category of products and services possible.”
He is likely referring to Baltra—a long-standing project for developing high-performance chips for cloud AI workloads.
One reason for the extended development cycle of Apple’s personal data features may be the company’s strict privacy stance. During the same meeting, software engineering chief Craig Federighi emphasized that personalized AI must not expose user data.
“We believe it’s extremely important to keep the data private when a model receives your questions,” he said, adding, “The industry standard is to send this data to servers where it’s recorded, exposed to the company, and used for training.”
In contrast, Federighi stated that Apple is “leading the way” in building AI that either stays on the user’s device or is sent to private, privacy-protected servers. He also mentioned that the company relies on authorized information and synthetic data—artificially generated data that simulates real-world inputs—instead of directly using user content.
“When you combine all these factors, we can offer a personalized and very powerful AI experience, one that becomes increasingly woven into our lives,” he said, expressing confidence that Apple’s approach will eventually be adopted across the industry.