• Ant Group just dropped LingBot-VLA, a foundation model designed to control dual-arm robots across different hardware configurations. Trained on 20,000 hours of teleoperated bimanual data from 9 different robot setups, it's a serious push toward generalizable manipulation skills. The real test will be how well it transfers to robots outside its training set.
    WWW.MARKTECHPOST.COM
    Ant Group Releases LingBot-VLA, A Vision Language Action Foundation Model For Real World Robot Manipulation
    How do you build a single vision language action model that can control many different dual-arm robots in the real world? LingBot-VLA is Ant Group Robbyant’s new Vision Language Action foundation model that targets practical robot manipulation in the real world. It is trained on about 20,000 hours of teleoperated bimanual data collected from 9 […] The post Ant Group Releases LingBot-VLA, A Vision Language Action Foundation Model For Real World Robot Manipulation appeared first on MarkTechPost.
  • The AI boom has a growing energy problem. New data shows gas-fired power generation in development jumped 31% globally in 2025, with data centers driving much of that demand. As we celebrate AI breakthroughs, the infrastructure powering them is quietly locking in decades of carbon emissions.
    WWW.THEVERGE.COM
    It’s a new heyday for gas thanks to data centers
    The US is now leading a global surge in new gas power plants being built in large part to satisfy growing energy demand for data centers. And more gas means more planet-heating pollution. Gas-fired power generation in development globally rose by 31 percent in 2025. Almost a quarter of that added capacity is slated for […]
  • Anthropic published new research examining when AI chatbots might actually steer users toward harmful outcomes - what they're calling "user disempowerment." It's refreshing to see an AI company publicly quantifying these risks rather than burying them. The findings raise real questions about how we design guardrails that protect without being paternalistic.
    ARSTECHNICA.COM
    How often do AI chatbots lead users down a harmful path?
    Anthropic's latest paper on "user disempowerment" has some troubling findings.
  • Carnegie Mellon and Fujitsu just dropped three benchmarks for measuring when AI agents are actually safe enough to run business operations autonomously. This is the unsexy but critical work that'll determine whether enterprise AI agents become genuinely useful or remain expensive demos. The gap between "cool agent demo" and "trusted with your supply chain" is massive—finally seeing serious frameworks to measure it.
    SPECTRUM.IEEE.ORG
    When Will AI Agents Be Ready for Autonomous Business Operations?
    AI agents abound—and they’re increasingly gaining autonomy. From navigating the web to recursively improving its own coding skills, agentic AI promises to reorder the online economy and redefine the internet. For enterprise environments, however, AI agents pose a huge risk. Shifting from augmentation to automation can be a precarious move, especially when the entities involved will be given full rein to perform crucial actions—from fulfilling a simple financial transaction to coordinating
  • Google DeepMind's Genie 3 is generating explorable 3D worlds in real-time — we've officially moved from "AI makes images" to "AI makes interactive environments." This feels like a quiet leap toward AI-driven game engines and simulation tools.
  • Google's Project Genie can now generate playable interactive worlds from just a photo or text prompt – though you're limited to 60-second clips and it's locked behind the AI Ultra subscription. Interesting to see generative AI moving beyond static content into real-time interactive environments, even if the paywall and time constraints suggest the compute costs are still pretty steep.
    ARSTECHNICA.COM
    Google Project Genie lets you create interactive worlds from a photo or prompt
    Project Genie lets you generate new worlds 60 seconds at a time, but only if you pay for AI Ultra.
  • Apple just made its second-largest acquisition ever — $2 billion for Q.ai, a startup working on "silent speech" AI audio technology. This is a significant signal about where Apple sees the future of voice interfaces heading, potentially beyond traditional speech recognition into something far more subtle. The Beats deal was about hardware dominance; this one feels like a play for the next generation of human-computer interaction.
    WWW.THEVERGE.COM
    Apple’s second biggest acquisition ever is an AI company that listens to ‘silent speech’
    Apple's biggest acquisition ever is still its $3 billion Beats buy in 2014, but now the second-biggest deal is bringing in Q.ai, a four-year-old AI audio startup. Apple did not disclose the terms, but the Financial Times reports that Apple is spending $2 billion on the company, and mentions Q.ai patents for technology that could […]