China, 17th Oct 2025 — On August 21, 2025, Agibot held its inaugural Partner Conference in Shanghai. Under the theme of “Advancing with Intelligence, Embarking on a New Era”, the conference comprehensively showcased Agibot’s full-chain layout across “product, technology, business, ecosystem, capital, and team” through strategic announcements, demonstrations of eight scenario-based solutions, and immersive experiences with hundreds of robots. Leveraging the “One Body, Three Intelligences” full-stack technology architecture and a comprehensive product matrix for all scenarios, Agibot is collaborating with partners to accelerate the commercialization of embodied intelligence and propel the industry from technological exploration to scale commercialization.

PART1. Collaborative Software and Hardware Builds a New Embodied Intelligence Ecosystem, with Multi-Scenario Coverage Accelerating the New Era of Commercial Embodied Intelligence

At the main forum, Deng Taihua, Chairman and CEO of Agibot, delivered an impactful opening speech. He proposed that the world is on the eve of an explosion in embodied intelligence, with artificial intelligence rapidly advancing towards AGI. He stated that 2025 will be an inflection point for the commercial development of embodied intelligent robots, which will ultimately become the next generation of mass-market intelligent terminals, following phones and cars.

Adhering to its original aspiration of creating infinite productivity, Agibot aims to become a global leader in the intelligent robotics field, pioneer the general-purpose robot industry ecosystem, and accelerate the arrival of the era of general intelligence. Deng Taihua also elaborated on Agibot’s core strategy: guided by the goal of general-purpose, mass-produced humanoid robots, focus on building a full-stack software and hardware platform with strong intelligence and easy collaboration; progressively promote deployment in industrial, commercial, and home commercial scenarios; and ultimately build an application ecosystem for a general intelligent robot platform.

Although established less than three years ago, Agibot has already achieved numerous industry-leading milestones. The company continuously builds core product competitiveness around the “One Body, Three Intelligences” concept, achieving the industry’s only full-series, full-scenario product layout, and establishing a full-stack technology layout encompassing the body, cerebellum, and brain. In business operations, by continuously strengthening product capabilities and progressively deploying products suitable for different scenarios, it is gradually achieving cross-domain scale commercialization, maintaining strong business development momentum while continuously breaking into high-quality major client bases.

Deng Taihua emphasized that ecosystem co-creation is the core driver for the scaling of the embodied intelligence industry, explicitly listing it as one of the company’s core strategies. Agibot will pool global innovative forces through three main paths: open source, being integrated, and capital empowerment.

In terms of technology open source, Agibot has open-sourced the robot middleware AimRT and a million-unit real-robot dataset, and launched the first embodied intelligence operating system, “Lingqu OS”, thereby promoting the industry’s journey towards standardized, scaled, and ecosystem-driven development.

In building the business ecosystem, Agibot implements an Enablement Strategy, using its own platform technology to integrate the vertical capabilities of leading industry partners in R&D, marketing, delivery, and other areas, to create industry-specific embodied agents covering eight scenarios including guided reception, entertainment & commercial performances, intelligent manufacturing, and logistics sorting. Simultaneously, it is building a layered distribution system, effectively lowering the barrier to cooperation by clarifying responsibilities, rights, and benefits, and establishing incentive mechanisms.

Furthermore, to support early-stage innovation, Agibot launched the first startup acceleration program focused on the embodied intelligence industry chain—”Agibot Plan A”. This plan aims to incubate 50+ high-potential early-stage projects and build a trillion-yuan industrial ecosystem within three years. Agibot will provide participants with benefits including technical support, financing empowerment, scenario access, and entrepreneurial incubation. Deng Taihua announced on-site that the first startup cohort officially opened for applications globally from robot startups and developer teams on August 21, 2025.

Currently, the Agibot team is continuously expanding, with increasing talent density. On this foundation, Agibot also clearly presented its plans and goals for the next five years to its partners.

Deng Taihua concluded, “From technological breakthroughs to industry explosion, from Chinese innovation to global leadership, every step Agibot takes is inseparable from the support of our partners.” Agibot will work hand-in-hand with global partners to promote embodied intelligent robots as a new productive force that changes the world, jointly opening a new chapter in the intelligent era.

PART2. Interpretation of the “1+3” Full-Stack Technology Strategy & Three Product Series Covering Diverse Scenarios

Peng Zhihui, Co-founder and CTO of Agibot, systematically interpreted the “1+3” full-stack technology strategy, building upon the robot body to develop three core capabilities: Motion Intelligence, Interaction Intelligence, and Task Intelligence. “We are not just making a few robots, but creating a base for a self-evolving general embodied intelligence agent,” he explained. Motion Intelligence enables robots to “walk steadily and move quickly,” achieving adaptive walking on complex terrain based on Sim2Real reinforcement learning. Interaction Intelligence allows them to “hear and understand, chat naturally,” with multi-modal dialogue response times reaching the one-second level. Task Intelligence tackles “grasping accurately and performing delicate tasks,” achieving a closed loop from grasping to fine manipulation through real-robot reinforcement learning. These three intelligences can be flexibly combined in robots of different forms, creating a “one set of capabilities, multiple carriers” technology flywheel.

Peng showcased Agibot’s three product series at the event: The YuanZheng (Expedition) series’ YuanZheng A2 is the industry’s first full-sized humanoid robot for scaled commercial deployment, having passed 2000+ hours of walking tests and obtained safety certifications in China, the US, and Europe. It focuses on guided reception and entertainment/commercial performances, supporting full-body customization. The JingLing (Genie) series’ JingLing G1 possesses native data collection and integrated collection-push capabilities. Paired with platforms like Genie Studio, it is suited for industrial, commercial, and other scenarios. The LingXi series’ LingXi X2 is an agile, life-like robot standing 1.3 meters tall, covering scenarios like entertainment/commercial performances, store reception, and scientific research/education.

During the conference, Peng released “LinkCraft,” a robot motion and expression creation platform. Described as a disruptive, AI-powered multi-modal content generation and editing tool for robots, it features rich motion libraries, supports preview editing, motion import, choreography, and performance, reducing the barrier for robot secondary development to virtually zero. Peng emphasized that while robots are moving from labs to life and industry, “interaction and expression” remain bottlenecks, and partners/developers need simpler, more efficient ways to customize robot behavior. “LinkCraft’s vision is to make robots express as naturally as humans and let creators choreograph as freely as directors.” The platform is expected to rapidly enrich interactive forms across various scenarios, accelerating scenario co-creation and ecosystem deployment in commercial services, cultural entertainment, and other fields.

Additionally, Peng unveiled the LingXi X2-W prototype – a wheeled dual-arm robot specifically designed for “Task Intelligence.” Just a month prior, Agibot’s innovative wheel-legged LingXi X2-N had already garnered widespread attention. The LingXi X2-W prototype further embodies the design philosophy of “becoming the smoothest native Task Intelligence body,” featuring core attributes such as an omnidirectional mobile base, high-DOF dual arms with bionic wrists, compact storage (footprint < 0.5㎡), dual power system switching, dexterous three-finger hands with tactile feedback, an omnidirectional perception system, a powerful edge computing unit, and low cost. Peng stated that the LingXi X2-W is currently in the prototype stage, but with continuous breakthroughs in algorithms and models, it is expected to become a benchmark for the next generation of embodied intelligence task robots.

“YuanZheng reaches out, JingLing gets the job done, LingXi wins hearts,” Peng emphasized, noting that the synergistic efforts of these three series will help Agibot accelerate rapidly. “Looking ahead three years, Agibot aims to achieve deployment of hundreds of thousands of general-purpose robots, support autonomous generalization across hundreds of tasks, and build an open, evolvable, self-growing general robot ecosystem.”

PART3. Business Initiatives: Multiple Measures to Improve Industry Development

Jiang Qingsong, Partner and Vice President of Agibot, stated that the company currently focuses on eight major scenarios in business: guided reception, entertainment/commercial performances, industrial intelligence, logistics sorting, security inspection, commercial cleaning, data collection/training, and scientific research/education. It has launched customized solutions and achieved large-scale applications across multiple industries.

To popularize the technology and further scenario deployment, Jiang released the 2025 Partner Policy, proposing the construction of a multi-tier partner system. Based on the principle of “directing premium resources to premium partners,” it provides comprehensive support to jointly build a synergistic “technology-product-scenario” ecosystem and share industry dividends.

“Embodied intelligence is moving from the laboratory to all industries. Agibot not only has the industry’s most complete robot product family but has also built a channel system that allows partners to ‘board with low barriers and grow with high returns.’ The goal is to work with partners to truly convert AI’s creativity into customer productivity,” Jiang said.

Additionally, the conference featured four sub-forums, eight commercial scenario exhibition areas, and an AgiBot Night tech party. Through detailed product displays, case studies of scenario deployment, and immersive interactions, partners could directly experience the commercial capabilities of embodied intelligence.

As the industry’s first large-scale ecosystem gathering, the conference helped solidify consensus across the industry chain through the clear communication of its core strategy. In the future, Agibot will continue to join hands with partners to accelerate the commercial implementation of embodied intelligence, promote the industry’s shift from technological exploration to scale commercialization, and establish a new global benchmark for the embodied intelligence industry.

Media Contact

Organization: Shanghai Zhiyuan Innovation Technology Co., Ltd.

Contact Person: Jocelyn Lee

Website: https://www.zhiyuan-robot.com

Email: Send Email

City: Shanghai

Country: China

Release id:35598

The post Agibot First Partner Conference was Successfully Held Showing that Its Full-Chain Layout is Accelerating Commercialization of Embodied Intelligence appeared first on King Newswire. This content is provided by a third-party source. King Newswire makes no warranties or representations in connection with it. King Newswire is a press release distribution agency and does not endorse or verify the claims made in this release. If you have any complaints or copyright concerns related to this article, please contact the company listed in the ‘Media Contact’ section.


Shanghai, China, 17th Oct 2025 — Recently, Zhiyuan Innovation (Shanghai) Technology Co., Ltd. (hereinafter referred to as Agibot) has reached a project cooperation agreement with Fulin Precision Co., Ltd. (hereinafter referred to as Fulin Precision) valued at tens of millions of RMB. Nearly one hundred of Agibot’s Expedition A2-W robots will be deployed across Fulin Precision’s factories. This milestone represents not only the first large-scale commercial contract for embodied robots in China’s industrial sector but also the world’s first scaled deployment of such robots in smart manufacturing scenarios. It signifies that industrial embodied AI has officially transitioned from the technical validation phase into a new era of scalable commercialization.

In July 2025, the first set of Expedition A2-W robots successfully completed a live demonstration of routine industrial operations in a material feeding scenario on Fulin Precision’s production line. Its capability of handling 1,000 turnover boxes per shift successfully matched the full production schedule demand for a single production line that month. This current deployment of nearly one hundred units marks a leapfrog upgrade from a “single-factory pilot verification” to “multi-factory, full-line coverage” in the application of robotic box depalletizing and material feeding. The robots’ operational scope has expanded from the initial 2 production line points to 15 feeding points across two core workshops – the powertrain and reducer workshops – undertaking raw material delivery tasks for a daily production capacity of over 500 units. They also handle automated empty box retrieval, performing nearly ten thousand box-moving actions per shift, becoming the core transport force connecting the production processes of the two workshops. The Agibot Expedition A2-W wheeled general-purpose robot is specifically designed for flexible intelligent manufacturing scenarios and can be widely applied in various tasks such as turnover box depalletizing/palletizing, transportation, and loading/unloading.

A breakthrough innovation of this cooperation lies in the deep collaborative system built through “Embodied Robot + AMR”. In this system, the embodied A2-W robot is responsible for picking and placing turnover boxes from multi-layer racks, while the AMR handles the transport of heavy full-pallet materials within the workshop. Relying on the production system’s intelligent task dispatch and dynamic material call mechanisms, the AMR delivers materials from the picking area to the line-side feeding points. The embodied A2-W robot can automatically identify boxes, autonomously adjust its posture, complete depalletizing/feeding and empty box retrieval, realizing a fully automated workflow from material outbound to production line feeding and empty box circulation. This system leverages both the embodied robot’s generalized operational capability for turnover boxes and its adaptability to complex scenes, as well as the AMR’s efficiency advantages in long-distance, high-load transportation. It addresses the pain points of traditional automation equipment in adapting flexibly to multi-workshop, multi-category scenarios, providing a directly reusable collaborative solution for manufacturing scenarios involving turnover box depalletizing/palletizing, transportation, and loading/unloading.

Furthermore, the large-scale application of the Expedition A2-W is underpinned by three core technological breakthroughs:

1. Multi-modal Perception System: Real-time recognition of dynamic obstacles (personnel, equipment, materials) ensuring safety in human-robot collaborative environments.
2. Dual-arm Coordination & High-precision Manipulation: Adapts to containers of varying sizes and weights.
3. Autonomous Error-correction Algorithm: Enables the robot to autonomously recover workflow during unexpected anomalies without human intervention.

Deng Yang, Director of the Engineering Department at Fulin Precision, stated: “The performance of the Expedition A2-W on the production line has exceeded expectations – it autonomously adjusts its grasping posture for irregularly stacked boxes, avoids obstacles in real-time when personnel cross its path, and achieved zero failures in nearly ten thousand operations per shift. Its anti-interference and error correction capabilities are far superior to traditional automation equipment. More importantly, it can take over repetitive tasks in the workshop that are prone to causing lumbar muscle strain, allowing workers to focus on more valuable operations. With the continuous introduction and application of such robots, industrial embodied intelligence is expected to reshape manufacturing production models, pushing the industry’s intelligent transformation into a new stage. This project cooperation signifies the arrival of large-scale commercialization for industrial embodied intelligence and represents an upgrade in the industrial ecological partnership between Agibot and Fulin Precision. As robotic application scenarios scale up, Fulin Precision, as a joint supplier for robots, has already sensed the opportunities brought by industry innovation.”

Wang Chuang, President of the General Purpose Business Unit at Agibot, stated: “This nearly hundred-unit level commercial order with Fulin Precision not only verifies the technological maturity of embodied robots but, more importantly, signifies manufacturing’s recognition of their commercial value. We consider this project a benchmark case for the large-scale application of industrial embodied robots. We will use the implementation path of ‘Scenario Penetration – Data Accumulation – Technology Iteration’ as a reference for more industrial scenarios such as automotive manufacturing and 3C electronics.”

Media Contact

Organization: Shanghai Zhiyuan Innovation Technology Co., Ltd.

Contact Person: Jocelyn Lee

Website: https://www.zhiyuan-robot.com

Email: Send Email

City: Shanghai

Country: China

Release id:35601

The post Agibot Wins Fulin Precision Order Valued at Tens of Millions RMB – Marking a Breakthrough in Scaled Deployment of Industrial Embodied AI appeared first on King Newswire.


The crypto community welcomes a new wave of excitement as UpWeGo ($UP) officially launches a community-driven meme coin built on the Ethereum blockchain, symbolizing ambition, humor, and collective growth. With the rallying cry “The Only Way Is Up,” UpWeGo unites crypto enthusiasts worldwide under a shared mission of positivity and progress.

Designed for the people and powered by Ethereum’s decentralized foundation, UpWeGo embodies the unstoppable spirit of rising together. There are no taxes, no developer wallets, and a fully burnt liquidity pool (LP), ensuring a fair and transparent ecosystem where the community truly comes first.

The key tokenomics of the newly launched coin are as follows:

• Total Supply: 999,999,999,999

• Tax: 0/0

• LP: Burnt

• Contract Address: 0xF95151526F586DB1C99FB6EBB6392AA9CFE13F8E

In a recent development, UpWeGo was listed on CoinMarketCap and continues to gain momentum as new holders join daily, spreading the project’s message across the crypto space. More than just another meme coin, UpWeGo aims to represent optimism, resilience, and unity in an ever-changing market – a reminder that no matter what happens, the only direction worth heading is UP.

With UpWeGo, the company is not just building a token but a movement. The token is about energy, humor, and belief in what the crypto community can achieve when it rises together.

About the Company – UpWeGo

UpWeGo ($UP) is a community-driven meme coin built on the Ethereum blockchain, created to capture the unstoppable energy of collective growth and optimism in the crypto world. With no taxes, no developer wallets, and a burnt liquidity pool, UpWeGo stands for transparency, equality, and the power of shared belief. The token embodies a movement that celebrates ambition, humor, and unity, reminding the global crypto community that no matter the market’s direction, the only way is UP.

Marketing partner: crmoonboy (crmoon)

For further details visit the following links:

• Website: https://upwegoeth.xyz/

• Twitter (X): https://twitter.com/UPerc20

• Telegram: https://t.me/EthereumUpwego

Media Contact

Organization: upwego

Contact Person: Tadeusz Kusiak

Website: https://upwegoeth.xyz/

Email: upuptoken@gmail.com

Address: Tuszyńska 32

City: Rzgów

Country: Poland

Release id:35615

The post UpWeGo Officially Launches its New Driven Meme Coin with the Slogan The Only Way is Up for this Community appeared first on King Newswire.


Shanghai, China, 17th Oct 2025 — Recently, Agibot has officially launched Genie Envisioner (GE), a unified world model platform for real-world robot control. Departing from the traditional fragmented pipeline of data-training-evaluation, GE integrates future frame prediction, policy learning, and simulation evaluation for the first time into a closed-loop architecture centered on video generation. This enables robots to perform end-to-end reasoning and execution—from seeing to thinking to acting—within the same world model. Trained on 3,000 hours of real robot data, GE-Act not only significantly surpasses existing state-of-the-art (SOTA) methods in cross-platform generalization and long-horizon task execution but also opens up a new technical pathway for embodied intelligence from visual understanding to action execution.

Current robot learning systems typically adopt a phased development model—data collection, model training, and policy evaluation—where each stage is independent and requires specialized infrastructure and task-specific tuning. This fragmented architecture increases development complexity, prolongs iteration cycles, and limits system scalability. The GE platform addresses this by constructing a unified video-generative world model that integrates these disparate stages into a closed-loop system. Built upon approximately 3,000 hours of real robot manipulation video data, GE establishes a direct mapping from language instructions to the visual space, preserving the complete spatiotemporal information of robot-environment interactions.

01/ Core Innovation: A Vision-Centric World Modeling Paradigm

The core breakthrough of GE lies in constructing a vision-centric modeling paradigm based on world models. Unlike mainstream Vision-Language-Action (VLA) methods that rely on Vision-Language Models (VLMs) to map visual inputs into a linguistic space for indirect modeling, GE directly models the interaction dynamics between the robot and the environment within the visual space. This approach fully retains the spatial structures and temporal evolution information during manipulation, achieving more accurate and direct modeling of robot-environment dynamics. This vision-centric paradigm offers two key advantages:

Efficient Cross-Platform Generalization Capability: Leveraging powerful pre-training in the visual space, GE-Act requires minimal data for cross-platform transfer. On new robot platforms like the Agilex Cobot Magic and Dual Franka, GE-Act achieved high-quality task execution using only 1 hour (approximately 250 demonstrations) of teleoperation data. In contrast, even models like π0 and GR00T, which are pre-trained on large-scale multi-embodiment data, underperformed GE-Act with the same amount of data. This efficient generalization stems from the universal manipulation representations learned by GE-Base in the visual space. By directly modeling visual dynamics instead of relying on linguistic abstractions, the model captures underlying physical laws and manipulation patterns shared across platforms, enabling rapid adaptation.

Accurate Execution Capability for Long-Horizon Tasks: More importantly, vision-centric modeling endows GE with powerful future spatiotemporal prediction capabilities. By explicitly modeling temporal evolution in the visual space, GE-Act can plan and execute complex tasks requiring long-term reasoning. In ultra-long-step tasks such as folding a cardboard box, GE-Act demonstrated performance far exceeding existing SOTA methods. Taking box folding as an example, this task requires the precise execution of over 10 consecutive sub-steps, each dependent on the accurate completion of the previous ones. GE-Act achieved a 76% success rate, while π0 (specifically optimized for deformable object manipulation) reached only 48%, and UniVLA and GR00T failed completely (0% success rate). This enhancement in long-horizon execution capability stems not only from GE’s visual world modeling but also benefits from the innovatively designed sparse memory module, which helps the robot selectively retain key historical information, maintaining precise contextual understanding in long-term tasks. By predicting future visual states, GE-Act can foresee the long-term consequences of actions, thereby generating more coherent and stable manipulation sequences. In comparison, language-space-based methods are prone to error accumulation and semantic drift in long-horizon tasks.

02/ Technical Architecture: Three Core Components

Based on the vision-centric modeling concept, the GE platform consists of three tightly integrated components:

GE-Base: Multi-View Video World Foundation Model: GE-Base is the core foundation of the entire platform. It employs an autoregressive video generation framework, segmenting output into discrete video chunks, each containing N frames. The model’s key innovations lie in its multi-view generation capability and sparse memory mechanism. By simultaneously processing inputs from three viewpoints (head camera and two wrist cameras), GE-Base maintains spatial consistency and captures the complete manipulation scene. The sparse memory mechanism enhances long-term reasoning by randomly sampling historical frames, enabling the model to handle manipulation tasks lasting several minutes while maintaining temporal coherence.
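The chunked autoregressive generation with sparse memory described above can be sketched as a toy loop. This is an illustration, not Agibot's implementation: frames are stand-in integers, the chunk size and memory budget are assumed values, and `generate_chunk` is a placeholder for the video DiT.

```python
import random

CHUNK_SIZE = 8        # N frames per generated video chunk (illustrative value)
MEMORY_SAMPLES = 4    # sparse-memory frames sampled from history (assumed)

def sample_sparse_memory(history, k=MEMORY_SAMPLES, seed=None):
    """Randomly sample k historical frames as a sparse memory context.
    Instead of attending to the full history, a small random subset keeps
    long-horizon context affordable, as described for GE-Base."""
    if len(history) <= k:
        return list(history)
    rng = random.Random(seed)
    idx = sorted(rng.sample(range(len(history)), k))
    return [history[i] for i in idx]

def generate_chunk(memory, t0):
    """Placeholder for the video DiT: emits CHUNK_SIZE stand-in frame ids,
    nominally conditioned on the sparse memory."""
    return [t0 + i for i in range(CHUNK_SIZE)]

def autoregressive_rollout(instruction, num_chunks, seed=0):
    """Generate video chunk by chunk, re-sampling sparse memory each step."""
    history = []
    for _ in range(num_chunks):
        memory = sample_sparse_memory(history, seed=seed)
        history.extend(generate_chunk(memory, t0=len(history)))
    return history

frames = autoregressive_rollout("fold the box", num_chunks=3)
```

The point of the sketch is the shape of the loop: each chunk conditions on a bounded random subset of history rather than the full sequence, which is what lets the model scale to minutes-long manipulation videos.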

Training uses a two-stage strategy: first, temporal adaptation training (GE-Base-MR) with multi-resolution sampling at 3-30Hz makes the model robust to different motion speeds; subsequently, policy alignment fine-tuning (GE-Base-LF) at a fixed 5Hz sampling rate aligns with the temporal abstraction of downstream action modeling. The entire training was completed in about 10 days using 32 A100 GPUs on the AgiBot-World-Beta dataset, comprising approximately 3,000 hours and over 1 million real robot data instances.

GE-Act: Parallel Flow Matching Action Model: GE-Act serves as a plug-and-play action module, converting the visual latent representations from GE-Base into executable robot control commands through a lightweight architecture with 160M parameters. Its design cleverly parallels GE-Base’s visual backbone, using DiT blocks with the same network depth as GE-Base but smaller hidden dimensions for efficiency. Via a cross-attention mechanism, the action pathway fully utilizes semantic information from visual features, ensuring generated actions align with task instructions.

GE-Act’s training involves three stages: action pre-training projects visual representations into the action policy space; task-specific video adaptation updates the visual generation component for specific tasks; task-specific action fine-tuning refines the full model to capture fine-grained control dynamics. Notably, its asynchronous inference mode is key: the video DiT runs at 5Hz for single-step denoising, while the action model runs at 30Hz for 5-step denoising. This “slow-fast” two-layer optimization enables the system to complete 54-step action inference in 200ms on an onboard RTX 4090 GPU, achieving real-time control.
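The “slow-fast” asynchronous schedule can be illustrated with a toy timeline: the slow video path refreshes a cached latent at 5 Hz with a single denoising step, while the fast action path runs at 30 Hz with 5 denoising steps against the most recent cached latent. All names and the event encoding below are hypothetical; the sketch only shows the scheduling arithmetic (30 / 5 = 6 action ticks per video refresh).

```python
VIDEO_HZ, ACTION_HZ = 5, 30   # slow video path vs. fast action path (from the text)
ACTION_DENOISE_STEPS = 5      # denoising steps per action tick (from the text)
TICKS_PER_REFRESH = ACTION_HZ // VIDEO_HZ  # 6 action ticks per video refresh

def run_timeline(duration_s=1.0):
    """Emit (tick, latent_version, denoise_steps) events: the video latent
    is refreshed on every 6th action tick; the action head runs every tick
    against whatever latent version is currently cached."""
    events = []
    latent_version = -1
    for t in range(int(duration_s * ACTION_HZ)):
        if t % TICKS_PER_REFRESH == 0:
            latent_version += 1  # one single-step denoise of the video DiT
        events.append((t, latent_version, ACTION_DENOISE_STEPS))
    return events

timeline = run_timeline(1.0)
```

Over one simulated second this yields 30 action ticks sharing 5 latent refreshes, which is the decoupling that lets the heavy video backbone run slowly while control stays real-time.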

GE-Sim: Hierarchical Action-Conditioned Simulator: GE-Sim extends GE-Base’s generative capability into an action-conditioned neural simulator, enabling precise visual prediction through a hierarchical action conditioning mechanism. This mechanism includes two key components: Pose2Image conditioning projects 7-degree-of-freedom end-effector poses (position, orientation, gripper state) into the image space, generating spatially aligned pose images via camera calibration; Motion vectors calculate the incremental motion between consecutive poses, encoded as motion tokens and injected into each DiT block via cross-attention. 
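As a toy illustration of the motion-vector half of this conditioning, the sketch below computes incremental motion between consecutive poses. The `Pose` layout is a simplified stand-in (Euler angles instead of the full calibrated orientation the text implies), and all names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    # Simplified 7-DoF end-effector pose: position, orientation (Euler
    # angles here, as a stand-in), and gripper state.
    x: float; y: float; z: float
    roll: float; pitch: float; yaw: float
    gripper: float

def motion_vector(p0: Pose, p1: Pose):
    """Incremental motion between consecutive poses — the quantity the text
    says is encoded as motion tokens and injected via cross-attention."""
    return (p1.x - p0.x, p1.y - p0.y, p1.z - p0.z,
            p1.roll - p0.roll, p1.pitch - p0.pitch, p1.yaw - p0.yaw,
            p1.gripper - p0.gripper)

def trajectory_to_motion_tokens(poses):
    # One motion vector per consecutive pose pair along the trajectory.
    return [motion_vector(a, b) for a, b in zip(poses, poses[1:])]
```

In the actual system these deltas would be embedded as tokens per DiT block; here they stay as raw tuples to keep the idea visible.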

This design allows GE-Sim to accurately translate low-level control commands into visual predictions, supporting closed-loop policy evaluation. In practice, action trajectories generated by the policy model are converted by GE-Sim into future visual states; these generated videos are then fed back to the policy model to produce the next actions, forming a complete simulation loop. Parallelized on distributed clusters, GE-Sim can evaluate thousands of policy rollouts per hour, providing an efficient evaluation platform for large-scale policy optimization. Furthermore, GE-Sim also acts as a data engine, generating diverse training data by executing the same action trajectories under different initial visual conditions.
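The closed loop described above (policy proposes actions, the simulator renders the resulting states, which feed back into the policy) can be sketched with stand-ins: a trivial policy and a scalar "visual state" replace GE-Act and GE-Sim, which in reality exchange action trajectories and generated video. All names are hypothetical.

```python
def make_toy_policy(target=5):
    """Stand-in for GE-Act: steps a scalar state toward a target."""
    def policy(obs):
        return 1 if obs < target else 0
    return policy

def toy_simulator(obs, action):
    """Stand-in for GE-Sim: maps (state, action) to the next state.
    In the real system this is an action-conditioned video prediction."""
    return obs + action

def closed_loop_rollout(policy, sim, obs0=0, horizon=10):
    """The evaluation loop from the text: policy -> simulator -> policy."""
    obs, trajectory = obs0, [obs0]
    for _ in range(horizon):
        action = policy(obs)
        obs = sim(obs, action)
        trajectory.append(obs)
    return trajectory

traj = closed_loop_rollout(make_toy_policy(5), toy_simulator)
```

Because each rollout is independent, many such loops can be launched in parallel, which is what makes the "thousands of policy rollouts per hour" figure a matter of cluster size rather than algorithm changes.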

These three components work closely together to form a complete vision-centric robot learning platform: GE-Base provides powerful visual world modeling capabilities, GE-Act enables efficient conversion from vision to action, and GE-Sim supports large-scale policy evaluation and data generation, collectively advancing embodied intelligence.

EWMBench: World Model Evaluation Suite

Additionally, to evaluate the quality of world models for embodied tasks, the team developed the EWMBench evaluation suite alongside the core GE components. It provides comprehensive scoring across dimensions including scene consistency, trajectory accuracy, motion dynamics consistency, and semantic alignment. Subjective ratings from multiple experts showed high consistency with EWMBench rankings, validating its reliability for assessing robot task relevance. In comparisons with advanced models like Kling, Hailuo, and OpenSora, GE-Base achieved top results on multiple key metrics reflecting visual modeling quality, aligning closely with human judgment.

Open-Source Plan & Future Outlook

The team will open-source all code, pre-trained models, and evaluation tools. Through its vision-centric world modeling, GE pioneers a new technical path for robot learning. The release of GE marks a shift for robots from passive execution towards active ‘imagine-verify-act’ cycles. In the future, the platform will be expanded to incorporate more sensor modalities, support full-body mobility and human-robot collaboration, continuously promoting the practical application of intelligent manufacturing and service robots.

Media Contact

Organization: Shanghai Zhiyuan Innovation Technology Co., Ltd.

Contact Person: Jocelyn Lee

Website: https://www.zhiyuan-robot.com

Email: Send Email

City: Shanghai

Country: China

Release id: 35600

The post Agibot Released the Industry First Open-Source Robot World Model Platform – Genie Envisioner appeared first on King Newswire. This content is provided by a third-party source. King Newswire makes no warranties or representations in connection with it. King Newswire is a press release distribution agency and does not endorse or verify the claims made in this release. If you have any complaints or copyright concerns related to this article, please contact the company listed in the ‘Media Contact’ section.


Shanghai, China, 17th Oct 2025 — Agibot has officially launched the “Genie Trailblazer Global Recruitment Program”, which aims to invite top researchers worldwide to collaboratively define and create artificial intelligence capable of perceiving, reasoning, and emotionally engaging with the physical world.

This initiative provides a platform for scientific collaboration, focusing on three core research directions: general-purpose embodied intelligence models, embodied world models, and advanced teleoperation. Agibot warmly welcomes researchers from diverse backgrounds, including multimodal modeling, reinforcement learning, robotics, and world models, as long as they are passionate about shaping the future of embodied intelligence.

Moreover, Agibot offers robust support and resources to participating researchers. During the project period, researchers will receive free access to Agibot’s general-purpose embodied intelligence robot, Genie G1, enabling them to conduct data collection, model testing, and iterative development using this advanced physical platform to accelerate the realization of their ideas. Additionally, Agibot provides meticulously annotated real-world datasets and a high-fidelity digital twin environment, offering critical support to tackle the core challenges of sim-to-real transfer.

On the technical front, the high-performance robot platform features agile mobility and dexterous manipulation capabilities, integrated with an end-to-end data closed-loop system that spans the entire pipeline—from data collection and simulation-based training to real-world deployment. Researchers can deploy general-purpose embodied models on Agibot’s platform for end-to-end training and testing in real environments. They can also leverage AgiBot World, AgiBot Digital World, and specialized embodied-scenario datasets to validate their models’ understanding of physical laws, thereby endowing robots with planning and “imagination” capabilities.

To incentivize participation, Agibot has established a cash prize pool totaling RMB 1 million. Teams selected for the program that deliver outstanding results will be awarded the “GT-Star”. Exceptional developers will also gain direct access to internship opportunities and fast-track recruitment pathways at Agibot.

Agibot believes that the future of intelligence begins with embodiment. This initiative is expected to ignite global researchers’ innovative potential and collectively advance the frontier of embodied intelligence technologies.

Media Contact

Organization: Shanghai Zhiyuan Innovation Technology Co., Ltd.

Contact Person: Jocelyn Lee

Website: https://www.zhiyuan-robot.com

Email: Send Email

City: Shanghai

Country: China

Release id: 35597

The post Genie Trailblazer Program by Agibot Empowers Embodied Intelligence Technology appeared first on King Newswire.


Columbus, United States, 17th Oct 2025 – A study in the journal Sensors unveils an affordable and efficient system that captures high-fidelity spectral data in low-light and underground environments, opening new possibilities for geology, agriculture, and manufacturing.

A research team from The Ohio State University has developed an active hyperspectral imaging system capable of capturing precise spectral data in the complete absence of light. The design, constructed from low-cost commercial components, presents an accessible and efficient alternative to traditional technologies.

The breakthrough, detailed in the peer-reviewed journal Sensors, introduces a compact and energy-efficient prototype poised to transform material analysis in environments previously considered inaccessible for high-fidelity spectral sensing.


Traditional hyperspectral imaging systems identify materials by analyzing their distinctive spectral “fingerprints” across hundreds of wavelength bands. However, these passive systems are fundamentally dependent on ambient light and often require costly, complex optical components, restricting their use in dark or confined settings such as underground tunnels, mines, or even for quality control in dimly lit industrial facilities.

The Ohio State team’s approach overcomes these limitations by replacing passive detection with a programmable array of 76 single-wavelength LEDs synchronized with a full-spectrum camera. This active illumination method allows the system to generate its own light source, ensuring high-quality spectral data acquisition regardless of external lighting conditions.
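The capture sequence this implies can be sketched as follows. The controller and camera classes are simulated stand-ins for the study's hardware drivers, and the tiny frame size is a placeholder; only the 76-LED count comes from the release:

```python
from typing import List

N_LEDS = 76  # number of single-wavelength LEDs reported in the release

class LEDController:
    def on(self, index: int) -> None:
        pass  # would illuminate only LED `index` on real hardware

    def off(self) -> None:
        pass  # would switch all LEDs off

class Camera:
    def capture(self) -> List[List[float]]:
        # Return a tiny 2x2 grayscale frame as a placeholder image.
        return [[0.0, 0.0], [0.0, 0.0]]

def capture_cube(leds: LEDController, cam: Camera) -> List[List[List[float]]]:
    """One frame per LED wavelength -> a (band, height, width) spectral cube."""
    cube = []
    for band in range(N_LEDS):
        leds.on(band)               # light the scene at a single wavelength
        cube.append(cam.capture())  # grab one frame for this spectral band
        leds.off()
    return cube

cube = capture_cube(LEDController(), Camera())
print(len(cube))  # 76 spectral bands
```

Because each band needs only one synchronized exposure rather than a filter sweep, a loop like this is also consistent with the sub-2.5-second acquisition time reported below.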


“Our goal was to make hyperspectral imaging more efficient, accessible, and usable in real-world settings — not just in laboratories,” said Dr. Rongjun Qin, a professor in the Department of Civil, Environmental and Geodetic Engineering and the corresponding author of the study. “This system allows us to take the lab to the field, wherever that field may be.”

To validate its performance, the team tested the prototype on a variety of samples, including fruits, plant leaves, and rock specimens, under challenging low-light conditions. The results demonstrated a remarkable capability to distinguish subtle spectral changes—such as variations in fruit freshness or mineral composition—that are invisible to standard RGB cameras. A machine learning analysis of the captured data achieved a classification accuracy of up to 90%, a significant improvement over the 70% accuracy obtained with conventional imaging under similar conditions.


Beyond its high accuracy, the system offers substantial gains in efficiency. The prototype reduced the full-image acquisition time from 20 seconds to under 2.5 seconds. By eliminating the need for complex filters and bulky external lighting setups, the design is not only more cost-effective but also highly portable and consumes less power. Its lightweight form factor is suitable for portable field use or integration with drone-based platforms, enabling faster and more sustainable field operations.

A central theme of the research is the democratization of advanced sensing technology. Built entirely from commercially available, or “off-the-shelf,” components, the system drastically lowers the financial barrier for smaller laboratories and research institutions.

“All of the components are low-cost and readily available. This means laboratories and universities worldwide can adapt our design with minimal resources,” stated Yang Tang, a Ph.D. student and the study’s lead author. “We believe this will empower a wider scientific community and open up new opportunities for AI-driven material analysis and education in optical sensing.”

The potential applications for this technology are extensive and cross-sectoral. In geological exploration and mining, it can operate in dark, subterranean environments without the need for expensive and cumbersome lighting equipment. In agriculture and food safety, it offers a reliable method for monitoring crop health or detecting contamination. For manufacturing and environmental monitoring, it provides consistent and repeatable data under any lighting condition, enhancing quality control and compliance.


The research team plans to continue refining the system by enhancing its capture speed, increasing spectral resolution, and further integrating the hardware into a fully portable and automated imaging platform. “We envision a future where hyperspectral imaging becomes as routine as taking a photograph — efficient, affordable, and universally accessible,” added Dr. Qin.

About the Study

 The full study, titled “An Active Hyperspectral Imager by a Programmable LED Array and a Full-Spectrum Camera,” is published in Sensors. It can be accessed at the following DOI: https://doi.org/10.3390/s23031437

About The Ohio State University 

The Ohio State University is one of the leading public research institutions in the United States, advancing interdisciplinary science and engineering with a focus on data analytics, sustainability, and applied technology.

About CAIMO Global 

CAIMO Global is an academic media and science communication agency dedicated to bridging academic research and global audiences. The agency collaborates with universities, research institutions, and scholars worldwide to help academic studies gain understanding, increase visibility, and generate impact, ensuring that outstanding research is seen, shared, and remembered.

 

Media Contact

Organization: CAIMO Global Academic Media Co., Ltd.

Contact Person: YangXue

Website: https://flo.host/QNirG-Q/

Email: Send Email

City: Columbus

Country: United States

Release id: 35603

The post New Active Hyperspectral System From Ohio State University Researchers Enables Precise Spectral Sensing in Complete Darkness appeared first on King Newswire.


South Korea, 17th Oct 2025 – Pre-conception (pregnancy-prep) wellness formula EsPregnol announced that, following a nationwide sell-out of its 5th production run in the Republic of Korea, it has begun manufacturing the 6th batch with a renewed premium formula. In line with this, the company stated it is preparing to expand global offline distribution by building partnerships with local OB/GYN clinics and pharmacy networks.

According to the company, surging interest in EsPregnol has been accompanied by growing demand for women’s circulation & balance management (Balanced Vital Sync. Flow), leading to increased inquiries from potential overseas partners. 

While maintaining multi-language labeling and allergen disclosures compliant with each country’s regulations, as well as bottle-level batch QR traceability (showing summaries of key quality tests such as heavy metals and microbiological counts), EsPregnol is discussing the co-distribution of a standardized counseling sheet for physicians and pharmacists at pharmacy counters (including full ingredient lists and recommended 2–3-hour co-administration intervals).

At the initial stage, the roadmap under review includes pilot listings with pharmacy chains, tests in airport/drugstore channels, and integration with a 24/7 international Adverse Event Reporting (AER) system, to be rolled out sequentially.

An EsPregnol spokesperson said, “Having sold out our 5th run domestically and commenced premium-renewal 6th production, we will pursue both stable supply and standardized offline counseling. Through collaborations with medical and pharmacy networks, we are preparing information delivery and distribution aligned with each country’s standards.”

 

Media Contact

Organization: Onim

Contact Person: Oh, Raegwon

Website: https://from-in-labs.com/

Email: Send Email

Country: South Korea

Release id: 35542

The post Espregnol to Expand Global Offline Channels via Partnerships with Medical and Pharmacy Networks appeared first on King Newswire.


United States, 17th Oct 2025 – Whisber, a cutting-edge crypto intelligence platform, today announced the launch of its AI-driven trend monitoring tool designed to empower crypto investors and traders with clear, timely, and actionable insights.

The platform delivers long-term and short-term crypto trend signals with clear indicators such as Buy, Sell, Hold, Bullish, Bearish, Uptrend, Downtrend, and ongoing trend duration. Users can choose to receive these signals directly via email or Telegram alerts, or access them through the Whisber website.
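Signals of this kind are conventionally derived from trend indicators. As a generic illustration only (Whisber's actual models are proprietary and not described in this release), a moving-average crossover maps price history to Buy/Hold/Sell:

```python
from typing import List

def sma(prices: List[float], window: int) -> float:
    """Simple moving average over the most recent `window` prices."""
    return sum(prices[-window:]) / window

def trend_signal(prices: List[float], short: int = 5, long: int = 20) -> str:
    """Textbook crossover signal -- not Whisber's methodology."""
    if len(prices) < long:
        return "Hold"  # not enough history to judge the trend
    fast, slow = sma(prices, short), sma(prices, long)
    if fast > slow * 1.01:   # short-term average clearly above long-term: uptrend
        return "Buy"
    if fast < slow * 0.99:   # clearly below: downtrend
        return "Sell"
    return "Hold"            # averages entangled: no clear trend

uptrend = [100.0 + i for i in range(30)]    # steadily rising prices
downtrend = [130.0 - i for i in range(30)]  # steadily falling prices
print(trend_signal(uptrend), trend_signal(downtrend))  # Buy Sell
```

The 1% dead band around the crossover is one common way to suppress the noise-driven flip-flopping that the next paragraph describes.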

By removing the noise and emotional triggers often caused by FOMO (Fear of Missing Out) and panic selling, Whisber helps users make smarter, data-driven trading decisions. During bear markets, the system also emphasizes capital preservation, ensuring traders protect their funds while waiting for stronger opportunities.

“Our mission is simple—help traders cut through emotions and focus on clear signals,” said Tom Lee. “Whisber isn’t just about spotting opportunities—it’s about building the discipline to avoid costly mistakes.”
 

Key Benefits

  • AI-powered crypto trend forecasting
  • Clear action guidance (Buy/Hold/Sell)
  • Real-time alerts via multiple channels
  • Reduced risk of emotional trading mistakes
  • Capital-preservation strategies during downturns

About Whisber

Whisber is an AI-driven crypto trend monitoring platform that helps traders and investors make smarter, more disciplined decisions. By combining long-term forecasting with short-term alerting, Whisber empowers its users to cut through market noise, avoid FOMO-driven mistakes, and preserve capital across all market conditions.

For more information, visit https://www.whisber.com.

Media Contact

Organization: Whisber PR Team

Contact Person: Whisber PR Team

Website: https://www.whisber.com

Email: Send Email

Country: United States

Release id: 35308

Disclaimer: This press release does not constitute financial or investment advice. Cryptocurrency trading involves risk, and readers should conduct their own research or consult a financial advisor before making decisions.

The post Whisber Launches AI-Powered Crypto Trend Monitoring Tool to Help Traders Beat FOMO and Fear appeared first on King Newswire.


United States, 17th Oct 2025 – TrustedCISO LLC, a leader in cybersecurity advisory and compliance solutions, proudly announces the release of A CISO Guide to Cyber Resilience, authored by its founder and seasoned cybersecurity expert Debra Baker. With over 30 years of experience in security leadership, Baker provides organizations with a practical and actionable roadmap to strengthen their cybersecurity posture, achieve compliance readiness, and embed resilience into their organizational culture.

In a world where data breaches, ransomware, and regulatory demands are constantly evolving, organizations of all sizes struggle to align their security programs with business growth. A CISO Guide to Cyber Resilience addresses these challenges head-on, equipping executives, CISOs, vCISOs, and IT leaders with proven strategies that go beyond traditional defense mechanisms. Instead of focusing solely on protection, Baker emphasizes the need to build resilience—ensuring trust, enabling business continuity, and fostering long-term success.

“Cybersecurity isn’t just about defending against threats,” said Debra Baker, founder of TrustedCISO LLC. “It’s about creating a culture of resilience where security governance, compliance, and risk management become enablers of innovation and growth. This book provides leaders with the frameworks and real-world insights they need to confidently steer their organizations through an increasingly complex digital environment.”

Practical Guidance for Security and Compliance Leaders

The book covers essential areas that today’s organizations must master to remain competitive and secure. Baker breaks down industry-recognized frameworks and certifications such as SOC 2 compliance, ISO 27001 certification, and the Cybersecurity Maturity Model Certification (CMMC), providing step-by-step insights into how organizations can prepare for and achieve compliance.

From understanding the role of security governance in risk management to designing security programs that scale with organizational growth, A CISO Guide to Cyber Resilience bridges the gap between theory and practice. The book also highlights leadership principles that CISOs and vCISOs can apply immediately to align security strategies with broader business goals.

Complementing TrustedCISO’s Mission

The release of this book reflects TrustedCISO LLC’s mission to empower organizations to thrive securely in today’s digital-first world. TrustedCISO is known for its scalable vCISO services and comprehensive compliance solutions that help businesses confidently navigate regulatory requirements, from CMMC to SOC 2, ISO 27001, HIPAA, and beyond.

By providing flexible programs tailored to business needs, TrustedCISO enables clients to strengthen their cyber resilience while maintaining focus on innovation and growth. A CISO Guide to Cyber Resilience serves as both an educational resource and a companion to these services, offering executives and IT leaders a framework for success.

Availability

A CISO Guide to Cyber Resilience is now available for purchase on Amazon at https://amzn.to/3Vt1g0o. Readers can also learn more about the book and access additional resources through TrustedCISO’s official website at https://trustedciso.com/ciso-book-a-ciso-guide-to-cyber-resilience/.

About TrustedCISO LLC

TrustedCISO LLC is a veteran-owned cybersecurity and compliance advisory firm dedicated to helping organizations strengthen their security governance, risk management, and compliance readiness. With more than three decades of experience, the company provides scalable vCISO services, compliance readiness programs, and security solutions aligned to frameworks such as SOC 2, ISO 27001, CMMC, and HIPAA. TrustedCISO is committed to delivering trusted expertise that enables organizations to grow securely in a constantly changing cyber landscape.
Website: https://trustedciso.com

Media Contact

Organization: TrustedCISO LLC

Contact Person: Debra Baker

Website: https://trustedciso.com/

Email: Send Email

Country: United States

Release id: 35584

The post TrustedCISO LLC Founder Debra Baker Releases A CISO Guide to Cyber Resilience appeared first on King Newswire.


NordFX has received the Best Fastest Payout Award at Forex Expo Dubai 2025, held on October 6–7 at the Dubai World Trade Centre. The recognition highlights the company’s success in delivering rapid, reliable withdrawals through advanced automation, including its 24/7 automatic crypto withdrawal system. This innovation allows traders to access funds within minutes, reinforcing NordFX’s reputation for efficiency and trust. The award reflects the broker’s ongoing investment in technology and client-focused service, marking another milestone in its mission to enhance speed, transparency, and user experience in global trading.

NordFX has been honored with the Best Fastest Payout Award at Forex Expo Dubai 2025, recognizing the company’s ongoing commitment to ensuring rapid and reliable fund withdrawals for its global client base. The award was presented during the two-day event held on October 6–7 at the Dubai World Trade Centre, where NordFX participated as a Diamond Sponsor.

The Forex Expo Dubai is one of the most prominent gatherings for the trading and investment industry, bringing together brokers, fintech innovators, liquidity providers, and traders from across the world. The event serves as a benchmark for excellence and innovation, acknowledging companies that deliver meaningful improvements to the trading experience. NordFX’s recognition in the Fastest Payout category highlights the company’s efforts to streamline transaction processes and enhance client trust through technological advancement.

According to the NordFX team, the award reflects years of focused development in payment automation and infrastructure optimization. The company has implemented a range of solutions designed to accelerate withdrawals, particularly through its automated crypto withdrawal system. This feature enables clients to access their funds within minutes, even outside of traditional banking hours, offering 24/7 liquidity and flexibility.

Vanessa Polson, Marketing Manager at NordFX, commented on the achievement: “This recognition means a great deal to us because fast, secure payouts are one of the most tangible ways we build trust with our clients. Our team has worked hard to automate and simplify the withdrawal process, especially for crypto assets, where speed and transparency are critical. Receiving this award at such a respected global event reinforces that our approach is delivering real value.”

The introduction of automatic crypto withdrawals has been a significant factor in achieving faster turnaround times. Combined with a well-established system for card, bank, and e-wallet transactions, the company’s payment network is now capable of processing most requests almost instantly. For traders, this efficiency helps maintain confidence and operational agility, allowing them to reallocate capital quickly between markets and seize trading opportunities without delay.

At the Forex Expo, NordFX representatives met with traders, partners, and industry peers to discuss evolving payment standards, technological innovation, and the growing role of crypto in cross-border financial services. The award serves as further validation of NordFX’s dedication to enhancing user experience through innovation and reliability.

NordFX continues to expand its presence in global markets while maintaining a client-focused approach that prioritizes speed, transparency, and security. The company views this award not only as recognition of past achievements but also as motivation to keep advancing in digital transaction technology and customer service excellence.

Media Contact

Organization: NordFX Ltd.

Contact Person: Vanessa Polson

Website: https://nordfx.com/

Email: Send Email

Country: Saint Lucia

Release id: 35591

The post NordFX Wins Best Fastest Payout Award at Forex Expo Dubai 2025 appeared first on King Newswire.
