After the success of Baddie.nl, one of the most-played online games in the Netherlands, the founders announce ForYouGaming Inc., a new U.S.-based parent company created to scale the brand internationally. The company’s first major move is the upcoming launch of Baddie.com, a global version of the popular lifestyle game that blends humor, social dynamics, and competition.

Baddie.com Marks the Beginning of a Global Expansion by ForYouGaming Inc.

Following the success of Baddie.nl, one of the most-played online games in the Netherlands, the creators announce the launch of ForYouGaming Inc., a new U.S.-based parent company formed to lead the brand’s international expansion.

The first major step in that strategy is the upcoming launch of Baddie.com — a global version of the hit web game that blends lifestyle, humor, and competition into one addictive experience.

ForYouGaming CEO Stefan Gaasbeek: “Baddie.nl showed us what’s possible. Players connected, competed, and built entire online communities. With Baddie.com, we’re taking that energy worldwide — this is just the start.”

From Local Hit to Global Brand

Originally launched in the Netherlands, Baddie.nl quickly evolved into a cultural phenomenon — a game where players build influence, form alliances, and rise through a virtual social world inspired by modern pop culture.

The upcoming Baddie.com is purpose-built for the U.S. — reworked to match American culture and vibe, with a refreshed interface, new characters, and storylines tailored to U.S. audiences. It will be accessible across web, iOS, and Android.

Goal: turn Baddie into a global entertainment brand — with an American-first launch.
 

ForYouGaming Inc.: A New Global Structure

ForYouGaming Inc. was created to centralize and scale multiple titles under one global gaming structure. The company’s mission is to develop, publish, and grow web-based entertainment experiences that merge gaming with digital culture.

ForYouGaming CMO: “This move allows us to think bigger — beyond regions, beyond platforms. We’re building a network of games that share the same DNA: accessible, social, and built to scale.”

Next Titles in Development

In addition to Baddie, ForYouGaming manages other projects including Boef.nl, an online crime game with a strong male audience, and holds premium gaming domains such as Gangsters.com, currently in development. Each title contributes to a growing network of entertainment experiences built around one central idea — making online games that people actually live in, not just play.
 

About ForYouGaming Inc.

ForYouGaming Inc. is a next-generation gaming company creating online experiences that merge entertainment, culture, and community.
The company develops and publishes web and mobile games for a global audience, connecting players through storytelling, competition, and creativity.

Its portfolio includes Baddie.nl, Boef.nl, and the upcoming international release Baddie.com.
With an expanding lineup of titles, ForYouGaming Inc. is defining the next era of interactive entertainment.

Media Contact

Organization: ForYouGaming Inc.

Contact Person: Stefan Gaasbeek

Website: https://foryougaming.com


Contact Number: +12132321375

Address: 1301 North Broadway, Los Angeles

Address 2: STE 32021

City: Los Angeles

State: California

Country: United States

Release ID: 35622

The post Baddie Marks the Beginning of a Global Expansion by ForYouGaming Inc appeared first on King Newswire. This content is provided by a third-party source. King Newswire makes no warranties or representations in connection with it. King Newswire is a press release distribution agency and does not endorse or verify the claims made in this release. If you have any complaints or copyright concerns related to this article, please contact the company listed in the ‘Media Contact’ section.


Shreyas Media Founder Gandra Srinivas Rao says the Guinness World Record-winning success of Vijayawada Utsav proves every city’s potential to host grand carnivals, as the company plans year-round celebrations across Andhra Pradesh featuring international artists.

Shreyas Media LLP has announced plans to launch a series of grand carnivals across India, inspired by the unprecedented success of Vijayawada Utsav—a record-breaking event that entered the Guinness World Records as the world’s largest carnival. Following the overwhelming public response, the company aims to celebrate India’s rich cultural diversity by bringing similar large-scale events to various states throughout the year.

The Vijayawada Utsav, organized in association with the Society for Vibrant Vijayawada and Andhra Pradesh Tourism, transformed the city into a festival hub for 11 days starting September 22. The event featured 284 performances by over 3,000 artists, 11 concerts, and 11 drone shows—an unprecedented celebration that drew nearly 50 lakh (5 million) visitors and generated an estimated ₹1,000 crore in local business.

Buoyed by this success, Shreyas Media plans to replicate the carnival model across India, celebrating regional festivals such as Bihu, Onam, Ganesh Chaturthi, Pongal, Lohri, Durga Puja, and Sankranti with grandeur. The company will also host year-round celebrations at tourist hotspots like Araku and Gandikota in Andhra Pradesh, including 30 major concerts and a large-scale Araku Coffee Festival featuring international artists.

“Our success with Vijayawada Utsav has proven that every city in India holds the potential to become a vibrant carnival destination,” said Gandra Srinivas Rao, Founder of Shreyas Media. “Through these celebrations, we aim to unite people from across India and abroad on a single platform of culture and joy, while giving a significant boost to local economies and creating thousands of employment opportunities. We have set a target of achieving ₹5,000 crore in business through the Vijayawada Utsav in the next five years.”

He added that Andhra Pradesh Chief Minister Nara Chandrababu Naidu appreciated the event for placing Vijayawada on the global tourism map. Rao also acknowledged the efforts of Vijayawada MP Kesineni Shivanath (Chinni) and the Society for Vibrant Vijayawada in making the event a resounding success.

Having previously executed over 3,500 promotional and cultural events across India, the USA, Canada, and the UAE, Shreyas Media is now poised to establish a new legacy in India’s entertainment and cultural tourism landscape through its upcoming carnival series.

About Shreyas Media LLP
Shreyas Media LLP is India’s leading event management and entertainment company, renowned for organizing large-scale promotional, cultural, and public events across India and abroad. With over 3,500 successful events and a Guinness World Record to its credit, the company continues to redefine the landscape of experiential entertainment.

 

Media Contact

Organization: Shreyas Media LLP

Contact Person: Gandra Srinivas Rao

Website: https://shreyasgroup.net

Email: Srinivas@shreyasgroup.net

City: Hyderabad

State: Telangana

Country: India

Release ID: 35620

The post Shreyas Media Announces Nationwide Carnival Series After Guinness Record-Breaking Vijayawada Utsav appeared first on King Newswire.


Gros-Islet, Saint Lucia, 17th Oct 2025 – NordFX has received the Best Fastest Payout Award at Forex Expo Dubai 2025, recognizing the broker’s dedication to delivering rapid and reliable withdrawal processing for traders worldwide. The award was presented during the two-day event held on October 6–7 at the Dubai World Trade Centre, where NordFX participated as a Diamond Sponsor.

Forex Expo Dubai remains one of the largest and most influential industry events, drawing thousands of professionals from the trading, fintech, and investment sectors. The awards highlight innovation and quality of service across the global financial ecosystem. NordFX’s recognition in the Fastest Payout category underscores the company’s focus on efficient transaction systems and strong customer support.

Over recent years, NordFX has introduced a series of payment-processing upgrades aimed at accelerating withdrawals and ensuring around-the-clock accessibility. A key component has been the automation of crypto withdrawals, enabling traders to access their funds within minutes and outside of traditional banking hours. Together with optimized systems for card, bank, and e-wallet transactions, these advances have positioned NordFX among the industry leaders in payout speed.

The award also reflects NordFX’s wider goal of improving trust and transparency in online trading. By ensuring that clients can move funds quickly and securely, the company supports a more flexible trading experience where capital can be redeployed across markets with minimal delay.

At Forex Expo Dubai 2025, the NordFX team engaged with traders, partners, and industry representatives to discuss evolving payment standards and the growing role of crypto in international finance. The Fastest Payout recognition further reinforces NordFX’s long-term commitment to innovation, reliability, and service excellence.

Media Contact

Organization: NordFX Ltd.

Contact Person: Vanessa Polson

Website: https://nordfx.com/

Email: marketing@nordfx.com

Address: Ground Floor, The Sotheby Building

Address 2: Rodney Village, Rodney Bay

City: Gros-Islet

Country: Saint Lucia

Release ID: 35592

The post NordFX Receives Best Fastest Payout Award at Forex Expo Dubai 2025 appeared first on King Newswire.


China, 17th Oct 2025 — Recently, a research team jointly formed by Agibot, Chuangzhi Academy, The University of Hong Kong, and others, published a breakthrough study that systematically explores three key dimensions of data diversity in robot manipulation learning: task diversity, robot embodiment diversity, and expert diversity. This research challenges the traditional belief in robotics learning that “more diverse data is always better,” providing new theoretical guidance and practical pathways for building scalable robot operating systems.

Task Diversity: Specialist or Generalist? Data Provides the Answer

A core question has long perplexed researchers in robot learning: when training a robot model, should one focus on data highly relevant to the target task for “specialist” training, or collect data from a wide variety of tasks for a “generalist” learning approach?

To answer this, the team designed a comparative experiment, constructing two pre-training datasets based on the AgiBot World dataset with identical sizes but drastically different task distributions:

  • “Specialist” Dataset (Task Sampling) – 10% of tasks most relevant to the target tasks were carefully selected, all containing the five core atomic skills required for evaluation: pick, place, grasp, pour, and fold. As shown in the figures, this strategy, while having lower skill diversity, is highly concentrated on the skills needed for the downstream tasks.
  • “Generalist” Dataset (Trajectory Sampling) – 10% of trajectories from each task were randomly sampled, preserving the full task diversity spectrum of the original dataset. Although this approach resulted in fewer trajectories directly related to the target skills (59.2% vs. 71.1%), it achieved a more balanced skill distribution.
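
The two sampling strategies can be sketched as follows. This is a minimal illustration, not the paper’s code: `dataset`, `relevance`, and the trajectory names are hypothetical stand-ins, with only the 10% fraction taken from the setup described above.

```python
import random

def task_sampling(dataset, relevance, fraction=0.10):
    """'Specialist': keep every trajectory from the top-`fraction`
    most target-relevant tasks."""
    n_keep = max(1, int(len(dataset) * fraction))
    top_tasks = sorted(dataset, key=lambda t: relevance[t], reverse=True)[:n_keep]
    return [traj for t in top_tasks for traj in dataset[t]]

def trajectory_sampling(dataset, fraction=0.10, seed=0):
    """'Generalist': keep `fraction` of trajectories from *every* task,
    preserving the full task-diversity spectrum."""
    rng = random.Random(seed)
    subset = []
    for trajs in dataset.values():
        k = max(1, int(len(trajs) * fraction))
        subset.extend(rng.sample(trajs, k))
    return subset
```

Both functions return subsets of the same size when tasks are uniformly sized; the difference is purely in how that budget is distributed across tasks.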

The results revealed an unexpected trend. As shown, the “Generalist” trajectory sampling strategy significantly outperformed the “Specialist” approach on four challenging tasks, with an average performance improvement of 27%. More notably, the advantage of diversity was even more pronounced on complex tasks requiring higher semantic and spatial understanding – for example, a 0.26 point increase (39% relative improvement) on the Make Sandwich task, and a 0.14 point increase (70% relative improvement) on the Pour Water task.

Why did diversity win? The analysis revealed that the trajectory sampling strategy not only brought skill diversity but also implicitly included richer scene configurations, object variations, and environmental conditions. This “incidental” diversity significantly enhanced the model’s generalization ability, allowing the robot to better adapt to different objects, lighting conditions, and spatial layouts.

Based on the discovery that “diversity is more important,” the research team explored a deeper question: given sufficient task diversity, does increasing data volume continue to improve performance? Experimental results show that the average score of the GO-1 model exhibited a stable upward trajectory as pre-training data volume increased. Crucially, this improvement followed a strict Scaling Law! By fitting a power-law curve, Y = 1.24X^(-0.08), the team found a highly predictable power-law relationship between model performance and pre-training data volume, with a remarkable correlation coefficient of -0.99.
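
A power law of this form is typically recovered with a linear fit in log-log space. The sketch below regenerates synthetic points from the reported curve Y = 1.24X^(-0.08) and refits them to show the procedure; the actual evaluation scores and data volumes are not reproduced here.

```python
import numpy as np

# Synthetic (data volume, metric) points generated from the reported
# power law Y = 1.24 * X^(-0.08).
X = np.array([1e4, 3e4, 1e5, 3e5, 1e6])
Y = 1.24 * X ** -0.08

# Fit log Y = log a + b * log X, i.e. a straight line in log-log space.
b, log_a = np.polyfit(np.log(X), np.log(Y), 1)
a = np.exp(log_a)
print(f"Y = {a:.2f} * X^({b:.2f})")  # recovers a ~= 1.24, b ~= -0.08
```

The reported correlation coefficient of -0.99 corresponds to this log-log linearity holding almost exactly across the measured data volumes.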

The significance of this finding lies not only in the numbers but also in a major breakthrough in research methodology. Past scaling law research in embodied intelligence primarily focused on single-task scenarios, small models, and no pre-training phase. This study extends scaling law exploration for the first time to the multi-task pre-training phase for foundation models, demonstrating that, given sufficient task diversity, large-scale pre-training data can provide continuous, predictable, and quantifiable performance gains for robot foundation models.

Embodiment Diversity: Cross-Robot Transfer Using Single-Platform Data

The robotics community has long held that for a model to generalize across different robot platforms, pre-training data must include data from as many diverse robot embodiments as possible. This belief led to large-scale multi-embodiment datasets like Open X-Embodiment (OXE), which includes 22 different robots.

However, cross-embodiment training introduces significant challenges: vast differences in physical structure and inherent disparities in action and observation spaces between platforms complicate model learning. Facing these challenges, the team delved deeper: despite morphological differences, the end-effector action spaces of different robots are essentially similar. When different robots make their end-effectors follow the same trajectory in world coordinates, they produce comparable behaviors. This observation led to a key hypothesis: a model pre-trained on data from a single robot embodiment might readily transfer learned knowledge to new robot configurations, bypassing the complexities of cross-embodiment training. To validate this bold hypothesis, the team designed a “one versus many” experimental showdown:

  • RDT-AWB: Pre-trained on the Agibot World dataset (1 million trajectories, single Agibot Genie G1 robot), containing no data from the target test robots.
  • RDT-OXE: Pre-trained on the OXE dataset (2.4 million trajectories, 22 robot types), containing data from the target test robots, theoretically holding a “home advantage.”
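
The world-coordinate observation underlying this comparison can be made concrete: if actions are expressed as end-effector waypoints in the world frame, replaying them on another robot is just a change of base frame. A minimal, position-only sketch follows (orientations and inverse kinematics omitted; the base poses and waypoint are hypothetical values, not from the paper):

```python
import numpy as np

def to_world(p_base, base_pose):
    """Map a point from a robot's base frame to the world frame.
    base_pose = (R, t): base rotation matrix and translation."""
    R, t = base_pose
    return R @ p_base + t

def to_base(p_world, base_pose):
    """Inverse map: express a world-frame point in a robot's base frame."""
    R, t = base_pose
    return R.T @ (p_world - t)

# An end-effector waypoint recorded on robot A, re-expressed for robot B.
pose_A = (np.eye(3), np.array([0.0, 0.0, 0.0]))
pose_B = (np.eye(3), np.array([0.5, -0.2, 0.0]))  # B's base sits elsewhere
wp_A = np.array([0.3, 0.1, 0.4])                  # waypoint in A's base frame
wp_B = to_base(to_world(wp_A, pose_A), pose_B)    # same world point, B's frame
```

Because both robots target the same world-frame point, the transferred action produces comparable end-effector behavior regardless of embodiment, which is the intuition the hypothesis rests on.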

Testing was conducted on three platforms: the Franka arm in the ManiSkill simulation, the Arx arm in the RoboTwin simulation, and the Piper arm in the real-world Agilex environment. In the cross-embodiment adaptation experiment in the ManiSkill environment, RDT-OXE initially showed its “home advantage,” slightly leading at 125 samples per task. However, a turning point occurred at 250 samples: RDT-AWB quickly caught up. As data increased further, RDT-AWB began to surpass RDT-OXE and widened the gap, a growth that followed a power-law relationship. This indicates that the single-embodiment pre-trained model not only achieves effective cross-embodiment transfer but also exhibits superior scaling properties.

To test generalizability beyond simulation, the team also evaluated in the real-world Agilex environment, where RDT-AWB outperformed RDT-OXE on 3 out of 4 tasks, extending its victory from simulation to reality.

Additionally, tests were conducted to evaluate the cross-embodiment capability of the GO-1 model (pre-trained only on Agibot World) on the Lingong and Franka platforms using a folding task. Even without seeing the task or the specific embodiment in pre-training, the model required only 200 data points to successfully transfer and adapt, with GO-1 + AWB achieving an average score 30% higher than GO-1 trained from scratch.

These results have disruptive theoretical and practical implications. Theoretically, they challenge the traditional notion that multi-embodiment training is necessary for cross-embodiment deployment, suggesting that high-quality single-embodiment pre-training offers a simpler path. Practically, this can drastically reduce data collection costs by focusing on high-quality data from a single platform and simplify training pipelines, offering a new path for cross-platform robot model application.

Expert Diversity: Identifying Harmful Noise to Enhance Learning Efficiency

An often-overlooked yet crucial factor in robot learning is Expert Diversity – the variation in demonstration data distribution arising from differences in operator habits, skill levels, and inherent randomness. Unlike standardized NLP or CV datasets collected from the internet, robot datasets consist of continuous robot motions highly sensitive to operator behavior.

The classic PushT task, illustrated in the figures, exemplifies this phenomenon. Here, the robot (blue circle) must push a gray T-shaped object to a green target area. Despite the identical goal, the collected expert demonstrations show clear multi-modal characteristics. Spatial multi-modality is evident in different trajectory choices: the robot can approach from the left or right side of the object, forming distinct spatial paths, reflecting different operator understandings of the task strategy. Velocity multi-modality occurs when similar trajectories are executed at different speeds: even with similar paths, varying execution speeds produce entirely different demonstration profiles in the time dimension, with some operators acting quickly and decisively, others more slowly and cautiously.

These two types of multi-modality have completely different impacts on learning. Spatial variation represents meaningful task strategies; these diverse solutions should be preserved as they enrich the model’s understanding of the task and help prevent out-of-distribution (OOD) inference. However, velocity variation often introduces unnecessary noise, complicating current action-chunk-based imitation learning by forcing the model to learn these distribution characteristics simultaneously, increasing difficulty without adding substantive strategic value.

To address this challenge, the team proposed a clever two-stage distribution debiasing framework centered on introducing a Velocity Model (VM). In the first stage, the VM is trained to predict speed from action chunks using an MSE loss, learning the expected speed distribution for each input from the velocity-biased training data. This stage equips the VM with knowledge of reasonable speed distributions corresponding to different action patterns. In the second stage, during policy training, the VM first predicts an unbiased speed for each training sample. This predicted speed is then used to convert the original actions into unbiased actions. The policy is subsequently trained using these unbiased actions as supervision targets, effectively simplifying the distribution complexity and allowing the model to focus on learning the core task strategy without being distracted by speed variations.

The team validated the distribution debiasing approach on two representative tasks: Wipe Table and Make Sandwich. The model trained on debiased data, named GO-1-Pro, consistently outperformed the standard GO-1 model on both tasks and across all data scales. Notably, GO-1-Pro demonstrated exceptional data efficiency – achieving comparable or superior performance using only half the training data required by GO-1, effectively doubling data utilization efficiency.

The advantages of the debiasing method were particularly pronounced in low-data scenarios. Under the scarce condition of only 15 demonstrations, GO-1-Pro improved performance on the Make Sandwich task by 48% and the Wipe Table task by 39%. In data-scarce settings, multi-modal distributions in speed and space create significant interference, hindering the model’s ability to capture core spatial patterns. By decoupling these confounding factors, the debiasing method enables the model to focus on learning essential spatial relationships, leading to more efficient and robust policy learning even with limited data, providing a practical technical path for enhancing model performance and data efficiency.

This study systematically explores data scaling for robot manipulation, revealing three key insights that challenge conventional wisdom: task diversity is more critical than the quantity of single-task demonstrations; embodiment diversity is not strictly necessary for cross-embodiment transfer; and expert diversity can be detrimental due to velocity multi-modality. These findings overturn the traditional “more diversity is always better” paradigm, proving that quality trumps quantity, and insightful curation trumps blind accumulation. The true breakthrough lies not in collecting more data, but in understanding the essence of data, identifying valuable diversity, and eliminating harmful noise, charting a more efficient and precise development path for robot learning.

Media Contact

Organization: Shanghai Zhiyuan Innovation Technology Co., Ltd.

Contact Person: Jocelyn Lee

Website: https://www.zhiyuan-robot.com


City: Shanghai

Country: China

Release ID: 35602

The post Agibot Groundbreaking Release – New Perspectives on Task Embodiment and Expert Data Diversity appeared first on King Newswire.


China, 17th Oct 2025 — On August 21, 2025, Agibot held its inaugural Partner Conference in Shanghai. Under the theme “Advancing with Intelligence, Embarking on a New Era,” the conference comprehensively showcased Agibot’s full-chain layout across “product, technology, business, ecosystem, capital, and team” through strategic announcements, demonstrations of eight scenario-based solutions, and immersive experiences with hundreds of robots. Leveraging the “One Body, Three Intelligences” full-stack technology architecture and a comprehensive product matrix for all scenarios, Agibot is collaborating with partners to accelerate the commercialization of embodied intelligence and propel the industry from technological exploration to scale commercialization.

PART1. Collaborative Software and Hardware Builds a New Embodied Intelligence Ecosystem, with Multi-Scenario Coverage Accelerating the New Era of Commercial Embodied Intelligence

At the main forum, Deng Taihua, Chairman and CEO of Agibot, delivered an impactful opening speech. He proposed that the world is on the eve of the great explosion of embodied intelligence, with artificial intelligence rapidly advancing towards AGI. He stated that 2025 will be an inflection point for the commercial development of embodied intelligent robots, which will ultimately become the next generation of mass-market intelligent terminals, after phones and cars.

Adhering to its original aspiration of creating infinite productivity, Agibot aims to become a global leader in the intelligent robotics field, pioneer the general-purpose robot industry ecosystem, and accelerate the arrival of the era of general intelligence. Deng Taihua also elaborated on Agibot’s core strategy: guided by the goal of general-purpose, mass-produced humanoid robots, focus on building a full-stack software and hardware platform with strong intelligence and easy collaboration; progressively promote deployment in industrial, commercial, and home commercial scenarios; and ultimately build an application ecosystem for a general intelligent robot platform.

Although established less than three years ago, Agibot has already achieved numerous industry-leading milestones. The company continuously builds core product competitiveness around the “One Body, Three Intelligences” concept, achieving the industry’s only full-series, full-scenario product layout, and establishing a full-stack technology layout encompassing the body, cerebellum, and brain. In business operations, by continuously strengthening product capabilities and progressively deploying products suitable for different scenarios, it is gradually achieving cross-domain scale commercialization, maintaining strong business development momentum while continuously breaking into high-quality major client bases.

Deng Taihua emphasized that ecosystem co-creation is the core driver for the scaling of the embodied intelligence industry, explicitly listing it as one of the company’s core strategies. Agibot will pool global innovative forces through three main paths: open source, being integrated, and capital empowerment.

In terms of technology open source, Agibot has open-sourced the robot middleware AimRT and a million-unit real-robot dataset, and launched the first embodied intelligence operating system, “Lingqu OS”, thereby promoting the industry’s journey towards standardized, scaled, and ecosystem-driven development.

In building the business ecosystem, Agibot implements an Enablement Strategy, using its own platform technology to integrate the vertical capabilities of leading industry partners in R&D, marketing, and delivery to create industry-specific embodied agents covering eight scenarios including guided reception, entertainment & commercial performances, intelligent manufacturing, and logistics sorting. Simultaneously, it is building a layered distribution system, effectively lowering the barrier to cooperation by clarifying responsibilities, rights, and benefits, and establishing incentive mechanisms.

Furthermore, to support early-stage innovation, Agibot launched the first startup acceleration program focused on the embodied intelligence industry chain—”Agibot Plan A”. This plan aims to incubate 50+ high-potential early-stage projects and build a trillion-yuan industrial ecosystem within three years. Agibot will provide participants with benefits including technical support, financing empowerment, scenario access, and entrepreneurial incubation. Deng Taihua announced on-site that the first startup cohort officially opened for applications globally from robot startups and developer teams on August 21, 2025.

Currently, the Agibot team is continuously expanding, with increasing talent density. On this foundation, Agibot also clearly presented its plans and goals for the next five years to its partners.

Deng Taihua concluded, “From technological breakthroughs to industry explosion, from Chinese innovation to global leadership, every step Agibot takes is inseparable from the support of our partners.” Agibot will work hand-in-hand with global partners to promote embodied intelligent robots as a new productive force that changes the world, jointly opening a new chapter in the intelligent era.

PART2. Interpretation of the “1+3” Full-Stack Technology Strategy & Three Product Series Covering Diverse Scenarios

Peng Zhihui, Co-founder and CTO of Agibot, systematically interpreted the “1+3” full-stack technology strategy, building upon the robot body to develop three core capabilities: Motion Intelligence, Interaction Intelligence, and Task Intelligence. “We are not just making a few robots, but creating a base for a self-evolving general embodied intelligence agent,” he explained. Motion Intelligence enables robots to “walk steadily and move quickly,” achieving adaptive walking on complex terrain based on Sim2Real reinforcement learning. Interaction Intelligence allows them to “hear and understand, chat naturally,” with multi-modal dialogue response times reaching the one-second level. Task Intelligence tackles “grasping accurately and performing delicate tasks,” achieving a closed loop from grasping to fine manipulation through real-robot reinforcement learning. These three intelligences can be flexibly combined in robots of different forms, creating a “one set of capabilities, multiple carriers” technology flywheel.

Peng showcased Agibot’s three product series at the event: The YuanZheng (Expedition) series’ YuanZheng A2 is the industry’s first full-sized humanoid robot for scaled commercial deployment, having passed 2000+ hours of walking tests and obtained safety certifications in China, the US, and Europe. It focuses on guided reception and entertainment/commercial performances, supporting full-body customization. The JingLing (Genie) series’ JingLing G1 possesses native data collection and integrated collection-push capabilities. Paired with platforms like Genie Studio, it is suited for industrial, commercial, and other scenarios. The LingXi series’ LingXi X2 is an agile, lifelike robot standing 1.3 meters tall, covering scenarios like entertainment/commercial performances, store reception, and scientific research/education.

During the conference, Peng released “LinkCraft,” a robot motion and expression creation platform. Described as a disruptive, AI-powered multi-modal content generation and editing tool for robots, it features rich motion libraries, supports preview editing, motion import, choreography, and performance, reducing the barrier for robot secondary development to virtually zero. Peng emphasized that while robots are moving from labs to life and industry, “interaction and expression” remain bottlenecks, and partners/developers need simpler, more efficient ways to customize robot behavior. “LinkCraft’s vision is to make robots express as naturally as humans and let creators choreograph as freely as directors.” The platform is expected to rapidly enrich interactive forms across various scenarios, accelerating scenario co-creation and ecosystem deployment in commercial services, cultural entertainment, and other fields.

Additionally, Peng impressively unveiled the LingXi X2-W prototype – a wheeled dual-arm robot specifically designed for “Task Intelligence.” Just a month prior, Agibot’s innovative wheel-legged LingXi X2-N had already garnered widespread attention. The LingXi X2-W prototype further embodies the design philosophy of “becoming the smoothest native Task Intelligence body,” featuring core attributes such as an omnidirectional mobile base, high-DOF dual arms with bionic wrists, compact storage (footprint < 0.5㎡), dual power system switching, dexterous three-finger hands with tactile feedback, omnidirectional perception system, powerful edge computing unit, and low cost. Peng stated that the LingXi X2-W is currently in the prototype stage, but with continuous breakthroughs in algorithms and models, it is expected to become a benchmark for the next generation of embodied intelligence task robots.

“YuanZheng reaches out, JingLing gets the job done, LingXi wins hearts,” Peng emphasized, noting that the synergistic efforts of these three series will help Agibot accelerate rapidly. “Looking ahead three years, Agibot aims to achieve deployment of hundreds of thousands of general-purpose robots, support autonomous generalization across hundreds of tasks, and build an open, evolvable, self-growing general robot ecosystem.”

PART 3. Business Initiatives: Multiple Measures to Improve Industry Development

Jiang Qingsong, Partner and Vice President of Agibot, stated that the company currently focuses on eight major scenarios in business: guided reception, entertainment/commercial performances, industrial intelligence, logistics sorting, security inspection, commercial cleaning, data collection/training, and scientific research/education. It has launched customized solutions and achieved large-scale applications across multiple industries.

To popularize the technology and advance scenario deployment, Jiang released the 2025 Partner Policy, proposing the construction of a multi-tier partner system. Based on the principle of “directing premium resources to premium partners,” it provides comprehensive support to jointly build a synergistic “technology-product-scenario” ecosystem and share industry dividends.

“Embodied intelligence is moving from the laboratory to all industries. Agibot not only has the industry’s most complete robot product family but has also built a channel system that allows partners to ‘board with low barriers and grow with high returns.’ The goal is to work with partners to truly convert AI’s creativity into customer productivity,” Jiang said.

Additionally, the conference featured four sub-forums, eight commercial scenario exhibition areas, and an AgiBot Night tech party. Through detailed product displays, case studies of scenario deployment, and immersive interactions, partners could directly experience the commercial capabilities of embodied intelligence.

As the industry’s first large-scale ecosystem gathering, the conference helped solidify consensus across the industry chain through the clear communication of its core strategy. In the future, Agibot will continue to join hands with partners to accelerate the commercial implementation of embodied intelligence, promote the industry’s shift from technological exploration to scale commercialization, and establish a new global benchmark for the embodied intelligence industry.

Media Contact

Organization: Shanghai Zhiyuan Innovation Technology Co., Ltd.

Contact Person: Jocelyn Lee

Website: https://www.zhiyuan-robot.com


City: Shanghai

Country: China

Release id:35598

The post Agibot First Partner Conference was Successfully Held Showing that Its Full-Chain Layout is Accelerating Commercialization of Embodied Intelligence appeared first on King Newswire. This content is provided by a third-party source. King Newswire makes no warranties or representations in connection with it. King Newswire is a press release distribution agency and does not endorse or verify the claims made in this release. If you have any complaints or copyright concerns related to this article, please contact the company listed in the ‘Media Contact’ section.


Shanghai, China, 17th Oct 2025 — Recently, Zhiyuan Innovation (Shanghai) Technology Co., Ltd. (hereinafter referred to as Agibot) has reached a project cooperation agreement with Fulin Precision Co., Ltd. (hereinafter referred to as Fulin Precision) valued at tens of millions of RMB. Nearly one hundred of Agibot’s Expedition A2-W robots will be deployed across Fulin Precision’s factories. This milestone represents not only the first large-scale commercial contract for embodied robots in China’s industrial sector but also the world’s first scaled deployment of such robots in smart manufacturing scenarios. It signifies that industrial embodied AI has officially transitioned from the technical validation phase into a new era of scalable commercialization.

In July 2025, the first set of Expedition A2-W robots successfully completed a live demonstration of routine industrial operations in a material feeding scenario on Fulin Precision’s production line. Its capability of handling 1,000 turnover boxes per shift successfully matched the full production schedule demand for a single production line that month. This current deployment of nearly one hundred units marks a leapfrog upgrade from a “single-factory pilot verification” to “multi-factory, full-line coverage” in the application of robotic box depalletizing and material feeding. The robots’ operational scope has expanded from the initial 2 production line points to 15 feeding points across two core workshops – the powertrain and reducer workshops – undertaking raw material delivery tasks for a daily production capacity of over 500 units. They also handle automated empty box retrieval, performing nearly ten thousand box-moving actions per shift, becoming the core transport force connecting the production processes of the two workshops. The Agibot Expedition A2-W wheeled general-purpose robot is specifically designed for flexible intelligent manufacturing scenarios and can be widely applied in various tasks such as turnover box depalletizing/palletizing, transportation, and loading/unloading.

A breakthrough innovation of this cooperation lies in the deep collaborative system built through “Embodied Robot + AMR”. In this system, the embodied A2-W robot is responsible for picking and placing turnover boxes from multi-layer racks, while the AMR handles the transport of heavy full-pallet materials within the workshop. Relying on the production system’s intelligent task dispatch and dynamic material call mechanisms, the AMR delivers materials from the picking area to the line-side feeding points. The embodied A2-W robot can automatically identify boxes, autonomously adjust its posture, complete depalletizing/feeding and empty box retrieval, realizing a fully automated workflow from material outbound to production line feeding and empty box circulation. This system leverages both the embodied robot’s generalized operational capability for turnover boxes and its adaptability to complex scenes, as well as the AMR’s efficiency advantages in long-distance, high-load transportation. It addresses the pain points of traditional automation equipment in adapting flexibly to multi-workshop, multi-category scenarios, providing a directly reusable collaborative solution for manufacturing scenarios involving turnover box depalletizing/palletizing, transportation, and loading/unloading.
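The division of labor described above — the AMR hauling pallets on material calls while the A2-W handles box-level depalletizing, feeding, and empty-box retrieval — can be sketched as a toy dispatch loop. All names here (`Dispatcher`, `call_material`, the log strings) are illustrative, not part of Agibot's or Fulin Precision's actual production system:

```python
from collections import deque

class Dispatcher:
    """Toy model of the 'Embodied Robot + AMR' collaboration sketched above."""

    def __init__(self):
        self.material_calls = deque()   # feeding points that have called for material
        self.empty_boxes = 0            # empties awaiting retrieval

    def call_material(self, feeding_point):
        # A line-side feeding point triggers a dynamic material call.
        self.material_calls.append(feeding_point)

    def step(self):
        """One dispatch cycle: AMR long-haul transport, then A2-W box handling."""
        log = []
        if self.material_calls:
            point = self.material_calls.popleft()
            log.append(f"AMR: deliver full pallet to {point}")      # heavy transport
            log.append(f"A2-W: depalletize and feed at {point}")    # box-level work
            self.empty_boxes += 1                                    # a box is emptied
        if self.empty_boxes:
            log.append("A2-W: retrieve empty box")                  # empty circulation
            self.empty_boxes -= 1
        return log

d = Dispatcher()
d.call_material("powertrain-P3")
print(d.step())
```

The point of the split is visible even in this sketch: the queue decouples when material is requested from when either robot acts on it, which is what lets the two machines run at their own pace.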

Furthermore, the large-scale application of the Expedition A2-W is underpinned by three core technological breakthroughs:

1. Multi-modal Perception System: Real-time recognition of dynamic obstacles (personnel, equipment, materials) ensuring safety in human-robot collaborative environments.
2. Dual-arm Coordination & High-precision Manipulation: Adapts to containers of varying sizes and weights.
3. Autonomous Error-correction Algorithm: Enables the robot to autonomously recover workflow during unexpected anomalies without human intervention.

Deng Yang, Director of the Engineering Department at Fulin Precision, stated: “The performance of the Expedition A2-W on the production line has exceeded expectations – it autonomously adjusts its grasping posture for irregularly stacked boxes, avoids obstacles in real-time when personnel cross its path, and achieved zero failures in nearly ten thousand operations per shift. Its anti-interference and error correction capabilities are far superior to traditional automation equipment. More importantly, it can take over repetitive tasks in the workshop that are prone to causing lumbar muscle strain, allowing workers to focus on more valuable operations. With the continuous introduction and application of such robots, industrial embodied intelligence is expected to reshape manufacturing production models, pushing the industry’s intelligent transformation into a new stage. This project cooperation signifies the arrival of large-scale commercialization for industrial embodied intelligence and represents an upgrade in the industrial ecological partnership between Agibot and Fulin Precision. As robotic application scenarios scale up, Fulin Precision, as a joint supplier for robots, has already sensed the opportunities brought by industry innovation.”

Wang Chuang, President of the General Purpose Business Unit at Agibot, stated: “This nearly hundred-unit level commercial order with Fulin Precision not only verifies the technological maturity of embodied robots but, more importantly, signifies manufacturing’s recognition of their commercial value. We consider this project a benchmark case for the large-scale application of industrial embodied robots. We will use the implementation path of ‘Scenario Penetration – Data Accumulation – Technology Iteration’ as a reference for more industrial scenarios such as automotive manufacturing and 3C electronics.”

Media Contact

Organization: Shanghai Zhiyuan Innovation Technology Co., Ltd.

Contact Person: Jocelyn Lee

Website: https://www.zhiyuan-robot.com


City: Shanghai

Country: China

Release id:35601

The post Agibot Wins Fulin Precision Order Valued at Tens of Millions RMB – Marking a Breakthrough in Scaled Deployment of Industrial Embodied AI appeared first on King Newswire.


The crypto community welcomes a new wave of excitement as UpWeGo ($UP) officially launches a community-driven meme coin built on the Ethereum blockchain, symbolizing ambition, humor, and collective growth. With the rallying cry “The Only Way Is Up,” UpWeGo unites crypto enthusiasts worldwide under a shared mission of positivity and progress.

The new meme coin has been designed for the people and powered by Ethereum’s decentralized foundation; UpWeGo embodies the unstoppable spirit of rising together. There are no taxes, no developer wallets, and a fully burnt liquidity pool (LP) – ensuring a fair and transparent ecosystem where community truly comes first.

The key tokenomics of the newly launched coin are as follows:

• Total Supply: 999,999,999,999

• Tax: 0/0

• LP: Burnt

• Contract Address: 0xF95151526F586DB1C99FB6EBB6392AA9CFE13F8E

In a recent development, UpWeGo has been listed on CoinMarketCap and continues to gain momentum as new holders join daily, spreading the project’s message across the crypto space. More than just another meme coin, UpWeGo aims to represent optimism, resilience, and unity in an ever-changing market – a reminder that no matter what happens, the only direction worth heading is UP.

With UpWeGo, the company is not just building a token – it is building a movement. The token is about energy, humor, and belief in what the crypto community can achieve when it rises together.

About the Company – UpWeGo

UpWeGo ($UP) is a community-driven meme coin built on the Ethereum blockchain, created to capture the unstoppable energy of collective growth and optimism in the crypto world. With no taxes, no developer wallets, and a burnt liquidity pool, UpWeGo stands for transparency, equality, and the power of shared belief. The token embodies a movement that celebrates ambition, humor, and unity, reminding the global crypto community that no matter the market’s direction, the only way is UP.

Marketing partner: crmoonboy (crmoon)

For further details visit the following links:

• Website: https://upwegoeth.xyz/

• Twitter (X): https://twitter.com/UPerc20

• Telegram: https://t.me/EthereumUpwego

Media Contact

Organization: upwego

Contact Person: Tadeusz Kusiak

Website: https://upwegoeth.xyz/

Email: upuptoken@gmail.com

Address: Tuszyńska 32

City: Rzgów

Country: Poland

Release id:35615

The post UpWeGo Officially Launches its New Driven Meme Coin with the Slogan The Only Way is Up for this Community appeared first on King Newswire.


Shanghai, China, 17th Oct 2025 — Recently, Agibot has officially launched Genie Envisioner (GE), a unified world model platform for real-world robot control. Departing from the traditional fragmented data-training-evaluation pipeline, GE integrates future frame prediction, policy learning, and simulation evaluation for the first time into a closed-loop architecture centered on video generation. This enables robots to perform end-to-end reasoning and execution—from seeing to thinking to acting—within the same world model. Trained on 3,000 hours of real robot data, GE-Act not only significantly surpasses existing state-of-the-art (SOTA) methods in cross-platform generalization and long-horizon task execution but also opens a new technical pathway for embodied intelligence, from visual understanding to action execution.

Current robot learning systems typically adopt a phased development model—data collection, model training, and policy evaluation—where each stage is independent and requires specialized infrastructure and task-specific tuning. This fragmented architecture increases development complexity, prolongs iteration cycles, and limits system scalability. The GE platform addresses this by constructing a unified video-generative world model that integrates these disparate stages into a closed-loop system. Built upon approximately 3,000 hours of real robot manipulation video data, GE establishes a direct mapping from language instructions to the visual space, preserving the complete spatiotemporal information of robot-environment interactions.

01/ Core Innovation: A Vision-Centric World Modeling Paradigm

The core breakthrough of GE lies in constructing a vision-centric modeling paradigm based on world models. Unlike mainstream Vision-Language-Action (VLA) methods that rely on Vision-Language Models (VLMs) to map visual inputs into a linguistic space for indirect modeling, GE directly models the interaction dynamics between the robot and the environment within the visual space. This approach fully retains the spatial structures and temporal evolution information during manipulation, achieving more accurate and direct modeling of robot-environment dynamics. This vision-centric paradigm offers two key advantages:

Efficient Cross-Platform Generalization Capability: Leveraging powerful pre-training in the visual space, GE-Act requires minimal data for cross-platform transfer. On new robot platforms like the Agilex Cobot Magic and Dual Franka, GE-Act achieved high-quality task execution using only 1 hour (approximately 250 demonstrations) of teleoperation data. In contrast, even models like π0 and GR00T, which are pre-trained on large-scale multi-embodiment data, underperformed GE-Act with the same amount of data. This efficient generalization stems from the universal manipulation representations learned by GE-Base in the visual space. By directly modeling visual dynamics instead of relying on linguistic abstractions, the model captures underlying physical laws and manipulation patterns shared across platforms, enabling rapid adaptation.

Accurate Execution Capability for Long-Horizon Tasks: More importantly, vision-centric modeling endows GE with powerful future spatiotemporal prediction capabilities. By explicitly modeling temporal evolution in the visual space, GE-Act can plan and execute complex tasks requiring long-term reasoning. In ultra-long-step tasks such as folding a cardboard box, GE-Act demonstrated performance far exceeding existing SOTA methods. Taking box folding as an example, this task requires the precise execution of over 10 consecutive sub-steps, each dependent on the accurate completion of the previous ones. GE-Act achieved a 76% success rate, while π0 (specifically optimized for deformable object manipulation) reached only 48%, and UniVLA and GR00T failed completely (0% success rate). This enhancement in long-horizon execution capability stems not only from GE’s visual world modeling but also benefits from the innovatively designed sparse memory module, which helps the robot selectively retain key historical information, maintaining precise contextual understanding in long-term tasks. By predicting future visual states, GE-Act can foresee the long-term consequences of actions, thereby generating more coherent and stable manipulation sequences. In comparison, language-space-based methods are prone to error accumulation and semantic drift in long-horizon tasks.

02/ Technical Architecture: Three Core Components

Based on the vision-centric modeling concept, the GE platform consists of three tightly integrated components:

GE-Base: Multi-View Video World Foundation Model: GE-Base is the core foundation of the entire platform. It employs an autoregressive video generation framework, segmenting output into discrete video chunks, each containing N frames. The model’s key innovations lie in its multi-view generation capability and sparse memory mechanism. By simultaneously processing inputs from three viewpoints (head camera and two wrist cameras), GE-Base maintains spatial consistency and captures the complete manipulation scene. The sparse memory mechanism enhances long-term reasoning by randomly sampling historical frames, enabling the model to handle manipulation tasks lasting several minutes while maintaining temporal coherence.
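The sparse memory mechanism just described — dense recent context plus randomly sampled older frames — can be illustrated with a small index-selection sketch. This is not Agibot's implementation; the function name and the split between "recent" and "older" frames are assumptions made for illustration:

```python
import random

def sparse_memory(history, recent_k=4, memory_k=4, seed=0):
    """Return frame indices: all of the last `recent_k` frames, plus
    `memory_k` frames randomly sampled from the older history, so that
    minutes-long episodes fit a fixed context budget."""
    n = len(history)
    recent = list(range(max(0, n - recent_k), n))       # dense recent window
    older = list(range(0, max(0, n - recent_k)))        # everything before it
    rng = random.Random(seed)
    sampled = sorted(rng.sample(older, min(memory_k, len(older))))
    return sampled + recent

frames = list(range(100))        # stand-in for 100 past video frames
print(sparse_memory(frames))     # 4 sparse older indices, then 96..99
```

The appeal of random (rather than uniform-stride) sampling is that, over training, the model sees history at many effective time scales without the context length growing with episode duration.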

Training uses a two-stage strategy: first, temporal adaptation training (GE-Base-MR) with multi-resolution sampling at 3-30Hz makes the model robust to different motion speeds; subsequently, policy alignment fine-tuning (GE-Base-LF) at a fixed 5Hz sampling rate aligns with the temporal abstraction of downstream action modeling. The entire training was completed in about 10 days using 32 A100 GPUs on the AgiBot-World-Beta dataset, comprising approximately 3,000 hours and over 1 million real robot data instances.
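The multi-resolution sampling in the first training stage can be sketched as follows. The 3–30 Hz range comes from the text above; the function and the way a rate is drawn per clip are illustrative assumptions, not GE-Base's actual data loader:

```python
import random

def sample_clip(video_fps, clip_len, target_hz, start=0):
    """Frame indices into a source video to form a clip at `target_hz`.
    A larger stride yields a 'slower' (lower-rate) view of the same motion."""
    stride = max(1, round(video_fps / target_hz))
    return [start + i * stride for i in range(clip_len)]

random.seed(0)
hz = random.choice([3, 5, 10, 15, 30])                 # rate drawn per clip
clip = sample_clip(video_fps=30, clip_len=8, target_hz=hz)
print(hz, clip)
```

Drawing the rate per clip exposes the model to both fast and slow motion from the same recordings, which is what makes it robust across motion speeds before the fixed 5 Hz fine-tuning stage.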

GE-Act: Parallel Flow Matching Action Model: GE-Act serves as a plug-and-play action module, converting the visual latent representations from GE-Base into executable robot control commands through a lightweight architecture with 160M parameters. Its design cleverly parallels GE-Base’s visual backbone, using DiT blocks with the same network depth as GE-Base but smaller hidden dimensions for efficiency. Via a cross-attention mechanism, the action pathway fully utilizes semantic information from visual features, ensuring generated actions align with task instructions.

 

GE-Act’s training involves three stages: action pre-training projects visual representations into the action policy space; task-specific video adaptation updates the visual generation component for specific tasks; task-specific action fine-tuning refines the full model to capture fine-grained control dynamics. Notably, its asynchronous inference mode is key: the video DiT runs at 5Hz for single-step denoising, while the action model runs at 30Hz for 5-step denoising. This “slow-fast” two-layer optimization enables the system to complete 54-step action inference in 200ms on an onboard RTX 4090 GPU, achieving real-time control.
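The "slow-fast" scheduling described above can be sketched in a few lines: a slow visual path refreshes a latent at 5 Hz, while a fast action path emits commands at 30 Hz, reusing the most recent latent between refreshes. The rates are taken from the text; the loop itself is an illustrative model, not GE-Act's runtime:

```python
VIDEO_HZ, ACTION_HZ = 5, 30   # per the text: video DiT at 5 Hz, action model at 30 Hz

def run_one_second():
    """Simulate one second of slow-fast asynchronous inference."""
    events = []
    latent = None
    for tick in range(ACTION_HZ):                    # 30 action ticks per second
        if tick % (ACTION_HZ // VIDEO_HZ) == 0:
            latent = f"latent@{tick}"                # slow path: refresh visual latent
            events.append(("video", tick))
        events.append(("action", tick, latent))      # fast path: act on latest latent
    return events

ev = run_one_second()
print(sum(1 for e in ev if e[0] == "video"),
      sum(1 for e in ev if e[0] == "action"))        # 5 video refreshes, 30 actions
```

The design choice this illustrates: control latency is bounded by the cheap fast path, while the expensive visual model only has to keep up at a fraction of the control rate.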

GE-Sim: Hierarchical Action-Conditioned Simulator: GE-Sim extends GE-Base’s generative capability into an action-conditioned neural simulator, enabling precise visual prediction through a hierarchical action conditioning mechanism. This mechanism includes two key components: Pose2Image conditioning projects 7-degree-of-freedom end-effector poses (position, orientation, gripper state) into the image space, generating spatially aligned pose images via camera calibration; Motion vectors calculate the incremental motion between consecutive poses, encoded as motion tokens and injected into each DiT block via cross-attention. 

This design allows GE-Sim to accurately translate low-level control commands into visual predictions, supporting closed-loop policy evaluation. In practice, action trajectories generated by the policy model are converted by GE-Sim into future visual states; these generated videos are then fed back to the policy model to produce the next actions, forming a complete simulation loop. Parallelized on distributed clusters, GE-Sim can evaluate thousands of policy rollouts per hour, providing an efficient evaluation platform for large-scale policy optimization. Furthermore, GE-Sim also acts as a data engine, generating diverse training data by executing the same action trajectories under different initial visual conditions.
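The closed simulation loop described above — policy proposes actions, simulator predicts the next visual state, prediction feeds back into the policy — reduces to a simple rollout skeleton. `toy_policy` and `toy_simulator` are numeric stand-ins, not GE-Sim's API:

```python
def toy_policy(state):
    """Stand-in policy: derive an 'action' from the current visual state."""
    return state + 1

def toy_simulator(state, action):
    """Stand-in action-conditioned simulator: predict the next visual state."""
    return state + action

def rollout(initial_state, steps):
    state, trace = initial_state, []
    for _ in range(steps):
        action = toy_policy(state)
        state = toy_simulator(state, action)   # feed prediction back to the policy
        trace.append(state)
    return trace

print(rollout(0, 4))  # → [1, 3, 7, 15]
```

Because each step depends only on the previous predicted state, many such rollouts are independent of one another, which is what makes the thousands-per-hour parallel evaluation mentioned above possible.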

These three components work closely together to form a complete vision-centric robot learning platform: GE-Base provides powerful visual world modeling capabilities, GE-Act enables efficient conversion from vision to action, and GE-Sim supports large-scale policy evaluation and data generation, collectively advancing embodied intelligence.

EWMBench: World Model Evaluation Suite

Additionally, to evaluate the quality of world models for embodied tasks, the team developed the EWMBench evaluation suite alongside the core GE components. It provides comprehensive scoring across dimensions including scene consistency, trajectory accuracy, motion dynamics consistency, and semantic alignment. Subjective ratings from multiple experts showed high consistency with EWMBench rankings, validating its reliability for assessing robot task relevance. In comparisons with advanced models like Kling, Hailuo, and OpenSora, GE-Base achieved top results on multiple key metrics reflecting visual modeling quality, aligning closely with human judgment.

Open-Source Plan & Future Outlook

The team will open-source all code, pre-trained models, and evaluation tools. Through its vision-centric world modeling, GE pioneers a new technical path for robot learning. The release of GE marks a shift for robots from passive execution towards active ‘imagine-verify-act’ cycles. In the future, the platform will be expanded to incorporate more sensor modalities, support full-body mobility and human-robot collaboration, continuously promoting the practical application of intelligent manufacturing and service robots.

Media Contact

Organization: Shanghai Zhiyuan Innovation Technology Co., Ltd.

Contact Person: Jocelyn Lee

Website: https://www.zhiyuan-robot.com


City: Shanghai

Country: China

Release id:35600

The post Agibot Released the Industry First Open-Source Robot World Model Platform – Genie Envisioner appeared first on King Newswire.


Shanghai, China, 17th Oct 2025 — Recently, Agibot officially launched the “Genie Trailblazer Global Recruitment Program”, aiming to invite top researchers worldwide to collaboratively define and create artificial intelligence capable of perceiving, reasoning, and emotionally engaging with the physical world.

This initiative provides a platform for scientific collaboration, focusing on three core research directions: general-purpose embodied intelligence models, embodied world models, and advanced teleoperation. Agibot warmly welcomes researchers from diverse backgrounds, including multimodal modeling, reinforcement learning, robotics, and world model experts, as long as they are passionate about shaping the future of embodied intelligence.

Moreover, Agibot offers robust support and resources to participating researchers. During the project period, researchers will receive free access to Agibot’s general-purpose embodied intelligence robot, Genie G1, enabling them to conduct data collection, model testing, and iterative development using this advanced physical platform to accelerate the realization of their ideas. Additionally, Agibot provides meticulously annotated real-world datasets and a high-fidelity digital twin environment, offering critical support to tackle the core challenges of sim-to-real transfer.

On the technical front, the high-performance robot platform features agile mobility and dexterous manipulation capabilities, integrated with an end-to-end data closed-loop system that spans the entire pipeline—from data collection and simulation-based training to real-world deployment. Researchers can deploy general-purpose embodied models on Agibot’s platform for end-to-end training and testing in real environments. They can also leverage AgiBot World, AgiBot Digital World, and specialized embodied-scenario datasets to validate their models’ understanding of physical laws, thereby endowing robots with planning and “imagination” capabilities.

To incentivize participation, Agibot has established a cash prize pool totaling RMB 1 million. Teams selected for the program that deliver outstanding results will be awarded the “GT-Star” title. Exceptional developers will also gain direct access to internship opportunities and fast-track recruitment pathways at Agibot.

Agibot believes that the future of intelligence begins with embodiment. This initiative is expected to ignite global researchers’ innovative potential and collectively advance the frontier of embodied intelligence technologies.

Media Contact

Organization: Shanghai Zhiyuan Innovation Technology Co., Ltd.

Contact Person: Jocelyn Lee

Website: https://www.zhiyuan-robot.com


City: Shanghai

Country: China

Release id:35597

The post Genie Trailblazer Program by Agibot Empowers Embodied Intelligence Technology appeared first on King Newswire.


Columbus, United States, 17th Oct 2025 – A study in the journal Sensors unveils an affordable and efficient system that captures high-fidelity spectral data in low-light and underground environments, opening new possibilities for geology, agriculture, and manufacturing.

A research team from The Ohio State University has developed an active hyperspectral imaging system capable of capturing precise spectral data in the complete absence of light. The design, constructed from low-cost commercial components, presents an accessible and efficient alternative to traditional technologies.

The breakthrough, detailed in the peer-reviewed journal Sensors, introduces a compact and energy-efficient prototype poised to transform material analysis in environments previously considered inaccessible for high-fidelity spectral sensing.


Traditional hyperspectral imaging systems identify materials by analyzing their distinctive spectral “fingerprints” across hundreds of wavelength bands. However, these passive systems are fundamentally dependent on ambient light and often require costly, complex optical components, restricting their use in dark or confined settings such as underground tunnels, mines, or even for quality control in dimly lit industrial facilities.
The Ohio State team’s approach overcomes these limitations by replacing passive detection with a programmable array of 76 single-wavelength LEDs synchronized with a full-spectrum camera. This active illumination method allows the system to generate its own light source, ensuring high-quality spectral data acquisition regardless of external lighting conditions.
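The active-illumination design just described amounts to a sweep: fire each single-wavelength LED in turn, trigger a synchronized exposure, and stack the frames into a hyperspectral cube. The sketch below shows that control flow; the hardware calls are placeholders and the function names are illustrative, not the study's actual software:

```python
NUM_LEDS = 76  # one single-wavelength LED per spectral band, per the study

def capture_frame(band):
    """Placeholder for one synchronized camera exposure under LED `band`.
    A real system would return a 2-D monochrome image array."""
    return [[band]]  # pretend 1x1 frame tagged with its band index

def capture_cube():
    """Sweep all LEDs once, producing one frame per spectral band."""
    cube = []
    for band in range(NUM_LEDS):
        # led_on(band); frame = camera.trigger(); led_off(band)  # real hardware here
        cube.append(capture_frame(band))
    return cube

cube = capture_cube()
print(len(cube))  # 76 spectral bands per acquisition
```

Stacking the 76 frames gives each pixel a 76-point spectrum, which is the input the team's machine-learning classifier operates on.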


“Our goal was to make hyperspectral imaging more efficient, accessible, and usable in real-world settings — not just in laboratories,” said Dr. Rongjun Qin, a professor in the Department of Civil, Environmental and Geodetic Engineering and the corresponding author of the study. “This system allows us to take the lab to the field, wherever that field may be.”

To validate its performance, the team tested the prototype on a variety of samples, including fruits, plant leaves, and rock specimens, under challenging low-light conditions. The results demonstrated a remarkable capability to distinguish subtle spectral changes—such as variations in fruit freshness or mineral composition—that are invisible to standard RGB cameras. A machine learning analysis of the captured data achieved a classification accuracy of up to 90%, a significant improvement over the 70% accuracy obtained with conventional imaging under similar conditions.


Beyond its high accuracy, the system offers substantial gains in efficiency. The prototype reduced the full-image acquisition time from 20 seconds to under 2.5 seconds. By eliminating the need for complex filters and bulky external lighting setups, the design is not only more cost-effective but also highly portable and consumes less power. Its lightweight form factor is suitable for portable field use or integration with drone-based platforms, enabling faster and more sustainable field operations.

A central theme of the research is the democratization of advanced sensing technology. Built entirely from commercially available, or “off-the-shelf,” components, the system drastically lowers the financial barrier for smaller laboratories and research institutions.

“All of the components are low-cost and readily available. This means laboratories and universities worldwide can adapt our design with minimal resources,” stated Yang Tang, a Ph.D. student and the study’s lead author. “We believe this will empower a wider scientific community and open up new opportunities for AI-driven material analysis and education in optical sensing.”

The potential applications for this technology are extensive and cross-sectoral. In geological exploration and mining, it can operate in dark, subterranean environments without the need for expensive and cumbersome lighting equipment. In agriculture and food safety, it offers a reliable method for monitoring crop health or detecting contamination. For manufacturing and environmental monitoring, it provides consistent and repeatable data under any lighting condition, enhancing quality control and compliance.


The research team plans to continue refining the system by enhancing its capture speed, increasing spectral resolution, and further integrating the hardware into a fully portable and automated imaging platform. “We envision a future where hyperspectral imaging becomes as routine as taking a photograph — efficient, affordable, and universally accessible,” added Dr. Qin.

About the Study

 The full study, titled “An Active Hyperspectral Imager by a Programmable LED Array and a Full-Spectrum Camera,” is published in Sensors. It can be accessed at the following DOI: https://doi.org/10.3390/s23031437

About The Ohio State University 

The Ohio State University is one of the leading public research institutions in the United States, advancing interdisciplinary science and engineering with a focus on data analytics, sustainability, and applied technology.

About CAIMO Global 

CAIMO Global is an academic media and science communication agency dedicated to bridging academic research and global audiences. The agency collaborates with universities, research institutions, and scholars worldwide to help academic studies gain understanding, increase visibility, and generate impact, ensuring that outstanding research is seen, shared, and remembered.

 

Media Contact

Organization: CAIMO Global Academic Media Co., Ltd.

Contact Person: YangXue

Website: https://flo.host/QNirG-Q/


City: Columbus

Country: United States

Release id:35603

The post New Active Hyperspectral System From Ohio State University Researchers Enables Precise Spectral Sensing in Complete Darkness appeared first on King Newswire.
