DEEPX (CEO Lokwon Kim), a pioneer in ultra-low-power on-device AI semiconductors, is demonstrating its mass-production-ready product lineup in collaboration with 11 leading Taiwanese industrial PC and server companies at Computex Taipei 2025, held from May 20–23. Participating partners include MSI, IBASE, Inventec, Biostar, Portwell, AIC, Jetone, Mitwell, AAEON, DFI, and Zotac. This extensive collaboration solidifies DEEPX's position as a global AI semiconductor company with the most active local ecosystem network in Taiwan—one of the world's foremost hardware hubs.

During the exhibition, DEEPX operates its own booth while also showcasing its NPUs live at partner booths. Global buyers and visitors can experience DEEPX's production-ready AI accelerators integrated into industrial PCs, workstations, and servers, including:
• DX-M1 M.2 module for sub-3W ultra-low-power inference
• DX-H1 PCIe accelerator for high-performance, multi-channel AI processing
• DX-AiBOX, an edge system optimized for seamless deployment across customer platforms
• DX-V3 SoC, dedicated to AI-powered smart cameras

These solutions are being validated across various use cases with partners in factory automation, surveillance systems, smart city infrastructure, and intelligent buildings.

The first-generation DX Series delivers up to 20x higher power efficiency and one-tenth the heat and total cost of ownership (TCO) compared to conventional AI accelerators. These advantages are setting new standards for fanless edge system design and cost-efficient cloud infrastructure, especially within Taiwan's ecosystem, which accounts for over 80% of the world's IPC and server production.

"Taiwan is a global manufacturing hub with a complete AI value chain—from components to systems to end solutions," said Lokwon Kim, CEO of DEEPX. "By working closely with local partners, DEEPX is reshaping the high-cost, high-power GPGPU paradigm and enabling customers to build the most stable and efficient AI systems."

Recently, DEEPX was named one of the "Top 100 AI Companies" by Business Next (數位時代), a leading business and technology publication in Taiwan. The company has also established a Taipei branch office, localizing its customer support and logistics operations while accelerating partnerships with companies including Lanner, ASUS, ASRock, Supermicro, LEX, and Innodisk. To stay updated on the latest from DEEPX, follow the official DEEPX LinkedIn page.

The DEEPX booth is located on the 4th floor of Nangang Exhibition Center Hall 2, in the AI & Edge Computing Zone, with additional showcases available throughout the 1st to 4th floors at partner booths.
The Internet has become an integral part of our everyday lives in the current digital era, changing how we interact, study, work, and relax. Online settings have made previously unheard-of levels of accessibility, flexibility, and creativity possible. Whether we're streaming our favorite programs, working with colleagues across countries, or attending virtual schools, there's so much we can do online now.

Empowering choices

One notable example of leveraging online environments for user empowerment is BonusFinder in Canada. This platform assists users in navigating the online gaming landscape by providing comprehensive reviews and comparisons of various gaming sites. By offering detailed information on bonuses, game selections, and user experiences, BonusFinder enables Canadians to make informed decisions tailored to their preferences. This exemplifies how online platforms can enhance user autonomy and satisfaction in the digital realm.

Flexibility and accessibility in online learning

Online education has revolutionized access to learning, breaking down geographical and temporal barriers. Students can now engage with course materials at their own pace, accommodating diverse schedules and learning styles. This flexibility particularly benefits individuals balancing education with work or family commitments. Institutions like Boston University highlight that online programs often offer lower tuition fees and eliminate commuting costs, making education more affordable and accessible to a broader audience.

Personalized learning experiences

Digital platforms facilitate personalized learning by allowing students to focus on areas where they need improvement and skip content they've mastered. Adaptive learning technologies and interactive tools, such as quizzes and simulations, cater to individual learning preferences, enhancing engagement and retention. Moreover, revisiting recorded lectures and materials enables students to reinforce their understanding at their convenience.

Promoting inclusivity and accessibility

Online environments have significantly enhanced inclusivity and accessibility in education and professional development. Digital platforms often incorporate features such as closed captions, screen readers, and adjustable text sizes, making content more accessible to individuals with disabilities. Moreover, the flexibility of online learning allows individuals from diverse backgrounds and circumstances, including those in remote areas or with caregiving responsibilities, to access quality education and training. Institutions like Bellevue University emphasize that online education breaks down geographical and temporal barriers, providing equal opportunities for learners regardless of location or personal commitments.

Enhancing mental health support

The digital realm has also expanded access to mental health resources, offering support through teletherapy, online support groups, and mental health apps. These platforms provide convenient and often more affordable options for individuals seeking assistance, reducing the stigma associated with mental health care. However, it's crucial to approach online mental health resources with discernment. Experts caution against over-reliance on AI-driven therapy chatbots.
While they can offer immediate support, they lack the nuanced understanding of human therapists and may not be suitable for all situations.

Fostering global collaboration and cultural exchange

Online environments facilitate global collaboration, connecting individuals across continents to share ideas, work on projects, and learn from diverse perspectives. This interconnectedness enriches the learning experience, promotes cultural exchange, and prepares individuals for the globalized workforce. Various platforms offer courses from universities worldwide, allowing learners to engage with international content and peers. Such exposure broadens intellectual horizons and fosters empathy and cross-cultural understanding, essential skills in today's interconnected world.

Environmental sustainability

Transitioning to online environments contributes to environmental conservation by reducing the need for physical resources. Traditional classrooms consume significant amounts of paper and energy, whereas online learning minimizes paper usage and lowers carbon emissions associated with commuting. According to The Starfish Canada, online education helps save millions of trees annually, decreasing paper waste and the demand for physical infrastructure.

Enhancing digital literacy and technical skills

Engagement with online platforms inherently improves digital literacy, a crucial skill in today's technology-driven world. Students become proficient in using various digital tools and platforms, such as learning management systems, video conferencing software, and collaborative applications. These skills are transferable to the workforce, where digital proficiency is increasingly valued.

Fostering global connections and collaboration

Online environments enable individuals to connect and collaborate across geographical boundaries. Virtual classrooms and forums bring together diverse perspectives, enrich discussions, and promote cultural exchange. This global interconnectedness prepares students for the international nature of the workforce, where cross-cultural communication and collaboration are essential.

Supporting mental health and well-being

The flexibility and accessibility of online environments can positively impact mental health by reducing stress associated with rigid schedules and commuting. Institutions emphasize the importance of supportive online learning environments that address mental health, offering counseling and wellness programs to promote resilience and academic success.

Encouraging lifelong learning

Online platforms provide opportunities for continuous learning beyond traditional educational settings. Professionals can upskill or reskill through online courses, staying current with industry trends and advancements. This accessibility to lifelong learning fosters personal growth and adaptability in a rapidly evolving job market.

Conclusion

Online environments have become integral to modern life, offering numerous benefits that enhance education, promote sustainability, and foster global connections. Platforms like BonusFinder in Canada exemplify how digital tools can empower users to make informed choices tailored to their preferences. As technology continues to evolve, embracing the advantages of online environments will be key to personal and societal advancement in the digital age.
Wedge Networks, a global leader in real-time threat prevention, and Edgecore Networks, a leader in open networking and Wi-Fi infrastructure, are proud to announce the launch of a joint OEM-based Managed Secure Wi-Fi Solution, WedgeCND, combining carrier-grade wireless connectivity with advanced, AI-powered cybersecurity.

Set to debut at Computex 2025, this integrated solution enables MSPs, ISPs, and government organizations to deploy cyber-secure, cloud-managed Wi-Fi services at scale, with unmatched cost-efficiency, zero-touch provisioning, and enterprise-grade protection for all connected endpoints.

"This partnership combines Edgecore's robust, scalable Wi-Fi infrastructure with Wedge's patented real-time threat prevention to deliver WedgeCND as a truly AI-driven, cloud-first, and cyber-first managed Wi-Fi platform. Our partnership with Edgecore made this a highly cost-effective solution, delivering significant savings in Total Cost of Ownership when compared with industry peers," said Dr. Hongwen Zhang, CEO and CTO of Wedge Networks. "We're excited to introduce WedgeCND at Computex as the world's most cost-effective, AI-driven, and security-centric Wi-Fi offering."

Next-Gen Managed Wi-Fi – Key Highlights

✅ Edgecore's high-performance, leading-edge technology with the broadest Wi-Fi use case coverage.
• 802.11ac (Wi-Fi 5), 802.11ax (Wi-Fi 6/6E), 802.11be (Wi-Fi 7)
• Supports all Wi-Fi use cases: SMB, WFH, Enterprise, MDU, campus, warehouse, industrial, full indoor/outdoor portfolio
• Delivers significant savings in combined AP and cloud licensing costs compared to industry peers.

✅ Zero-Touch Cloud Management at Scale
• Edgecore ecCLOUD enables centralized, multi-tenant management of APs, networks, and firmware — ideal for service providers and distributed enterprises.

✅ Cybersecurity Built-In, Not Bolted On
WedgeCND is a Wi-Fi-optimized configuration of WedgeARP, an award-winning and patented network management and real-time threat prevention platform from Wedge Networks. WedgeCND is tailor-configured to meet the enterprise-grade security compliance needs of cloud-managed Wi-Fi networks. Through a lightweight cloud orchestration architecture, it enables Wi-Fi APs with high-performance, advanced security functions:
• Secure Web Gateway (SWG)
• AI-based malware prevention to stop never-before-seen targeted malware
• NGFW with Deep Packet and Deep Content Inspections
  o Network DLP
  o VLAN-based network segmentation
  o ZTNA for all devices

✅ AI-Driven Real-time Protection for Every Connected Device
• Wedge's AI-powered real-time inspection engine blocks threats inline without perceivable latency and stops never-before-seen malware at the network level — making it ideal for zero-trust, high-performance Wi-Fi networks.

✅ Cost-Effective, Scalable, Globally Available
• Designed for high-density, cost-sensitive deployments.
• Available through both Edgecore and Wedge's global OEM, SI, and MSP channel network.

"Edgecore is excited to enable this powerful secure Wi-Fi solution with Wedge Networks. WedgeCND will also be available for all Edgecore ecCLOUD customers and partners," said TengTai Hsu, Vice President of Edgecore Networks.
"This offering brings together best-in-class wireless hardware with cutting-edge enterprise-grade security functions — all managed through the cloud, and deployable at massive scale."Join Us at Computex 2025Experience live demos of the WedgeCND, a partnership between Wedge and Edgecore, for Managed Secure Wi-Fi solution at Computex 2025, where the combined power of high-performance APs and enterprise-grade cyber protection will be on full display.Availability• Now available for Edgecore EAP101 and EAP102.• Full Edgecore/ecCLOUD AP lineup support expected Q3 2025.• Offered in Canada and globally via Wedge Networks and Edgecore international channel partners.Partner with UsFor sales and partnership opportunities, pilot programs, or early deployment support:• Wedge Networks:sales@wedgenetworks.com• Edgecore Networks: ecwifi-info@edge-core.comClick/scan the QR code to see it in action: About Wedge NetworksHeadquartered in Calgary, Canada, Wedge Networks is an innovation leader in real-time threat prevention. Its Cloud Orchestrated Edge Detection and Response software platform WedgeARP™, with its patented Deep Content Inspection and AI threat detection and prevention engines, are protecting millions of endpoints in critical infrastructure and connected environments in 20+ countries/regions.About Edgecore NetworksA subsidiary of Accton Technology Corporation, Edgecore Networks delivers scalable, open networking and Wi-Fi solutions for service providers, cloud operators, and enterprises worldwide.
The high power consumption of electronic systems, driven by applications in Artificial Intelligence (AI) and High-Performance Computing (HPC), is generating significant demand to address overheating and thermal management challenges. This extends from the infrastructure of data center rackmount AI servers to user-end AI PCs and a diverse range of electronic computing systems, consuming substantial resources and time as engineering teams confront these new challenges. To meet the computational demands of the AI era while mastering thermal design challenges, multiphysics simulation systems that use Computational Fluid Dynamics (CFD) methods to simulate airflow distribution and cooling efficiency have become a prominent product development focus in the electronics and semiconductor industries.

To embrace this immense AI demand and innovation momentum, Altair, the global leader in computational intelligence with a comprehensive portfolio spanning computer-aided engineering (CAE), high-performance computing (HPC), and AI solutions, offers a suite of multiphysics analysis and simulation solutions specifically addressing thermal management and related complex challenges. Frank Wu, Vice President of Global CFD Business at Altair, illustrates the capabilities of Altair® ultraFluidX® simulation software through application examples in thermal flow simulation, helping the industry rediscover the "cooling" pathway in electronic system design.

Leading-edge aerodynamics simulation tool ultraFluidX delivers innovative cooling solutions for electronic devices

Altair has cultivated long-standing collaborations with customers in Detroit, the automotive manufacturing hub of the United States. ultraFluidX, initially developed for the automotive industry to handle aerodynamic characteristics and simulation, uses CFD analysis to identify designs that minimize air resistance, thereby reducing fuel consumption and enhancing vehicle stability – establishing its leadership in the field. Now, trending products such as electronic systems, AI PCs, and data center AI servers face thermal system design challenges. Beyond optimizing cooling performance, these often require overcoming coupled multiphysics issues such as noise and vibration. Because multiple design departments and architectures are involved, Altair introduced ultraFluidX as a design simulation tool to address this complexity, supporting electronics manufacturing and thermal system supply chain partners and establishing crucial ultraFluidX usage examples to create optimized thermal system design workflows.

Wu cites cooling fan and overall thermal system design as an example. The primary focus is maximizing cooling efficiency through the airflow and aerodynamic effects generated by the fan; the second is the noise produced by turbulence as the airflow interacts with the fan blades – two key parameters influencing the quality of a thermal system design. Effective analysis of these two main design requirements relies on air pressure and flow analysis to understand the aerodynamics. In processing and computational efficiency, ultraFluidX holds a leading position in the market. This is attributed to ultraFluidX's mesh generation method, which employs the Lattice Boltzmann Method (LBM).
Compared to traditional CFD mesh generation, LBM offers the advantage of not requiring simplification of complex geometric shapes and enables rapid part replacement, reducing the time engineers spend on meshing. Coupled with the Large Eddy Simulation (LES) mathematical model and GPU computing power for calculation and solving, it significantly reduces computation time compared to traditional CPU-based methods. A single ultraFluidX computation yields both flow field and sound field results, eliminating the need for secondary calculations. Furthermore, leveraging GPU computing resources enables a comprehensive grasp of details, providing a novel solution for fan and thermal design in electronic products.

Regarding GPU computing power, Altair offers three flexible service models to cater to diverse customer needs: first, customers can run ultraFluidX on their own GPU servers; second, they can access ultraFluidX through Altair's connection to public cloud computing services; and third, Altair can provide dedicated GPU systems for customers to use within their internal environments. These three models encompass different business service content, offering multiple choices to meet various customer requirements.

AI-driven design tools enhance collaboration, accelerate design convergence, and foster product innovation

To meet demands for design precision and reduced design and validation times, Altair's AI technology is rapidly proving its value. For instance, manufacturers of fans or thermal modules that have accumulated a substantial amount of historical ultraFluidX data can use software such as Altair PhysicsAI™ to learn the relationships between flow field plots and sound pressure levels. This enables the creation of accurate AI prediction models, allowing rapid evaluation and design of more effective fan blade shapes that reduce aerodynamic noise while ensuring maximum cooling capacity, overcoming various engineering challenges.

This example of AI technology accelerating simulation, design, and AI applications offers the electronics and semiconductor industries a new perspective on solving engineering design problems. More multiphysics simulation technologies and applications will play an indispensable role in electronic system and semiconductor design, with their scope and reach continuously expanding. This includes areas such as chip package design, electronic printed circuit boards, and the common technical challenges in electronic systems involving structural, impact, electrical, thermal, thermal warpage, and fluid dynamics phenomena.
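To make the surrogate-modeling idea concrete, the following is a minimal, generic sketch of training a prediction model on tabulated simulation results, in the spirit of what is described above. It uses scikit-learn on synthetic data; the design parameters, the data, and the model choice are illustrative assumptions and do not represent PhysicsAI or ultraFluidX output formats.

```python
# Illustrative surrogate model: predict fan sound pressure level (SPL, dB)
# from a few design parameters, standing in for the kind of
# simulation-trained prediction model described above. The features,
# data, and model choice are hypothetical -- this is NOT PhysicsAI,
# just a generic scikit-learn sketch of the surrogate-modeling idea.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 500  # pretend each row summarizes one historical CFD/aeroacoustic run

# Hypothetical design parameters: fan speed (rpm), blade count, tip clearance (mm)
rpm = rng.uniform(1500, 6000, n)
blades = rng.integers(5, 12, n)
clearance = rng.uniform(0.5, 3.0, n)
X = np.column_stack([rpm, blades, clearance])

# Synthetic "simulation result": SPL grows with speed, shrinks with clearance
y = 20 * np.log10(rpm) + 0.8 * blades - 2.0 * clearance + rng.normal(0, 0.5, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingRegressor().fit(X_tr, y_tr)

print("MAE on held-out runs (dB):",
      round(mean_absolute_error(y_te, model.predict(X_te)), 2))
# Once trained, candidate blade designs can be screened in milliseconds
# instead of re-running a full CFD/aeroacoustic simulation for each one.
print("Predicted SPL for a 4000 rpm, 9-blade, 1.2 mm design:",
      round(float(model.predict([[4000, 9, 1.2]])[0]), 1), "dB")
```

A surrogate of this kind is only as good as the historical simulation data behind it, which is why accumulating ultraFluidX runs is presented above as the prerequisite step.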
Altair Technology Conference Taiwan 2025. Credit: Altair

Altair Technology Conference focuses on "AI-powered engineering"

To ensure customers stay ahead of this product design trend, Altair will convene its Altair Technology Conference in Taipei on May 28, 2025, focusing on the widespread application scenarios of "AI-powered Engineering." Leveraging its extensive experience in the simulation domain and the power of AI technology, Altair will provide research and design engineers in the semiconductor and electronics manufacturing industries with comprehensive multiphysics simulation solutions. This will empower R&D and manufacturing teams to grasp complex physical phenomena early in the product development cycle, enabling optimized designs.

In recent years, Altair has not only integrated numerous AI capabilities into its comprehensive HyperWorks platform to streamline modeling and analysis processes but has also leveraged features like PhysicsAI, which uses vast amounts of historical multiphysics simulation data to help engineers create entirely new design concepts. The conference will feature an overview of the new capabilities in HyperWorks 2025, demonstrating its seamless integration of finite element analysis, multiphysics, and materials simulation to assist R&D teams in solving complex and interconnected challenges. Keynote speeches will also delve into realizing smarter product engineering through AI agents and small language models (sLLMs).

Furthermore, the conference will address the various challenges arising from the popularity of AI chips and AI PCs, ranging from advanced high-level packaging chips to thermal modules on PCBs and electronic systems, and liquid cooling systems in data center servers. Dr. Ming Chou, Chief Engineer at Altair and a 2025-elected member of the National Academy of Engineering, will provide insights for electronics and semiconductor customers in Taiwan on leveraging Altair SimLab in PCB and 3DIC packaging design. He will also share how to integrate AI prediction and present case studies from world-class IC design industry clients, promising compelling content.

To learn more about Altair's solutions and event details, please click here to visit the registration website.
Chenbro, a global leader in the server chassis industry, is making its most significant appearance to date at COMPUTEX 2025, taking place from May 20 to 23 in Taipei. This year's expanded presence highlights Chenbro's strategic focus on three core service models: OTS, JDM, and OEM Plus. Chenbro showcases its latest AI server enclosure solutions, along with cloud server products co-developed with leading partners from the U.S. and Taiwan. Demonstrating a full spectrum of capabilities—from standard products to fully customized solutions—Chenbro reaffirms its robust R&D and manufacturing strength in addressing the evolving needs of global enterprise and cloud markets.

Nvidia Global Vice President and Taiwan General Manager Eunice Chiu, alongside Chenbro CEO and General Manager Ya-Nan Chen. Credit: Chenbro

Exemplifying Chenbro's core values with three service models

At COMPUTEX 2025, Chenbro showcases its strength through three key service models. The OTS demonstration highlights the latest Nvidia MGX and DC-MHS solutions, along with a product roadmap spanning AI, Cloud, Storage, and Edge applications. Its modular design approach enables flexible configurations to meet diverse customer requirements, including OTS-customization solutions. In the JDM area, Chenbro features high-density AI servers and 21-inch OCP ORV3-compliant chassis, co-developed with clients. This model also underscores Chenbro's global R&D collaboration, along with its JPDP (Joint Product Design Process) and DFM (Design for Manufacturing) capabilities that enhance collaboration efficiency and improve product quality. Meanwhile, the OEM Plus showcase emphasizes Chenbro's manufacturing glocalization strategy, Lean Intelligent Manufacturing, and tooling knowledge management, demonstrating its achievements in lean production and manufacturing transformation.

Chenbro CEO Corona Chen stated that with its strong capabilities in electro-mechanical integration, Chenbro has adopted innovative and diversified business models to expand its market reach. By operating three parallel service models, Chenbro can meet the needs of a wide range of customers—including cloud service providers, system integrators, brand vendors, and channel partners—offering quick and efficient responses to market demands. This approach accelerates the deployment of AI, HPC, and big data infrastructure across data centers and enterprises. Looking ahead, Chenbro will continue to strengthen its global presence by expanding localized manufacturing and R&D teams, aiming to capture new growth opportunities in the AI and cloud server markets in collaboration with its customers.

Deepening AI strategy with Nvidia MGX architecture and server development

AI servers remain a key focus in the market. With the release of Nvidia's GB300 NVL72 platform, Chenbro, an Nvidia MGX ecosystem provider, showcases the next-generation Nvidia GB300 compute tray at COMPUTEX 2025. Chenbro also displays a full range of AI products based on the Nvidia MGX architecture, including 1U, 2U, and 4U server chassis. Many leading technology companies at the event have also demonstrated AI servers co-developed with Chenbro, highlighting the collaborative momentum among industry leaders and the promising future of AI applications.

In addition to its work with Nvidia, Corona Chen further emphasized that Chenbro will continue to strengthen its collaborations with CPU providers such as AMD, Intel, and Ampere.
Chenbro is committed to investing in the R&D and manufacturing of next-generation servers, exploring the future landscape of AI, and addressing diverse application needs to build a mutually beneficial and thriving industry ecosystem.

About Chenbro

Founded in 1983, Chenbro has been a trailblazer in the design and manufacturing of own-brand rackmount systems, tower servers, and PC chassis for over 42 years. Chenbro is not only qualified by first-tier server brands, providing OEM Plus and ODM/JDM services alongside EMS companies, but has also successfully developed its OTS (Off The Shelf) products to meet market demand. Chenbro extends its business footprint to data centers and industrial solutions by continuously investing in technologies and delivering the most trusted server chassis with the highest standard of innovation. For more information, please visit www.chenbro.com
PEGATRON, a globally recognized Design, Manufacturing, and Service (DMS) provider, is pleased to announce its participation in COMPUTEX 2025, where it will unveil a comprehensive portfolio of advanced rack-scale solutions designed to meet the increasing complexity and scale of AI and data center workloads. These solutions deliver exceptional compute density, energy efficiency, and scalability, aligned with open infrastructure standards.

A major highlight of PEGATRON's COMPUTEX showcase is the introduction of the RA4802-72N2, a rack solution featuring the NVIDIA GB300 NVL72, which includes 72 NVIDIA Blackwell Ultra GPUs and 36 NVIDIA Grace CPUs. This system delivers up to a 50X increase in AI factory output with optimized inference capabilities. The rack integrates PEGATRON's in-house developed Coolant Distribution Unit (CDU) to enhance cooling efficiency in high-density environments. Equipped with redundant hot-swappable pumps and a cooling capacity of 310 kW, it ensures optimal performance and high reliability for mission-critical data center operations.

Also debuting is the PEGATRON AS208-2A1, a 2U liquid-cooled server system accelerated by the NVIDIA HGX B300 system and dual AMD EPYC™ 9005 processors. Scaled to a 48U NVIDIA MGX rack solution, it supports 128 GPUs and 32 CPUs within a high-efficiency, direct liquid cooling framework. This platform delivers exceptional compute density and thermal control while enabling efficient GPU utilization across the rack. Designed for the AI reasoning era with increased compute and expanded memory capacity, it offers breakthrough performance for complex workloads from agentic systems and reasoning to video generation, making it ideal for every data center.

Additionally, PEGATRON will be offering NVIDIA RTX PRO 6000 Blackwell servers (AS400-2A1, AS205-2T1), which provide nearly universal acceleration for a broad range of enterprise AI workloads, from multimodal AI inference and physical AI to design, scientific computing, graphics, and video applications.

Pushing the boundaries of rack-scale compute performance even further, PEGATRON unveils the AS501-4A1, a 5OU system featuring the latest AMD Instinct™ MI350 series GPUs and AMD EPYC™ 9005 processors. Scaled up to a 51OU liquid-cooled rack solution, it supports configurations of 128 AMD Instinct™ MI350 series GPUs. The solution employs direct-to-chip liquid cooling across both GPUs and CPUs, enabling sustained performance for generative AI, inference, training, and high-performance computing—all within a compact, energy-optimized footprint.

"With the increasing scale and complexity of AI workloads, data center infrastructure must evolve to deliver higher performance, better efficiency, and thermal resilience," said Dr. James Shue, SVP & CTO of PEGATRON. "Our latest liquid-cooled solutions reflect our commitment to enabling the next wave of AI innovation through scalable, ultra-high-density systems optimized for real-world deployment."

PEGATRON welcomes attendees to Booth #L0118, 4th Floor, Nangang Exhibition Center, Hall 1, from May 20–23, 2025, to explore its newest platforms and engage with the experts behind PEGATRON's breakthrough compute and cooling technologies.

PEGATRON Liquid-Cooled Ultra High Density GPU Rack. Photo: Company
Retronix Technologies Inc. announced the launch of two cutting-edge AI edge computing platforms, developed in collaboration with Renesas Electronics Corporation.

The newly unveiled Sparrow Hawk Single Board Computer (SBC) and Raptor System on Module (SoM) are both powered by the latest Renesas R-Car V4H System-on-Chip (SoC), delivering up to 30 TOPS (Dense) of AI inference performance. These open platforms are designed to support a wide range of embedded edge AI applications and smart automotive solutions.

Sparrow Hawk focuses on robotics, industrial automation, and rapid prototyping, offering a highly flexible and cost-effective development platform. Raptor, with its modular design and multi-camera processing capabilities, is engineered for commercial vehicles, advanced driver-assistance systems (ADAS), and autonomous guided vehicles (AGVs), meeting demanding requirements for reliability and AI edge computing.

Product Highlight 1: Sparrow Hawk — A Versatile Platform for Edge AI Applications

Sparrow Hawk is a compact and highly expandable edge AI development board featuring the Renesas R-Car V4H SoC. It offers up to 30 TOPS of dense AI inference performance and supports a fully open-source Linux environment, accelerating the development of industrial and embedded AI solutions.

Key Features:
* Optimized for Edge Intelligence: Designed for industrial robots, smart manufacturing, and autonomous control systems.
* Raspberry Pi HAT Compatible: Easily integrates with popular modules and sensors to streamline development.
* High AI Performance: Handles real-time image processing and AI workloads with ease thanks to 30 TOPS deep learning capabilities.
* Open Development Environment: Built on an open-source Linux architecture with extensive community support.
* Developer-Friendly Pricing: Campaign program at only USD 300, with no paper contract required to get started.
* Compact Design: Measures just 146mm x 90mm, ideal for embedded and terminal devices.

Rich I/O and Expansion Interfaces:
* 8GB / 16GB LPDDR5 memory
* Dual-camera interface and 40-pin GPIO header
* 1x DisplayPort, PCIe (4x USB 3.0, 1x M.2 Key-M), 2x CAN-FD, Audio (2x In, 1x Out), and AVB Ethernet
* Supports USB PD 20V power input and MicroSD removable storage

Retronix Sparrow Hawk. Photo: Company

Product Highlight 2: Raptor — Automotive-Grade AI SoM for Smart Vehicle Vision Processing

Raptor is a high-performance SoM designed for automotive vision processing and edge AI computation. Powered by the Renesas R-Car V4H SoC, it supports multiple camera inputs, pre-processing, and AI inference. Raptor is ideal for applications including ADAS, smart cockpits, surround-view systems, and AGVs.

Key Features:
* Automotive-Grade Architecture: Built with safety-oriented design principles, long-term supply, and compliance with automotive standards.
* Multi-Camera Support: Integrated ISP supports up to 8 video channels with synchronized vision processing.
* Powerful AI and Specialized Automotive IP: Delivers 30 TOPS AI inference performance with integrated Image Rendering Unit, Dense Optical Flow, Structure from Motion, and CV/Deep Learning Engines.
* Reference Carrier Design & Custom Development: Retronix offers reference designs and engineering services to accelerate product development.
* Comprehensive Software Resources: Compatible with Yocto Linux and includes the Renesas AI Hybrid Compiler toolkit.
* High Reliability: Designed for high-temperature environments with optimized power efficiency for automotive use.

Retronix Raptor. Photo: Company

Availability and Computex Showcase

Sparrow Hawk and Raptor are scheduled to sample in late Q2 2025, alongside the launch of a developer program and open-source community support platform to help users rapidly prototype and deploy AI applications.

We warmly invite industry professionals to visit Retronix at Computex 2025 (Booth N0814 / B-4) to experience the capabilities of Sparrow Hawk and Raptor firsthand. Explore their architectures, image processing performance, and AI inference efficiency across smart manufacturing, robotics, unmanned vehicles, and intelligent automotive applications.
At the 2025 COMPUTEX Product Showcase, JMicron Technology Corp., a global leader in high-speed interface bridge controllers, alongside its wholly-owned subsidiary KaiKuTeK Inc., introduced a new line of ultra-fast storage bridge controller solutions. These advancements enable next-generation enclosure types and open the door to a wide range of new applications in data storage. In addition to the storage innovations, the companies also unveiled their latest breakthrough in smart sensing: a 60GHz millimeter-wave radar-based AI sensing technology, bringing exciting news for the smart home experience.

JMicron demonstrated its latest high-speed bridge controllers, highlighting the JMS591 and the JMB595. The JMS591 (USB 3.2 Gen2 x2 & eSATA 6Gb/s to 5 ports SATA 6Gb/s) is a single-chip multi-bay hardware RAID solution supporting RAID 0/1/5/10/JBOD. It demonstrated sequential read/write performance of up to 2,000 MB/s and can also control computer fans and a liquid crystal display module (LCM). Compared to current solutions, the JMS591 upgrades data transfer speeds and improves the stability and effectiveness of hardware RAID functions. By providing a highly cost-effective multi-bay RAID storage solution, the JMS591 is expected to be adopted widely across multi-bay applications such as network-attached storage (NAS), direct-attached storage (DAS), network video recorder (NVR), and digital video recorder (DVR) markets, and it will continue to draw market attention. The JMB595 (PCIe Gen4 x4 to 16 ports SATA 6Gb/s), a multi-bay storage solution prototype, is not only suitable for high-end surveillance and private cloud applications but also serves as another option in the entry-level server market; accordingly, the industry has high expectations for the JMB595.

"Through our accumulation of technical expertise and position as a market leader, we are creating a high-speed data transfer and storage application trend, collaborating with our key clients to develop the next-generation bridge controllers," said Tony Lin, JMicron's VP of Marketing & Sales Center.

KaiKuTeK unveiled its latest 60GHz mmWave radar AI sensing technology, which integrates a proprietary antenna design, advanced DSP and AI accelerators, and self-developed algorithms. This innovation brings precise target behavior tracking and positioning recognition, effectively addressing long-standing challenges in traditional smart home products related to human presence detection. For instance, smart electronic locks can detect an approaching person via mmWave radar and automatically activate facial recognition or other unlocking modes. Fans and air conditioners can detect user locations to adjust airflow dynamically, creating a "wind follows the person" effect or enabling personalized temperature control. Meanwhile, TVs can optimize sound staging based on viewer positioning, delivering an immersive experience. This innovation not only enables electronic devices such as electronic locks, fans, air conditioners, and TVs to interact with users more intelligently, but its streamlined design also significantly reduces the Total Cost of Ownership (TCO) and delivers simultaneous benefits of energy conservation and carbon reduction, setting a new standard in the consumer electronics market.

"Our long-term focus is on integrating mmWave radar with DSP and AI to create more intuitive and intelligent human-machine interfaces," said Mike Wang, CEO of KaiKuTeK.
"The adoption of 60GHz mmWave radar represents a breakthrough, not only solving smart home detection challenges but also introducing unprecedented convenience for users. We look forward to expanding this technology into industrial and IoT applications."With its leading expertise in DSP/ AI/ ML technologies and antenna design, KaiKuTeK continues to demonstrate its strong potential for technological innovation. The future of mmWave radar applications seems promising, a trend driven by rising demand for contactless technologies and intelligent automation. In response, KaiKuTeK is actively partnering with global technology leaders to fast-track commercialization efforts. The company plans to introduce a new wave of consumer products featuring this advanced radar technology in the second half of 2025. This innovation opens new growth opportunities across industries, setting the stage for the next generation of smart environments.We sincerely invite you to visit JMicron and KaiKuTeK at Courtyard by Marriott Taipei #Sea Hall during COMPUTEX.JMS591 multi-bay hardware RAID solution
We've crossed a threshold. AI used to be about research papers and new models posting ever-higher benchmark scores, but it is now driving a new gold rush of innovation. AI agents that solve real-world problems are the new opportunity for solo entrepreneurs to revolutionize industries. In this monumental shift, systems being built today power real applications and services that are entering the hands of users and changing business operations around the world.

This transformation isn't limited to large companies with deep pockets anymore. A solo developer with a vision and the right tools can now create an AI-driven app and bring it to market. The barriers to entry have been lowered and the door to innovation is wide open.

"This year can truly be considered the inaugural year of artificial intelligence applications," said Alex Yeh, Founder and CEO of GMI Cloud, following his visit to NVIDIA's GTC 2025 in San Jose. What once felt like long-term speculation is now unfolding rapidly, as real use cases are being served by a surge of AI-native products from solo developers and startups.

At the heart of this momentum is the rise of AI agents: software systems that can perceive, reason, plan, and take autonomous action. They're powering everything from intelligent customer support tools to domain-specific solutions like personalized fashion search engines that not only identify styles but also suggest looks and purchasing options in real time.

AI agents are distinct from traditional software in that they possess a level of autonomous decision-making that allows them to learn from interactions and adapt in real time. This makes them more dynamic, responsive, and capable of handling complex tasks with minimal human oversight, paving the way for smarter, more personalized user experiences.
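To ground the perceive-reason-plan-act description above, here is a minimal, generic sketch of an agent loop. The `call_llm` function and the `lookup_price` tool are placeholders invented for illustration; a real agent would swap in an actual language model backend and real tools.

```python
# A minimal, generic sketch of the perceive -> reason/plan -> act loop that
# defines an AI agent, as described above. `call_llm` is a placeholder for
# any language model backend (hosted or local); the tools here are toy
# examples, not any specific product's API.
from dataclasses import dataclass, field

def call_llm(prompt: str) -> str:
    """Placeholder reasoning step. A real agent would call an LLM here."""
    # Toy policy: if the running context lacks a price, fetch it; otherwise answer.
    if "price=" not in prompt:
        return "ACTION lookup_price laptop"
    return "FINAL The laptop costs $999."

def lookup_price(item: str) -> str:
    """Toy tool: stands in for a real search/database/API call."""
    return f"price=999 item={item}"

TOOLS = {"lookup_price": lookup_price}

@dataclass
class Agent:
    goal: str
    memory: list[str] = field(default_factory=list)  # what the agent has perceived so far

    def step(self) -> str | None:
        # Reason/plan over the goal plus everything observed so far.
        decision = call_llm(f"goal: {self.goal}\nobservations: {self.memory}")
        if decision.startswith("FINAL"):
            return decision.removeprefix("FINAL ").strip()
        # Act: run the chosen tool, then perceive its result on the next loop.
        _, tool, arg = decision.split(maxsplit=2)
        self.memory.append(TOOLS[tool](arg))
        return None

    def run(self, max_steps: int = 5) -> str:
        for _ in range(max_steps):
            answer = self.step()
            if answer is not None:
                return answer
        return "gave up"

print(Agent(goal="How much does the laptop cost?").run())
```

The point of the sketch is the control flow: the model decides between acting (calling a tool and observing the result) and answering, which is what distinguishes an agent from a single prompt-and-response call.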
Fueling this shift is a convergence of trends: powerful open-source LLMs like DeepSeek and LLaMA4, a growing emphasis on inference, and a robust ecosystem of modular, composable AI tools. Together, these advances allow small teams—or even individuals—to build sophisticated AI agents at unprecedented speed.

But this accessibility depends on infrastructure that can keep pace. High-performance GPUs, flexible environments, and tightly integrated tools are necessary for developing good AI solutions. Building from scratch is expensive and risky, especially at the current pace of development. That's why platforms that provide a fully integrated AI development stack are becoming essential accelerators, enabling innovators to focus on their ideas without worrying about the infrastructure.

Companies like GMI Cloud have emerged as key enablers in this landscape. With four data centers in Taiwan and the U.S., access to over a thousand NVIDIA H100 and H200 GPUs, and a nearly 50-member technical team, GMI Cloud has built an AI application development platform that streamlines the entire lifecycle—from training and fine-tuning to inference and deployment.

By integrating computing resources with popular open-source tools, GMI Cloud gives developers and enterprises a unified environment that dramatically shortens the path from prototype to product. Users can deploy AI applications using a simple API interface and scale resources in real time through flexible subscription or pay-as-you-go pricing models.

This flexibility extends to deployment environments as well—cloud, on-prem, or hybrid—depending on client needs. That makes it easier for businesses to maintain data security while still taking advantage of GPU acceleration.

The Era of Solo Entrepreneurs Is Here—Industrial Sectors to Lead in AI Robot Adoption

"In the age of AI Agents, we're on the verge of seeing explosive growth in solo entrepreneurship," said Alex Yeh. AI startups increasingly need accessible conditions to fuel good AI development. In the past, accessing the infrastructure and resources needed for AI development was often a costly and complicated process. Developers had to invest heavily in high-performance hardware, navigate complex software environments, and deal with long deployment cycles. Now, with neoclouds like GMI Cloud, users can simply create an account, pay, and book a time slot to access training resources. Pricing is available via subscription or pay-as-you-go models, giving users the flexibility to scale computing resources in real time according to demand.

As AI agents continue to evolve, solo developers are empowered to create intelligent, scalable products that can disrupt industries. Take, for example, a solo developer who used open-source LLMs to create an AI-powered personal finance assistant. With minimal initial investment, this product is now helping thousands of users optimize their financial decisions. These are the kinds of innovations that AI agents unlock, enabling anyone to build impactful solutions.

This year's Computex will revolve around the theme "AI Next," highlighting three major areas: "Smart Computing & Robotics," "Next-Gen Technologies," and "Future Mobility." Alex Yeh believes the logical next step in this AI Agent era is the deployment of intelligent robots across real-world environments, with industrial applications being the most promising. GMI Cloud will showcase its powerful AI capabilities at Computex, demonstrating how its unique business model addresses the global shortage of GPUs for AI development. At the same time, the company continues to fulfill its mission: "Build AI Without Limits."

Alex Yeh points out that 2025 marks the beginning of the AI application era. With its powerful GPU infrastructure, GMI Cloud aims to empower the rise of solo entrepreneurs in the age of AI Agents. Photo: DIGITIMES
With 2025 Computex Taipei focusing on the three major themes of "AI & Robotics," "Next-Gen Tech," and "Future Mobility," global technology giants have gathered to display their AI technology prowess around the core concept of "AI Next." The rapid deployment of AI applications has also accelerated urgent demand for high-efficiency storage technologies across various application scenarios. As the global leader of NAND Flash controllers, Silicon Motion plays a key role in AI ecosystem development.

Meeting Diverse Storage Requirements, from Low Latency and Power Efficiency to High Data Throughput, to Support Edge AI Growth

"The emergence of DeepSeek has greatly lowered the threshold for AI applications," pointed out Mr. Kou, President and CEO of Silicon Motion. As an open-source technology, DeepSeek has been able to reduce the cost of language model training. It has gradually subverted the industry's traditional views on AI and led to the accelerating popularization of edge applications. He emphasizes that a wave of AI adoption has already begun for devices from smartphones and laptops to wearables, and that storage technologies are crucial in supporting this revolution.

In his analysis of AI storage architecture, Mr. Kou remarked that when implementing AI applications in various scenarios, the storage system requirements differ at each stage of the pipeline, from initial data ingestion to the preparation, training, and inference stages. For instance, data ingestion requires importing large amounts of data, meaning that high write throughput is required. On the other hand, low-latency performance and support for a wide variety of I/O sizes have greater importance in the model training stage. Although these requirements vary, the overall architecture must still possess five core characteristics in order to meet the needs of AI applications: high data throughput, low latency, low power, scalability, and high reliability.

In response to the massive data demands of AI applications, Silicon Motion leads innovation in storage technologies by upgrading NAND controller technology. Mr. Kou said that data application processes can be effectively optimized through hierarchical management and smart identification mechanisms. Flexible Data Placement (FDP) technology can also serve to improve efficiency and durability, while offering the advantages of low latency and low cost. For data security and reliability, the product also adopts advanced encryption standards and a tamper-proof hardware design. In combination with end-to-end data path protection mechanisms and Silicon Motion's proprietary NANDXtend™ technology, this enhances data integrity and prolongs the SSD's lifespan. In addition, Silicon Motion supports 2Tb QLC NAND and 6/8-plane NAND, combining smart power management controllers (PMC) with advanced process technology to effectively reduce energy consumption while improving storage density.

These controllers can also be paired with Silicon Motion's unique PerformaShape technology, which utilizes a multi-stage architecture algorithm to help optimize SSD performance based on user-defined QoS sets. Together, FDP and PerformaShape not only help users effectively manage data and reduce latency, but also significantly improve overall performance by approximately 20-30%. These technologies are specifically suited for AI data pipelines in multi-tenant environments, including key stages such as data ingestion, data preparation, model training, and inference.
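As a conceptual illustration of per-tenant QoS shaping, the sketch below uses a standard token-bucket model to decide whether a tenant's I/O request fits its bandwidth budget. This is a generic textbook mechanism, not Silicon Motion's PerformaShape algorithm or the NVMe FDP interface; the tenant names and rates are hypothetical.

```python
# A generic token-bucket sketch of per-tenant I/O shaping against user-defined
# QoS targets -- a common way to reason about the kind of multi-tenant QoS
# management described above. It is NOT Silicon Motion's PerformaShape or FDP
# implementation; the tenant names and rates are made up for illustration.
import time

class TokenBucket:
    def __init__(self, rate_mbps: float, burst_mb: float):
        self.rate = rate_mbps          # sustained budget, MB per second
        self.capacity = burst_mb       # maximum burst, MB
        self.tokens = burst_mb
        self.last = time.monotonic()

    def admit(self, io_mb: float) -> bool:
        """Return True if an io_mb-sized request fits the tenant's QoS budget now."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if io_mb <= self.tokens:
            self.tokens -= io_mb
            return True
        return False  # caller queues or throttles the request

# Hypothetical QoS sets: an ingestion tenant gets more bandwidth than a logging tenant.
qos = {
    "ingest": TokenBucket(rate_mbps=2000, burst_mb=256),
    "training-logs": TokenBucket(rate_mbps=200, burst_mb=32),
}

for tenant, size_mb in [("ingest", 128), ("training-logs", 64), ("training-logs", 8)]:
    verdict = "admitted" if qos[tenant].admit(size_mb) else "throttled"
    print(f"{tenant}: {size_mb} MB request {verdict}")
```

The same budgeting idea generalizes to IOPS or latency targets; the shaper simply decides, per tenant, whether to service a request now or defer it so that one noisy workload cannot starve the others.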
Creating Comprehensive Solutions to Realize Customer AI Applications Across Cloud and Edge Computing

In response to data center and cloud storage needs, Silicon Motion has launched the world's first 128TB QLC PCIe Gen5 enterprise SSD reference design kit. Built on the MonTitan SSD development platform and equipped with an SM8366 controller, it supports the PCIe Gen5 x4, NVMe 2.0, and OCP 2.5 standards. With sequential read speeds of over 14 GB/s and random access performance of over 3.3 million IOPS, it delivers a performance improvement of over 25%. This design speeds up training of large language models (LLMs) and graph neural networks (GNNs) while also reducing AI GPU energy consumption, allowing it to meet high-speed data processing demands.

For edge storage solutions, Mr. Kou stated that the number of edge devices with AI capabilities will grow rapidly. He forecast: "The AI humanoid robot market will see explosive growth in the next 5 to 10 years." Systems at different levels have different storage requirements. For example, at the sensor level, data needs to be processed and filtered in real time to ensure accurate sensing, while decision-making relies on multi-modal fusion reasoning, which entails more demanding storage performance and data integration capabilities. Meanwhile, at the execution level, various calibration parameters must be stored to enable the robot to act and think more like a human. In response, Silicon Motion has actively deployed NVMe SSD, UFS, eMMC, and BGA SSD storage solutions, and values greater cross-industry collaboration to build a shared ecosystem that promotes the further evolution of smart terminal storage technologies.

Additionally, Silicon Motion has launched a variety of high-efficiency, low-power controllers to meet the AI application needs of edge devices. The SM2508 PCIe Gen5 controller is designed for AI laptops and gaming consoles, featuring up to 50% lower power consumption compared to similar products. The SM2324 supports USB 4.0 high-speed portable storage devices up to 16TB in size. The SM2756 UFS 4.1 controller offers 65% higher power efficiency compared to UFS 3.1, providing an excellent storage experience for AI smartphones. In response to the urgent need for high-speed, high-capacity storage in self-driving cars, Silicon Motion has also joined hands with global NAND manufacturers and module makers to jointly create storage solutions for smart automobiles.

"Storage technology undoubtedly acts as a core link in the AI ecosystem," emphasized Mr. Kou. Taiwan has a complete and highly integrated semiconductor and information and communications industry chain; it is capable not only of building AI servers but also possesses great potential for promoting the development of AI applications. He believes that more practical AI edge computing devices and groundbreaking applications will be launched at a rapid pace in the future, and that storage solutions will face increasingly demanding requirements due to the challenge of processing massive amounts of data. Silicon Motion will continue to use technological innovation as a driving force to actively support AI development.
Mr. Kou expressed that the fast-paced development of generative AI has lowered barriers to adoption for related applications. Silicon Motion aims to satisfy the market's needs by offering a diverse range of high-efficiency, low-power storage solutions. Photo: Silicon Motion Technology