Month: July 2025

  • OpenInfer Eases AI-Assistant Development

    Edge AI promises the benefits of cloud AI plus responsiveness and privacy, because local model execution eliminates sending personal information to the cloud and waiting for a reply. However, the variety of edge systems challenges developers: they differ in supported data types, processing power, memory capacity, and connected sensors. The Silicon Valley startup OpenInfer addresses the edge…

  • Premiere: Intel 2Q 2025 Earnings Call Review

    I’m filing this under Site News. I shot and posted my first YouTube video. It covers the same content as the 2Q25 Intel earnings article, aiming to reach a new audience. Thanks for supporting XPU.pub over the past 1.5 years, and I appreciate any support you can lend to the new format. Video quality should…

  • Intel Backs 18A and 14A Development in Q2 2025 Call

    Intel’s CEO, Lip-Bu Tan, laid out a disciplined capex strategy, approving 18A and 14A investments only after yields, performance, and customer commitments meet targets.

  • Cuda Comes to RISC-V Hosts, Not Devices

    Nvidia intends to port Cuda to RISC-V, but this is less revolutionary than it sounds. Speaking at the Fifth RISC-V China Summit, Nvidia VP Frans Sijstermans stated the company is cooperating with hardware partners to support the open architecture in the standard Cuda release. However, this doesn’t imply Nvidia will support RISC-V chips as alternatives… (A generic sketch of the host/device split appears after this list.)

  • Pulsar Adds Hardware to Innatera’s Neuromorphic AI Base

    Innatera’s Pulsar is a low-power RISC-V microcontroller that leverages spiking neural networks (SNNs) for efficient sensor-data processing.

  • Ceva Boosts NeuPro-M NPU Throughput and Efficiency

    Ceva has revised its NeuPro-M AI accelerator (NPU), introducing a configuration with more multiply-accumulate units (MACs) and updating the architecture to enhance real-world throughput and power efficiency. The NeuPro-M scales from a single-engine design integrating 4,096 eight-bit MACs to an eight-engine configuration with 64K eight-bit MACs. For even greater performance, licensees of the NeuPro-M design…
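To make the host/device distinction in the Cuda item above concrete, here is a minimal, generic CUDA sketch; it is an illustration and assumes nothing about Nvidia’s actual port. Everything in main() is host code that the toolchain compiles for the host CPU (the part that would gain RISC-V support), while the __global__ kernel still runs only on the Nvidia GPU.

```cuda
// Generic CUDA example, not Nvidia's port: the split between host code and
// device code is what the RISC-V announcement is about.
#include <cstdio>
#include <cuda_runtime.h>

// Device code: compiled to GPU machine code and executed on the Nvidia GPU,
// regardless of the host CPU's instruction set.
__global__ void scale(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1024;
    float host_buf[n];
    for (int i = 0; i < n; ++i) host_buf[i] = 1.0f;

    // Host code: memory management and the kernel launch run on the CPU,
    // which today is x86 or Arm and, per the announcement, could be RISC-V.
    float *dev_buf = nullptr;
    cudaMalloc((void **)&dev_buf, n * sizeof(float));
    cudaMemcpy(dev_buf, host_buf, n * sizeof(float), cudaMemcpyHostToDevice);

    scale<<<(n + 255) / 256, 256>>>(dev_buf, 2.0f, n);

    cudaMemcpy(host_buf, dev_buf, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(dev_buf);

    printf("host_buf[0] = %.1f\n", host_buf[0]);  // expect 2.0
    return 0;
}
```

Porting Cuda to a new host ISA mainly means building the host-side runtime, driver, and toolchain for that CPU; the kernel above would be unchanged.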
