
A recently released 14-page technical paper from the team behind DeepSeek-V3, with DeepSeek CEO Wenfeng Liang as a co-author, sheds light on the scaling challenges and hardware reflections for AI architectures. This follow-up to their original technical report examines the intricate relationship between large language model (LLM) development, training, and the underlying hardware infrastructure. The paper moves beyond the architectural specifics of DeepSeek-V3 to explore how hardware-aware model co-design can address the limitations of current hardware and ultimately enable affordable large-scale training and inference (https://arxiv.org/pdf/2505.09343).

The rapid scaling of LLMs has exposed critical bottlenecks in current hardware architectures, particularly in memory capacity, computational efficiency, and interconnect bandwidth.
DeepSeek-V3, trained on a cluster of 2,048 NVIDIA H800 GPUs, serves as a compelling case study of how a synergistic approach between model design and hardware considerations can overcome these limitations. The study focuses on the interplay between hardware architecture and model design in achieving cost-effective large-scale training and inference, aiming to provide actionable insights for scaling LLMs efficiently without sacrificing performance or accessibility.

Key areas of focus in the paper include:

Hardware-Driven Model Design: Analyzing how hardware characteristics, such as FP8 low-precision computation and scale-up/scale-out network properties, influence architectural choices in DeepSeek-V3.

Hardware-Model Interdependencies: Investigating how hardware capabilities shape model development, and how the evolving demands of LLMs in turn drive requirements for next-generation hardware.

Future Directions for Hardware Development: Drawing practical insights from DeepSeek-V3 to guide the co-design of future hardware and model architectures for scalable, cost-efficient AI systems.

DeepSeek-V3 incorporates several key architectural innovations, highlighted in Figure 1 of the paper, including the DeepSeekMoE architecture and Multi-head Latent Attention (MLA).
These designs directly tackle the core challenges of scaling LLMs: memory efficiency, cost-effectiveness, and inference speed.

Memory Efficiency: MLA and KV Cache Optimization

LLMs exhibit rapid growth in memory demands, outpacing the slower growth of high-speed memory such as HBM. While multi-node parallelism offers a workaround, optimizing memory usage at the source remains essential. DeepSeek addresses this bottleneck with Multi-head Latent Attention (MLA), which uses projection matrices to compress the key-value (KV) representations of all attention heads into a smaller latent vector, trained jointly with the model.
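As a rough illustration of this idea, the sketch below compresses per-token keys and values into a single latent vector and caches only that vector, reconstructing per-head K/V on demand. The dimensions and random projections are invented for illustration and are not DeepSeek-V3's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (not DeepSeek-V3's real dimensions).
d_model, n_heads, d_head, d_latent = 512, 8, 64, 128

# Projections (random here; in MLA they are trained jointly with the model).
W_down = rng.standard_normal((d_model, d_latent)) * 0.02           # compress
W_up_k = rng.standard_normal((d_latent, n_heads * d_head)) * 0.02  # expand to keys
W_up_v = rng.standard_normal((d_latent, n_heads * d_head)) * 0.02  # expand to values

def step(hidden, cache):
    """Append one token: cache only the latent, not per-head K/V."""
    latent = hidden @ W_down                 # (d_latent,) -- the only thing cached
    cache.append(latent)
    latents = np.stack(cache)                # (seq, d_latent)
    k = latents @ W_up_k                     # reconstruct keys for all heads
    v = latents @ W_up_v
    return k.reshape(-1, n_heads, d_head), v.reshape(-1, n_heads, d_head)

cache = []
for _ in range(4):                           # decode 4 tokens
    k, v = step(rng.standard_normal(d_model), cache)

full_kv = 4 * 2 * n_heads * d_head           # floats cached by standard MHA
mla_kv = 4 * d_latent                        # floats cached by the latent scheme
print(full_kv, mla_kv)                       # 4096 512
```

At these toy sizes, caching the latent instead of full per-head K/V is an 8x reduction; the real savings depend on the model's actual head and latent dimensions.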
During inference, only this compressed latent vector needs to be cached, dramatically reducing memory consumption compared with storing full KV caches for every head.

Beyond MLA, DeepSeek highlights other useful strategies for KV cache size reduction, offering inspiration for future memory-efficient attention mechanisms:

Shared KV (GQA, MQA): Multiple attention heads share a single set of key-value pairs, significantly compressing storage.
Window KV: Limiting the context window for KV caching.
Quantization Compression: Reducing the precision of stored KV values.

Table 1 in the paper compares the per-token KV cache memory footprint of DeepSeek-V3, Qwen-2.5 72B, and LLaMA-3.1 405B. DeepSeek-V3 achieves a remarkable reduction, requiring just 70 KB per token, far below LLaMA-3.1 405B's 516 KB and Qwen-2.5 72B's 327 KB.

Cost-Effectiveness: DeepSeekMoE for Sparse Computation

For sparse computation, DeepSeek developed DeepSeekMoE, an advanced Mixture-of-Experts (MoE) architecture (Figure 1, bottom right).
MoE models offer two key advantages in terms of cost-effectiveness:

Reduced Training Compute: By selectively activating a subset of expert parameters per token, MoE architectures allow a substantial increase in total parameter count while keeping computational demands manageable. DeepSeek-V3 has 671B parameters, nearly three times its predecessor V2 (236B), yet activates only 37B parameters per token. By contrast, dense models such as Qwen2.5-72B and LLaMA-3.1-405B require all parameters to be active during training. Table 2 shows that DeepSeek-V3 achieves comparable or superior performance to these dense models at an order of magnitude lower computational cost (roughly 250 GFLOPS per token, versus 394 GFLOPS for the 72B dense model and 2,448 GFLOPS for the 405B dense model).
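The economics come from top-k routing: all experts' parameters are stored, but only a few run per token. A minimal gating sketch, with expert counts and parameter sizes invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative MoE config -- not DeepSeek-V3's real numbers.
n_experts, top_k = 16, 2
params_per_expert = 1_000_000
shared_params = 500_000

def route(token_logits):
    """Pick the top-k experts for one token from the router's scores."""
    chosen = np.argsort(token_logits)[-top_k:]          # indices of top-k experts
    weights = np.exp(token_logits[chosen])
    return chosen, weights / weights.sum()              # normalized gate weights

logits = rng.standard_normal(n_experts)
experts, gates = route(logits)

total = shared_params + n_experts * params_per_expert   # parameters stored
active = shared_params + top_k * params_per_expert      # parameters used per token
print(total, active)                                    # 16500000 2500000
```

Total parameters grow with the number of experts while per-token compute grows only with k, which is why a 671B-parameter MoE can run with 37B active parameters.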
Benefits for Personal Use and Local Deployment: The selective activation of parameters in MoE models translates into far lower memory and compute requirements for single-request inference. DeepSeek-V2 (236B parameters), for instance, activates only 21B parameters during inference, enabling near or above 20 tokens per second (TPS) on personal computers equipped with AI SoCs, a capability far beyond similarly sized dense models on comparable hardware. This opens up possibilities for personalized LLM agents running locally.

Enhanced Inference Speed: Overlapping Computation and Communication

For inference speed, DeepSeek prioritizes both system-level maximum throughput and single-request latency.
To maximize throughput, the model employs a dual micro-batch overlapping architecture from the outset, deliberately hiding communication latency behind computation. Furthermore, DeepSeek decouples the computation of MLA and MoE into distinct stages: while one micro-batch performs part of its MLA or MoE computation, the other concurrently executes the corresponding dispatch communication; conversely, during the second micro-batch's computation phase, the first micro-batch carries out the combine communication step. This pipelined approach enables seamless overlap of all-to-all communication with ongoing computation, keeping the GPUs fully utilized.
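A toy schedule makes the alternation concrete: at every step, one micro-batch computes while the other communicates. The stage names are simplified labels, not DeepSeek's actual kernel pipeline:

```python
# Dual micro-batch overlap, sketched as a four-stage round-robin:
# compute stages alternate with communication stages, and micro-batch B
# runs one stage behind micro-batch A.
STAGES = ["attn_compute", "dispatch_comm", "expert_compute", "combine_comm"]

def build_schedule(n_steps):
    """Interleave two micro-batches so compute and comm never idle together."""
    timeline = []
    for t in range(n_steps):
        a = STAGES[t % len(STAGES)]
        # B is one stage behind A, so whenever A computes, B communicates.
        b = STAGES[(t + 3) % len(STAGES)]
        timeline.append((a, b))
    return timeline

schedule = build_schedule(4)
for a, b in schedule:
    # Exactly one of the pair is a communication stage at every step.
    assert ("comm" in a) != ("comm" in b)
print(schedule[0])  # ('attn_compute', 'combine_comm')
```

The invariant checked in the loop is the whole point of the design: all-to-all communication is always paired with useful computation from the other micro-batch.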
In production, DeepSeek uses a prefill/decode disaggregation architecture, assigning large-batch prefill and latency-sensitive decode requests to expert-parallel groups of different sizes to maximize system throughput under real-world serving conditions. The paper also touches on the importance of test-time scaling for reasoning models, highlighting the critical role of high token output speed both in reinforcement learning workflows and in reducing user-perceived latency over long reasoning sequences. Improving inference speed through hardware-software co-innovation is therefore vital for the efficiency of reasoning models.

FP8 Mixed-Precision Training

While quantization techniques such as GPTQ and AWQ have significantly reduced memory requirements, mainly for inference, DeepSeek has pioneered the use of FP8 mixed-precision training for a large-scale MoE model. Although NVIDIA's Transformer Engine already supported FP8, DeepSeek-V3 marks a significant step as the first publicly known large model to use FP8 for training. This achievement, the result of close collaboration between the infrastructure and algorithm teams along with extensive experimentation, substantially reduces computational costs while preserving model quality, making large-scale training more feasible.
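NumPy has no FP8 dtype, but the flavor of block-scaled FP8 casting can be simulated by rounding to an E4M3-like grid (3 mantissa bits, max finite value 448) with one scale per tile. Everything below is an illustrative sketch, not DeepSeek's actual training recipe:

```python
import numpy as np

rng = np.random.default_rng(0)

E4M3_MAX = 448.0  # largest finite value in the FP8 E4M3 format

def fp8_e4m3_round(x):
    """Simulate rounding to an E4M3-like grid (3 mantissa bits) in float64."""
    m, e = np.frexp(np.clip(x, -E4M3_MAX, E4M3_MAX))  # x = m * 2**e, |m| in [0.5, 1)
    return np.ldexp(np.round(m * 16) / 16, e)         # keep ~4 bits of m

def quantize_tile(tile):
    """Block-scaled cast: one scale per tile, so outliers in one tile do not
    destroy the precision of every other tile."""
    scale = E4M3_MAX / np.abs(tile).max()
    return fp8_e4m3_round(tile * scale), scale

def dequantize(q, scale):
    return q / scale

x = rng.standard_normal((128, 128))
q, s = quantize_tile(x)
x_hat = dequantize(q, s)

# Worst-case error relative to the tile's largest value stays small.
rel_err = np.abs(x_hat - x).max() / np.abs(x).max()
print(float(rel_err))
```

Per-tile (fine-grained) scaling is what keeps the cast usable: with a single global scale, a few large activations would force most values into the coarse low-magnitude region of the grid.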
Figure 1 illustrates the FP8 precision applied in the forward and backward passes during training.

LogFMT for Efficient Communication

DeepSeek also applies low-precision compression to network communication within the DeepSeek-V3 architecture. During EP parallelism, tokens are dispatched using fine-grained FP8 quantization, cutting communication volume by 50% compared with BF16 and thereby significantly shortening communication time. Beyond traditional floating-point formats, DeepSeek experimented with a novel data type called LogFMT-nBit (Logarithmic Floating-Point Format).
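The article does not spell out the format's details, but the core idea of a logarithmic encoding can be sketched: map magnitudes onto a uniform grid in log-space, which yields roughly uniform relative error across scales. The encoding below is an illustrative assumption, not the actual LogFMT-nBit specification:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_encode(x, n_bits=8, eps=1e-8):
    """Map |x| onto a uniform grid in log-space using small integer codes."""
    sign = np.sign(x)
    mag = np.maximum(np.abs(x), eps)
    lo, hi = np.log(mag.min()), np.log(mag.max())
    levels = 2 ** (n_bits - 1) - 1                     # reserve one bit for sign
    codes = np.round((np.log(mag) - lo) / (hi - lo) * levels)
    return sign, codes.astype(np.int16), (lo, hi, levels)

def log_decode(sign, codes, meta):
    lo, hi, levels = meta
    return sign * np.exp(codes / levels * (hi - lo) + lo)

x = rng.standard_normal(10_000)
sign, codes, meta = log_encode(x)
x_hat = log_decode(sign, codes, meta)

# Relative error is roughly the same for tiny and large values alike.
rel_err = np.max(np.abs(x_hat - x) / np.maximum(np.abs(x), 1e-8))
print(float(rel_err))
```

A linear integer grid would give small values enormous relative error; spacing the levels logarithmically is what makes a format like this attractive for activations spanning many orders of magnitude.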
Current Hardware Architecture and Its Constraints

DeepSeek currently uses the NVIDIA H800 SXM GPU (Figure 2), which, while based on the same Hopper architecture as the H100, has reduced FP64 compute performance and reduced NVLink bandwidth (400 GB/s, down from 900 GB/s on the H100) due to regulatory requirements. This sharp reduction in intra-node scale-up bandwidth poses challenges for high-performance workloads. To compensate, each node is equipped with eight 400G InfiniBand (IB) CX7 network interface cards (NICs) to strengthen inter-node scale-out capability.

Hardware-Aware Parallelization and Model Co-design

To navigate the constraints of the H800 architecture, the DeepSeek-V3 design incorporates hardware-aware parallelization choices: avoiding Tensor Parallelism (TP), enhancing Pipeline Parallelism (PP), and accelerating Expert Parallelism (EP).
Specific details of these strategies are available in the original paper.

A key element of model co-design is node-aware routing for the TopK expert selection strategy in the MoE architecture. Given the roughly 4:1 bandwidth gap between intra-node (NVLink, ~160 GB/s effective) and inter-node (IB, ~40 GB/s effective per NIC) communication, DeepSeek designed the routing to exploit the higher intra-node bandwidth. By grouping the 256 routed experts (4 per GPU in an 8-node, 64-GPU setup) into 8 groups of 32 experts, each resident on a single node, and algorithmically guaranteeing that each token is routed to at most 4 nodes, DeepSeek alleviates the IB communication bottleneck and improves effective communication bandwidth during training.
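A simplified stand-in for this group-limited gating: first pick the best few nodes, then pick the top-k experts only from those nodes. The node-scoring rule here (sum of each node's top-2 expert scores) is an illustrative choice, not DeepSeek's exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# 256 experts in 8 node groups of 32; each token may touch at most 4 nodes.
N_EXPERTS, N_NODES, TOP_K, MAX_NODES = 256, 8, 8, 4
PER_NODE = N_EXPERTS // N_NODES  # 32 experts per node

def node_aware_topk(scores):
    """Restrict TopK expert selection to the MAX_NODES best-scoring nodes."""
    by_node = scores.reshape(N_NODES, PER_NODE)
    node_score = np.sort(by_node, axis=1)[:, -2:].sum(axis=1)  # top-2 sum per node
    best_nodes = np.argsort(node_score)[-MAX_NODES:]
    mask = np.full(N_EXPERTS, -np.inf)                         # bar other nodes
    for n in best_nodes:
        mask[n * PER_NODE:(n + 1) * PER_NODE] = 0.0
    return np.argsort(scores + mask)[-TOP_K:]

experts = node_aware_topk(rng.standard_normal(N_EXPERTS))
nodes_touched = {int(e) // PER_NODE for e in experts}
print(len(experts), len(nodes_touched))  # 8 experts spread over at most 4 nodes
```

Capping the node count per token is what bounds the expensive IB traffic: however the 8 experts are chosen, the token crosses the inter-node network at most 4 times.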
Tokens destined for experts on the same node can be sent over IB once and then forwarded via NVLink, reducing redundant IB traffic.

Scale-Up and Scale-Out Convergence: Future Hardware Directions

While node-aware routing reduces bandwidth demands, the bandwidth disparity between NVLink and IB complicates the implementation of communication-intensive kernels. Currently, GPU Streaming Multiprocessors (SMs) handle both network message processing and data forwarding over NVLink, consuming significant compute resources. DeepSeek advocates integrating intra-node (scale-up) and inter-node (scale-out) communication into a unified framework. Dedicated co-processors for network traffic management, with seamless forwarding between the NVLink and IB domains, could reduce software complexity and maximize bandwidth utilization.
Hardware support for dynamic traffic deduplication could further optimize techniques like DeepSeek-V3's node-aware routing. DeepSeek also surveys emerging interconnect protocols such as Ultra Ethernet Consortium (UEC) and Ultra Accelerator Link (UALink), noting the Unified Bus (UB) as a recent approach to converging scale-up and scale-out. The paper details methods for achieving this convergence at the programming-framework level, including unified network adapters, dedicated communication co-processors, flexible forwarding and broadcast/reduce mechanisms, and hardware synchronization primitives.

Bandwidth Contention and Latency

Another limitation of current hardware is the inability to dynamically allocate bandwidth between different traffic types on NVLink and PCIe.
Transferring KV cache data from CPU memory to GPUs during inference can saturate PCIe bandwidth, contending with inter-GPU EP communication over IB and potentially degrading overall performance and causing latency spikes. DeepSeek suggests remedies including dynamic NVLink/PCIe traffic prioritization, I/O chiplet integration, and placing the CPU-GPU interconnect within the scale-up domain.

Network Co-design: Multi-Plane Fat-Tree

For DeepSeek-V3 training, a Multi-Plane Fat-Tree (MPFT) scale-out network was deployed (Figure 3).
Each node, equipped with 8 GPUs and 8 IB NICs, assigns each GPU-NIC pair to a different network plane. In addition, each node has a 400 Gbps Ethernet RoCE NIC connected to a separate storage network plane for accessing the 3FS distributed file system. The scale-out network uses 64-port 400G IB switches, theoretically supporting up to 16,384 GPUs while retaining the cost and latency advantages of a two-layer network.
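The 16,384 figure follows from simple port arithmetic, assuming each leaf switch splits its ports half down (toward GPUs) and half up (toward spines), with one NIC per plane and 8 planes per node:

```python
# Back-of-the-envelope capacity of a two-layer fat-tree built from
# 64-port switches, multiplied across 8 independent network planes.
PORTS = 64
PLANES = 8

down = PORTS // 2                 # 32 GPU-facing ports per leaf switch
leaves = PORTS                    # a 64-port spine switch can feed 64 leaves
gpus_per_plane = leaves * down    # 2,048 endpoints in one plane
total = gpus_per_plane * PLANES

print(gpus_per_plane, total)      # 2048 16384
```

Because every GPU owns a NIC in its own plane, the planes scale independently, so the cluster size multiplies by the plane count without adding a third switching layer.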
Due to policy and regulatory constraints, the actual deployment involved just over two thousand GPUs.

The deployed MPFT network did not fully realize its intended architecture because of current limitations of the IB ConnectX-7. Ideally (Figure 4), each NIC would have multiple physical ports, each connected to a different network plane but presented to the user as a single logical interface through port bonding. This would allow a single Queue Pair (QP) to seamlessly send and receive messages across all available ports, similar to packet spraying. Native out-of-order placement support in the NIC would be essential to preserve message consistency and correct ordering semantics, since packets from the same QP may traverse different network paths and arrive out of order. InfiniBand ConnectX-8 natively supports four planes, and future NICs with full support for advanced multi-plane capabilities will greatly benefit the scalability of two-layer fat-tree networks for large AI clusters. Overall, multi-plane architectures offer significant advantages in fault isolation, robustness, load balancing, and scalability for large systems.

DeepSeek highlights several benefits of MPFT, including its structure as a subset of Multi-Rail Fat-Tree (MRFT), which allows seamless reuse of existing NVIDIA and NCCL optimizations for MRFT networks, along with cost-effectiveness, traffic isolation, reduced latency, and robustness.
Performance analysis comparing MPFT and MRFT (Figures 5 and 6, Table 4) showed that the all-to-all performance of multi-plane networks is very close to that of single-plane multi-rail networks, and MPFT and MRFT performed almost identically when training the V3 model on 2,048 GPUs.

Low-Latency Networking

In DeepSeek's model inference, large-scale EP relies heavily on all-to-all communication, which is sensitive to both bandwidth and latency; even microsecond-level intrinsic network latency can substantially affect system performance. DeepSeek analyzes the latency characteristics of IB and RoCE (Table 5), noting IB's consistently lower latency, which makes it better suited to latency-sensitive workloads such as distributed training and inference. While RoCE offers a potentially cost-effective alternative, its current latency and scalability limitations keep it from fully meeting the needs of large-scale AI systems.
DeepSeek proposes specific improvements for RoCE, including dedicated low-latency RoCE switches, optimized routing policies, and enhanced traffic isolation and congestion control mechanisms.

To further reduce network communication latency, DeepSeek employs InfiniBand GPUDirect Async (IBGDA). Traditionally, network communication involves CPU proxy threads, which introduce extra overhead. IBGDA lets GPUs directly populate Work Request (WR) contents and write to RDMA doorbell MMIO addresses, eliminating the significant latency of GPU-CPU interaction. By keeping the entire control plane within the GPU, IBGDA avoids CPU bottlenecks, especially when sending many small packets, since the GPU's parallel threads can spread the workload. DeepSeek's DeepEP and other works have demonstrated significant performance gains with IBGDA, leading DeepSeek to advocate broad support for such features across accelerator devices.

Building on the identified hardware limitations and proposed solutions, the paper broadens the discussion to offer forward-looking directions for future hardware architecture design:

Robustness Challenges: Addressing hardware failures and silent data corruption through advanced error detection and correction mechanisms, toward non-stop AI infrastructure.
CPU Bottlenecks and Interconnect Limitations: Optimizing CPU-accelerator cooperation, in particular moving past traditional interfaces like PCIe for high-speed, bottleneck-free intra-node communication.
Intelligent Networks for AI: Building low-latency, intelligent networks with technologies such as co-packaged optics, lossless mechanisms, and adaptive routing to handle complex communication demands.
Memory-Semantic Communication and Ordering: Resolving data consistency and ordering challenges in current memory-semantic communication, exploring hardware-level built-in guarantees for higher communication efficiency.
Computation and Compression in the Network: Offloading computation and compression into the network, especially for workloads like EP, to unlock network bandwidth potential.
Memory-Centric Architecture Innovations: Tackling the memory bandwidth crisis driven by rapid model scaling, exploring technologies such as DRAM stacking and wafer-scale integration.

The paper explores each of these areas with specific insights and recommendations, underscoring the need for holistic hardware-software co-design to sustain the growth and accessibility of large-scale AI.

In conclusion, this technical report provides valuable insights into the challenges and solutions encountered during the development and training of DeepSeek-V3.
By carefully analyzing the interplay between model architecture and hardware constraints, DeepSeek presents a compelling vision for the future of AI infrastructure, emphasizing the critical role of hardware-aware co-design in achieving cost-efficient and scalable large language models. The paper's detailed treatment of techniques such as MLA, DeepSeekMoE, FP8 training, LogFMT, and the MPFT network, combined with its forward-looking recommendations for hardware development, is a substantial contribution to large-scale AI research and engineering.

The paper, "Insights into DeepSeek-V3: Scaling Challenges and Reflections on Hardware for AI Architectures," is on arXiv.




