- TW8816-LB3-GR
- TW8816-LB3-GRS
- TW8834-TA2-CR
- TW8836-LB2-CE
- TW9910-LB2-GR
- TW9910-NB2-GR
- R7F701404EAFB
- R7F7015664AFP-C
- R7F7010323AFP
- R5F10PLJLFB#X5
- DF2676VFC33V
- HD64F2636F20V
- HD64F2638F20V
- HD64F2638WF20V
- DF2633RTE28V
- HD64F2612FA20V
- NCV7321D12R2G
- NCV7342D13R2G
- NCV7351D13R2G
- SPC5674FF3MVR3
- FS32K148UJT0VLQT
- SPC5748GK1MKU6
- FS32K144UAT0VLLT
- FS32K146HAT0MLLT
- CAT24C02TDI-GT3A
- CAT24C256YI-GT3
- S25FL064LABMFB010
- CAT24C64WI-GT3
- CAT24C256WI-GT3
- CAT25160VI-GT3
- CAT24C16YI-GT3
- CAT24C04WI-GT3
- M95256-DRMN3TP/K
- MTFC16GAPALBH-IT
- MTFC8GAKAJCN-1M WT
- MTFC8GAKAJCN-4M IT
- MT52L256M32D1PF-107 WT:B
- MT52L512M32D2PF-107 WT:B
- MT25QL256ABA8E12-0AAT
- MT25QL256ABA8ESF-0AAT
- MT29F128G08AJAAAWP-ITZ:A
- MT29F16G08ABABAWP-IT:B
- MT29F16G08ABACAWP-ITZ:C
- MT29F16G08AJADAWP-IT:D
- MT29F1G08ABAEAWP-IT:E
- MT29F2G01ABAGDWB-IT:G
- MT29F2G08ABAEAWP:E
- MT29F2G08ABAEAWP-IT:E
- 2SJ598-ZK-E1-AZ
- RJK0330DPB-01#J0
- UPA1918TE(0)-T1-AT
- NZ9F4V3ST5G
- PCF8578T/1,118
- STGYA120M65DF2AG
- BU931P
- BU941ZPFI
- BU931T
- ESDAXLC6-1BT2
- STD15P6F6AG
- TESEO-VIC3DA
- STGB20NB41LZT4
- BSS123NH6327XTSA1
- BSS131H6327XTSA1
- BSS126H6327XTSA2
- BSP315PH6327XTSA1
- IPD100N04S402ATMA1
- IPB80N06S2L07ATMA3
- BSP170PH6327XTSA1
- BSP613PH6327XTSA1
- BSS223PWH6327XTSA1
- BSS816NWH6327XTSA1
- AUIRF7341QTR
- MPXV5100GC6U
- MMA6900KQ
- MPXHZ6400AC6T1
- MPX4115AP
- MPX5050DP
- MPXAZ6115AP
- MPXHZ6115A6U
- MPXV5050DP
- MPXV5050GP
- MPXV7002DP
- MPXV7025DP
- MPX5050D
- LMT86QDCKRQ1
- TMP451AIDQFR
- TMP112AQDRLRQ1
- TMP411AQDGKRQ1
- TMP411DQDGKRQ1
- SN74LV4052APWR
- LM5007SD/NOPB
- SNJ54LS08J
- TPS3702CX33DDCR
- TMP6131QDECRQ1
- TMP6131QDYARQ1
- TMP6131QDYATQ1
Google TPU Chip Ironwood Technology Explained
In November 2025, Google officially commercialised its seventh-generation TPU chip, Ironwood, marking one of the most significant updates in its AI accelerator roadmap to date. Compared with the sixth-generation TPU (Trillium), Ironwood delivers a fourfold improvement in both model training performance and inference throughput. This leap does not merely represent incremental silicon progress; it directly targets the rapidly growing global demand for large-scale generative AI and enterprise-level AI deployments.
By addressing the core bottleneck in AI inference—namely, the high cost and large energy footprint of deploying trillion-parameter models—Ironwood enables enterprises to run advanced AI workloads more efficiently and affordably. Google has stated that leading AI developer Anthropic is preparing to deploy one million new TPUs to support ongoing development and operation of its Claude model family, illustrating the scale at which modern AI systems now operate.
In the following sections, we will examine Google’s latest TPU in detail.
Introduction to TPUs
What is a TPU and what does it do?
A Tensor Processing Unit (TPU) is a custom-designed application-specific integrated circuit (ASIC) developed by Google for accelerating machine-learning workloads. Google introduced the first-generation TPU internally in 2015, and the company publicly revealed the technology during the 2016 Google I/O conference. Since then, TPUs have become a foundational element of Google’s AI infrastructure.
The TPU differs fundamentally from general-purpose computing chips because all elements of its architecture—logic units, data paths, memory hierarchy, and interconnect—are designed specifically for tensor operations, such as matrix multiplication and convolution. These operations form the mathematical backbone of neural networks, particularly deep learning models used in language processing, vision, speech recognition, and recommendation systems.
By focusing exclusively on these operations, TPUs eliminate unnecessary hardware complexity and achieve extremely high parallelism, enabling substantial improvements in computational efficiency relative to CPUs and GPUs.

What is an ASIC chip?
An ASIC (Application-Specific Integrated Circuit) is a chip tailored to perform a particular task or serve a specific application domain. Unlike CPUs and GPUs—which are designed to handle broad categories of operations—ASICs are engineered with a single purpose in mind.
This design philosophy brings several pronounced advantages:
1. Higher Performance
Because ASICs incorporate hardware structures optimised for a target task, they can execute these operations far more efficiently. For example, AI-focused ASICs like TPUs implement large systolic arrays and streamlined control logic, reducing the number of cycles needed to perform each operation. Pipelining and parallel data flow further minimise latency.
2. Superior Energy Efficiency
General-purpose processors typically waste energy executing functions not directly required for AI tasks, such as branch prediction and complex control flows. ASICs, by contrast, remove unnecessary logic gates and minimise switching activity. This results in significantly lower power consumption and allows higher sustained utilisation of computational units.
3. High Integration and Smaller System Footprint
ASICs can consolidate diverse functional blocks—compute engines, memory controllers, interconnect components—onto a single die. This reduces system size, simplifies board design, and enhances reliability. In mass production, ASICs also benefit from economies of scale, making them cost-effective in high-volume or hyperscale deployments.
The Evolution of Google’s TPU
Google’s TPU programme has evolved rapidly over the past decade:
● 2015 – TPU v1: Introduced as an internal inference accelerator.
● 2016 – TPU v1 publicly unveiled: Demonstrated at Google I/O; used in AlphaGo, showing its ability to support sophisticated reinforcement-learning systems.
● 2017 – TPU v2: Added distributed shared memory and moved towards large-scale training workloads.
● 2018 – TPU v3: Implemented liquid cooling, enabling higher power envelopes and improved thermal stability.
● 2021 – TPU v4: Adopted a 3D torus interconnect topology, dramatically improving multi-chip scaling.
● 2023 – TPU v5: Delivered further improvements in cost-per-compute and training efficiency.
● 2024 – TPU v6 Trillium: Delivered a major per-chip compute increase and a third-generation SparseCore, expanding support for training large Transformer-based language models.
● 2025 – TPU v7 Ironwood: A major architectural step forward.
A single Ironwood Superpod integrates 9,216 TPU chips. Each chip carries 192 GB of HBM3e memory delivering 7.4 TB/s of bandwidth and provides a peak of 4,614 TFLOPs of FP8 compute. Collectively, such a system forms one of the world’s most capable AI supercomputers.
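Scaling those per-chip numbers up to the full pod is simple arithmetic; a quick Python sanity check, using only the figures quoted above:

```python
# Back-of-envelope aggregates for an Ironwood Superpod, using only the
# per-chip figures quoted above (9,216 chips, 192 GB HBM3e, 4,614 TFLOPs FP8).
CHIPS = 9_216
HBM_PER_CHIP_GB = 192
TFLOPS_PER_CHIP = 4_614

total_hbm_tb = CHIPS * HBM_PER_CHIP_GB / 1_000          # decimal GB -> TB
total_exaflops = CHIPS * TFLOPS_PER_CHIP / 1_000_000    # TFLOPs -> EFLOPs

print(f"Total HBM3e: {total_hbm_tb:,.0f} TB")           # ~1,769 TB
print(f"Peak FP8 compute: {total_exaflops:.1f} EFLOPs") # ~42.5 EFLOPs
```

In other words, a full Superpod holds roughly 1.77 PB of HBM3e and about 42.5 ExaFLOPs of peak FP8 compute.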

How TPUs Differ from CPUs and GPUs
Architectural Differences
The core architectural distinctions between CPUs, GPUs and TPUs can be summarised as follows:
CPU (Central Processing Unit)
CPUs prioritise flexibility and are built with complex control units and deep cache hierarchies. This enables them to handle branching, interrupts and diverse workloads. However, their relatively small number of cores limits their parallel computing capability.
GPU (Graphics Processing Unit)
GPUs contain thousands of small compute cores designed for highly parallel workloads, originally graphics rendering but now widely used for general-purpose matrix operations. However, GPUs still retain general-purpose components that introduce overhead for AI workloads.
TPU (Tensor Processing Unit)
A TPU strips away much general-purpose hardware and instead uses a systolic array, a highly parallel grid of arithmetic units specifically tuned for tensor operations. Data moves rhythmically through the array, allowing tens of thousands of simultaneous multiply-accumulate operations with minimal control overhead.
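The rhythmic data movement can be made concrete with a toy simulation. The sketch below (plain Python, purely illustrative; a real systolic array is fixed-function hardware, not a software loop) streams A-values rightward and B-values downward with the usual diagonal skew, so each processing element accumulates one output cell with a multiply-accumulate per cycle:

```python
def systolic_matmul(A, B):
    """Cycle-by-cycle toy model of an output-stationary systolic array.

    A is n x k, B is k x m. PE(i, j) holds accumulator C[i][j]; A values
    flow left-to-right and B values top-to-bottom, one hop per cycle.
    """
    n, k = len(A), len(A[0])
    m = len(B[0])
    acc = [[0] * m for _ in range(n)]
    # a_pipe[i][j] / b_pipe[i][j]: operand currently sitting in PE(i, j)
    a_pipe = [[None] * m for _ in range(n)]
    b_pipe = [[None] * m for _ in range(n)]
    for t in range(k + n + m - 2):          # time for the wavefront to drain
        # shift operands right / down (reverse order avoids overwriting)
        for i in range(n):
            for j in range(m - 1, 0, -1):
                a_pipe[i][j] = a_pipe[i][j - 1]
        for j in range(m):
            for i in range(n - 1, 0, -1):
                b_pipe[i][j] = b_pipe[i - 1][j]
        # inject skewed inputs at the array edges (row i / column j delayed
        # by i / j cycles, so matching operands meet inside the array)
        for i in range(n):
            step = t - i
            a_pipe[i][0] = A[i][step] if 0 <= step < k else None
        for j in range(m):
            step = t - j
            b_pipe[0][j] = B[step][j] if 0 <= step < k else None
        # every PE multiply-accumulates whatever pair just arrived
        for i in range(n):
            for j in range(m):
                if a_pipe[i][j] is not None and b_pipe[i][j] is not None:
                    acc[i][j] += a_pipe[i][j] * b_pipe[i][j]
    return acc
```

For example, `systolic_matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]])` returns `[[19, 22], [43, 50]]`, matching the ordinary matrix product, with no per-cycle control decisions anywhere in the inner loop.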
Application Scenarios
● CPUs are ideal for flexible, small-scale inference, model prototyping and tasks requiring high control complexity.
● GPUs excel at training medium-sized models, executing custom kernels, and supporting a wide range of workloads.
● TPUs dominate in ultra-large-scale training, long-duration workloads, trillion-parameter embedding lookups, and massive parallel inference tasks.
Comparison of CPU, GPU, FPGA, and ASIC (NPU/TPU)
| Dimension | CPU | GPU | FPGA | ASIC (NPU / TPU) |
|---|---|---|---|---|
| Full Name | Central Processing Unit | Graphics Processing Unit | Field-Programmable Gate Array | Application-Specific Integrated Circuit (Neural Processing Unit / Tensor Processing Unit) |
| Primary Purpose | General-purpose computing, OS tasks, logic-heavy operations | Parallel computation, graphics rendering, AI training & inference | Customisable hardware logic, prototyping specialised pipelines | Highly specialised AI computation (tensor/matrix operations) |
| Architecture Type | Few powerful cores, deep cache hierarchy | Thousands of simple cores for massive parallelism | Reconfigurable logic blocks + routing matrix | Fixed-function compute arrays (e.g., systolic arrays) optimised for AI |
| Programming Flexibility | Very high | High | Very high (hardware-level customisation) | Low (purpose-built for specific workloads) |
| Performance on AI Workloads | Low | High | Moderate to high (depends on custom design) | Very high (industry-leading efficiency for LLMs) |
| Latency Characteristics | Low latency, good for control-heavy tasks | Moderate latency | Very low latency when optimised | Very low latency for supported AI operations |
| Energy Efficiency | Low to moderate | Moderate | High (when optimised) | Very high (2–4× GPU in many cases) |
| Hardware Customisation | None | Limited | Full hardware customisation | None after manufacturing (fully fixed) |
| Scalability in Data Centres | Limited | High (multi-GPU clusters) | Moderate (depends on design complexity) | Very high (thousands of NPU/TPU chips in pods) |
| Use Cases | OS, applications, logic processing, sequential tasks | Deep learning training, graphics rendering, HPC | Prototyping, edge AI, specialised pipelines, real-time control | LLMs, large-scale AI training & inference, recommendation engines |
| Typical Power Consumption | 45–125 W | 250–700+ W | Highly variable (1–50 W edge / 100+ W data centre) | 10–200 W (TPU v7 ≈ 157 W) |
| Ease of Development | Easiest | Easy due to CUDA/ROCm | Difficult (hardware design skills needed) | Moderate; requires framework support (XLA/NNAPI) |
| Cost | Low | High | Variable | High initial cost, low cost-per-compute for large deployments |
| Best Strength | Versatility | Parallel throughput | Custom logic & low latency | Maximum AI efficiency & scale |
| Main Limitation | Poor parallel performance | High power use & less efficient scaling | Complex development & longer design cycles | Limited flexibility & tied to manufacturer ecosystem |
TPU Hardware Architecture
The TPU architecture is built around three interdependent subsystems:
1. Compute Subsystem
The systolic array consists of thousands of arithmetic logic units laid out in a two-dimensional grid. Each ALU performs multiply-accumulate (MAC) operations while data flows through the array in a pipelined manner. This design allows near-maximum utilisation of compute resources, surpassing typical GPU utilisation rates for large matrix multiplications.
2. Memory Subsystem
TPUs incorporate multiple layers of memory:
● High-bandwidth HBM3e delivering multiple terabytes per second
● High-speed SRAM caches
● Local register files for extremely low-latency access
This hierarchical approach ensures the compute units are consistently supplied with data, minimising bottlenecks.
3. Interconnect Subsystem
The TPU interconnect enables multiple chips to work in synchrony, forming TPU Pods that scale to thousands of devices. High-speed links and topology-aware routing ensure efficient cross-chip communication.
Google also integrates programmable controllers and data-processing modules that handle scheduling, prefetching, and format conversion, all contributing to performance gains.
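Topology-aware routing is easiest to picture on the 3D torus Google adopted with TPU v4. A minimal sketch, assuming a hypothetical 4×4×4 torus (dimensions chosen only for the demo, not Google's actual pod shape):

```python
# Illustrative only: neighbours and hop distances in a 3D torus of chips.
# DIMS is a made-up 4x4x4 example, not a real TPU pod configuration.
DIMS = (4, 4, 4)

def neighbours(node):
    """Each chip links to six neighbours: +/-1 along x, y, z, with wraparound."""
    out = []
    for axis, size in enumerate(DIMS):
        for step in (-1, 1):
            coord = list(node)
            coord[axis] = (coord[axis] + step) % size
            out.append(tuple(coord))
    return out

def hop_distance(a, b):
    """Shortest path length: per-axis wraparound distance, summed."""
    return sum(min(abs(p - q), s - abs(p - q)) for p, q, s in zip(a, b, DIMS))

print(neighbours((0, 0, 0)))
print(hop_distance((0, 0, 0), (3, 3, 3)))  # only 3 hops, thanks to wraparound
```

The wraparound links are the point: the corner-to-corner distance collapses from 9 hops on a plain mesh to 3 on the torus, which is why the topology scales so well for all-reduce-style collective operations.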

HBM in TPU Architectures
High Bandwidth Memory (HBM) is crucial for sustaining the throughput required by modern neural networks. Large models demand enormous amounts of data movement, and HBM3e reduces memory stall time by delivering multi-terabyte-per-second bandwidth. In Ironwood, the 192 GB memory capacity per chip means larger model partitions can be stored locally, reducing the need for inter-chip communication.
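A back-of-envelope roofline model shows why this bandwidth matters. Using the Ironwood figures quoted in this article (4,614 TFLOPs of FP8 against 7.4 TB/s of HBM3e), a kernel only becomes compute-bound once it performs roughly 600+ FLOPs per byte fetched:

```python
# Simple roofline check from the Ironwood numbers quoted above.
PEAK_FLOPS = 4_614e12      # FP8 TFLOPs -> FLOPs/s
BANDWIDTH_BPS = 7.4e12     # HBM3e bytes/s

machine_balance = PEAK_FLOPS / BANDWIDTH_BPS  # FLOPs needed per byte moved
print(f"machine balance: {machine_balance:.0f} FLOPs/byte")  # ~623

# An n x n FP8 matmul does 2n^3 FLOPs over ~3n^2 bytes (ideal reuse),
# giving an arithmetic intensity of roughly 2n/3 FLOPs per byte.
def bound(n):
    intensity = 2 * n / 3
    return "compute-bound" if intensity >= machine_balance else "memory-bound"

print(bound(256))    # too little reuse: stalls on HBM traffic
print(bound(4096))   # enough reuse to keep the systolic array saturated
```

Small matrices leave the MAC units idle waiting on memory; only large, reuse-heavy tensor operations let the chip run near its quoted peak, which is exactly the regime HBM3e and the 192 GB local capacity are designed to serve.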

Core Technical Advantages of TPUs
1. Energy Efficiency
TPUs allocate the majority of transistors to compute units rather than control logic, enabling them to deliver 2 to 4 times higher performance per watt than contemporary GPUs. This is essential for large-scale AI clusters where energy usage is a major operational and environmental concern.
2. Compute Density
With 4,614 TFLOPs of FP8 compute capability, Ironwood surpasses even Nvidia’s latest Blackwell B200 GPU on raw inference performance. The smaller physical footprint also enables higher rack density, lowering the total cost of ownership for hyperscale deployments.
3. Cost Effectiveness
TPUs reduce redundant hardware costs and leverage Google’s XLA compiler to optimise models automatically. According to Google Cloud, training large language models on TPUs can be 40–60% cheaper than performing the same tasks on GPUs.
Typical TPU Application Scenarios
TPUs support a wide range of practical AI tasks:
1. Natural Language Processing
Google’s PaLM and Gemini models, among the world’s largest and most capable language models, are trained on TPU Pods. The TPU architecture is particularly effective for attention mechanisms and wide-layer MLPs.
2. Computer Vision
Image classification, object detection, and video understanding workloads benefit from the TPU’s high matrix-multiplication throughput.
3. Recommendation Systems
Services such as Google Search and YouTube rely on TPUs to process enormous embedding tables, enabling personalised content recommendations for billions of users.
4. Edge AI
The Coral Edge TPU supports low-latency inference in industrial inspection, smart retail, and IoT devices, where real-time responses are essential.
Google TPU vs. Nvidia GPU
Architecture and Specifications
Google Ironwood (TPU v7):
● ASIC with systolic array
● FP8 performance: 4,614 TFLOPs
● HBM3e: 192 GB
● Power consumption: 157 W
● Scales up to 9,216 chips per Superpod
Nvidia Blackwell B200 (2024):
● General-purpose GPU
● FP8 performance: 4,500 TFLOPs
● 8-GPU platform memory: 1,440 GB
● Power consumption: 700 W
Nvidia H200 (2023):
● Hopper-derived architecture
● FP8 performance: ~2,560 TFLOPs
● Memory: 141 GB
● Power: 450 W
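Dividing the headline figures above gives a rough performance-per-watt picture. These are vendor peak numbers as listed in this article, not like-for-like benchmarks:

```python
# Performance-per-watt computed directly from the spec lists above.
# (Headline vendor figures only; real sustained efficiency will differ.)
chips = {
    "Ironwood (TPU v7)": (4_614, 157),   # (FP8 TFLOPs, watts)
    "Nvidia B200":       (4_500, 700),
    "Nvidia H200":       (2_560, 450),
}
for name, (tflops, watts) in chips.items():
    print(f"{name}: {tflops / watts:.1f} TFLOPs/W")
```

On these quoted numbers, Ironwood's roughly 29 TFLOPs/W is several times the B200's ~6.4 TFLOPs/W, which is what drives the efficiency claims in the comparison below.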

Performance and Energy Efficiency Comparison
Ironwood slightly exceeds the B200 in FP8 inference and significantly outperforms the H200. TPU’s architecture leads to stronger energy efficiency and better sustained utilisation for large workloads.
Strengths and Limitations
TPU Strengths:
● Exceptional inference throughput
● Industry-leading energy efficiency
● Excellent scaling capabilities
● Tight integration with Google’s software stack
TPU Limitations:
● Restricted to Google Cloud’s ecosystem
● Less flexible for general-purpose workloads
● Higher barrier to entry for custom operator development
GPU Strengths:
● Universal deployment flexibility
● Mature and robust CUDA ecosystem
● Strong support for diverse model types
GPU Limitations:
● Lower energy efficiency
● Scaling inefficiencies in ultra-large clusters
TPU Market Landscape
IDC reports:
● 2024 global GPU market: ~USD 70 billion
● 2024 global ASIC market: ~USD 14.8 billion
● 2030 projections:
○ GPU market > USD 300 billion
○ ASIC market > USD 80 billion
Shipment Forecasts
● 2024 shipments:
○ GPUs: 8.76 million
○ ASICs: 2.83 million
● 2030 forecasts:
○ GPUs: ~30 million
○ ASICs: ~14 million
This corresponds to CAGR:
● GPUs: ~23%
● ASICs: ~30%
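These growth rates follow directly from the shipment figures; a one-line check:

```python
# Verifying the implied CAGRs from the 2024 -> 2030 shipment forecasts above.
def cagr(start, end, years):
    """Compound annual growth rate between two values over `years` years."""
    return (end / start) ** (1 / years) - 1

print(f"GPUs:  {cagr(8.76, 30, 6):.1%}")   # ~22.8%
print(f"ASICs: {cagr(2.83, 14, 6):.1%}")   # ~30.5%
```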
Google’s TPUs lead the ASIC sector, with over 70% market share in 2024 and an estimated USD 6–9 billion in revenue.
The competitive landscape is intensifying:
● Amazon Trainium: Over 200% shipment growth in 2024
● Meta MTIA v2: Focused on inference, with a training-oriented ASIC expected in 2026
● OpenAI ASIC initiative: Targeting 3 nm/A16-class chips with mass production projected for 2026
This increasingly diverse ecosystem indicates that AI-specific silicon is becoming central to the next generation of global compute infrastructure.

FAQs About Google TPU Chips
1. What is the main difference between a TPU and a GPU?
A TPU is a custom-built ASIC designed specifically for tensor operations used in machine learning, particularly large-scale training and inference. It uses a systolic array architecture to maximise matrix multiplication efficiency. A GPU, by contrast, is a general-purpose parallel processor suited for a wide range of workloads, including graphics rendering, scientific computing and AI. TPUs offer superior energy efficiency and better scaling for very large models, while GPUs provide greater flexibility and broader ecosystem support.
2. Why are TPUs particularly effective for large language models (LLMs)?
LLMs rely heavily on large matrix multiplications, high-dimensional embeddings and Transformer layers—all of which map extremely well to the systolic arrays and high-bandwidth memory design of TPUs. TPUs maintain higher utilisation during long-running training cycles and reduce communication overhead across thousands of chips, making them ideal for trillion-parameter models.
3. Can TPUs be used outside of Google Cloud?
At present, Google TPUs are only accessible through Google Cloud’s managed infrastructure. Unlike GPUs, which can be purchased and deployed on-premise or integrated into custom servers, TPUs are not available for independent hardware purchase. This design ensures tight optimisation between Google’s hardware, software stack and data-centre network fabric.
4. How does Google’s Ironwood TPU compare to Nvidia’s Blackwell GPUs?
Ironwood delivers slightly higher FP8 inference performance than the Nvidia B200 and significantly outperforms the H200. It also consumes far less power—157 W compared with around 700 W for a B200—resulting in better performance-per-watt and improved data-centre efficiency. However, GPUs retain advantages in versatility, custom operator development and ecosystem maturity.
5. What workloads benefit most from TPU acceleration?
TPUs excel at large-scale AI workloads that rely on high-throughput tensor operations, such as:
● training and inference of LLMs
● computer vision models with heavy convolutional layers
● massive embedding table lookups used in recommendation systems
● long-duration or hyperscale distributed training tasks
They are less suited to workloads requiring extensive branching logic or highly specialised custom kernels.
6. Are TPUs more cost-effective than GPUs for AI training?
For large language models and other matrix-heavy workloads, TPUs generally offer 40–60% lower overall training cost compared with GPUs. Their higher energy efficiency, reduced hardware overhead and XLA compiler optimisations contribute to lower total cost of ownership. However, for smaller models or workloads requiring bespoke GPU kernels, GPUs may still be more economical.
Nov 27, 2025
Qualcomm SA8620P: AI Powerhouse for ADAS and Autonomous Driving
The automotive industry is rapidly evolving toward smarter, safer, and more connected vehicles — and Qualcomm is leading this transformation. Among its groundbreaking automotive platforms, the Qualcomm SA8620P stands out as a high-performance AI chip designed to enable advanced driver assistance systems (ADAS) and autonomous driving in both fuel vehicles and electric vehicles.
As part of the Snapdragon Ride family, the SA8620P combines cutting-edge computing, AI acceleration, and low power efficiency to power the next generation of intelligent assisted driving.
Overview of Qualcomm SA8620P
The SA8620P is a member of Qualcomm’s automotive-grade SoC lineup, purpose-built for advanced in-vehicle computing. It brings together high-speed processing, AI inference, computer vision, and sensor fusion — all within an energy-efficient architecture.
This chip bridges the gap between today’s assisted driving and tomorrow’s fully autonomous vehicles, making it a cornerstone of Qualcomm’s automotive strategy.

Architecture Overview
Under the hood, the Qualcomm SA8620P uses a multi-core CPU and GPU architecture optimized for real-time automotive workloads.
● CPU Complex: Based on high-performance Kryo cores, it handles decision-making and sensor data management.
● GPU Engine: Powered by the Adreno GPU family, it accelerates real-time rendering and perception tasks.
● AI Engine: Equipped with Qualcomm’s Hexagon DSP and dedicated neural processing units, the SA8620P achieves trillions of operations per second (TOPS), handling deep learning models for perception, prediction, and planning.
● Memory & Connectivity: It supports LPDDR5 memory, automotive Ethernet, PCIe, and CAN interfaces for high-bandwidth communication between sensors and ECUs.
● Safety: Built for ISO 26262 ASIL-D compliance, ensuring reliability in mission-critical applications like braking and steering control.
Together, these components enable vehicles to process visual, radar, and LiDAR data in real time — the foundation of modern ADAS systems.
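As an illustration only, the sensor fusion mentioned above can be reduced to its simplest textbook form: an inverse-variance weighted average of independent range estimates. The sensor values and noise levels below are hypothetical, and a production Snapdragon Ride stack would run full probabilistic filters on the NPU rather than this one-liner:

```python
def fuse(measurements):
    """Inverse-variance weighted fusion of independent range estimates.

    measurements: list of (value, variance) pairs, one per sensor.
    Less noisy sensors (smaller variance) get proportionally more weight.
    """
    weights = [1 / var for _, var in measurements]
    total = sum(weights)
    return sum(w * v for w, (v, _) in zip(weights, measurements)) / total

# Hypothetical range-to-object estimates in metres, as (value, variance):
camera = (25.4, 4.0)   # vision: noisier at range
radar  = (24.8, 0.5)   # radar: good range accuracy
lidar  = (25.0, 0.2)   # LiDAR: most precise in this example
print(f"fused range: {fuse([camera, radar, lidar]):.2f} m")
```

The fused estimate lands close to the LiDAR reading because LiDAR carries the smallest variance; the noisier camera contributes correspondingly little.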
Ecosystem & Partnerships
Qualcomm’s success with SA8620P is not just about silicon—it’s about the ecosystem surrounding it. The chip is part of the Snapdragon Ride Platform, which provides a complete hardware and software stack for automakers and Tier 1 suppliers.
● Collaborations: Major industry players such as BMW, General Motors, Hyundai, and Stellantis have partnered with Qualcomm to integrate Snapdragon Ride technology into their next-generation vehicles.
● Software Support: The platform supports popular automotive OS frameworks, including QNX, Linux, and Android Automotive, making integration flexible and scalable.
● Developer Tools: Qualcomm provides AI model optimization and toolkits to help carmakers deploy neural networks more efficiently on the SA8620P hardware.
This collaborative ecosystem ensures faster time-to-market and future-proof adaptability for both fuel and electric vehicles.

Performance & Benchmarks
While Qualcomm does not publicly release full benchmark data, internal tests and partner reports indicate that the SA8620P delivers exceptional performance across perception and planning workloads.
Key performance characteristics include:
● AI Performance: Up to tens of TOPS for real-time object detection and semantic segmentation.
● Power Efficiency: Optimized for low thermal output, ideal for electric vehicles where power management is crucial.
● Latency: Designed for sub-millisecond sensor fusion and decision loops, critical for responsive driving maneuvers.
This performance level enables not just ADAS features like adaptive cruise control or lane keeping, but also higher autonomy functions such as automatic overtaking and highway autopilot.
Comparison with SA8650P
| Feature | Qualcomm SA8620P | Qualcomm SA8650P |
|---|---|---|
| Performance | High | Ultra-high |
| Power Efficiency | Optimized for balance | Focused on maximum performance |
| Target Use | Mid-range and premium cars | Luxury and flagship autonomous vehicles |
| AI Engine | Next-gen scalable NPU | Multi-core AI accelerator |
| Automation Level | L2–L3 | L3–L4 |
| Vehicle Type | Fuel + electric vehicles | Primarily electric + high-end models |
While the SA8650P serves as a flagship chip for high-end autonomous driving, the SA8620P offers a more balanced solution—ideal for automakers who want strong AI performance without excessive cost or power consumption.
Comparison with Competitors
In the growing landscape of automotive AI chips, the SA8620P competes with platforms like NVIDIA Orin, Mobileye EyeQ5, and Tesla FSD processors.
| Feature | Qualcomm SA8620P | NVIDIA Orin | Mobileye EyeQ5 |
|---|---|---|---|
| Architecture | CPU + GPU + NPU | GPU + CPU cluster | Vision DSP + CPU |
| Power Efficiency | Excellent | High | Very high |
| Ecosystem | Snapdragon Ride + OEM support | NVIDIA DRIVE | Mobileye software stack |
| Strength | Balanced AI and efficiency | Extreme compute power | Vision-based ADAS focus |
| Ideal Use | Mass-market EVs and ADAS | High-end autonomous cars | Cost-sensitive ADAS systems |
Qualcomm’s key advantage lies in scalability and power efficiency — enabling AI-driven capabilities even in mid-tier models, not just luxury cars.
Applications in Modern Vehicles
The Qualcomm SA8620P powers a wide range of in-car applications:
● Advanced Driver Assistance: Lane keeping, blind spot detection, automatic emergency braking, and adaptive cruise control.
● Autonomous Driving: Real-time sensor fusion for highway autopilot and parking automation.
● Driver Monitoring: AI-based fatigue detection, facial recognition, and gesture control.
● In-Vehicle Experience: Personalized infotainment and voice interaction powered by onboard AI.
● Energy Optimization: Smart algorithms balance power draw between compute and drivetrain systems, improving EV range and fuel economy.
This adaptability allows automakers to deploy the same chip architecture across multiple vehicle lines — from fuel vehicles to fully electric models.
Future Outlook
As cars evolve into rolling AI computers, the Qualcomm SA8620P plays a foundational role in this transformation.
Future versions of Snapdragon Ride are expected to integrate even higher-performance AI cores, improved 3D perception, and enhanced vehicle-to-everything (V2X) communication. These upgrades will allow cars to share environmental data with each other, improving road safety and traffic efficiency.
Qualcomm’s long-term vision is to create a unified platform for intelligent mobility—where every car learns, adapts, and collaborates within a connected ecosystem.
Final Thoughts
The Qualcomm SA8620P stands as a key milestone in automotive AI computing. By combining robust performance, exceptional efficiency, and broad compatibility across vehicle types, it’s accelerating the transition from assisted to autonomous driving.
In a future where every car is intelligent, connected, and adaptive, Qualcomm’s SA8620P—and its evolving Snapdragon Ride ecosystem—will be the driving force behind safer, smarter mobility for everyone.
FAQs
1. What vehicles use Qualcomm SA8620P?
The SA8620P is being adopted by several global automakers within the Snapdragon Ride platform, including select models from BMW, GM, and Hyundai, among others.
2. How is the SA8620P different from the SA8650P processor?
The SA8650P targets high-end autonomous systems with more compute power, while the SA8620P offers a balanced mix of performance and efficiency for mid-to-premium ADAS.
3. Is the SA8620P suitable for electric vehicles?
Yes. Its low power consumption and compact design make it ideal for EV platforms where energy efficiency is critical.
4. What is Qualcomm Snapdragon Ride?
Snapdragon Ride is Qualcomm’s comprehensive automotive platform, providing hardware, software, and AI tools for ADAS and autonomous driving development.
5. How does SA8620P contribute to intelligent assisted driving?
It processes sensor data from cameras, radar, and LiDAR to enable real-time decision-making, improving safety, comfort, and automation in driving.
Oct 20, 2025
What is NVIDIA DRIVE AGX Thor? A Deep Dive into NVIDIA's Automotive AI Supercomputer
The automotive industry is in the middle of a major shift — cars are no longer just machines with wheels, but computers on wheels powered by AI. At the center of this transformation is NVIDIA DRIVE AGX Thor, a platform designed to handle everything from driver assistance to fully autonomous driving, while also powering in-car infotainment and digital cockpits.
But what exactly is DRIVE AGX Thor, why does it matter, and how does it compare to previous NVIDIA automotive platforms like DRIVE Orin and DRIVE Xavier? Let’s explore.
What is NVIDIA DRIVE AGX Thor?
Announced in 2022, NVIDIA DRIVE AGX Thor is a next-generation central car computer built to unify every intelligent function inside a vehicle. Unlike earlier platforms that focused mainly on autonomous driving, Thor is designed as a single AI supercomputer that handles:
● Autonomous driving & ADAS (Advanced Driver Assistance Systems)
● In-vehicle infotainment systems
● Digital instrument clusters and cockpits
● Driver and passenger monitoring
● AI-based safety and security features
In essence, Thor replaces multiple electronic control units (ECUs) with one powerful, centralized AI platform, simplifying design and enabling higher performance.

Key Specifications of NVIDIA DRIVE AGX Thor
Here are the highlights of Thor’s architecture:
● GPU Architecture: Based on NVIDIA Ada Lovelace GPU architecture
● AI Performance: Up to 2,000 TOPS (trillions of operations per second)
● CPU: Next-gen ARM-based cores integrated with NVIDIA’s custom logic
● Automotive Networking: Support for LPDDR5X memory, PCIe Gen 5, and high-speed sensor input
● Integrated Transformer Engine: Optimized for large AI models, including generative AI and autonomous driving perception networks
● ASIL-D (Automotive Safety Integrity Level D) functional safety compliance
Thor’s 2,000 TOPS capability makes it one of the most powerful automotive-grade processors ever announced, setting a new benchmark for in-vehicle computing.
Why is DRIVE AGX Thor Important?
Traditional cars use dozens, sometimes hundreds, of ECUs, each dedicated to a specific task: infotainment, ADAS, powertrain control, etc. This results in complexity, high cost, and inefficiency.
Thor changes this model by consolidating all those functions into a single high-performance AI computer. The benefits include:
1. Centralized AI computing – No need for separate chips for infotainment and autonomy.
2. Improved safety – With ASIL-D certification, Thor ensures reliability for critical driving functions.
3. Cost efficiency – Fewer chips mean reduced BOM (bill of materials) and simplified vehicle architecture.
4. Future-proofing – Optimized for transformer-based AI models, Thor is ready for the era of generative AI in automotive.

Applications of NVIDIA DRIVE AGX Thor
So, what will Thor actually do inside vehicles? Here are some of its main use cases:
● Autonomous Driving – From highway autopilot to full self-driving capabilities, Thor runs perception, mapping, and decision-making models in real time.
● Digital Cockpit – Handles infotainment, passenger entertainment, and instrument clusters simultaneously.
● Driver and Passenger Monitoring – AI cameras that detect driver fatigue, distraction, or monitor passenger safety.
● In-Vehicle AI Services – Real-time voice assistants, generative AI copilots, and cloud-to-car integrations.
● Fleet & Logistics Vehicles – Autonomous trucks, delivery robots, and ride-hailing vehicles powered by a unified compute platform.
NVIDIA DRIVE AGX Thor vs. DRIVE Orin vs. DRIVE Xavier
To put Thor in context, here’s how it compares to its predecessors:
| Feature | DRIVE Xavier | DRIVE Orin | DRIVE Thor |
|---|---|---|---|
| GPU Architecture | Volta | Ampere | Ada Lovelace |
| AI Performance | ~30 TOPS | ~254 TOPS | ~2,000 TOPS |
| Memory Support | LPDDR4x | LPDDR5 | LPDDR5X |
| Functional Safety | ASIL-C | ASIL-D | ASIL-D |
| Target Use | ADAS, entry-level autonomy | Advanced autonomy, infotainment | Full vehicle central computer (autonomy + cockpit + AI services) |
Thor clearly represents a massive leap forward, offering nearly 8× the AI performance of DRIVE Orin.
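The "nearly 8×" figure follows directly from the headline TOPS numbers quoted above — a quick sanity check (the values are taken from the comparison table; this is only an illustration of the arithmetic, not a benchmark):

```python
# Rough generational speedups from the published TOPS figures above.
xavier_tops = 30
orin_tops = 254
thor_tops = 2000

thor_vs_orin = thor_tops / orin_tops      # ~7.9x, i.e. "nearly 8x"
thor_vs_xavier = thor_tops / xavier_tops  # ~66.7x

print(f"Thor vs. Orin:   {thor_vs_orin:.1f}x")
print(f"Thor vs. Xavier: {thor_vs_xavier:.1f}x")
```

Note that TOPS is a peak throughput figure; real-world speedups depend on the workload and precision used.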
Is NVIDIA DRIVE AGX Thor Available Yet?
As of 2025, NVIDIA has announced partnerships with several automakers — including Zeekr, BYD, and others in the EV and autonomous driving space — who plan to integrate Thor into production vehicles in the coming years.
Mass deployment is expected to begin around 2025–2026, aligning with the rollout of next-generation autonomous EVs.
Pricing of NVIDIA DRIVE AGX Thor
Unlike Jetson modules, NVIDIA doesn’t sell DRIVE Thor as a consumer product. Instead, it’s integrated into automotive platforms in partnership with OEMs and Tier 1 suppliers. Pricing is negotiated case-by-case, but given its complexity, it sits at the high end of NVIDIA’s automotive solutions.
Pros and Cons of NVIDIA DRIVE AGX Thor
Like any technology, NVIDIA DRIVE AGX Thor comes with its strengths and trade-offs. Here’s a quick breakdown:
Pros
● Massive AI performance (2,000 TOPS) — enough to handle autonomy, infotainment, and in-vehicle AI on a single platform.
● Centralized architecture reduces the need for multiple ECUs, lowering cost and complexity for automakers.
● Future-proof design with transformer acceleration for large generative AI models.
● ASIL-D safety compliance ensures reliability for mission-critical automotive applications.
● Scalability — a single platform that can be adapted across entry-level EVs to high-end autonomous vehicles.
Cons
● Not consumer-available — only OEMs and Tier 1 suppliers can access Thor directly.
● High integration cost for automakers compared to simpler ECU setups.
● Power requirements are higher than older platforms, making thermal design more complex.
● Early adoption risk — since mass deployment is just starting (2025–2026), ecosystem maturity may take time.

Future Outlook: The Role of DRIVE AGX Thor in Autonomous Vehicles
Thor is more than just another chip — it represents NVIDIA’s vision of the software-defined car. By centralizing all functions, carmakers can continuously update features via OTA (over-the-air) updates, adding capabilities years after purchase.
With its transformer engine, Thor is also uniquely positioned to handle large generative AI models, meaning future cars may come with copilots that understand natural language, anticipate driver needs, and interact seamlessly with smart infrastructure.
In short: Thor isn’t just about autonomy — it’s about making vehicles intelligent, upgradeable platforms for the AI era.
All In All
The NVIDIA DRIVE AGX Thor marks a new chapter in automotive computing. With 2,000 TOPS of performance, transformer acceleration, and safety certification, it unifies autonomous driving, infotainment, and in-vehicle AI into one platform.
For automakers, it promises reduced complexity, lower cost, and a clear path toward software-defined vehicles. For consumers, it paves the way for safer, smarter, and more connected cars.
As the automotive world accelerates toward autonomy, NVIDIA DRIVE AGX Thor is set to become the central brain of next-generation intelligent vehicles.
FAQ
1. What is NVIDIA DRIVE AGX Thor used for?
Thor is designed to power autonomous driving, infotainment, digital cockpits, and AI assistants — all from a single centralized car computer.
2. How powerful is NVIDIA DRIVE Thor?
It delivers up to 2,000 TOPS, making it one of the most powerful automotive AI processors ever announced.
3. Is DRIVE AGX Thor available today?
Yes, but primarily through NVIDIA’s automotive partners. Consumer availability is limited — it will appear in production vehicles from 2025 onwards.
4. How does DRIVE Thor compare to DRIVE Orin?
Thor is nearly 8× more powerful than Orin, with transformer acceleration, faster memory, and broader unification of vehicle functions.
5. Which companies are using DRIVE Thor?
Automakers like Zeekr, BYD, and other leading EV manufacturers have announced plans to integrate DRIVE Thor into future models.
Sep 22, 2025
How to Test a PNP Transistor with a Digital Multimeter: Step-by-Step Guide
Introduction
One of the most common failure points when troubleshooting electronic devices or repairing PCBs is the transistor. These small but powerful components act as electronic switches and amplifiers and are indispensable in almost every circuit, from audio amplifiers and power supplies to radios and microcontroller-based projects.
A faulty transistor can cause symptoms like:
● No power in a circuit.
● Distorted audio signals.
● Failure of a switching regulator.
● Overheating or short circuits on a PCB.
This guide shows you how to test PNP transistors with a digital multimeter. Instead of blindly replacing components, you can quickly confirm whether a transistor is good, short-circuited, or open-circuited. The article walks through the relevant theory, the tools you need, a step-by-step testing process, result interpretation, common mistakes, advanced testing methods, and frequently asked questions.
By the end, you will not only know how to identify a transistor's pins and test its health, but also understand why the readings look the way they do.
Quick Theory Review: What is a PNP Transistor?
Structure
A PNP transistor is a type of bipolar junction transistor (BJT). It has three terminals:
1. Base (B) – The control terminal.
2. Emitter (E) – The terminal that emits (injects) charge carriers.
3. Collector (C) – The terminal that collects charge carriers.
In the schematic symbol, a PNP transistor is drawn with the arrow on the emitter pointing inward, toward the base (Fig. 1).

(Fig. 1: PNP transistor symbol — arrow pointing inward)
Working Principle
A PNP transistor is formed by a thin N-type semiconductor layer sandwiched between two P-type regions. The inward-pointing arrow indicates the direction of conventional current flow at the emitter.
Key principle:
● When the base is more negative than the emitter (by ~0.6 V for silicon transistors), current flows from emitter to collector.
● Unlike an NPN transistor, which turns on when the base is positive, a PNP turns on when the base is pulled low relative to the emitter.
In simplified terms, you can think of it as:
● Base = switch handle
● Emitter = source
● Collector = output
When the base allows, current flows from emitter to collector.
Internal Equivalent
A transistor can be modeled as two diodes back-to-back:
● The Base–Emitter junction behaves like a diode.
● The Base–Collector junction behaves like another diode.
This model explains why a multimeter’s diode test mode is the perfect way to check transistor health.
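The two-diode model can be captured in a few lines of code. The sketch below uses a hypothetical helper, `expected_reading`, to predict what a diode-mode multimeter should display for a healthy silicon PNP depending on probe placement (it is an illustration of the model, not a real instrument API):

```python
def expected_reading(black_probe, red_probe, vf=0.65):
    """Predicted diode-mode reading for a healthy silicon PNP.

    black_probe / red_probe: one of 'B', 'E', 'C'.
    In diode mode the red probe is positive and the black probe negative,
    so a PNP junction conducts only when the black probe sits on the base:
    B->E and B->C show a forward drop, every other pairing reads OL.
    """
    if black_probe == 'B' and red_probe in ('E', 'C'):
        return vf        # forward-biased junction: ~0.6-0.7 V
    return float('inf')  # reverse-biased or C-E path: shown as OL

# A good PNP conducts only from base (black probe) to emitter/collector:
print(expected_reading('B', 'E'))  # 0.65
print(expected_reading('E', 'B'))  # inf -> displayed as OL
print(expected_reading('C', 'E'))  # inf -> displayed as OL
```

Any measurement that deviates from this predicted pattern points to a damaged junction, which is exactly what the step-by-step test below exploits.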
Why Testing a PNP Transistor Matters
Knowing how to test transistors is crucial for:
● Repairing electronics: Power supplies, amplifiers, and motor drivers often fail because of shorted transistors.
● Prototyping: When reusing transistors from old boards, you must ensure they are still functional.
● Education: For students, testing reinforces the theoretical understanding of BJTs.
● Diagnostics: A faulty transistor can mimic other problems, leading you down the wrong troubleshooting path.
By mastering this test method, you save both time and money, and you’ll gain confidence in your repair skills.
Tools You’ll Need
To perform a reliable test, gather these tools:
● Digital Multimeter – With a diode test function. This is the primary tool.
● PNP Transistor under Test – Any general-purpose PNP (e.g., 2N3906, BC558, etc.).
● Datasheet or Pinout Diagram (optional) – Helps confirm pin order if markings are unclear.
● Breadboard & Jumper Wires (optional) – Useful for hands-free measurement.
Test Principle in More Depth
Why does a multimeter work so well for this?
● In diode mode, the multimeter applies a small current and measures the forward voltage drop across a junction.
● Since a PNP has two PN junctions, each should behave like a diode.
● A good transistor shows consistent forward voltage drops (~0.6–0.7 V for silicon) and an over-limit (OL) reading in reverse.
● If either junction fails (short or open), the readings deviate, revealing the fault.
Step-by-Step Testing Guide
Step 1: Identify the Base (B)
1. Set the multimeter to diode mode.
2. Select two pins at random and connect the probes.
3. Cycle through combinations until you find one pin that shows conduction with both of the other pins.
4. That pin is the base.
● For a PNP transistor, conduction occurs when the black probe is on the base.

(Fig. 2: Black probe on base, red probe on other pins shows conduction)
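The probe-cycling logic of Step 1 can be written as a short sketch. Here `readings` and `find_base` are hypothetical names: `readings` maps each `(black_pin, red_pin)` pair to the displayed diode-mode drop, and the base is the single pin that conducts to both other pins when the black probe is on it:

```python
def find_base(readings, open_reading=float('inf')):
    """Identify the base of a PNP from diode-mode measurements.

    readings: dict mapping (black_pin, red_pin) -> displayed voltage drop,
              with open_reading standing in for an 'OL' display.
    Returns the pin that conducts to BOTH other pins with the black
    probe on it (the PNP behaviour described in Step 1), else None.
    """
    pins = {p for pair in readings for p in pair}
    for candidate in pins:
        others = pins - {candidate}
        if all(readings.get((candidate, o), open_reading) < open_reading
               for o in others):
            return candidate
    return None  # no pin qualifies: suspect a damaged transistor

# Example measurements from a healthy PNP; only pin 2 conducts both ways out:
OL = float('inf')
measured = {
    (1, 2): OL, (2, 1): 0.66,
    (1, 3): OL, (3, 1): OL,
    (2, 3): 0.64, (3, 2): OL,
}
print(find_base(measured))  # 2
```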
Step 2: Confirm the Transistor Type (PNP)
1. Keep the black probe on the base.
2. Use the red probe to touch the emitter and collector.
3. If both show a voltage drop of ~0.6–0.7 V, you’ve confirmed it’s a PNP transistor.
4. If the opposite happens (the red probe must be on the base), then it’s an NPN transistor.

(Fig. 3: Black probe on base, red probe on emitter and collector both show conduction, confirming PNP)
Step 3: Distinguish Between Emitter (E) and Collector (C)
1. With the black probe on the base, measure the voltage drops from the base to the two remaining pins.
2. The pin with the slightly higher forward voltage drop is the emitter.
3. The lower one is the collector.
Why? The emitter junction is more heavily doped, so its forward voltage is slightly higher.

(Fig. 4: Black probe on base, red probe alternately on emitter and collector, compare voltage differences)
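Step 3 boils down to a single comparison, sketched below with a hypothetical helper name (`identify_emitter_collector`); both drops are assumed to be measured with the black probe on the already-identified base:

```python
def identify_emitter_collector(drop_to_pin_a, drop_to_pin_b):
    """Distinguish emitter from collector by comparing forward drops.

    Both readings are taken with the black probe on the base. The
    junction with the slightly HIGHER drop is base-emitter (Step 3),
    so that pin is the emitter.
    Returns the roles of (pin_a, pin_b) as a tuple of strings.
    """
    if drop_to_pin_a > drop_to_pin_b:
        return ('emitter', 'collector')
    return ('collector', 'emitter')

# Example readings of 0.668 V and 0.652 V to the two unknown pins:
print(identify_emitter_collector(0.668, 0.652))  # ('emitter', 'collector')
```

The difference is typically only a few millivolts, so use a meter with enough resolution and keep the probes firmly seated.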
Advanced Testing Methods
Method 1: Resistance Mode
If your meter lacks diode mode, use the resistance range.
● A good junction shows low resistance in the forward direction and very high resistance in reverse.
● It’s less precise but still works.
Method 2: hFE (Gain) Testing
Some multimeters have an hFE test socket. Insert the transistor and check its DC gain.
● For general-purpose PNPs, expect hFE values between 100 and 300.
● A very low or unstable hFE may indicate a weak or damaged transistor.
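The hFE socket is simply measuring the DC current gain, which is the ratio of collector current to base current. A minimal sketch of that relationship (the function name and example currents are illustrative, not from a real meter):

```python
def hfe(collector_current_a, base_current_a):
    """DC current gain (hFE, also called beta): Ic / Ib."""
    return collector_current_a / base_current_a

# E.g. 20 mA of collector current driven by 100 uA of base current:
gain = hfe(20e-3, 100e-6)
print(gain)                 # 200.0 -- within the typical 100-300 range
print(100 <= gain <= 300)   # True for a healthy general-purpose PNP
```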
Method 3: In-Circuit Testing
If you cannot desolder:
● Test directly on the PCB, but beware of parallel paths.
● If readings are inconsistent, always remove the transistor to confirm.
Method 4: Analog Multimeter
Analog meters apply reversed probe polarity in resistance mode:
● The red probe is internally negative and the black probe positive.
● Keep this in mind to avoid confusion when identifying junctions.
Interpreting the Results
| Condition | Reading with Black Probe on Base | Interpretation |
| --- | --- | --- |
| Good PNP transistor | 0.6–0.7 V drop to emitter & collector; other combos = OL | Normal |
| Shorted transistor | 0 V or near 0 V across all pins | Damaged |
| Open transistor | OL (open) in all combinations | Damaged |
| Reversed polarity test | No conduction with red probe on base | Matches PNP behavior |
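The interpretation rules above can be condensed into a small decision function. This is a rough sketch under stated assumptions: `classify` is a hypothetical helper, readings are in volts with `float('inf')` standing in for an 'OL' display, and both drops are measured with the black probe on the base:

```python
def classify(drop_be, drop_bc, other_paths_open):
    """Rough PNP health check following the interpretation table.

    drop_be / drop_bc:  displayed forward drops from base to emitter
                        and collector, float('inf') meaning 'OL'.
    other_paths_open:   True if every other probe combination read OL.
    """
    OL = float('inf')
    if drop_be == OL and drop_bc == OL:
        return 'open (damaged)'
    if drop_be < 0.1 or drop_bc < 0.1 or not other_paths_open:
        return 'shorted (damaged)'
    if 0.5 <= drop_be <= 0.8 and 0.5 <= drop_bc <= 0.8:
        return 'good PNP'
    return 'suspect - retest out of circuit'

print(classify(0.66, 0.64, True))                  # good PNP
print(classify(0.0, 0.0, False))                   # shorted (damaged)
print(classify(float('inf'), float('inf'), True))  # open (damaged)
```

The 0.5–0.8 V window assumes a silicon transistor; germanium parts (~0.2–0.3 V) and Darlingtons (~1.2–1.4 V) would need different thresholds.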
Real-World Scenarios
● Audio amplifier repair: A shorted PNP driver transistor may mute one channel.
● Power supply: A failed PNP pass transistor can cause “no output voltage.”
● Arduino project: Using salvaged PNPs? Always test before inserting them into your circuit.
Precautions and Mistakes to Avoid
● Always power off: Testing in a live circuit may damage your meter.
● Remove from circuit for accuracy: In-circuit testing can be misleading due to parallel components.
● Check the datasheet: Different packages (TO-92, TO-220, SOT-23) have different pinouts.
● Watch probe polarity: Digital and analog meters differ in polarity.
Conclusion
Learning how to test a PNP transistor with a digital multimeter is one of the most useful practical skills for electronics enthusiasts, repair technicians, and students alike.
● You now know how to identify the base, emitter, and collector.
● You can distinguish between good, shorted, or open transistors.
● You’ve learned multiple test methods and common pitfalls.
● You understand why the readings appear as they do, not just how to interpret them.
With practice, this process will become second nature. Next time you encounter a faulty circuit, grab your multimeter and quickly confirm whether the PNP transistor is good or bad.
FAQ
Q1: Can I test high-power PNP transistors the same way?
Yes, the method works the same. Just note that power transistors may show slightly different voltage drops because of their larger junction area and different doping levels.
Q2: What if my transistor shows conduction in both directions?
That means it’s shorted internally. Replace it.
Q3: Why does the emitter have a higher voltage drop than the collector?
Because of doping differences. The emitter is heavily doped to supply carriers, so its forward voltage is slightly higher.
Q4: Can I use this method for PNP Darlington transistors?
Yes, but expect voltage drops of 1.2–1.4V due to two junctions in series.
Q5: Is it possible for a transistor to pass the diode test but still fail in-circuit?
Yes. A transistor might still have degraded gain (hFE) or breakdown issues under load. The diode test is a first-level check, not a guarantee of perfect performance.
Sep 12, 2025
