Micron's Memory Fortress: How HBM4, Record Margins, and a Sold-Out Supply Chain Are Rewriting the Rules of Semiconductor Economics

Micron just posted $23.9B in quarterly revenue — nearly tripling year-over-year — with 74% gross margins and Q3 guidance of $33.5B. HBM4 is in volume production for Nvidia's Vera Rubin, the entire 2026 HBM supply is sold under non-cancellable contracts, and Micron is investing aggressively from Taiwan to India to Idaho to lock in the next decade of AI memory demand.

MU · Information Technology · April 09, 2026

S&P 500 Position

Among semiconductor names in the Information Technology sector, Micron sits behind Nvidia, Broadcom, and TSMC (ADR) by market cap but has emerged as the dominant pure-play memory name in the index. Its $459B market cap places it ahead of AMD, Texas Instruments, and Qualcomm. The competitive dynamic is unique: while most semiconductor peers compete on logic design and compute architecture, Micron competes in a three-player oligopoly (with Samsung and SK Hynix) where supply discipline and node leadership determine pricing power. The AI supercycle has fundamentally altered this dynamic — HBM carries structurally higher margins than commodity DRAM, and Micron's first-mover advantage on HBM4 gives it outsized share of the highest-value memory tier.

Index Weight: ~0.90% | Rank: Approximately #30-35 in S&P 500 by market cap; recently added to the S&P 100

Company Overview

Micron is no longer a cyclical memory commodity play. The company is in the middle of the most profitable period in its 48-year history, driven by a structural shift: AI workloads require orders of magnitude more memory bandwidth and capacity than traditional computing, and Micron is one of only three companies on the planet — alongside Samsung and SK Hynix — that can supply it. The company has begun volume production of HBM4 36GB 12-Hi stacks for Nvidia's Vera Rubin platform, delivering over 2.8 TB/s bandwidth with 20% better power efficiency than HBM3E. It simultaneously shipped the industry's first PCIe Gen6 data center SSD (the Micron 9650) and 192GB SOCAMM2 modules, making it the only memory supplier to bring all three product categories to volume for the Vera Rubin ecosystem at the same time.

The financial transformation is staggering. In fiscal Q2 2026 (ended February), Micron reported $23.9 billion in revenue — up 196% year-over-year — with net income of $13.8 billion and gross margins of 74.4%. Q3 guidance calls for approximately $33.5 billion in revenue with gross margins approaching 81%. The company exited the consumer PC memory market (discontinuing the Crucial brand) in early 2026 to concentrate entirely on enterprise and AI-optimized products.

On the manufacturing side, Micron completed a $1.8 billion acquisition of PSMC's Tongluo fab in Taiwan, broke ground on a new Singapore facility, opened India's first semiconductor assembly and test facility, and continues ramping its Idaho and New York greenfield fabs, backed by $6.4 billion in CHIPS Act funding as part of a $200 billion long-term U.S. investment commitment.

Products & Revenue

Micron's revenue is derived primarily from two underlying technologies — DRAM and NAND — but is reported through four market-facing business units aligned to where those technologies are deployed. DRAM constitutes roughly 75-80% of revenue and is the high-margin engine, driven by HBM, server DRAM (DDR5, LPDDR5X), and mobile DRAM. NAND generates the remaining 20-25% through data center SSDs, mobile storage (UFS), and embedded applications. The strategic shift toward data center and AI products has fundamentally altered Micron's margin profile: data centers accounted for over 56% of FY2025 revenue, and the mix has only intensified through fiscal 2026 as HBM and data center SSD revenue have surged. The company's entire 2026 HBM capacity is sold out under non-cancellable long-term supply agreements.

Cloud Memory Business Unit (CMBU) (~33%): Supplies HBM (HBM3E, HBM4), high-capacity server DIMMs, and low-power server DRAM for hyperscale cloud data centers. Revenue more than doubled sequentially in FQ2 2026 to approximately $7.75 billion, driven by AI server demand.

Mobile and Client Business Unit (MCBU) (~32%): Provides LPDDR5X mobile DRAM, UFS storage for smartphones, and memory/storage for AI PCs and client devices. Revenue jumped from $2.24B a year ago to approximately $7.71B in FQ2 2026.

Core Data Center Business Unit (CDBU) (~20%): Focuses on enterprise-grade data center SSDs (Micron 9650 Gen6, 9550 Gen5), high-capacity NAND storage for AI training/inference workloads, and CXL-based memory expansion products.

Embedded Business Unit (EBU) / Auto, Industrial & Embedded (~15%): Serves automotive (ADAS, infotainment), industrial IoT, and consumer embedded markets. Growing content per vehicle and physical AI edge applications (drones, robotics) drive increasing DRAM and NAND attach rates.

Segment structure reflects Micron's reorganization to market-facing business units beginning FQ4 FY2025. Revenue percentages are approximate, based on FQ2 2026 earnings call disclosures and CNBC reporting; exact segment splits are not yet available in 10-Q form for the new structure.

Leadership

Sanjay Mehrotra

CEO since 2017. Co-founded SanDisk in 1988 and served as its CEO from 2011 until its acquisition by Western Digital in 2016. Holds over 40 years of semiconductor memory experience spanning Intel, SEEQ Technology, and Integrated Device Technology. Became Chairman in January 2025 and received the Semiconductor Industry Association's Robert N. Noyce Award in 2023 — the industry's highest individual honor.

Scott DeBoer, EVP, Chief Technology and Products Officer: Leads all technology development and engineering — from silicon design through HBM stacking, SSD firmware, and ASIC development. A Micron lifer since 1995 with a PhD in electrical engineering, DeBoer owns the 1-gamma DRAM node transition and HBM4 roadmap that define the company's competitive position.

Sumit Sadana, EVP and Chief Business Officer: Manages all customer-facing commercial strategy and the critical long-term supply agreements with hyperscalers. Previously Chief Strategy Officer at SanDisk and held executive roles at IBM and Freescale. His Nvidia co-engineering relationship underpins the Vera Rubin HBM4 design win.

Manish Bhatia, EVP, Global Operations: Oversees the entire global manufacturing footprint — from Idaho to Taiwan to Singapore to India. Led the $1.8B Tongluo fab acquisition, the Singapore groundbreaking, and India's first semiconductor assembly facility opening. Responsible for executing the multi-decade $200B U.S. manufacturing commitment.

Mark Murphy, EVP and Chief Financial Officer: Architects the capital allocation strategy behind Micron's aggressive expansion — guiding CapEx from $20B to $25B for FY2026 — while maintaining a debt-to-equity ratio of just 0.15. He must also absorb the more than $10B year-over-year increase in construction-related CapEx projected for FY2027.

Mark Liu, Board Member: Former Chairman of TSMC — the world's most advanced semiconductor foundry. His presence on Micron's board provides direct strategic insight into advanced packaging, EUV lithography, and the foundry ecosystem critical to HBM4's manufacturing roadmap.

The AI Angle

The non-substitutable memory bottleneck powering AI

Micron's AI strategy is straightforward and devastating in its market impact: own the memory stack that every AI accelerator depends on, from training to inference to edge. The flagship product is HBM4, now in volume production as a 36GB 12-Hi stack achieving over 11 Gb/s pin speeds and greater than 2.8 TB/s bandwidth — a 2.3x improvement over HBM3E (see the bandwidth sketch at the end of this section). This product is co-designed with Nvidia for the Vera Rubin platform, and Micron has already shipped 48GB 16-Hi HBM4 samples to customers, pointing toward denser configurations. The company's entire 2026 HBM output is locked under non-cancellable contracts.

Beyond HBM, Micron is first-to-market with PCIe Gen6 data center SSDs (the Micron 9650, optimized for Nvidia BlueField-4 STX architecture with up to 28 GB/s sequential read throughput) and 192GB-256GB SOCAMM2 modules enabling up to 2TB of memory per CPU in Vera Rubin NVL72 systems. The company also shipped the industry's first 122TB high-capacity SSD, delivering 16x the sequential read throughput per watt of capacity-matched HDD configurations.

Micron is also pushing into physical AI at the edge through a strategic investment in SiMa.ai, whose Modalix MLSoC platform integrates Micron's LPDDR5X memory for robotics, autonomous vehicles, and industrial automation workloads running LLMs and VLMs locally. This positions Micron to capture memory attach rates across the full AI deployment spectrum — from hyperscale training clusters to autonomous drones. The venture capital arm, led by Andrew Byrnes, is explicitly targeting startups building hardware-software co-optimized edge AI systems where memory bandwidth per watt is the binding constraint.

Internally, Micron uses AI throughout its own operations — product design, technology development, manufacturing optimization — reporting 30-40% productivity uplift in select generative AI use cases such as code generation. The 1-gamma DRAM node leverages EUV lithography and delivers 30% better bit density with 20% lower power versus 1-beta, and its yield ramp is running ahead of the record pace achieved on the prior node.

The competitive risk is concentration: Micron, Samsung, and SK Hynix constitute the entire global HBM supply chain. SK Hynix's chairman has stated the memory shortage will persist until 2030, and Samsung is scrambling to close the HBM yield gap. Micron's risk factors are execution-dependent — yield rates on HBM4 at new fabs like Tongluo, the cadence of EUV tool installations, and whether AI infrastructure spending sustains its current trajectory. Google's TurboQuant compression research briefly spooked memory stocks in March 2026, illustrating that any perception of reduced memory intensity per AI workload creates volatility.
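To make the bandwidth arithmetic behind the HBM4 headline numbers concrete, here is a minimal sketch. The 11 Gb/s pin speed, ~2.8 TB/s figure, and ~2.3x uplift come from the profile above; the 2048-bit HBM4 interface width and the 1024-bit, ~9.2 Gb/s HBM3E baseline are assumptions for illustration, not figures from Micron's disclosures.

```python
# Back-of-envelope check of the HBM bandwidth figures cited above.
# ASSUMPTIONS (not from the article): HBM4 uses a 2048-bit interface per stack;
# HBM3E uses a 1024-bit interface at roughly 9.2 Gb/s per pin.

def stack_bandwidth_tbs(pin_speed_gbps: float, interface_width_bits: int) -> float:
    """Per-stack bandwidth in TB/s = pin speed (Gb/s) * interface width (bits) / 8 / 1000."""
    return pin_speed_gbps * interface_width_bits / 8 / 1000

hbm4 = stack_bandwidth_tbs(11.0, 2048)   # ~2.8 TB/s, matching the cited figure
hbm3e = stack_bandwidth_tbs(9.2, 1024)   # ~1.2 TB/s baseline

print(f"HBM4 per-stack bandwidth:  {hbm4:.2f} TB/s")
print(f"HBM3E per-stack bandwidth: {hbm3e:.2f} TB/s")
print(f"Generational uplift:       {hbm4 / hbm3e:.1f}x")  # ~2.4x, in line with the ~2.3x claim
```

Under these assumptions, roughly half of the generational gain comes from the wider interface and the rest from faster pins, which is why packaging capacity (stacking and interposers), not just DRAM die output, gates HBM4 supply.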

Financial Snapshot

Revenue (TTM): $58.1B (period ending Feb 28, 2026) | Net Income (TTM): $24.1B

Margins: Gross 74.4% (FQ2 2026), operating ~67% (FQ2 2026), net 41.5% (TTM); Q3 guidance implies gross margins approaching 81%

Micron's financial profile has undergone a structural transformation. FY2025 revenue was $37.4B with 40% gross margins; the TTM figure of $58.1B already incorporates the explosive ramp in HBM and data center revenue. The company guided FY2026 CapEx to $25B (up from $20B), with construction-related CapEx projected to increase by over $10B year-over-year in FY2027 as it builds out Idaho, New York, Taiwan, and Singapore fabs. Despite this investment intensity, the balance sheet remains fortress-grade at 0.15x debt-to-equity, supported by $13.9B in cash and $6.4B in CHIPS Act funding. The $10B share repurchase authorization ($7.19B executed through August 2025) signals management confidence in sustained cash generation.
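As a quick sanity check on how the snapshot figures tie together, the sketch below reproduces the stated TTM net margin and shows how far the latest quarter runs ahead of the trailing-twelve-month average. All inputs are figures quoted in this profile; no outside data is assumed.

```python
# Figures from this profile, in $B.
ttm_revenue = 58.1
ttm_net_income = 24.1
fq2_revenue = 23.9
fq2_net_income = 13.8

# TTM net margin should reproduce the ~41.5% stated in the snapshot.
print(f"TTM net margin: {ttm_net_income / ttm_revenue:.1%}")

# The latest quarter alone runs well ahead of the trailing average,
# which is why guidance implies margins still climbing into Q3.
print(f"FQ2 2026 net margin: {fq2_net_income / fq2_revenue:.1%}")
```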

1-Year Performance

Current price $421.51. The stock has surged approximately 355% over the trailing twelve months and is up roughly 47% year-to-date in 2026.

The 6x-plus move from the 52-week low reflects the market repricing Micron from a cyclical commodity chip stock to a structural AI infrastructure beneficiary. The Q2 earnings blowout (196% YoY revenue growth, EPS of $12.20 vs. $8.60 consensus) and Q3 guidance of $33.5B in revenue were the primary catalysts for the March all-time high. The stock pulled back ~10% in late March amid concerns over Google's TurboQuant memory compression technique and broader questions about AI capex sustainability, but has recovered sharply with an 18% gain over the past week following a U.S.-Iran ceasefire that eased Strait of Hormuz disruption fears and renewed conviction in AI infrastructure spending.

Recent News

Fun Fact: In 1991, Micron attempted to build a RISC processor called FRISC — a 64-bit, 80 MHz chip with floating-point capabilities targeting embedded control and signal processing. The company even set up a subsidiary and had the design integrated into graphics cards before concluding in 1992 that it would not deliver the 'best bang for the buck' and reassigning engineers to other projects. Three decades later, Micron's silicon is inside every major AI accelerator on the planet — not as the processor, but as the memory that determines how fast those processors can actually think.