Celestial AI

  • What it is: Celestial AI is a semiconductor technology company that develops Photonic Fabric, an optical interconnect platform enabling high-speed, energy-efficient data movement for AI data centers.
  • Best for: Hyperscale AI data center operators, XPU makers (NVIDIA, AMD, Intel), AI infrastructure OEMs
  • Pricing: Custom enterprise pricing
  • Rating: 92/100 (Excellent)
  • Expert's conclusion: Celestial AI (via Marvell) is critical to hyperscalers as they build out next-generation multi-rack AI system architectures that require optical scale-up interconnectivity.
Reviewed by Maxim Manylov · Web3 Engineer & Serial Founder

What Is Celestial AI and What Does It Do?

Celestial AI was a pioneering AI infrastructure company specializing in optical interconnect technology using silicon photonics to revolutionize data movement for AI computing. The company developed the Photonic Fabric platform, including products like PFLink, PFSwitch, and OMIB, targeting data centers, cloud computing, and edge AI applications. It was acquired by Marvell Technology on February 2, 2026.

Acquired
📍 Santa Clara, CA
📅 Founded 2020
🏢 Private (Acquired)
TARGET SEGMENTS
Hyperscalers · Data Centers · AI Chip Vendors · High-Performance Computing · Edge Computing

What Are Celestial AI's Key Business Metrics?

  • Total Funding: $589M
  • Latest Valuation: $2.5B
  • Funding Rounds: Multiple (Series A, C, C1)
  • Employees: 51-200
  • Customers: Lead hyperscaler customers

How Credible and Trustworthy Is Celestial AI?

92/100
Excellent

Exceptional credibility demonstrated by massive funding, hyperscaler adoption, and acquisition by established semiconductor leader Marvell Technology.

Product Maturity: 85/100
Company Stability: 95/100
Security & Compliance: 80/100
User Reviews: 75/100
Transparency: 85/100
Support Quality: 90/100
  • Acquired by Marvell Technology (Feb 2026)
  • $589M total funding from top VCs
  • Adopted by lead hyperscaler customers
  • Unicorn status achieved
  • 51-200 specialized employees

What is the history of Celestial AI and its key milestones?

2020

Company Founded

Founded by David Lazovsky and Preet Virk in Santa Clara, CA to develop photonic interconnects for AI computing.

2021

Series A Funding

Raised $56M to develop Photonic Fabric technology platform.

2024

Series C Funding

Closed $175M Series C led by U.S. Innovative Technology Fund.

2024

Hyperscaler Adoption

Announced Photonic Fabric adoption by lead hyperscaler customers.

2025

Series C1 & Unicorn Status

Raised $250M Series C1 at $2.5B valuation, achieving unicorn status.

2026

Acquired by Marvell

Acquired by Marvell Technology to integrate Photonic Fabric into broader AI connectivity platform.

What Are the Key Features of Celestial AI?

📊
Photonic Fabric Platform
Revolutionary optical interconnect technology using light for data movement within chips and between chips.
PFLink
Connectivity solutions available as chiplets or licensable IP for AI accelerator integration.
PFSwitch
Low-latency, high-bandwidth scale-up switches for data center AI infrastructure.
OMIB
Advanced packaging solutions enabling optical interconnects for AI systems.
🔗
Silicon Photonics Integration
Integrates photonics directly into AI accelerators for superior performance vs electronic interconnects.
Multi-Rack Scaling
Enables efficient scaling of AI processors across multiple racks in data centers.

What Technology Stack and Infrastructure Does Celestial AI Use?

Infrastructure

Data center scale AI compute infrastructure

Technologies

Silicon Photonics · Optical Interconnects · Chiplets · ASIC Design

Integrations

AI Accelerators · Hyperscaler Infrastructure · Data Center Networking · Advanced Packaging

AI/ML Capabilities

Optical processing units and AI accelerators leveraging integrated silicon photonics for machine learning workloads, edge computing, and AI-native applications

Based on company announcements, press releases, and technology descriptions

What Are the Best Use Cases for Celestial AI?

Hyperscale Data Centers
Scale AI clusters across multiple racks with photonic interconnects solving memory wall and data movement bottlenecks
AI Chip Vendors
Integrate PFLink chiplets/IP into next-generation AI accelerators for 10x better performance-per-watt
Cloud Computing Providers
Support massive AI training/inference workloads with energy-efficient optical connectivity solutions
High-Performance Computing Centers
Enables low-latency, high-bandwidth interconnects for advanced scientific computing and AI research.
NOT FOR: Consumer Mobile Applications
Not suitable — focused on enterprise data center and infrastructure-scale applications.
NOT FOR: Individual Developers
Not applicable — enterprise/B2B hardware requires extensive integration work.

How Much Does Celestial AI Cost and What Plans Are Available?

Pricing information with service tiers, costs, and details:

| Service | Cost | Details | Source |
| --- | --- | --- | --- |
| Photonic Fabric Solutions | Custom enterprise pricing | Connectivity, switching, and packaging solutions for AI data centers; sold to hyperscalers and large enterprises | celestial.ai/solutions |
| Optical Scale-up Interconnects | Custom quote | High-bandwidth, low-latency photonic fabrics for XPUs; pricing through OEM partners like Marvell post-acquisition | Marvell investor press release |

How Does Celestial AI Compare to Competitors?

| Feature | Celestial AI | Co-Packaged Optics (CPO) | Copper Interconnects | UALink Switches |
| --- | --- | --- | --- | --- |
| Core Functionality | Photonic scale-up fabric for XPUs | Optical I/O at package edge | Electrical interconnects | Scale-up electrical switching |
| Bandwidth | Terabyte-class, low latency | High | Limited by distance | High |
| Latency | Nanosecond-class | Higher than Celestial | Higher | Higher than photonics |
| Power Efficiency | 2x better than copper | Moderate | Baseline | Moderate |
| Reach | Rack-to-rack | Package-limited | Short distance | Rack-scale |
| Thermal Stability | Excellent for 3D co-packaging | Moderate | Good | Good |
| Pricing | Enterprise custom | Enterprise custom | Lower $/Gbps | Enterprise custom |
| Integration | Within XPU package | Die edge | Standard | Switch-based |
| Market Availability | Pre-commercial (2026) | Emerging | Mature | Mature |
| Enterprise Scale | 1000s of XPUs | Limited | Rack-limited | Rack-scale |


vs Co-Packaged Optics (CPO)

Celestial's optical integration, directly within the XPU package, delivers nanosecond-class latency and superior thermal performance compared with CPO's edge-of-die design. It serves the same hyperscale customer base, with a 3D packaging advantage.

Celestial leads in power efficiency and XPU integration depth for extreme-scale AI clusters.

vs Copper Interconnects

Photonic Fabric achieves 2x power savings, longer reach, and higher bandwidth than copper, which is constrained by physical limitations. While copper is cheaper for short-reach applications, it cannot scale to 1,000+ XPUs across racks.

An inflection point in the optical transition: scaling beyond the physics of copper.

vs NVIDIA NVLink

NVLink is a proprietary scale-up interconnect optimized for NVIDIA GPUs, whereas Celestial's optical fabric is open: it works across XPU vendors with lower power and longer reach than NVLink.

Multi-vendor AI factories favor Celestial's vendor-agnostic fabric approach.

vs UALink Consortium

Marvell can offer a hybrid solution combining its UALink electrical switches with Celestial's photonics. Pure optical scale-up, free of electrical bottleneck limitations, is available only with Celestial's technology.

Marvell's acquisition of Celestial creates leadership across the complete scale-up stack.

What are the strengths and limitations of Celestial AI?

Pros

  • 2x power savings over copper — critical to hyperscale AI economics.
  • Nanosecond latency — enables true all-to-all XPU communication.
  • Rack-to-rack scaling — supports 1,000+ XPUs in a single domain.
  • Excellent thermal stability — co-packages vertically with hot XPUs.
  • Frees the XPU die edge — enables more HBM stacking in the package.
  • Broad applicability — memory pooling, die-to-die optical replacement.
  • Strong funding traction — $589M raised; the Marvell acquisition validates the technology.

Cons

  • Pre-commercial — revenue starts H2 2028 per Marvell guidance
  • Enterprise only — no SMB pricing or self-service options
  • Complex integration — requires a 2.5D package assembly ecosystem
  • Photonics manufacturing — harder to produce than electronic solutions
  • Post-close acquisition risk — dependent on Marvell integration
  • Very limited current deployments — technology still needs validation at scale
  • Pricing unclear — custom quotes only, with hyperscalers as the sole benchmarks

Who Is Celestial AI Best For?

Best For

  • Hyperscale AI data center operators — substantial power, bandwidth, and latency gains over competing solutions
  • XPU makers (NVIDIA, AMD, Intel) — denser HBM packaging and power-efficient scale-up fabrics
  • AI infrastructure OEMs — the Marvell acquisition provides a complete scale-up solution stack
  • Reasoning model developers — in-network memory/compute for large-scale batch and context processing
  • Multi-rack AI superclusters — removes the distance limitations copper imposes on the data center fabric

Not Suitable For

  • SMB AI developers — enterprise-only hardware price point; consider cloud-based GPU instances instead
  • Edge AI deployments — this is data center fabric technology; embedded AI chips may be a better fit
  • Software-only AI teams — a hardware infrastructure play; focus on optimizing your application framework
  • Budget-constrained startups — pre-commercial pricing; wait until the broader ecosystem matures

Are There Usage Limits or Geographic Restrictions for Celestial AI?

Availability
Pre-commercial. Shipping H2 2028 via Marvell
Scale Domain
1000s of XPUs across multiple racks
Interconnect Reach
Package-to-package, rack-to-rack optical
Power Efficiency
2x better than copper interconnects
Latency
Nanosecond-class for XPU communication
Target Market
Hyperscale AI data centers, XPU makers
Deployment Timeline
Marvell acquisition closes Q1 2026
Revenue Milestones
$500M run rate by Q4 FY2028, $1B by FY2029

Is Celestial AI Secure and Compliant?

Data Center Grade Reliability — thermal stability for multi-kW XPU co-packaging environments
Manufacturing Security — 2.5D packaging designed for high-volume AI infrastructure production
Supply Chain Security — Marvell acquisition brings established semiconductor security practices
Infrastructure Redundancy — enables multi-rack optical fabrics with high-availability design
Optical Domain Isolation — separate photonic signaling paths reduce electrical-domain interference

What Customer Support Options Does Celestial AI Offer?

Channels
Direct contact through celestial.ai; Marvell Data Center Group post-acquisition; Marvell IR after close
Hours
Business hours for pre-sales, OEM partner support post-acquisition
Response Time
Enterprise sales cycles, OEM partner SLAs post-Marvell integration
Satisfaction
N/A - pre-commercial hardware platform
Specialized
Hardware integration support for XPU makers and data center architects
Business Tier
Hyperscaler/OEM partner program with dedicated engineering resources
Support Limitations
Pre-revenue stage - support through sales engineering only
No public self-serve documentation available
Customer access limited to qualified hyperscalers/OEMs

What APIs and Integrations Does Celestial AI Support?

API Type
No public APIs found. Celestial AI provides hardware chiplets and photonic fabric technology, not developer-facing software APIs.
Authentication
N/A - No public developer APIs available.
Webhooks
N/A - No public API or webhook support identified.
SDKs
None found. Focus is on hardware integration (UCIe, AXI, UAL protocols) rather than software SDKs.
Documentation
No developer API documentation. Technical docs focus on Photonic Fabric chiplet specs and integration for hardware engineers.
Sandbox
N/A - No developer sandbox available.
SLA
N/A for APIs. Enterprise hardware support through partners/hyperscalers.
Rate Limits
N/A - Bandwidth specs: 14.4 Tbps (Gen1), 28.8 Tbps (Gen2) per chiplet.
Use Cases
Hardware-level: XPU-to-XPU scale-up interconnects, in-network memory for AI clusters. Targets hyperscalers building multi-rack AI systems.

What Are Common Questions About Celestial AI?

Photonic Fabric is an optical interconnect platform designed to solve the Memory Wall problem. It provides scalable memory capacity in the tens of TBs and terabyte-class bandwidth at nanosecond latency and single-digit pJ/bit power between compute nodes, enabling XPU-to-XPU scale-up networks for multi-rack AI clusters.

Compared with traditional interconnects, Photonic Fabric offers 2x better power efficiency, longer reach (>50 m), and 10x higher bandwidth (16 Tbps per chiplet vs. 1.6T ports). It also co-packages directly with XPUs, freeing die space for more HBM memory.
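The pJ/bit and per-chiplet bandwidth figures above can be turned into a rough power estimate with simple arithmetic. The sketch below uses an assumed 5 pJ/bit midpoint for "single-digit pJ/bit" and an assumed 10 pJ/bit copper-class baseline (2x worse, per the text); both values are illustrative, not published specifications.

```python
def interconnect_power_watts(energy_pj_per_bit: float, bandwidth_tbps: float) -> float:
    """Power (W) = energy per bit (J) x bits per second.

    Since pico (1e-12) and tera (1e12) cancel, pJ/bit x Tbps equals watts
    numerically.
    """
    joules_per_bit = energy_pj_per_bit * 1e-12
    bits_per_second = bandwidth_tbps * 1e12
    return joules_per_bit * bits_per_second

# Gen1 PFLink chiplet (14.4 Tbps) at an assumed 5 pJ/bit: ~72 W
print(interconnect_power_watts(5, 14.4))
# Same bandwidth at an assumed copper-class 10 pJ/bit: ~144 W
print(interconnect_power_watts(10, 14.4))
```

The doubling from 5 to 10 pJ/bit illustrates why a 2x energy-per-bit advantage matters at hyperscale: the savings scale linearly with aggregate fabric bandwidth across thousands of XPUs.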

The product line includes the PFLink chiplet (14.4 Tbps Gen1, UCIe-A interface), PFSwitch (a low-latency scale-up switch), and Photonic Fabric Appliances with 33 TB memory capacity, designed to meet HBM3E/HBM4 bandwidth needs.

The target market is next-generation AI data centers built by hyperscalers and their ecosystem partners; multiple hyperscalers are engaged for commercial deployment of scale-up optical interconnects.

Marvell has positioned the acquisition as an accelerator for scale-up connectivity in multi-rack AI systems; first-generation Photonic Fabric chips are expected to be co-packaged with custom XPUs and switches.

Four tape-outs have been completed. First-generation PFLink chiplets integrate electrical and optical components and support up to 16 Tbps of bandwidth; the company is engaged with hyperscalers on commercial-scale deployment.

The fabric is protocol-adaptive, supporting AXI, UAL, HBM/DDC, CXL, and proprietary protocols, and is compatible with standard packaging processes through its UCIe-A die-to-die interface.

Exceptionally low power (single-digit pJ/bit), nanosecond latency, and thermal stability enable co-packaging with multi-kW XPUs. The company claims a 12.5x performance improvement for 10T-parameter DLRM models.

Is Celestial AI Worth It?

Celestial AI’s Photonic Fabric is a significant advancement in optical interconnects for AI scale-up networks, addressing the Memory Wall with exceptional bandwidth, latency, and power efficiency. The acquisition by Marvell positions the company well for the coming multi-rack AI data center build-out. This is hardware innovation aimed squarely at hyperscalers building the next decade's AI infrastructure.

Recommended For

  • Hyperscalers developing rack-scale AI cluster architectures (XPUs in the thousands).
  • XPU vendors needing optical solutions to scale their products.
  • AI infrastructure teams addressing HBM bandwidth bottlenecks.
  • Data center architects planning deployments of models with >10T parameters.

Use With Caution

  • Companies not yet prepared for chiplet/co-packaged optics integration.
  • Smaller AI deployments (less than rack-scale) that can get by with copper.
  • Teams that need a product immediately — current engagements focus on custom deployments.

Not Recommended For

  • Software-only AI developers — this is a hardware interconnect solution.
  • Budget-constrained deployments — enterprise/hyperscaler pricing.
  • Legacy data centers not planning to adopt XPU/multi-rack architectures.
Expert's Conclusion

Celestial AI (via Marvell) is critical to hyperscalers as they build out their next generation multi-rack AI system architecture that requires optical scale-up interconnectivity.

Best For
Hyperscalers developing rack-scale AI cluster architectures (XPUs in the thousands) · XPU vendors needing optical scale-up solutions · AI infrastructure teams addressing HBM bandwidth bottlenecks

What do expert reviews and research say about Celestial AI?

Key Findings

Photonic Fabric is built for AI scale-up networks and has delivered chiplets capable of 14.4-28.8 Tbps with HBM3E/HBM4 bandwidth at nanosecond latency. The technology was recently acquired by Marvell for use in multi-rack AI data centers: it addresses the "Memory Wall" problem and enables a claimed 12.5x performance increase for large AI models.

Data Quality

Good - detailed technical specifications from official website and Marvell acquisition announcement. Customer traction confirmed with hyperscalers. Limited pricing/public API details as hardware-focused enterprise solution.

Risk Factors

  • Roadmap/integration risk from the recent acquisition by Marvell
  • Hardware chiplet maturity: four tape-outs completed, but all are custom implementations
  • Commercial-scale dependency on hyperscalers
  • Co-packaged optics ecosystem still emerging
Last updated: February 2026

What Additional Information Is Available for Celestial AI?

Marvell Acquisition

Marvell announced its acquisition of Celestial AI to accelerate the development of scale-up connectivity solutions for next-generation multi-rack AI data centers. The first commercial implementations are expected with custom XPUs and scale-up switches.

Technology Leadership

Four tape-outs have been completed for PFLink chiplets. Generation one provides 16 Tbps of bandwidth per chiplet (10x current 1.6T ports). Generation two is expected to double bandwidth to 28.8 Tbps with full HBM3E bandwidth support.

Industry Partnerships

Engaged with multiple hyperscalers for the commercialization of this technology. A partnership with Samsung was highlighted for their HBM, DDR, and advanced packaging capabilities.

Performance Claims

Photonic Fabric Appliances provide 33 TB of memory capacity and 115 Tbps of switching capability. This results in the potential for up to 71 percent XPU CapEx/power savings and a 12.5 times speed increase for 10T parameter DLRM models.

Technical Differentiation

Claimed to be the only solution that maintains thermal stability when co-packaged directly with multi-kW XPUs. Frees die-edge space for additional HBM, is protocol-adaptive (AXI/UAL/CXL), and offers >50 m reach for multi-rack clusters.

What Are the Best Alternatives to Celestial AI?

  • NVIDIA NVLink: Proprietary high-speed GPU interconnect for scale-up. The industry standard, but copper-based, with reach and bandwidth limitations compared to optical photonics. Best for NVIDIA GPU cluster deployments that do not require extreme rack-scale expansion.
  • Broadcom CPO (Co-Packaged Optics): Co-packaged optical engines for pluggable transceivers and switches. A more established ecosystem, but with higher power and latency than Celestial's chiplet-based approach. Better suited to scale-out networking than XPU-to-XPU scale-up.
  • AMD Infinity Fabric: Scalable fabric connecting EPYC CPUs and GPU clusters, designed specifically for AMD ecosystems. Bound by copper's physical limits versus the reach and bandwidth of optical solutions. Ideal for AMD-centric deployments. (amd.com)
  • Intel UALink: A scale-up interconnect standard for AI developed by the UALink Consortium (Meta, Intel, AMD). Celestial supports the protocol, but consortium implementations are electrical rather than optical. Ideal for protocol-compatible scale-up that does not require replacing the full scale-up network. (ualinkconsortium.org)
  • Ayara Memory Fabric: Optical memory pooling for CXL-based disaggregated memory in AI clusters. Focused on memory rather than Celestial's full-stack compute-and-memory fabric. Ideal for expanding AI cluster memory on existing hardware without replacing the entire scale-up network. (ayara.com)
