Electronic-photonic accelerator architecture

Acculux Systems

Optical AI compute built for the memory-wall era: a high-bandwidth photonic data plane, ASIC control/readout, and validated recovery for noisy optical signals.

DRC-clean photonic layout path
Zero global-route overflow
Zero detailed-route violations
Large-scale readout validation
99.9924% full-loop recovery
Platform

Compute architecture where light carries the burden and silicon keeps it disciplined.

HC-1 combines a silicon-photonic optical plane with routed digital control/readout and learned recovery for analog optical noise. The architecture targets AI workloads constrained by memory traffic, interconnect, power, and cost.

The division of labor is simple: photonics supplies massive parallel optical data movement, the ASIC layer supplies control and framing, and recovery software closes the gap between analog behavior and digital correctness.

Optical plane

409.6 Tb/s modeled optical data plane

A package-scale model validates high-bandwidth optical movement while preserving a clear path toward manufacturing and system closure.

ASIC control

Routed and timed control/readout implementation

The digital side exposes the control, calibration, telemetry, packetization, and bandwidth-monitoring paths needed to turn photonics into a controllable accelerator.

Recovery

99.9924% strict token match after noise recovery

The current full-loop path recovers noisy photonic readout into digital token outputs with a 99.9924% strict-match result, demonstrating near-perfect token agreement while keeping proprietary recovery internals private.
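Strict token match is an exact-agreement metric: a position counts only if the recovered token ID equals the reference token ID, with no partial credit. A minimal sketch of how such a rate could be computed, assuming aligned sequences of token IDs (the function name and interface here are illustrative, not Acculux's internal API):

```python
def strict_match_rate(reference: list[int], recovered: list[int]) -> float:
    """Fraction of positions where the recovered token ID exactly
    equals the reference token ID (no partial credit)."""
    if len(reference) != len(recovered):
        raise ValueError("sequences must be the same length")
    matches = sum(r == c for r, c in zip(reference, recovered))
    return matches / len(reference)

# Example: 9,999 of 10,000 tokens recovered exactly.
rate = strict_match_rate(list(range(10_000)),
                         list(range(9_999)) + [-1])
print(f"{rate:.4%}")  # 99.9900%
```

Under this definition, the quoted 99.9924% figure corresponds to roughly 76 mismatched tokens per million.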

Evidence

Validated evidence across photonics, digital implementation, interface mapping, and recovery.

Silicon photonic tile

DRC-clean photonic layout evidence with exported layout data and a clear migration path.

Digital implementation

Routed and timed control/readout implementation evidence for the electronic side of the system.

ASIC/PIC interface

Explicit mappings for register control, optical readout, calibration transactions, packet framing, and recovery handoff.

Photonic readout dataset

Readout modeling with optical, thermal, calibration, detector/readout, quantization, crosstalk, and stress-condition coverage.

End-to-end harness

Integrated validation loop binding hardware-facing transactions, optical readout behavior, recovery inference, and 99.9924% strict-match recovered outputs.

Power sweep

The current optical-launch sweep anchors the system-power model in a 300-500 W engineering class.

Comparison

HC-1 is positioned against rack-scale AI systems on optical bandwidth, power class, and cost direction.

System                         Bandwidth metric            Tb/s     Base W    Tb/s/W
Single HC-1 chip architecture  HC-1 optical data plane     409.6    400       1.024
NVIDIA DGX B200                NVLink scale-up bandwidth   115.2    14,300    0.00806
NVIDIA GB200 NVL72             NVLink scale-up bandwidth   1,040    120,000   0.00867
NVIDIA HGX Rubin NVL8          NVLink scale-up bandwidth   230.4    25,222    0.00913

HC-1 values reflect the current engineering model and validated implementation stack. The cost model remains a BOM objective pending supplier, package, test, manufacturing, and yield closure.
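The Tb/s/W column is simply bandwidth divided by base power. A quick check of the table's efficiency figures, using only the published Tb/s and wattage values (system names and numbers taken from the table above):

```python
# (bandwidth in Tb/s, base power in W), per the comparison table
systems = {
    "Single HC-1 chip architecture": (409.6, 400),
    "NVIDIA DGX B200":               (115.2, 14_300),
    "NVIDIA GB200 NVL72":            (1_040, 120_000),
    "NVIDIA HGX Rubin NVL8":         (230.4, 25_222),
}

for name, (tbps, watts) in systems.items():
    # Five decimal places reproduces the table's rounding
    print(f"{name:30s} {tbps / watts:.5f} Tb/s/W")
```

On these numbers the HC-1 model's 1.024 Tb/s/W sits roughly two orders of magnitude above the rack-scale NVLink figures, which is the comparison the table is making.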

Articles

Short technical articles on the compute wall, optical acceleration, and the Acculux validation path.

Article 01

The GPU scaling wall is economic before it is theoretical

Why AI infrastructure pressure shows up as power, memory, networking, and capital intensity.

Article 02

Why optical compute wants an ASIC beside it

Photonics can move and transform signals at scale, while electronics bring control, timing, and validation.

Article 03

What Acculux built

A high-level view of the HC-1 implementation stack without exposing proprietary implementation details.

Article 04

What the HC-1 power sweep actually says

The validated optical-launch point, the current system-power class, and what the sweep says about practical closure.

Article 05

Why hardware companies should evaluate foundry paths beyond TSMC

Why foundry selection should stay open until the right digital, package, and execution path is clear.

Article 06

We built GPU DRC because CPU checks were too slow

How a multi-hour verification bottleneck became a seconds-scale replay on real ASIC layout data.

Contact

Technical diligence, strategic capital, and partner conversations.

Contact Acculux