Happy to announce that I’ve taken a meaningful step toward bridging the gap between AI and circuit/device modeling. It is my absolute pleasure to introduce Ψ-HDL (pronounced Psi-HDL), my Physics Structure-Informed Hardware Description Language framework. This work builds directly on the Ψ-NN (Physics structure-informed neural network) discovery framework introduced by Liu et al. (Nature Communications, 2025).
After months of intensive development and validation, enabled primarily by the Ψ-NN paper mentioned above, my Ψ-HDL framework is now published in IEEE Access under the title "Ψ-HDL: Physics Structure-Informed Neural Networks for Hardware Description Language Generation".
I built Ψ-HDL because I was tired of the same two bad choices. Traditional compact device modeling demands weeks or months of manual equation derivation, and the resulting models are often too specific to generalize. Closed-box (black-box) neural networks might fit data beautifully, but they’re useless in SPICE and provide zero physical insight. I needed an end-to-end bridge that turns physics-informed learning into simulator-deployable compact models.
This only became practical recently, for three reasons. Physics-informed neural networks (PINNs) reached broad maturity around 2019 (I implemented them for the first time while working for the German Aerospace Center). The Ψ-NN structure-extraction method, published in late October 2025, provided the key missing ingredient for automatic structure discovery. And GPU acceleration (an RTX 4090, in my case) makes the clustering tractable for realistic device sizes. In fact, my scalability experiments confirmed that training time scales linearly (O(n)) with network size, maintaining over 96.5% compression efficiency even when scaling up to 500 neurons.
The pipeline has three core components that I carefully integrated: (1) PINNs, which embed conservation laws directly into the training process; (2) knowledge distillation with relation-aware regularization, which compresses networks by up to 99.8%; and (3) direct Verilog-A translation, which maps discovered structures into circuit-ready behavioral equations.
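To make component (1) concrete, here is a minimal sketch of a physics-informed loss: an ordinary data-fit term plus a penalty for violating a governing equation, here a toy state law dx/dt = v·(1 − x) standing in for a real conservation constraint. The function names, the toy law, and the weighting factor `lam` are all illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def physics_informed_loss(pred_i, true_i, x, v, dt, lam=1.0):
    """Data MSE plus a finite-difference residual of the toy state law
    dx/dt = v * (1 - x), standing in for a real physics constraint."""
    data_loss = np.mean((pred_i - true_i) ** 2)
    dxdt = np.diff(x) / dt                      # finite-difference derivative
    residual = dxdt - v[:-1] * (1.0 - x[:-1])   # violation of the toy law
    physics_loss = np.mean(residual ** 2)
    return data_loss + lam * physics_loss

# A trajectory that obeys the toy law exactly incurs ~zero physics penalty.
dt = 1e-3
t = np.arange(0.0, 1.0, dt)
v = np.ones_like(t)
x = 1.0 - np.exp(-t)                            # solves dx/dt = 1 - x for v = 1
loss = physics_informed_loss(np.zeros_like(t), np.zeros_like(t), x, v, dt)
```

During training, the physics term steers the network toward trajectories consistent with the governing law even where data is sparse, which is what makes the later structure extraction meaningful.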
The memristor case study is the aspect I am most satisfied with. I generated synthetic data from a voltage-controlled resistive switching model to validate the proof of concept. The physics-informed teacher PINN achieves an MAE of 1.09×10^-4 A, improving over the industry-standard VTEAM baseline (1.53×10^-4 A) by 28.7%. The compressed Ψ-HDL student trades some accuracy for compactness and interpretability: it reduces from 3,360 parameters to just 13 cluster centers (96 tied parameters in the reconstructed network), achieving a test MAE of 3.65×10^-4 A. This reflects an explicit design choice: Ψ-HDL prioritizes physics constraints, interpretability, and simulator-deployable compact representations rather than maximizing interpolation accuracy on clean synthetic data. Figure 10 in the paper shows the teacher PINN closely matching the three I-V hysteresis loops; the compressed model remains useful while prioritizing compactness. The structure-extracted network converged 2.1 times faster than a standard PINN. When I look at the clustered parameters, I can actually see the four voltage-state coupling modes, three transport mechanisms, and separate current/state pathways that make up the device physics. To give a rough sense of implementation cost, I also derived an order-of-magnitude hardware estimate for this model in 65nm CMOS: roughly 1,850 μm² and 12 mW (back-of-the-envelope, not post-layout).
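The parameter-tying idea behind the 3,360-parameter-to-13-center compression can be illustrated with a toy version: cluster a trained network's weights and replace each weight by its cluster center, so the reconstructed network carries only a handful of tied values. This sketch uses plain 1-D k-means on synthetic weights; the paper's relation-aware procedure is considerably richer.

```python
import numpy as np

def cluster_weights(weights, k, iters=50):
    """1-D k-means with deterministic linspace initialization:
    returns the k cluster centers and each weight's cluster label."""
    centers = np.linspace(weights.min(), weights.max(), k)
    labels = np.zeros(len(weights), dtype=int)
    for _ in range(iters):
        labels = np.argmin(np.abs(weights[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):                 # guard empty clusters
                centers[j] = weights[labels == j].mean()
    return centers, labels

rng = np.random.default_rng(1)
# Synthetic "trained" weights drawn around three latent shared values.
w = np.concatenate([rng.normal(m, 0.01, 200) for m in (-0.5, 0.0, 0.8)])
centers, labels = cluster_weights(w, k=3)
tied = centers[labels]                   # reconstructed, weight-tied network
compression = 1 - len(centers) / len(w)  # fraction of free parameters removed
```

With 600 weights collapsing to 3 centers, `compression` is 99.5%; the point is that the tied values, not the raw weights, become the interpretable objects.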
I validated the framework across three other diverse problems. For the Burgers equation, Ψ-HDL automatically discovered odd-function symmetry, reducing 4,920 parameters to 12 while achieving 2.3× better accuracy than standard PINNs. The Laplace equation showed even-function properties with 99.8% compression and a 54× accuracy improvement. Even the SNN XOR circuit I tested (introduced in a research paper I published last year) demonstrated how minimal topologies emerge naturally, mapping to a tiny 6-memristor crossbar that fits within just 120 μm² with a power consumption of 200 μW.
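The kind of symmetry check behind the Burgers result can be sketched numerically: test how badly a learned 1-D mapping violates f(−x) = −f(x) over a grid of probe points. The two functions below are stand-ins for trained networks, not anything from the paper.

```python
import numpy as np

def odd_symmetry_error(f, xs):
    """Mean absolute violation of the odd-function property f(-x) = -f(x)."""
    return np.mean(np.abs(f(-xs) + f(xs)))

xs = np.linspace(-1.0, 1.0, 201)
f_odd = lambda x: np.tanh(3 * x)                   # exactly odd
f_general = lambda x: np.tanh(3 * x) + 0.1 * x**2  # has an even component
err_odd = odd_symmetry_error(f_odd, xs)
err_general = odd_symmetry_error(f_general, xs)
```

A near-zero error licenses structural simplifications (e.g. dropping even-order terms), which is where the large parameter reductions come from.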
What makes this immediately practical is the direct HDL translation. I focused on Verilog-A because it’s the industry standard for analog behavioral modeling, and the generated code is clean, human-readable, and compiles without modification. Across the case studies, the generated models are typically on the order of 45–145 lines of code. The framework is designed to be extensible, so adding VHDL-AMS or SystemVerilog-AMS backends is straightforward if the community needs them.
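To give a flavor of the translation step, here is a hypothetical emitter that renders tied cluster parameters into a minimal Verilog-A behavioral module as a string. The template, module name, and polynomial form are illustrative assumptions on my part, not the paper's actual generated code, which maps the discovered structure rather than a generic polynomial.

```python
def emit_veriloga(name, params):
    """Render a minimal Verilog-A module whose branch current is a
    polynomial in the applied voltage with the given tied coefficients."""
    decls = "\n".join(
        f"    parameter real c{i} = {v:.6e};" for i, v in enumerate(params)
    )
    poly = " + ".join(f"c{i}*pow(V(p,n),{i})" for i in range(len(params)))
    return (
        "`include \"disciplines.vams\"\n"
        f"module {name}(p, n);\n"
        "    inout p, n;\n"
        "    electrical p, n;\n"
        f"{decls}\n"
        "    analog begin\n"
        f"        I(p,n) <+ {poly};\n"
        "    end\n"
        "endmodule\n"
    )

code = emit_veriloga("psi_hdl_device", [0.0, 1.2e-3, 3.4e-5])
```

Because only the cluster centers survive compression, the emitted module stays short and human-auditable, which is exactly the 45–145-line regime reported above.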
I also verified the stability of the clustering algorithm by training with five different random seeds, achieving a coefficient of variation of 13.7% for MAE and 0.3% for compression, confirming that the structure discovery is robust and not an artifact of initialization.
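The stability metric itself is simple: the coefficient of variation (CV) is the standard deviation of a metric across seeds divided by its mean. The five MAE values below are made up for illustration; only the formula matches the analysis.

```python
import numpy as np

def coefficient_of_variation(values):
    """Sample standard deviation divided by the mean, as a fraction."""
    values = np.asarray(values, dtype=float)
    return values.std(ddof=1) / values.mean()

mae_per_seed = [3.1e-4, 3.6e-4, 3.9e-4, 3.4e-4, 4.2e-4]  # hypothetical
cv = coefficient_of_variation(mae_per_seed)
```

A low CV on the compression ratio in particular indicates that the discovered structure, not just the fit quality, is reproducible across initializations.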
To demonstrate readiness for experimental deployment, I conducted extensive noise robustness analysis. Even at a 6.5 dB signal-to-noise ratio, performance only degraded by 16%. The physics-informed constraints serve as powerful regularizers, preventing overfitting and keeping predictions within physically plausible bounds. Compare that to pure data-driven models that fail catastrophically when pushed beyond their training domain. To quantify this, I ran an ablation study removing the physics constraints (λ=0); the unconstrained model produced over 450 state violations, predicting physically impossible states, and suffered a 70-fold increase in extrapolation error.
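The noise-robustness setup can be reproduced in miniature: corrupt a clean signal with white Gaussian noise scaled to hit a target signal-to-noise ratio (6.5 dB was the harshest condition above). The sine-wave signal is a stand-in for real device traces.

```python
import numpy as np

def add_noise_at_snr(signal, snr_db, rng):
    """Add white Gaussian noise scaled so the result has the given SNR in dB."""
    signal_power = np.mean(signal ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    return signal + rng.normal(0.0, np.sqrt(noise_power), signal.shape)

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 2 * np.pi, 10_000))
noisy = add_noise_at_snr(clean, snr_db=6.5, rng=rng)

# Empirical SNR of the corrupted signal, to check the construction.
snr_est = 10 * np.log10(np.mean(clean**2) / np.mean((noisy - clean)**2))
```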
The implications stretch far beyond memristors. I deliberately designed the framework to adapt automatically to different device physics without requiring architectural changes. I demonstrated this by training separate models on oxide-based (13 clusters), phase-change (8 clusters), and organic polymer (11 clusters) memristors. Each discovered a different structure that reflects its underlying switching mechanism.
My cross-validation analysis supported this distinction: the model exhibited higher error specifically on the device's initial "forming cycle" compared to steady-state cycles, demonstrating that it successfully identified the distinct physical regime of the forming process rather than just memorizing data points.
I’ve been thinking about what this means for my own future work and the broader hardware design community.
This isn’t about replacing expert modelers; it’s about amplifying their productivity. Instead of spending weeks or months deriving equations, you focus on designing physics-informed loss terms and collecting quality data. The Ψ-HDL framework handles the tedious task of translating into circuit-ready code.
In my paper, I outline several extensions I’m particularly excited about, including planned experimental validation with printed Ag/PMMA:PVA/ITO devices. Transfer learning across device families could accelerate the modeling of new material systems. Uncertainty quantification would help with yield-aware design. Symbolic regression post-processing may discover closed-form analytical equations from the clustered parameters.
Right now, Ψ-HDL excels for devices where quasi-static operation is a valid assumption; high-frequency dynamics and explicit temperature coupling are the next items on my 2026 roadmap.