
Superconnectors: A Latency Insensitive Approach to SFQ Design

Publication type: Proceedings Article
Publication date: 2024-12-16
Abstract
Superconducting logic offers the potential for extremely high-speed, low-power computation, owing to its gate-level clocking and lack of resistive losses. This results in large, statically scheduled pipelines with clock speeds up to 50 GHz, offering orders of magnitude better throughput than modern digital systems. To fully utilize these pipelines, data must be very carefully orchestrated both outside the system and within the pipeline itself. However, memory systems, data-dependent operations, and IO introduce timing uncertainty that can significantly degrade throughput and utilization. In the digital domain a rich set of latency insensitive design (LID) principles exists for this problem, but the tight combinational feedback inherent to their operation introduces a new set of challenges when integrated with single flux quanta (SFQ)-based designs. We investigate these challenges by examining two classical methods for LID, the ready/valid protocol and LID-1ss. We show how a naive, direct implementation of these protocols removes many of the benefits of superconducting logic. We then explore how LID-1ss can be optimized for SFQ, resulting in better throughput and a simpler design. However, this optimized version still significantly reduces the maximum potential throughput, motivating us to propose Superconnectors: a novel SFQ-specific architecture for LID. Superconnectors leverage passive transmission line buffers, asynchronous race logic control signals, and batched transactions for a hardware design that has minimal impact on the underlying logic and throughput. We then demonstrate Superconnectors with a merge operation on an array of multipliers, introducing 44% less pipeline stall than the optimized LID-1ss approach with no impact on the achievable clock speed of the underlying module.
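
For readers unfamiliar with the ready/valid protocol referenced in the abstract, the following is a minimal, cycle-level Python sketch of an elastic pipeline with ready/valid handshaking, showing how a downstream stall back-propagates through the combinational ready chain and pauses injection at the source. This is a generic illustration under assumed semantics (single-slot stages, no skid buffers), not the paper's SFQ implementation; all names and parameters are hypothetical.

```python
def simulate(num_cycles=12, stall_cycles=frozenset(range(4, 8))):
    """Cycle-level model of a 3-stage ready/valid (elastic) pipeline.

    Each stage is a single-slot buffer. The sink deasserts 'ready' during
    stall_cycles, and the resulting backpressure propagates upstream within
    the same cycle, mirroring the combinational feedback the abstract
    identifies as problematic for SFQ logic.
    """
    stages = [None, None, None]   # one token slot per pipeline stage
    source = iter(range(100))     # stream of input tokens waiting to enter
    delivered = []

    for cycle in range(num_cycles):
        sink_ready = cycle not in stall_cycles

        # Sink side first: drain the last stage only if the sink is ready.
        if sink_ready and stages[-1] is not None:
            delivered.append(stages[-1])
            stages[-1] = None

        # Ready propagates combinationally from sink to source: a stage
        # advances only if the next slot is free *this* cycle, so a sink
        # stall freezes the whole chain once the slots fill up.
        for i in range(len(stages) - 2, -1, -1):
            if stages[i] is not None and stages[i + 1] is None:
                stages[i + 1] = stages[i]
                stages[i] = None

        # Source injects a new token only when the first stage is free.
        if stages[0] is None:
            stages[0] = next(source)

        print(f"cycle {cycle:2d}  sink_ready={int(sink_ready)}  "
              f"stages={stages}  delivered={delivered}")


if __name__ == "__main__":
    simulate()
```

Running the sketch shows one token delivered per cycle while the sink is ready, then zero deliveries and a halted source during the stall window, which is the throughput degradation that motivates the SFQ-specific alternatives examined in the paper.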