
ISSCC 2019 Preview: Moore's Law may be slowing down, but innovation isn't

With or without new leading-edge microprocessors, the demand for faster compute, bigger storage and speedier networks continues to grow. As this year’s conference will show, the industry keeps coming up with new ways to deliver.
Written by John Morris, Contributor

The International Solid-State Circuits Conference, aka ISSCC, which takes place next week in San Francisco, will be notable in part for what won't be there. For the first time in recent memory, the annual semiconductor conference--now in its 66th year--will not include any new general-purpose processors, a sign of how the chip industry is changing. Still, there will be no shortage of innovation, with bleeding-edge hardware in areas such as AI, 5G wireless, automotive and healthcare.

The lack of any major microprocessor presentations seems to reflect two trends. First, Moore's Law scaling is slowing down as transistors approach fundamental limits. Intel's 10nm Ice Lake won't arrive until the holidays, the foundries are still ramping 7nm chips (with comparable dimensions), and 5nm is still years out. Second, recent gains have come less from general-purpose CPUs and more from specialized accelerators such as GPUs, FPGAs and custom chips known as ASICs. This year's processor session has no shortage of these specialized chips for automotive, robotics, cryptography, graph processing and optimization problems. IBM will also give a talk on Summit at Oak Ridge National Laboratory and Sierra at Lawrence Livermore National Laboratory, which use a combination of Power9 CPUs and Nvidia Tesla V100 GPUs to claim the top spots on the current Top500 list of the world's fastest computers.

Facebook's Yann LeCun will open the conference with a talk on the challenges of continuing to make progress in AI. Most of the progress in deep learning since the ImageNet contest in 2012 has come from supervised learning, which requires lots of data labeled by humans, or from reinforcement learning, which requires too many trials to be practical for many applications. The main challenge for the next decade, LeCun will argue, will be to build machines that can learn more the way humans do. This "self-supervised learning" will require much more powerful hardware than we have today, but it could someday result in machines with some level of common sense.

The increase in AI and machine learning workloads has led to mobile SoCs with neural processing units, such as Apple's A12 Bionic and Huawei's HiSilicon Kirin 980, for smartphones and other edge devices. In a separate session on machine learning, Samsung will unveil a dual-core neural processor with 1,024 multiply-accumulate (MAC) units, built on its 8nm process and capable of 6.94 trillion operations per second at 0.8 volts. Samsung says the architecture delivers a 10x speed-up over the previous state of the art, a claim that is tough to verify without more details on the data formats and algorithms. What is clear is that the performance of neural processors has been growing rapidly, as the chart below comparing progress in machine learning chips since last year's conference illustrates. This year's talks will also include designs that can handle different types of artificial neural networks (including neuromorphic chips for spiking neural nets) and multiple-bit precisions that trade off accuracy and throughput.
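
To put the MAC count and the TOPS figure in context, here is a minimal back-of-envelope sketch. The ~1.7GHz clock and the two-cores-of-1,024-MACs split are assumptions for illustration only (the ISSCC abstract quoted above gives just the MAC count and the 6.94 TOPS figure); counting a MAC as two operations, one multiply plus one add, is the usual convention.

```python
# Back-of-envelope: how MAC count and clock speed translate into peak TOPS.
# Clock frequency and per-core MAC split below are assumptions, not figures
# from the ISSCC paper.

def peak_tops(mac_units: int, clock_ghz: float, ops_per_mac: int = 2) -> float:
    """Peak throughput in trillions of operations per second."""
    return mac_units * ops_per_mac * clock_ghz * 1e9 / 1e12

def dot(a, b):
    """The workload a MAC array accelerates: multiply pairs, accumulate the sum."""
    acc = 0.0
    for x, y in zip(a, b):
        acc += x * y   # one multiply-accumulate per element pair
    return acc

if __name__ == "__main__":
    print(f"{peak_tops(2 * 1024, 1.7):.2f} TOPS")  # ~6.96 TOPS, near the quoted 6.94
    print(dot([1, 2, 3], [4, 5, 6]))               # 32.0
```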


The performance and efficiency of neural network processors is increasing at a rapid rate.

Source: ISSCC 2019

One of the key challenges for AI hardware is keeping these highly-parallel processing engines busy, because systems can't read data from memory and write the results back fast enough. This year, ISSCC will include an afternoon forum devoted to memory-centric architectures for AI and machine learning applications, with talks by ARM, IBM, Intel, Nvidia and Samsung, among others. At the high end, faster memories such as High Bandwidth Memory (HBM) and GDDR6 are helping to address this, and emerging storage-class memories such as STT-MRAM could fill the void between DRAM system memory and flash solid-state storage.
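
To see why memory, not arithmetic, is usually the bottleneck, the sketch below compares the time a hypothetical ~7 TOPS engine needs for a matrix-vector product against the time needed just to stream the weights from DRAM. Every number here is an assumed round figure for illustration, not a value from any ISSCC paper.

```python
# Why a fast MAC array can sit idle: streaming weights from DRAM takes far
# longer than doing the math. All numbers are assumed round figures.

WEIGHT_BYTES   = 25e6       # 25M parameters at 1 byte each (INT8)
MACS           = 25e6       # one MAC per weight for a matrix-vector product
PEAK_OPS       = 7e12       # ~7 TOPS compute engine
DRAM_BANDWIDTH = 34e9       # ~34 GB/s, roughly an LPDDR4X-class interface

compute_time = (2 * MACS) / PEAK_OPS           # a MAC counts as 2 ops
memory_time  = WEIGHT_BYTES / DRAM_BANDWIDTH

print(f"compute: {compute_time * 1e6:.1f} us, memory: {memory_time * 1e6:.1f} us")
# compute: ~7 us, memory: ~735 us -- the engine starves unless data is reused
# on-chip, the memory gets much faster (HBM, GDDR6), or the math moves into
# the memory itself (see compute in-memory, below).
```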

A more novel solution, and the focus of much current research, cuts out the data transfer altogether and instead crunches the numbers inside the memory array. The compute-in-memory (CIM) candidates at this year's conference include ReRAM and SRAM macros. These designs are especially promising for machine learning in edge devices because they have very low latency and are highly efficient in terms of operations per second per watt.
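
The idea can be sketched in a few lines: if weights are stored as cell conductances and inputs are applied as word-line voltages, each bit line naturally sums a current proportional to the products, so reading the array performs a matrix-vector multiply in place. The toy model below is purely conceptual and does not correspond to any specific ReRAM or SRAM macro being presented.

```python
# Conceptual sketch of analog compute in-memory (not any specific ISSCC design):
# weights live in the array as conductances, the input vector is applied as
# word-line voltages, and each bit line sums current = G * V (Ohm's law plus
# Kirchhoff's current law), i.e. one MAC result per column.

def cim_matvec(conductances, voltages):
    """conductances: rows x cols weight array; voltages: one per row."""
    n_cols = len(conductances[0])
    bitline_currents = [0.0] * n_cols
    for row, v in zip(conductances, voltages):
        for col, g in enumerate(row):
            bitline_currents[col] += g * v   # currents sum on the shared bit line
    return bitline_currents                  # one analog dot product per column

weights = [[0.2, 0.5],
           [0.1, 0.4],
           [0.3, 0.0]]
inputs = [1.0, 0.5, 2.0]
print(cim_matvec(weights, inputs))  # [0.85, 0.7]
```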


The conference will also include lots of news on more conventional embedded and standalone memory devices. Both Samsung and TSMC will present 7nm dual-port SRAM bitcells for high-performance applications (dual-port RAM allows multiple reads and writes to take place simultaneously to boost performance). ISSCC will also feature some of the first presentations on the next generation of DRAM, designed to increase bandwidth and reduce power. Samsung will describe its first-generation 10nm-class LPDDR5 (low-power DDR5) device for smartphones and other mobile applications, which is not only faster (7.4Gbps per pin) but also cuts read and write power by 21 percent and 33 percent, respectively, compared with the current LPDDR4X. Rival SK Hynix will present a 16Gb DDR5 chip that operates at 6.4Gbps per pin and cuts power by nearly a third.
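
Per-pin data rates translate into usable bandwidth only once they are multiplied by the width of the interface. The quick sketch below uses typical package and channel widths, which are assumptions for illustration rather than figures from the presentations.

```python
# Translating "Gbps per pin" into interface bandwidth. The bus widths below
# are common configurations, assumed here for illustration.

def bandwidth_gb_per_s(gbps_per_pin: float, bus_width_bits: int) -> float:
    return gbps_per_pin * bus_width_bits / 8

print(f"LPDDR5  x32 package: {bandwidth_gb_per_s(7.4, 32):.1f} GB/s")    # ~29.6 GB/s
print(f"LPDDR4X x32 package: {bandwidth_gb_per_s(4.266, 32):.1f} GB/s")  # ~17.1 GB/s
print(f"DDR5    x64 channel: {bandwidth_gb_per_s(6.4, 64):.1f} GB/s")    # ~51.2 GB/s
```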

SK Hynix will also have an interesting talk on a managed DRAM package that combines eight chips with a controller to reach capacities of 512GB per module. The first 256GB modules based on 16Gb DDR4 chips (up to four per package) are just now hitting the market, pushing the capacity of mainstream Xeon Scalable "Cascade Lake" two-socket servers to 6TB.
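
The capacity figures follow from simple arithmetic, sketched below; the 24-slot server configuration is a common two-socket setup assumed for illustration.

```python
# Capacity arithmetic behind the DRAM module figures (slot count assumed).

die_gb          = 16 / 8               # a 16Gb die holds 2GB
dies_per_module = 256 / die_gb         # 128 dies on a 256GB module
packages        = dies_per_module / 4  # 32 packages at up to 4 dies each

print(dies_per_module, packages)       # 128.0 32.0
print(24 * 256 / 1024, "TB")           # 24 DIMM slots x 256GB = 6.0 TB
```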

On the storage side, the introduction of 3D NAND flash memory and three-bits-per-cell (TLC) and four-bits-per-cell (QLC) programming is pushing density to new heights. Western Digital (SanDisk) will announce the industry's tallest 3D memory stack, with 128 layers and the peripheral circuitry placed under the array, resulting in a 512Gb TLC chip. Samsung will also present its latest 512Gb TLC chip, while Toshiba's 96-layer device employs QLC to push density to 1.33Tb per chip, or more than 1GB per square millimeter.
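
Die capacity here is essentially the product of layer count, bits per cell and the number of cells that fit in each layer, as the rough sketch below shows. The per-layer cell counts are assumed round numbers chosen only so the arithmetic lands near the quoted capacities; they are not from the ISSCC papers.

```python
# Density as the product of layers, bits per cell and cells per layer.
# Cells-per-layer values below are assumed, not figures from the papers.

def die_capacity_gbit(layers: int, bits_per_cell: int, gcells_per_layer: float) -> float:
    return layers * bits_per_cell * gcells_per_layer

print(die_capacity_gbit(96, 4, 3.46), "Gbit")   # ~1329 Gbit ~= 1.33 Tbit (96-layer QLC)
print(die_capacity_gbit(128, 3, 1.33), "Gbit")  # ~511 Gbit  ~= 512 Gb   (128-layer TLC)

# Areal density sanity check: 1.33Tb is about 166GB, so "more than 1GB per
# square millimeter" implies a die smaller than roughly 166 mm^2.
print(1.33e12 / 8 / 1e9, "GB")                  # ~166 GB
```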


With the rapid progress in 3D stacked NAND, flash memory continues to outrun all of the emerging competitors.

Source: ISSCC 2019

High-performance computing, massive cloud data centers and faster 4G and 5G networks are driving demand for faster networks at all levels. This year's conference will include several announcements of state-of-the-art wireline transceivers that use PAM-4 modulation to reach speeds at or above 100Gbps, including three 7nm chips (from eSilicon, Huawei and MediaTek) and an IBM 14nm FinFET design that reaches a record 128Gbps. These will help meet demand for faster links within and between data centers.
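
PAM-4 (four-level pulse amplitude modulation) reaches those speeds by carrying two bits per symbol instead of the single bit of conventional NRZ signaling, doubling the bit rate at the same symbol rate. Below is a minimal, purely illustrative encoder sketch; real transceivers add equalization, clock recovery and forward error correction on top of this mapping.

```python
# PAM-4: two bits per symbol, four amplitude levels (Gray-coded so adjacent
# levels differ by one bit). Illustrative only; levels are arbitrary units.

LEVELS = {0b00: -3, 0b01: -1, 0b11: +1, 0b10: +3}

def pam4_encode(bits):
    """bits: sequence of 0/1 values, length must be even."""
    bits = list(bits)
    assert len(bits) % 2 == 0
    symbols = []
    for i in range(0, len(bits), 2):
        pair = (bits[i] << 1) | bits[i + 1]   # pack two bits into one symbol
        symbols.append(LEVELS[pair])
    return symbols

print(pam4_encode([1, 0, 0, 1, 1, 1]))   # [3, -1, 1]

# Throughput: a 64 Gbaud PAM-4 link carries 64e9 symbols/s x 2 bits/symbol,
# which is how a single transceiver lane reaches the 100+ Gbps range.
print(64e9 * 2 / 1e9, "Gbps")            # 128.0
```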

For wireless wide-area networks, Qualcomm currently has the edge with its X50 modem--which will show up in many of the first 5G phones at Mobile World Congress later this month--but several others are close behind. At ISSCC, Samsung will present a 14nm baseband that supports standalone and non-standalone 5G (as well as 2G, 3G and 4G) and delivers up to 3.15Gbps down and 1.27Gbps up on a die measuring 38.4 square millimeters. It is part of the Exynos Modem 5100 chipset (which also includes power management chips) that Samsung will be demonstrating at ISSCC. Intel will present a 28nm 5G transceiver for sub-6GHz and mmWave bands. It is part of the XMM 8160, a 5G chipset that Intel announced in late 2018. The XMM 8160 supports standalone and non-standalone 5G modes (as well as 2G, 3G and 4G) and will be capable of speeds up to 6Gbps. It ships in the second half of 2019 and replaces Intel's first 5G chipset, the XMM 8060, which Intel said "is becoming a development platform."


High-performance computing, massive cloud data centers and speedy 4G and 5G networks drive demand for faster networks at all levels. 

Source: ISSCC 2019

It may be taking a lot longer for Intel and others to deliver the next big "tick" in microprocessors, but the industry still has plenty of "tocks" up its sleeves. The demand for faster compute, storage and communications has not slowed, and as this year's ISSCC will illustrate, chipmakers continue to find innovative ways to respond.
