Chip Magic

Sometimes, it just takes a challenge.

After years of predictable and, arguably, modest advances, we’re beginning to witness an explosion of exciting and important new developments in the sometimes obscure world of semiconductors, commonly known as chips.

Thanks both to a range of demanding new applications, such as Artificial Intelligence (AI), Natural Language Processing (NLP) and more, and to a perceived threat to Moore’s Law (which has “guided” the semiconductor industry for over 50 years to a state of staggering capability and complexity), we’re starting to see an impressive range of new output from today’s silicon designers.

Entirely new chip designs, architectures and capabilities are coming from a wide array of key component players across the tech industry, including Intel, AMD, nVidia, Qualcomm, Micron and ARM, as well as internal efforts from companies like Apple, Samsung, Huawei, Google and Microsoft.

It’s a digital revival that many thought would never come. In fact, just a few years ago, there were many who were predicting the death, or at least serious weakening, of most major semiconductor players. Growth in many major hardware markets had started to slow, and there was a sense that improvements in semiconductor performance were reaching a point of diminishing returns, particularly in CPUs (central processing units), the most well-known type of chip.

The problem is, most people didn’t realize that hardware architectures were evolving and that many other components could take on tasks that were previously limited to CPUs. In addition, the overall system design of devices was being re-evaluated, with a particular focus on how to address bottlenecks between different components.

Today, the result is an entirely fresh perspective on how to design products and tackle challenging new applications through multi-part hybrid designs. These designs leverage a variety of different types of semiconductor computing elements, including CPUs, GPUs (graphics processing units), FPGAs (field programmable gate arrays), DSPs (digital signal processors) and other specialized “accelerators” that are optimized to do specific tasks well. Not only are these new combinations proving to be powerful, but we’re also starting to see important improvements within the elements themselves.

For example, even in the traditional CPU world, AMD’s new Ryzen line underwent significant architectural design changes, resulting in large speed improvements over the company’s previous chips. In fact, they’re now back in direct performance competition with Intel—a position AMD has not been in for over a decade. AMD started with the enthusiast-focused R7 line of desktop chips, but just announced the sub-$300 R5, which will be available for use in mainstream desktop and all-in-one PCs starting in April.

nVidia has done a very impressive job of showing how much more than graphics its GPUs can do. From deep neural network work in data centers to autonomous driving in cars, the unique ability of GPUs to perform enormous numbers of relatively simple calculations simultaneously is making them essential to a number of important new applications. One of nVidia’s latest developments is the Jetson TX2 board, which leverages one of the company’s GPU cores but is focused on doing data analysis and AI in embedded devices, such as robots, medical equipment, drones and more.
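
As a rough, hypothetical sketch of that idea (assuming Python with NumPy and CuPy on an nVidia GPU; none of this comes from nVidia’s own tooling), the same simple per-element calculation can run serially on a CPU or be spread across thousands of GPU cores at once:

    # Hypothetical illustration: identical elementwise math on 10 million values,
    # once on the CPU with NumPy and once on the GPU with CuPy.
    import numpy as np
    import cupy as cp  # assumes an nVidia GPU and the cupy package

    x = np.random.rand(10_000_000).astype(np.float32)

    # CPU: a few cores step through the array.
    y_cpu = np.sqrt(x) * 2.0 + 1.0

    # GPU: one kernel launch applies the same simple calculation to
    # millions of elements in parallel across thousands of cores.
    x_gpu = cp.asarray(x)
    y_gpu = cp.sqrt(x_gpu) * 2.0 + 1.0

    # Same result either way; only the degree of parallelism differs.
    assert np.allclose(y_cpu, cp.asnumpy(y_gpu), atol=1e-5)

Deep learning training and inference are built almost entirely out of operations like this, which is why GPUs have become so central to AI workloads.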

Not to be outdone, Intel, in conjunction with Micron, has developed an entirely new memory/storage technology called 3D XPoint that works like a combination of DRAM (the working memory in devices) and flash storage, such as SSDs. Intel’s commercialized version of the technology, which took over 10 years to develop, is called Optane and will appear first in storage devices for data centers. What’s unique about Optane is that it addresses a performance bottleneck between memory and storage found in nearly all computing devices, allowing performance advances for certain applications that go well beyond what a faster CPU could deliver.

Qualcomm has proven to be very adept at combining multiple elements, including CPUs, GPUs, DSPs and modems, into sophisticated SoCs (systems on a chip), such as the new Snapdragon 835. While most of its work to date has been focused on smartphones, the capabilities of its multi-element designs make them well-suited for many other devices, including autonomous cars, as well as some of the most demanding new applications, such as AI.

The in-house efforts of Apple, Samsung and Huawei (and, to some degree, Microsoft and Google) are also focused on these SoC designs. Each hopes to turn the unique characteristics it builds into its chips into distinct features and functions that can be incorporated into future devices.

Finally, the company that’s enabling many of these capabilities is ARM, the UK-based chip design house whose chip architectures (sold in the form of intellectual property, or IP) are at the heart of many (though not all) of the previously listed companies’ offerings. In fact, ARM just announced that over 100 billion chips based on its designs have shipped since the company started 21 years ago, with half of those coming in the last 4 years. The company’s latest advance is a new architecture called DynamIQ that, for the first time, allows multiple different types and sizes of computing elements, or cores, to be combined inside one of its Cortex-A chip designs. The real-world results include up to a 50x boost in AI performance and a wide range of multifunction chip designs that can be architected to suit specific applications: in other words, the right kind of chips for the right kind of devices.

The net result of all these developments is an extremely vibrant semiconductor market with a much brighter future than was commonly expected just a few years ago. Even better, this new range of chips portends an intriguing new array of devices and services that can take advantage of these key advancements in what will be exciting and unexpected ways. It’s bound to be magical.

Published by Bob O’Donnell

Bob O’Donnell is the president and chief analyst of TECHnalysis Research, LLC, a technology consulting and market research firm that provides strategic consulting and market research services to the technology industry and professional financial community. You can follow him on Twitter @bobodtech.

10 thoughts on “Chip Magic”

  1. “Growth in many major hardware markets had started to slow, and there was a sense that improvements in semiconductor performance were reaching a point of diminishing returns, particularly in CPUs”

    I would say it is because of that slowdown in brute force gains that we are seeing so much innovation today. Moore’s law is on its last legs, as the next processor node will require ultra-short-wavelength processes that have been in development for, what, almost a decade now, with still no assurance that they will be ready in time for the next scheduled node shrink. Plus, the gains from process shrinks have become harder and harder to obtain as electron leakage and quantum effects threaten to erode them.

    So the industry has turned away from its 40-year path of “shrink the process node and reap the benefits” and is looking for other ways to gain performance.

  2. One way to increase chip density without shrinking the geometry is to move to 3D chip stacks. Samsung this last year moved from 32-layer 3D NAND to 48-layer 3D NAND. 3D creates another problem, though, since a copper wire needs a big enough cross-section to shed heat fast enough. I think a race to 0 W will be a new frontier in chip geometry scaling and an important benchmark for chip performance.

    https://www.extremetech.com/extreme/222590-an-end-to-scaling-intels-next-generation-chips-will-sacrifice-speed-to-reduce-power

    1. That takedown seems barely cogent though. For example “At IDF the performance claims went from “1000X the endurance of NAND” to “Endurance 3X the drive writes per day”, a 333x performance drop on the most key metric for the technology”. Mmmmm… the two metrics are apples and oranges, where do they get the 333x from?
      Oh, hey, it’s Charlie Demerjian… entertaining if a bit tiring to read, not a paragon of accuracy.

        1. ??? That’s a reading comprehension issue. “NAND” is one metric, “Drive writes” is another; you can’t divide one by the other.

          Basically, bits start dying after lots of writes. You can say “my bits can take 1000x more writes than NAND bits”, and you can say “my 1TB drive will be good for at least 3TB of writes per day over its 5-year rated life”… but you can’t calculate a straight ratio of these 2 measurements.

          Before doing math on stuff, one needs to understand what the “stuff” is….
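
          To put rough, purely hypothetical numbers on it (a 1TB drive rated at 3 drive writes per day over a 5-year warranty; the Python below is just an illustration of the arithmetic):

              # Hypothetical figures only, to show why the two metrics don't divide.
              capacity_tb = 1.0   # assumed drive capacity
              dwpd = 3            # "drive writes per day" warranty-style metric
              years = 5

              lifetime_writes_tb = capacity_tb * dwpd * 365 * years
              print(lifetime_writes_tb)  # ~5,475 TB written over the rated life

              # Per-cell endurance (the "1000x NAND"-style claim) also depends on
              # capacity, over-provisioning and write amplification, so 1000 / 3
              # is not a meaningful ratio of anything.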

  3. Speaking of silicon, it seems Qualcomm isn’t using their own cores, but a straight 4xA73 (like Huawei in their Kirin 960, unlike the 820/821), in their newest 835. And moving the “Snapdragon” branding from the CPU to the SoC/Platform.
    Is that a smart move (radios/video/graphics/pictures and low-level stuff are probably an easier target for perf/power improvement than CPU cores) or an acknowledgement of weakness (Apple keeps their huge single-core lead, the midrange cannibalizes the high-end – I got 2xA72 in my $200 Xiaomi – so R&D spend can’t be justified?).
    How does that tidbit mesh with the rumor about Google doing their own chip; the info about Samsung being barred by licensing from selling their Exynos to OEMs (and maybe soon un-barred)… How is Mediatek faring? Any chance Xiaomi will get to sell their home-grown chips in the West, or are there patent issues?
