Are smartphones still getting faster? Does Moore’s law still apply?

Moore’s Law predicts that the number of transistors on dense integrated circuits will double year on year. The prediction comes from Gordon Moore, who co-founded Intel and who had observed this trend up to that point in 1965.

Often though, this prediction has been extrapolated to apply to all technology. That is to say, it is generally accepted that the power of our technology should double, year on year. Take a look at the smartphone in your pocket. That device is hugely more powerful than a computer from even just a few years ago. To put it in perspective, it is millions of times more powerful than the computers at NASA that sent astronauts roughly 384,000km across space from Earth to the Moon. That’s pretty incredible.

The smartphone in your pocket… is millions of times more powerful than the computers at NASA that sent astronauts roughly 384,000km across space from Earth to the Moon
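As a rough sanity check on that ‘millions of times’ claim, here’s some back-of-the-envelope arithmetic. Both figures are assumptions for illustration: the Apollo Guidance Computer number is a commonly cited approximation, and the smartphone number is a ballpark, not a measurement:

```python
# Back-of-the-envelope check of the "millions of times" claim.
# Both figures are rough assumptions, not measurements:
#   - Apollo Guidance Computer: ~85,000 instructions per second (cited approx.)
#   - a 2017 flagship SoC: on the order of 10^11 simple operations
#     per second across all cores (assumed ballpark)

agc_ips = 85_000      # Apollo Guidance Computer, approximate
phone_ops = 1e11      # modern smartphone SoC, assumed ballpark

ratio = phone_ops / agc_ips
print(f"Roughly {ratio:,.0f}x")  # on the order of a million-fold
```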

We’ve certainly seen a lot of growth in the past several decades then. And that also explains why we can now play console-quality computer games on the bus or film 4K video when we forgot to bring our ‘big camera’.

But is the hardware in our smartphone really still adhering to Moore’s Law?

The Galaxy S8 arguably doesn’t feel twice as powerful as the S7 or even the S6. The way we use our mobiles hasn’t changed dramatically in the last few years. And you can pretty much do everything you could want to now on a two year old flagship with relatively little compromise. So has smartphone technology finally peaked? Was Moore wrong?

Let’s take a closer look.

Specs now and then

Perhaps an obvious place to start is by looking at specs. How do the numbers in the latest phones stack up against their predecessors?

Seeing as I am a Samsung guy and I have a few lying around, let’s take a look at Sammy’s flagships and see what we find. I’ve also included some benchmark scores from Geekbench here, to demonstrate how these specs perform in the ‘real world’.

There is definite gradual improvement here but however you slice it, there’s no getting around the fact that the specs are not doubling and neither is the performance – at least on paper.

This could come down to manufacturers focusing on other features, rather than blindly adhering to Moore’s Law. Phones don’t just have to be faster year-on-year but also more beautifully constructed, more battery efficient, with higher resolution displays etc. CPU performance simply isn’t the sole priority – which could go some way to explaining why we’re not seeing a ‘doubling’ in improvement here.

But there’s more to it than that, of course.

A little about how CPUs work

Looking at the table above, you can see that GHz and performance don’t line up neatly, and looking at GHz alone will certainly give you a skewed picture.

Instructions given to a CPU are generally sequential in nature and will be queued up in a metaphorical ‘pipeline’ for the computer to execute. The clock speed tells you how quickly the CPU is able to fetch and execute each of these instructions, with one cycle per second being represented as 1 Hertz. So a 2 GHz CPU can run through two billion cycles per second. The higher the clock speed then, the more quickly the CPU can work through its to-do list and the faster it will be able to run code.

But as Gary explains in a lot more detail here, it’s actually a fair bit more complicated than that. That’s because there are various tricks that a CPU can use in order to carry out more instructions per cycle or to carry them out more efficiently. For instance, CPUs will begin fetching their next instructions before their current instructions are complete and by breaking their ‘pipelines’ into multiple stages, this can be carried out more efficiently.

Likewise, an execution engine can be split into two separate units capable of running in parallel. This ‘instruction-level parallelism’ (ILP) means that more than one instruction can be carried out simultaneously.

(This isn’t a perfect system however as certain instructions inherently need to be consecutive!)

These efficiency tricks are often described as making the pipeline ‘wider’ and ‘longer’, and both can increase the number of instructions carried out per cycle. There are limitations, as noted above, but this is another way to squeeze more performance out of a chip.
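To see why a deeper pipeline helps, here’s a toy cycle-count model in Python. It’s an idealized sketch that ignores stalls, hazards and branch mispredictions entirely:

```python
def cycles_unpipelined(n_instructions, n_stages):
    # Each instruction occupies the whole CPU until it finishes
    # every stage, so the next one can't start in the meantime.
    return n_instructions * n_stages

def cycles_pipelined(n_instructions, n_stages):
    # Classic ideal-pipeline formula: fill the pipeline once
    # (n_stages cycles), then retire one instruction per cycle.
    return n_stages + (n_instructions - 1)

print(cycles_unpipelined(1000, 5))  # 5000 cycles
print(cycles_pipelined(1000, 5))    # 1004 cycles
```

With 1,000 instructions and a 5-stage pipeline, the pipelined version finishes in roughly a fifth of the cycles, which is where much of the ‘free’ performance of the last few decades came from.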

This means that in many cases, a CPU with a slower clock speed can still keep up with a faster one; it is going through fewer revolutions but it is doing more work on every go-around.
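That trade-off is easy to put into numbers. The sketch below uses made-up clock speeds and IPC (instructions per cycle) figures, purely for illustration:

```python
def effective_throughput(clock_ghz, ipc):
    # Instructions per second = cycles per second * instructions per cycle.
    return clock_ghz * 1e9 * ipc

# Hypothetical chips, not real parts:
fast_clock = effective_throughput(2.4, 1.0)  # faster clock, narrower core
slow_clock = effective_throughput(1.8, 1.5)  # slower clock, wider core

print(slow_clock > fast_clock)  # True: the 1.8 GHz chip does more work
```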

A CPU with a slower clock speed can still keep up with a faster one

That’s before we’ve even discussed the benefits (and minor drawbacks) of having multiple cores to juggle tasks, of being more efficient (to save energy, deal with heat and prevent throttling), or the cache (which stores useful information close-to-hand). And we haven’t looked into what the GPU does (handle certain kinds of tasks that are specifically useful for rendering graphics) or how bits and RAM factor in (holding information in memory).

The point is that the overall performance of your device is the result of many smaller elements all working in unison. The CPU is just one small part of the SoC, which is just one small part of the device as a whole.

Nm in chip manufacturing

But remember, what our boy Moore was actually talking about was the number of transistors on a chip.

The more transistors on a CPU, the more it can do. Transistors are tiny switches that can be arranged to create logic gates, and that is what provides the ‘brains’ of your phone. And the more transistors you can fit into a given area, the more capability you can pack into a device that needs to fit into your pocket. This is transistor density, and it is what the 10 stands for in a 10nm chip. Nm here means ‘nanometers’, and the figure traditionally referred to the half-pitch (half the distance between identical features on the chip), though these days it is as much a marketing label as a precise measurement. The smaller the number, the smaller the features and the more you can fit into a small space.
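As a rough illustration of why the nm figure matters: in the idealized case, if every feature shrinks linearly, density grows with the square of the ratio. Real process nodes are partly marketing names and don’t shrink this cleanly, so treat this as textbook maths rather than a description of any actual fab:

```python
def relative_density(old_nm, new_nm):
    # Idealized: if every feature shrinks linearly, transistors per
    # unit area grow with the square of the shrink ratio.
    # Real nodes don't scale this cleanly.
    return (old_nm / new_nm) ** 2

print(round(relative_density(14, 10), 2))  # ~1.96x the density
```

On paper, then, a 14nm-to-10nm shrink could nearly double the transistor count in the same area, which is why these node jumps matter so much.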

So if we look at the Snapdragon 835 from Qualcomm that’s used in the US version of the S8, we see that it uses a 10nm design. On that basis, Qualcomm claims it is 35% smaller and 25% more energy efficient compared with its predecessor.

How about the Exynos 8890 seen in the S7, from Samsung itself? Well, here we have a 14nm chip. The S6’s Exynos 7420 was also 14nm, however. These are custom processors but they are all based on the same ARM architecture – again, you can refer to Gary for more on that.

Companies like Samsung and TSMC are now in the process of developing 7nm chips (Samsung won the race to 10nm) and TSMC is already looking to build factories for 5 and 3nm chips! The bottom line is that this is another measure of device performance that more closely ties into what Moore was actually talking about. And this is an area where we can still see performance improving at a very rapid rate – even if it’s not quite doubling year on year.

The number of transistors

But just because you can fit more transistors into a smaller space, doesn’t necessarily mean that a chip will have more transistors. That would depend on the chip’s size, apart from anything else! So how many transistors do you find on those CPUs?

Well, the Snapdragon 835 boasts a pretty impressive 3 billion transistors.

The Snapdragon 835 boasts a pretty impressive 3 billion transistors… there’s only just over 7.4 billion people on the planet!

To put that in perspective, the human brain has approximately 100 billion neurons. So you could argue that your phone is 3% as smart as you. But of course it’s a bit more complex than that (if you think processors are complicated, you should try the human brain!). To put it another way: there are only just over 7.4 billion people on the planet!

Unfortunately, this information is not available for all smartphones and there’s no data for the previous Samsung models. While it’s an imperfect test then, let’s take a look at another mobile SoC and see how it stacks up.

The iPhone 5s is sporting the Apple A7, a dual-core chip with one billion transistors – one third of those seen in the much newer S8. The A8 did literally double this with two billion. If we put those in a graph with their Geekbench scores, we get this:

As you can see then, doubling the number of transistors certainly doesn’t necessarily double the real-world performance. But what’s surprising is the relatively small difference in performance between the A7 and A8, despite having double the number of transistors with the same amount of RAM and the same GHz.

Greater transistor density doesn’t necessarily result in greater performance and speed.

So greater density doesn’t necessarily result in greater performance and speed, because manufacturers ‘choose’ how best to use all those new transistors, and in some cases they focus on capabilities that don’t directly correlate with raw speed. ARM, for instance, has a system for improving the power efficiency of its SoCs called ‘big.LITTLE’ – basically using two differently powered clusters of cores for lighter and more intensive tasks.

These kinds of features are aimed more at heat management and battery life than at pure horsepower. This is one reason GPUs can generally improve at a faster rate than CPUs: they are much more specialized (although they often sit on larger process nodes and have more heat to contend with).

It will be very interesting to see how the A11 in the iPhone 8 and X performs, with its 4.3 billion transistors! But don’t consider making the jump to Apple just yet – the Kirin 970 unveiled at IFA 2017 is going to boast an incredible 5.5 billion transistors in order to support on-board AI functions.

Dennard scaling

There’s more to consider. Always more.

Dennard scaling, also referred to as MOSFET scaling, is another law, like Moore’s, that is relevant here. It states that as transistors get smaller, their power density stays constant: power use is proportional to area rather than to the number of switches. This means that not only should the number of transistors double year on year at a ‘cost efficient optimum’, but those transistors should also use less power and not get ridiculously hot. As you can see, in order for Moore’s law to be useful to us smartphone consumers, Dennard scaling also needs to hold true.
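The classic textbook version of Dennard scaling can be sketched in a few lines: shrink all dimensions and the supply voltage by a factor k, raise the frequency by k, and the dynamic power density comes out unchanged. This is the idealized model, not a description of any real process:

```python
# Idealized Dennard scaling: shrink dimensions and voltage by k,
# raise frequency by k, and dynamic power density stays constant.
# Dynamic power per transistor ~ C * V^2 * f.

k = 1.4  # one node's shrink factor (assumed for illustration)

capacitance = 1 / k    # C scales down with dimensions
voltage = 1 / k        # supply voltage scales down
frequency = k          # clock frequency scales up
area = 1 / k**2        # transistor area shrinks quadratically

power = capacitance * voltage**2 * frequency  # per-transistor dynamic power
density = power / area                        # power per unit area

print(round(density, 6))  # 1.0 - unchanged, as Dennard predicted
```

Once voltages could no longer keep shrinking (mainly because of leakage current), this neat cancellation stopped working, which is exactly the breakdown described below.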

And it did, up until about the mid-2000s. But Dennard scaling no longer quite holds at each lower node, meaning there is no guarantee that these denser chips will result in lower power consumption. That’s one more reason that doubling the transistors doesn’t double the performance the way it once did.

What does all this mean for me?

So if you want to be strict about it, Dennard scaling has broken down and Moore’s law no longer quite applies the way it used to. It is becoming increasingly misleading to think of technology as ‘doubling’ in power, seeing as the reality is far more complex. Not only that, but Moore’s law only ever strictly referred to transistor density, which is a very incomplete measure of device performance. And what many people don’t realize is that Moore himself actually revised his famous law back in 1975, to say that transistor density would double every two years. The law was always considered an approximation anyway.

The hardware in your phone is still improving rapidly then, but it’s not quite doubling. Partly because OEMs have their attention and budget elsewhere. Partly because it’s more complex than that.

The Kirin 970 unveiled at IFA 2017 is going to boast an incredible 5.5 billion transistors in order to support on-board AI functions

But don’t feel too bad! Your phone might not be double the speed, or have double the memory, but it is definitely a significant leap from the one that went before, and the one before that. And new technologies like mobile VR and 4K screens are likely to push things forward at an even faster rate. There are some beastly phones on the horizon!
