More than Megahurtz?

G’day,

Back in the day, my first computer was a second-hand Mac Classic running at 8 megahertz (MHz).

I frequently ended up in debates with friends over the whole Mac vs PC deal, and whilst I was using old tech at the time, I do recall the argument that clock speeds “aren’t everything”.

But… was that ever actually true? Did Apple actually know something that PC manufacturers didn’t? Was a slower Mac processor really more effective than a faster PC? Was I buying into hype? Was I dreaming?

Cheers! :slight_smile:

cosmic


Not sure, to be honest, but I do remember a time when it was Intel vs Motorola. Wow, now I’m showing my age! I used to have a sticker on my Amiga proudly proclaiming “Intel outside”.

There is a difference between RISC and CISC. In a nutshell, RISC processors are good at focusing on one task at a time per clock cycle, while CISC processors focus on chewing through a few lines of assembly code at a time, sacrificing clock cycles as they go. When you’ve got a relatively slow processor, such as an 8 MHz 286 compared to an 8 MHz 68k processor, trying to do too many things at once can bog your CPU down quite substantially.

The advantage of RISC, especially with slow processors, is that they can focus on doing one thing at a time well. With today’s CPUs it really doesn’t matter as much; you can’t easily bog your computer down with complex instruction sets.
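To make that trade-off concrete, here’s a toy Python sketch. The cycle counts are invented purely for illustration (not measured from any real 286 or 68k silicon): one complex memory-to-memory instruction versus the same work split into several simple one-cycle instructions.

```python
# Toy sketch of the CISC-vs-RISC trade-off described above.
# Cycle counts are illustrative only, not real hardware timings.

# CISC style: one memory-to-memory add, but it burns several cycles.
cisc_program = [("ADD mem,mem", 7)]          # (instruction, cycles)

# RISC style: the same work as three simple one-cycle instructions.
risc_program = [("LOAD  r1,mem", 1),
                ("ADD   r1,r2", 1),
                ("STORE r1,mem", 1)]

def total_cycles(program):
    """Sum the cycle cost of every instruction in a program."""
    return sum(cycles for _, cycles in program)

print("CISC cycles:", total_cycles(cisc_program))  # 7
print("RISC cycles:", total_cycles(risc_program))  # 3
```

More instructions, fewer total cycles: that’s the rough intuition behind a “slow” RISC chip keeping pace with a faster-clocked CISC one.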

With RISC performance losing traction before ARM became a thing, it got to the point where Intel won out through sheer brute force of CPU speed, and the attitude became: to hell with doing it the more complex way, we’ve got enough CPU power to do it anyway.

I’d say the real divergence began with the Intel Pentium III and AMD K7 chips, when they began pushing well north of 1 GHz while Apple was still stuck with IBM trying to produce 800 MHz chips.
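The “clock speed isn’t everything” argument really boils down to throughput being clock speed times instructions retired per cycle (IPC). A quick back-of-the-envelope in Python, with IPC figures made up purely for illustration:

```python
# Back-of-the-envelope "megahertz myth" arithmetic.
# Useful work per second ~= clock rate * instructions per cycle (IPC).
# The IPC numbers below are invented for illustration only.

def throughput(clock_hz, ipc):
    """Rough instructions-per-second estimate."""
    return clock_hz * ipc

pc  = throughput(1_000_000_000, 0.8)  # hypothetical 1 GHz chip, IPC 0.8
mac = throughput(800_000_000, 1.1)    # hypothetical 800 MHz chip, IPC 1.1

print(f"1 GHz   @ 0.8 IPC: {pc:.2e} instr/s")
print(f"800 MHz @ 1.1 IPC: {mac:.2e} instr/s")
```

If the slower-clocked chip does enough more per cycle, it comes out ahead despite the smaller number on the box.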

I knew that PowerPC was RISC based… I didn’t realise the earlier era was also RISC.

At least that kind of explains the “Well, a Mac performs faster with a slower clock speed” argument we used to spout. :slight_smile:

It was because it dealt with one set of instructions (often handed off to RAM) rather than trying to do ten things at once. Today, with CPUs boosted up to 4 GHz with turbo boost and 8 or 16 cores, it doesn’t really matter; you can’t really bog a CPU down with tasks unless you take it to the extreme end of the spectrum: building a render farm, running multiple virtual machines, or doing some serious gaming on a 4K screen.

But in reality we’re running into a memory bandwidth issue, which is why we’re moving everything on board, such as the storage standard that will eventually replace SATA, which is basically PCI-E.
The real bottleneck today is generally in components other than the CPU.

Have to admit, whilst I had a fairly good grasp of how a computer worked back in 1991, I’ve not kept up to date at all!

I do appreciate that there are various bottlenecks causing limitations in computers… It’s one of the reasons I decided against the Mac Pro 3,1 when recently looking for a “new” computer: I saw that the system bus seemed to have some monumental increase in the 4,1 model.

I wonder how many more years till quantum tech reaches the domestic market… (he says, trying to sound knowledgeable)