Which chip is faster: Apple’s M1 or Intel’s Core i7? The new or the old? ARM or x86? The mobile gadgets company or the major microprocessor maker?

Spoiler alert! It’s a trick question – a Rorschach test. You’ll see what you want to see in the benchmark results, and what you take out of the scores is, as Yoda says, “only what you take with you.”

Computer benchmarking, like auto racing, has been around ever since they made the second one. It’s natural to try to find out which chip is “better,” for some definition of better. Drag racing a couple of CPUs seems easy, but once you get into it you realize it’s fraught with problems. What, exactly, are you measuring? Computers do different things differently, so deciding what to measure is just as important – if not more so – than the actual measurements.

Apple clearly felt that designing its own ARM-based chips was better than buying Intel’s x86-based processors. But better how? Was the Cupertino company simply looking to save money? To reduce power consumption? Improve performance? Gain control over its CPU roadmap? All of the above?

As soon as M1-based Macs went on sale, people started benchmarking them. And the results were… pretty good! The M1 seems fast, and it appears to deliver good battery life. A home run, then? A big technological win for Apple? Maybe… maybe not.

It was equally inevitable that x86 partisans would benchmark equivalent Intel processors. And those results were… also pretty good! Then Intel itself weighed in with a very Apple-specific presentation comparing M1 against its own Core i7 processor. Well, processors, plural, as we’ll see.

As you might expect, the Intel-supplied benchmarks showed Intel on top, sometimes outperforming M1 by ratios of 3:1 or even 6:1. Somewhat less expectedly, Intel also took on Apple in a battery-life test, graciously admitting defeat by the barest 1% margin (10 hours, 6 minutes of battery life vs. Apple’s 10 hours, 12 minutes). Seems pretty straightforward, right? Surely you can’t fake such large differences in performance, and Intel’s candid willingness to concede a tiny advantage to Apple shows that its heart’s in the right place.

Benchmarks show what you want them to show, and Intel’s benchmarks were… unusual. Nowhere do we see the standard benchmarks like GeekBench, 3DMark, Cinebench, CoreMark, PassMark, SPECmark, or even the hoary old Dhrystone. Instead, we’re treated to Microsoft Office 365 apps running so-called RUG: Intel’s own definition of “real-world usage guidelines.” The numbers also show browser tests using the WebXPRT 3 benchmark, which was developed by Principled Technologies in cooperation with Intel. It relies heavily on x86 extensions, so guess how that one turned out. For the record, AMD processors do well on WebXPRT 3, too.

Intel’s most impressive performances came from Topaz Labs’s AI tests. But like WebXPRT, Topaz’s software leans heavily on x86-specific architecture extensions. And this highlights a fundamental question about benchmarks. Should we use tests deliberately designed to favor one CPU architecture over another? On one hand, optimizing code is A Good Thing, and there’s every reason to design a test to exercise unique parts of the hardware. Plenty of CPU designs get tweaked to improve software performance. Should a benchmark highlight such progress or hide it under a blanket? Intel’s thrashing of Apple’s M1 in this case is wholly deserved. But is it relevant? That’s up to you.

Flipping from performance to efficiency, we find some “benchmarketeering” going on. Intel changed both sides of the equation for the battery tests, swapping out both its hardware and its competitor’s. For the abovementioned performance tests, Intel used a Core i7-1185G7 processor running Windows 10 Pro “on Intel white-box system.” In other words, we don’t know anything about the motherboard, core logic, disks, or any other hardware that may have been involved. We do know it was running at 3.0 GHz with 16 GB of LPDDR4-4266. On the Apple side, Intel chose a MacBook Pro running macOS 11.0.1 and outfitted with Apple’s 3.2-GHz M1 processor and the same 16 GB of LPDDR4-4266.

For the battery tests, Intel switched to a commercial Acer laptop with a slightly different Core i7-1165G7 processor. More significantly, it also swapped out the MacBook Pro for a MacBook Air, which has a smaller battery. (Both Apple machines use the same M1 processor and DRAM.) The two dueling laptops were able to play Netflix videos on battery power for essentially the same amount of time, with a difference of just 6 minutes over 10 hours. Had Intel used the same MacBook Pro that it used in the performance tests, however, it certainly would have been trounced.

Benchmarks aren’t everything, and Intel offered more than just scores.
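As a quick sanity check of Intel’s “barest 1% margin” battery-life claim: the article gives Intel 10 hours, 6 minutes and a 6-minute deficit, which puts the Apple machine at 10 hours, 12 minutes (inferred from the stated gap, not quoted directly). A few lines of Python confirm the arithmetic:

```python
# Battery-life margin check. Intel's laptop lasted 10 h 6 min and lost
# by 6 minutes, putting the Apple machine at 10 h 12 min (inferred from
# the article's "difference of just 6 minutes over 10 hours").
intel_minutes = 10 * 60 + 6          # 606 minutes
apple_minutes = intel_minutes + 6    # 612 minutes
margin = (apple_minutes - intel_minutes) / apple_minutes
print(f"Intel trails Apple by {margin:.2%}")
```

Six minutes out of roughly ten hours works out to just under 1%, matching Intel’s characterization of the loss.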