Apple releases octo-core Mac Pro

GEC: Discuss gaming, computers and electronics and venture into the bizarre world of STGODs.

Moderator: Thanas

User avatar
The Kernel
Emperor's Hand
Posts: 7438
Joined: 2003-09-17 02:31am
Location: Kweh?!

Post by The Kernel »

Praxis wrote: I'm going to complain about performance despite having failed to provide any evidence for such a complaint outside of a quickly-refuted completely invalid optimized memory allocation test.
I'm pretty sure he's referring to this...

Yes it's a G5 test, but it's the OS that is the problem here, not the chip.
Johan De Gelas wrote: "Mac OS X Server starts with Darwin, the same open source foundation used in Mac OS X, Apple's operating system for desktop and mobile computers. Darwin is built around the Mach 3.0 microkernel, which provides features critical to server operations, such as fine-grained multi-threading, symmetric multiprocessing (SMP), protected memory, a unified buffer cache (UBC), 64-bit kernel services and system notifications. Darwin also includes the latest innovations from the open source BSD community, particularly the FreeBSD development community."

While there are many very good ideas in Mac OS X, it reminds me a lot of fusion cooking, where you make a hotch-potch of very different ingredients. Let me explain.
Darwin is indeed the open-source project built around the Mach 3.0 kernel. This operating system is based around the idea of a microkernel, a kernel that only contains the essence of the operating system, such as protected memory, fine-grained multithreading and symmetric multiprocessing support. This is in contrast to "monolithic" operating systems, which have all of their code in a single large kernel.

Everything else is located in smaller programs, servers, which communicate with each other via ports and an IPC (Inter-Process Communication) system. Explaining this in detail is beyond the scope of this article (read more here). But in a nutshell, a Mach microkernel should be more elegant, easier to debug and better at keeping different processes from writing in each other's protected memory areas than our typical "monolithic" operating systems such as Linux and Windows NT/XP/2000. The Mach microkernel was believed to be the future of all operating systems.

However, applications (in userspace) of course need access to the services of the kernel. In Unix, this is done with a syscall, and it results in two context switches (the CPU has to swap out one process for another): from the application to the kernel and back.

The relatively complicated memory management (especially if the server process runs in user mode instead of kernel mode) and IPC messaging make a call to the Mach kernel a lot slower: up to 6 times slower than in the monolithic kernels!
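To put a rough number on the kind of cost the article is describing, here is a minimal Python sketch (an editorial illustration, not part of the quoted article) that times a trivial syscall round-trip. Absolute numbers vary wildly by OS and hardware; the point is only that every kernel service request pays this toll, and a microkernel's IPC machinery stacks more cost on top of it.

```python
# Rough timing of a trivial syscall round-trip (userspace -> kernel -> back).
# Illustrative only: absolute numbers vary by OS, CPU and libc.
import os
import time

def avg_syscall_ns(n=100_000):
    """Average wall-clock nanoseconds for one os.getppid() call."""
    t0 = time.perf_counter_ns()
    for _ in range(n):
        os.getppid()  # enters the kernel and returns on each iteration
    t1 = time.perf_counter_ns()
    return (t1 - t0) / n

if __name__ == "__main__":
    print(f"avg per call: {avg_syscall_ns():.0f} ns")
```
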

It also must be remarked that, for example, Linux is not completely a monolithic OS. You can choose whether to incorporate a driver in the kernel (faster, but more complex) or in userspace (slower, but the kernel remains slimmer).

Now, while Mac OS X is based on Mach 3, it is still a monolithic OS. The Mach microkernel is fused into a traditional FreeBSD "system call" interface. In fact, Darwin is a complete FreeBSD 4.4-like UNIX, and thus a monolithic kernel, derived from the original 4.4BSD-Lite2 Open Source distribution.

The current Mac OS X has evolved a bit and consists of a FreeBSD 5.0 kernel (with a Mach 3 multithreaded microkernel inside) with a proprietary, but superb graphical user interface (GUI) called Aqua.

Performance problems
As the Mach kernel is hidden away deep in the FreeBSD kernel, Mach (kernel) threads are only available to kernel-level programs, not applications such as MySQL. Applications can make use of a POSIX thread (a "pthread"), a wrapper around a Mach thread.


This means that applications use slower user-level threads like in FreeBSD and not fast kernel threads like in Linux. It seems that FreeBSD 5.x has somewhat solved the performance problems that were typical for user-level threads, but we are not sure if Mac OS X has been able to take advantage of this.
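On a 1:1 threading system, by contrast, each user-level thread is backed by its own kernel thread, which you can observe from the OS-assigned native thread ids. A small Python sketch of that observation (an editorial illustration, not the article's code; `threading.get_native_id()` needs Python 3.8+):

```python
# On a 1:1 threading system, each user-level thread maps to a distinct
# kernel thread, visible as a distinct OS-assigned native thread id.
import threading

def collect_native_ids(n=4):
    """Start n threads, keep them all alive at once, record their native ids."""
    ids = []
    barrier = threading.Barrier(n + 1)  # n workers + the main thread

    def work():
        ids.append(threading.get_native_id())
        barrier.wait()  # hold the thread alive so its id can't be recycled

    threads = [threading.Thread(target=work) for _ in range(n)]
    for t in threads:
        t.start()
    barrier.wait()
    for t in threads:
        t.join()
    return ids

if __name__ == "__main__":
    ids = collect_native_ids()
    print(ids, "all distinct:", len(set(ids)) == len(ids))
```
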

In order to maintain binary compatibility, Apple might not have been able to implement some of the performance improvements found in the newer BSD kernels.

Another problem is the way threads get access to the kernel. In the early versions of Mac OS X, only one thread could lock onto the kernel at once. This doesn't mean only one thread can run, but that only one thread could access the kernel at a given time. So, a rendering calculation (no kernel interaction) together with a network access (kernel access) could run well. But many threads demanding access to the memory or network subsystem would result in one thread getting access, and all the others waiting.

This "kernel locked bottleneck" situation has improved in Tiger, but kernel locking is still very coarse. So, while there is a very fine-grained multi-threading system (the Mach kernel) inside that monolithic kernel, it is not available to the outside world.

So, is Mac OS X the real reason why MySQL and Apache run so slow on the Mac Platform? Let us find out... with benchmarks, of course!

........


Mac OS X is incredibly slow, between 2 and 5(!) times slower, at creating new threads, as it doesn't use kernel threads and has to go through extra layers (wrappers). No need to continue our search: the G5 might not be the fastest integer CPU on earth - its database performance is completely crippled by an asthmatic operating system that needs up to 5 times more time to handle and create threads.
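The article's numbers come from exactly this kind of micro-benchmark. A minimal Python version of the idea (an editorial sketch, not the article's actual test code) looks like this:

```python
# Micro-benchmark sketch: average cost of creating, starting and joining a
# short-lived thread. The AnandTech article reports this figure was 2-5x
# higher on 2005-era Mac OS X than on Linux.
import threading
import time

def thread_create_cost_us(n=200):
    """Average microseconds per create/start/join of a trivial thread."""
    t0 = time.perf_counter()
    for _ in range(n):
        t = threading.Thread(target=lambda: None)
        t.start()
        t.join()
    return (time.perf_counter() - t0) / n * 1e6

if __name__ == "__main__":
    print(f"{thread_create_cost_us():.1f} us per thread")
```
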

........


The server performance of the Apple platform is, however, catastrophic. When we asked Apple for a reaction, they told us that some database vendors, Sybase and Oracle, have found a way around the threading problems. We'll try Sybase later, but frankly, we are very sceptical. The whole "multi-threaded Mach microkernel trapped inside a monolithic FreeBSD cocoon with several threading wrappers and coarse-grained threading access to the kernel", with a "backwards compatibility" millstone around its neck sounds like a bad fusion recipe for performance.
And before anyone tries to shoot the author, I should remind everyone that Johan De Gelas (formerly of Ace's Hardware) is as good as they come.
User avatar
The Kernel
Emperor's Hand
Posts: 7438
Joined: 2003-09-17 02:31am
Location: Kweh?!

Re: Apple releases octo-core Mac Pro

Post by The Kernel »

Xisiqomelir wrote:Intel must be ecstatic to have Apple as a partner.
The nice thing about Apple for Intel is that Intel can launch a new high-end chip with them in limited quantities and show it off without having to worry about keeping Apple stocked up (the number of Mac Pros shipped will always be low).
User avatar
The Kernel
Emperor's Hand
Posts: 7438
Joined: 2003-09-17 02:31am
Location: Kweh?!

Post by The Kernel »

You know, it occurs to me, after having had a chance to reread Johan's article on OSX performance after a few years, that the problems with threading in OSX could cause some huge problems for Apple down the line, now that we are moving towards heavily threaded workstation applications to match the rush of multi-core CPUs.

Time will tell of course, but that article is just scathing on this topic and it doesn't look like an easy fix kind of situation.
User avatar
Stark
Emperor's Hand
Posts: 36169
Joined: 2002-07-03 09:56pm
Location: Brisbane, Australia

Post by Stark »

I don't really understand how low-level multithreading works - can you explain how the OSX architecture causes problems in this kind of role?
User avatar
Xon
Sith Acolyte
Posts: 6206
Joined: 2002-07-16 06:12am
Location: Western Australia

Post by Xon »

OSX (until recently, IIRC) had extremely coarse-grained kernel locking. This meant you could have one thread touching network/file IO and another touching anything else without blocking, and that was about it, regardless of how many CPU cores were in the system.

Unix-style applications make the (unsupported) assumption that thread/process creation is dirt cheap. So traditional Unix applications create new threads/processes on demand instead of using threadpools, and the vastly higher startup costs would be crippling.

Basically, poorly thought out applications are punished when the implicit assumptions they rely on are false. Film at 11.
"Okay, I'll have the truth with a side order of clarity." ~ Dr. Daniel Jackson.
"Reality has a well-known liberal bias." ~ Stephen Colbert
"One Drive, One Partition, the One True Path" ~ ars technica forums - warrens - on hhd partitioning schemes.
User avatar
Durandal
Bile-Driven Hate Machine
Posts: 17927
Joined: 2002-07-03 06:26pm
Location: Silicon Valley, CA
Contact:

Post by Durandal »

Before I address whatever nonsense RThurmont has posted (I haven't read it yet, but I'm sure it'll be funny), I'm going to focus on Johan De Gelas' article.
The Kernel wrote:Darwin is indeed the open-source project built around the Mach 3.0 kernel. This operating system is based around the idea of a microkernel, a kernel that only contains the essence of the operating system, such as protected memory, fine-grained multithreading and symmetric multiprocessing support. This is in contrast to "monolithic" operating systems, which have all of their code in a single large kernel.
Wrong, wrong, wrong. XNU is a monolithic kernel. A kernel's basic responsibilities are as follows:
  • Process management
  • Memory management
  • Device management
  • Handling system calls
A kernel that handles all of these in its own process is a monolithic kernel. A kernel that delegates some of these responsibilities to other processes is a micro-kernel; that is, a kernel with a limited subset of the functionality that a monolithic kernel provides.

You could describe XNU as a hybridized kernel, but it's definitely more monolithic. I just wanted to clear this up, since so many people hear "micro-kernels are slow" and "Mac OS X uses a micro-kernel" and conclude "Mac OS X must be slow".

The author actually addresses this further down in the article, but he obviously didn't make the distinction clear enough to keep readers from coming away with the wrong impression.
As the Mach kernel is hidden away deep in the FreeBSD kernel, Mach (kernel) threads are only available to kernel-level programs, not applications such as MySQL. Applications can make use of a POSIX thread (a "pthread"), a wrapper around a Mach thread.
Every user-land thread in Mac OS X has a one-to-one relationship with a kernel thread. This is simply incorrect.

Now, as to the MySQL benchmarks, MySQL on Mac OS X performs an F_FULLFSYNC while the Linux MySQL does not. This means that the OS will write all pending data to disk and empty the disk drive's caches onto the platters. This is obviously a slow process, orders of magnitude slower than working in RAM and then writing to the database. The Mac OS X MySQL does this so that, if the power goes out in the middle of a transaction, you won't get a partially-written transaction and an indication that the transaction succeeded.
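The difference is visible in code: a plain fsync only flushes OS buffers, while Darwin's F_FULLFSYNC fcntl also asks the drive itself to flush its write cache. A hedged Python sketch of the distinction (the hasattr guard is there because F_FULLFSYNC only exists on Darwin):

```python
# Durable-write sketch: fsync() flushes OS buffers to the drive, but on
# Mac OS X only the F_FULLFSYNC fcntl also forces the drive's own write
# cache onto the platters: the extra (slow) step the Mac MySQL performs.
import fcntl
import os
import tempfile

def durable_write(path, data):
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
    try:
        os.write(fd, data)
        if hasattr(fcntl, "F_FULLFSYNC"):        # Darwin only
            fcntl.fcntl(fd, fcntl.F_FULLFSYNC)   # drain the drive cache too
        else:
            os.fsync(fd)                         # elsewhere: OS buffers only
    finally:
        os.close(fd)

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        p = os.path.join(d, "txn.log")
        durable_write(p, b"commit")
        print(os.path.getsize(p), "bytes durably written")
```
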

Other things ...

It's my understanding that the Linux kernel "prefers" process creation to thread creation. That's why you'll see a lot of Linux daemons spawn off sub-processes to do tasks that could just as easily be done in a thread. I've heard that the Linux kernel schedules processes very well and threads not as well. So servers and such are written to take advantage of this. I don't really know this for sure, but it seems to fit with what I've seen.

Thread creation and process creation are not the same, regardless of what the AnandTech article says. I don't know why a guy who's apparently a CPU architect would screw up this detail. There is more overhead in creating a process as opposed to creating a thread, and when he says that he's measuring the time it takes to create a "thread of control" and includes fork() time, it's just completely nonsensical. You fork() to create processes, not threads. On OS X, you create threads by going through MPServices in Carbon, POSIX threads or NSThreads. (You could also just directly create a Mach thread, but POSIX thread creation, I believe, is just macro'ed to a Mach thread creation function.)
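The distinction is easy to demonstrate: a new thread runs inside the caller's process, while fork() produces a whole new process with its own PID. A small Python sketch of the difference (an editorial illustration; os.fork is POSIX-only):

```python
# fork() creates a new *process* (its own PID and address space); creating a
# thread adds a flow of control *inside* the existing process. A benchmark
# that mixes the two is measuring very different costs.
import os
import threading

def thread_shares_pid():
    """A new thread reports the same PID as its creator: same process."""
    seen = []
    t = threading.Thread(target=lambda: seen.append(os.getpid()))
    t.start()
    t.join()
    return seen[0] == os.getpid()

def forked_child_gets_new_pid():
    """fork() returns the child's PID to the parent: a distinct process."""
    pid = os.fork()
    if pid == 0:
        os._exit(0)          # child: do nothing, exit immediately
    os.waitpid(pid, 0)       # parent: reap the child
    return pid != os.getpid()

if __name__ == "__main__":
    print(thread_shares_pid(), forked_child_gets_new_pid())
```
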

What doesn't make sense is why their MySQL benchmark had one process with 60 threads, yet they said that they were measuring fork()/exec() times. If they were doing that on OS X but not Linux, well of course OS X would be slower.

Now, does all of this mean that Mac OS X is just as fast as Linux at everything? No. But the AnandTech article focuses on a certain class of applications: database and web server stuff. Does this apply to RThurmont's bitching and moaning? Not particularly. Can Mac OS X be slower than other operating systems at certain things? Yes. Can it be faster at certain things? Absolutely. In short, I don't really see why the AnandTech article is applicable here. It's 2 years old, and the scene has changed significantly. Just saying "Mac OS X is slow" without any sort of qualification is pointless and nebulous. You might say it's slow at database transactions or web serving, but things like multi-threaded OpenGL have turned it into a beast for OpenGL work.

Okay, that's all for now. I don't really see a point in responding to RThurmont's idiocy. What can you say to someone who thinks all software should be open source and free and that no one should ever make any money from it?
Damien Sorresso

"Ever see what them computa bitchez do to numbas? It ain't natural. Numbas ain't supposed to be code, they supposed to quantify shit."
- The Onion
User avatar
Praxis
Sith Acolyte
Posts: 6012
Joined: 2002-12-22 04:02pm
Contact:

Post by Praxis »

The Kernel wrote:
Praxis wrote: I'm going to complain about performance despite having failed to provide any evidence for such a complaint outside of a quickly-refuted completely invalid optimized memory allocation test.
I'm pretty sure he's referring to this...

Yes it's a G5 test, but it's the OS that is the problem here, not the chip.
Which does nothing but prove that Linux makes a better server, something no one in this thread disputed.

To quote the article; Workstation, yes, server, no.

Linux is the better server, indisputably. Better desktop? Not necessarily.
User avatar
The Kernel
Emperor's Hand
Posts: 7438
Joined: 2003-09-17 02:31am
Location: Kweh?!

Post by The Kernel »

Praxis wrote:
The Kernel wrote:
Praxis wrote: I'm going to complain about performance despite having failed to provide any evidence for such a complaint outside of a quickly-refuted completely invalid optimized memory allocation test.
I'm pretty sure he's referring to this...

Yes it's a G5 test, but it's the OS that is the problem here, not the chip.
Which does nothing but prove that Linux makes a better server, something no one in this thread disputed.

To quote the article; Workstation, yes, server, no.

Linux is the better server, indisputably. Better desktop? Not necessarily.
I wouldn't dispute that, I use a Mac regularly at work and I don't have a serious issue with the performance. I pull my hair out regularly because of goddamn compatibility issues with Windows (most related to Word documents and Photoshop) but it's usually worth the headache.
User avatar
Praxis
Sith Acolyte
Posts: 6012
Joined: 2002-12-22 04:02pm
Contact:

Post by Praxis »

Destructionator XIII wrote:
Xisiqomelir wrote:Maybe I'm reading this a little too critically, but it seems to imply that any random person is equally likely to prefer either a Linux GUI or Aqua, which I disagree with.
Oh no, I am saying for certain people, a Linux GUI can be much, much better than what the Macintosh or Windows has to offer. It is a small set of people, sure, but it has something to offer.
While we're on the subject of GUI...
I've seen Linux distros with better consistency.


EDIT:

I wouldn't dispute that, I use a Mac regularly at work and I don't have a serious issue with the performance. I pull my hair out regularly because of goddamn compatibility issues with Windows (most related to Word documents and Photoshop) but it's usually worth the headache.
Haven't had Photoshop issues, what problems have you come across?
The majority of Word problems I have are weird formatting. I'm hoping Office 2008 fixes this...I feel compelled to upgrade for the Universal Binary.
User avatar
Durandal
Bile-Driven Hate Machine
Posts: 17927
Joined: 2002-07-03 06:26pm
Location: Silicon Valley, CA
Contact:

Post by Durandal »

RThurmont wrote:No. However, I do consider bashing competitors to be unethical, and certainly also lacking in class.
Bashing competitors is unethical?
The FTC actually used to prohibit comparative advertising like Apple's, so every company's advertising was Microsoft-style, in that it did not acknowledge the existence of competition. I think that this was a much better approach all around, as it forced companies to focus on the actual reasons to buy their product, as opposed to allowing them to spread (in many cases false) FUD about their competitors.
Apple hasn't spread fear, uncertainty and doubt about Windows. They've said that Windows is insecure and annoying but happens to be good for business applications. The ads acknowledge where Windows's strengths are. Oooh, Apple makes fun of their competitor. What's next? The buyers of "the leading brand" detergent get all upset at Wisk's commercials? Give it a rest.
Note, by the way, that since I am not a distributor of Linux (unless you count a non-workable distro I built as a toy using rPath), I can bash OS X freely without being a hypocrite at this point.
No, you can just bash it while sounding like an idiot.
Open source according to the OSI, but not Free Software according to the FSF. The APSL requires that all changes be submitted upstream to Apple, which, in my opinion, is obnoxious.
So the GPL pimps are saying that a non-GPL license enforced by a company that actually dares to make money off its software isn't "true" open source? I'm shocked, because clearly they'd have no agenda at all. :roll:

The GPL and FSF zealots need to grow the fuck up. You don't have an inherent right to my source code, period. It's my intellectual property. Not yours, not Richard Stallman's and not the OSS community's. If I feel that my software will benefit me and become better through open source, I'll license it under whatever god damn license I want. And it's not ethically wrong for me to choose a non-GPL license.
Damien Sorresso

"Ever see what them computa bitchez do to numbas? It ain't natural. Numbas ain't supposed to be code, they supposed to quantify shit."
- The Onion
User avatar
Crayz9000
Sith Apprentice
Posts: 7329
Joined: 2002-07-03 06:39pm
Location: Improbably superpositioned
Contact:

Post by Crayz9000 »

phongn wrote:
Ace Pace wrote:The Ars Technica GodBox is by far fucking better and cheaper.
It also "only" has four cores vice eight. For CPU-intensive loads the new Mac Pro is better and nobody in their right mind builds their own computers for production use.
I take it, then, that Google is definitely out of its collective mind. ;)
A Tribute to Stupidity: The Robert Scott Anderson Archive (currently offline)
John Hansen - Slightly Insane Bounty Hunter - ASVS Vets' Assoc. Class of 2000
HAB Cryptanalyst | WG - Intergalactic Alliance and Spoof Author | BotM | Cybertron | SCEF
User avatar
Crayz9000
Sith Apprentice
Posts: 7329
Joined: 2002-07-03 06:39pm
Location: Improbably superpositioned
Contact:

Post by Crayz9000 »

Durandal wrote: Other things ...

It's my understanding that the Linux kernel "prefers" process creation to thread creation. That's why you'll see a lot of Linux daemons spawn off sub-processes to do tasks that could just as easily be done in a thread. I've heard that the Linux kernel schedules processes very well and threads not as well. So servers and such are written to take advantage of this. I don't really know this for sure, but it seems to fit with what I've seen.

Thread creation and process creation are not the same, regardless of what the AnandTech article says. I don't know why a guy who's apparently a CPU architect would screw up this detail. There is more overhead in creating a process as opposed to creating a thread, and when he says that he's measuring the time it takes to create a "thread of control" and includes fork() time, it's just completely nonsensical. You fork() to create processes, not threads. On OS X, you create threads by going through MPServices in Carbon, POSIX threads or NSThreads. (You could also just directly create a Mach thread, but POSIX thread creation, I believe, is just macro'ed to a Mach thread creation function.)
Don't mind me, I'm just pointing this out...

NPTL is a kernel/glibc patch that's now mainstream in both. It's still not the default in most distros, but in the ones that do have it, it results in a marked performance improvement in any program set to take advantage of threads.

(Which, since I'm running Gentoo on my home box, isn't very hard to enable.)
A Tribute to Stupidity: The Robert Scott Anderson Archive (currently offline)
John Hansen - Slightly Insane Bounty Hunter - ASVS Vets' Assoc. Class of 2000
HAB Cryptanalyst | WG - Intergalactic Alliance and Spoof Author | BotM | Cybertron | SCEF
User avatar
Pu-239
Sith Marauder
Posts: 4727
Joined: 2002-10-21 08:44am
Location: Fake Virginia

Post by Pu-239 »

AFAIK, NPTL is the default on all major distros (at least Debian (even stable) and derivatives, and RHEL), and has been around since the early 2.6 days. Basically, if you run 'ps ax' and don't see multiple processes per app (e.g. for Firefox), it's enabled. Not applicable for Apache, since it runs everything in separate processes anyway, due to the tendency of certain modules (e.g. PHP) to be non-threadsafe.
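Alongside the 'ps ax' heuristic, glibc will also tell you directly which thread library it is using. A small sketch (an editorial illustration; returns None on non-glibc platforms):

```python
# Ask libc which pthread implementation is in use. On an NPTL system glibc
# reports something like "NPTL 2.31"; old LinuxThreads systems report a
# linuxthreads version string. Non-glibc platforms don't expose this at all.
import os

def pthread_impl():
    try:
        return os.confstr("CS_GNU_LIBPTHREAD_VERSION")
    except (ValueError, OSError):
        return None  # configuration name unknown on this platform

if __name__ == "__main__":
    print(pthread_impl())
```
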

ah.....the path to happiness is revision of dreams and not fulfillment... -SWPIGWANG
Sufficient Googling is indistinguishable from knowledge -somebody
Anything worth the cost of a missile, which can be located on the battlefield, will be shot at with missiles. If the US military is involved, then things, which are not worth the cost if a missile will also be shot at with missiles. -Sea Skimmer


George Bush makes freedom sound like a giant robot that breaks down a lot. -Darth Raptor
User avatar
phongn
Rebel Leader
Posts: 18487
Joined: 2002-07-03 11:11pm

Post by phongn »

Crayz9000 wrote:I take it, then, that Google is definitely out of its collective mind. ;)
Google's server farm machines these days are effectively custom-designed boxes; they aren't exactly a generic whitebox. I would further suspect that Google pays some ISV or OEM to produce them.
Pu-239 wrote:AFAIK, NPTL is the default on all major distros (at least Debian (even stable) and derivatives, and RHEL), and has been around since the early 2.6 days. Basically, if you run 'ps ax' and don't see multiple processes per app (e.g. for Firefox), it's enabled. Not applicable for Apache, since it runs everything in separate processes anyway, due to the tendency of certain modules (e.g. PHP) to be non-threadsafe.
NPTL has indeed been the default threading system for Linux for some time. It is pretty good, actually.

That said, Durandal is correct regarding Linux and processes; for a very long time Linux did not have real thread support, and that strongly biased Linux applications towards process creation rather than thread creation. Process creation is fairly cheap in Linux. OTOH, it is more expensive on some other platforms (for example, Apache on Windows suffered from severe performance issues, as process creation is expensive on NT while thread creation is cheap).
User avatar
Ace Pace
Hardware Lover
Posts: 8456
Joined: 2002-07-07 03:04am
Location: Wasting time instead of money
Contact:

Post by Ace Pace »

On licensing:

Wouldn't it be easier to just use this license for anything 'open source'? :lol:


Note to people without a sense of humour: This is a joke.
Brotherhood of the Bear | HAB | Mess | SDnet archivist |
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Post by Starglider »

As an AI researcher, I'd love a fast eight-core dedicated machine: the 8-core Mac Pro would actually have slightly more raw compute power than the whole compute cluster I'm using right now. There is one serious problem with it though; it uses fully buffered DIMMs (high latency, power hogs) at a relatively slow 667 MHz. For the specific code I'm working on, main memory bandwidth and latency are both critically important (in different sections - cache size helps too, but in other sections). Thus I'm waiting for AMD to release the new quad-core Opterons and Athlon FXs; the latter in particular should slot into 4x4 motherboards to enable eight cores with eight gigs of quad-channel, very low latency, 1066 MHz+ memory. I wouldn't use that setup in a production server of course, but it would be great as a development machine.
User avatar
Seggybop
Jedi Council Member
Posts: 1954
Joined: 2002-07-20 07:09pm
Location: USA

Post by Seggybop »

Regarding those, does anyone know when the AMD Barcelona core is actually being released?
my heart is a shell of depleted uranium
User avatar
Ace Pace
Hardware Lover
Posts: 8456
Joined: 2002-07-07 03:04am
Location: Wasting time instead of money
Contact:

Post by Ace Pace »

Seggybop wrote:Regarding those, does anyone know when the AMD Barcelona core is actually being released?
Hoping for a Q2 or early Q3 release, last I heard.

We might also be looking at a unified release of Barcelona, the new socket, the R600 line, etc., which means 'who the fuck knows'.
Brotherhood of the Bear | HAB | Mess | SDnet archivist |
RThurmont
Jedi Master
Posts: 1243
Joined: 2005-07-09 01:58pm
Location: Desperately trying to find a local restaurant that serves foie gras.

Post by RThurmont »

The GPL and FSF zealots need to grow the fuck up. You don't have an inherent right to my source code, period. It's my intellectual property. Not yours, not Richard Stallman's and not the OSS community's. If I feel that my software will benefit me and become better through open source, I'll license it under whatever god damn license I want. And it's not ethically wrong for me to choose a non-GPL license.
I, and 95% of the people I've met in the open source community, believe that authors of software should have the right to determine what license it is published under. However, we have the right to criticize software based on said licensing. The fact of the matter is that proprietary software has a large number of demonstrated disadvantages compared to open source software in terms of cost, flexibility and security. The only software markets where I see the proprietary model as really warranted are specialty applications for specific niches, and games. I view all other categories of software as commodities, and when you think about it, the open source model is perfect for a commoditized economic environment, in that it facilitates maximum competition and maximum user value.

The attitude that software must somehow be proprietary in order to be profitable is also complete BS, as Novell has managed to return to profitability riding Linux's coattails, Mandriva's core Linux business has been profitable since the 90s (the bankruptcy in 2003 was due to a failed diversification initiative), Red Hat is a profitable $500 million+ company, and Cygnus was profitable for many years before being acquired by Red Hat, having built a great business model around porting the GCC compiler to different platforms.

Also,
They've said that Windows is insecure and annoying but happens to be good for business applications. The ads acknowledge where Windows's strengths are.
I would argue that they completely fail to do so, and additionally imply several faults with Windows where none exist. For example, none of the ads mention that Windows is a vastly superior gaming platform to OS X (from the standpoint of cost and availability), while a number of them insinuate that Windows has severe problems in terms of compatibility with third-party consumer electronics products such as digital cameras and webcams; while this may be true for Vista, it is certainly not the case for XP Pro. I could write a 10,000 word essay on the inaccuracies and untruths contained within Apple's current advertising campaign.
"Here's a nickel, kid. Get yourself a better computer."
User avatar
Xisiqomelir
Jedi Council Member
Posts: 1757
Joined: 2003-01-16 09:27am
Location: Valuetown
Contact:

Post by Xisiqomelir »

RThurmont wrote:I could write a 10,000 word essay on the inaccuracies and untruths contained within Apple's current advertising campaign.
My popcorn is ready! Please indulge us.

EDIT: For ease of reference, here is your source material
User avatar
Shroom Man 777
FUCKING DICK-STABBER!
Posts: 21222
Joined: 2003-05-11 08:39am
Location: Bleeding breasts and stabbing dicks since 2003
Contact:

Post by Shroom Man 777 »

Octo-core sounds stupid, it needs to be punched.

Okay, it has eight cores, does it mean computing power is exponentially higher than normal PCs? Because my *fapfap* Dual Core doesn't really seem so much better than a normal PC.
"DO YOU WORSHIP HOMOSEXUALS?" - Curtis Saxton (source)
shroom is a lovely boy and i wont hear a bad word against him - LUSY-CHAN!
Shit! Man, I didn't think of that! It took Shroom to properly interpret the screams of dying people :D - PeZook
Shroom, I read out the stuff you write about us. You are an endless supply of morale down here. :p - an OWS street medic
Pink Sugar Heart Attack!
User avatar
phongn
Rebel Leader
Posts: 18487
Joined: 2002-07-03 11:11pm

Post by phongn »

Shroom Man 777 wrote:Okay, it has eight cores, does it mean computing power is exponentially higher than normal PCs? Because my *fapfap* Dual Core doesn't really seem so much better than a normal PC.
Given the proper applications, you can get up to 8x the speed (probably more like 6-7x maximum). Most consumer applications cannot take advantage of even 2-way CPUs (never mind 8-way).
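That "probably more like 6-7X" estimate is just Amdahl's law: if a fraction p of a program's work parallelises perfectly over n cores, the overall speedup is 1 / ((1 - p) + p/n). A quick sketch of the arithmetic:

```python
# Amdahl's law: even a 95%-parallel workload tops out near 5.9x on 8 cores,
# because the serial 5% dominates as the core count grows.
def amdahl_speedup(p, n):
    """Speedup on n cores when fraction p of the work parallelises perfectly."""
    return 1.0 / ((1.0 - p) + p / n)

if __name__ == "__main__":
    for p in (0.80, 0.90, 0.95, 1.00):
        print(f"p={p:.2f}: {amdahl_speedup(p, 8):.2f}x on 8 cores")
```
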
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Post by Starglider »

Okay, it has eight cores, does it mean computing power is exponentially higher than normal PCs?
No, it means raw compute power is eight times higher (ignoring memory and I/O bottlenecks) than a single core processor of the same type and clock speed. Otherwise it would be called something like 'super exponential processor XL', not to mention being physically impossible (semi-mythical general purpose quantum processors excepted).
Because my *fapfap* Dual Core doesn't really seem so much better than a normal PC.
Clearly for you 'aggregate computing power' (not the same thing as linear computing speed) and 'better' are not closely correlated, probably because you're using applications that either aren't multithreaded or aren't amenable to parallelisation in the first place. Multithreaded programming isn't that hard with modern tools, but it adds development time, risk and expense and many programmers seem unable to get their heads around it for some reason, so it'll be a while before most software can really take advantage of multi-core. Some of us however are using fully multithreaded applications and do see a near linear increase in performance and utility with more cores.

Incidentally, and while I remember, this is why Data's line stating that he has 'a linear compute speed of 60 trillion operations per second' may be a lot more impressive than people make it out to be. 60 teraflops of parallel computing power isn't actually that much; it would put Data in 6th place on the late 2006 list of world supercomputers. 60 teraflops of single threaded, serial computing power is a big deal, as we currently have no real idea how to make a processor run at 60000 GHz effective clock speed. Out of universe I'm pretty sure the writers phrased it that way because it sounded good, but interpreted literally that's pretty damn impressive. There are sci-fi computers that can do even faster linear computation, but not many - and note that a few problems just don't parallelise at all, while many others parallelise very badly.
Last edited by Starglider on 2007-04-06 04:52pm, edited 1 time in total.
User avatar
Instant Sunrise
Jedi Knight
Posts: 945
Joined: 2005-05-31 02:10am
Location: El Pueblo de Nuestra Señora la Reina de los Angeles del Río de Porciúncula
Contact:

Post by Instant Sunrise »

Of course there are the BIG applications that are HEAVILY multi-threaded, and the main reason that most people use Macs:
  • Adobe Photoshop.
  • Final Cut Pro.
  • AVID Media Composer.
  • 3D Studio Max
Of course, most of those applications have been multi-threaded for a while to take advantage of multiple physical processors.
Hi, I'm Liz.
SoS: NBA | GALE Force
Twitter
Tumblr
User avatar
salm
Rabid Monkey
Posts: 10296
Joined: 2002-09-09 08:25pm

Post by salm »

skyman8081 wrote:Of course there are the BIG applications that are HEAVILY multi-threaded, and the main reason that most people use Macs:
  • Adobe Photoshop.
  • Final Cut Pro.
  • AVID Media Composer.
  • 3D Studio Max
Of course, most of those applications have been multi-threaded for a while to take advantage of multiple physical processors.
3dsMax is not available for Macintosh.
Post Reply