nice article on parallel processor limitations

SLAM: debunk creationism, pseudoscience, and superstitions. Discuss logic and morality.

Moderator: Alyrium Denryle

User avatar
dragon
Sith Marauder
Posts: 4151
Joined: 2004-09-23 04:42pm

nice article on parallel processor limitations

Post by dragon »

Kind of an interesting take on the limitations of Moore's Law and the industry's new parallel-processor route.
When Anwar Ghuloum came to work at Intel in 2002, the company was supreme among chip makers, mainly because it was delivering processors that ran at higher and higher speeds. "We were already at three gigahertz with Pentium 4, and the road map called for future clock speeds of 10 gigahertz and beyond," recalls Ghuloum, who has a PhD from Carnegie Mellon and is now one of the company's principal engineers. In that same year, at Intel's developer conference, chief technology officer Pat Gelsinger said, "We're on track, by 2010, for 30-gigahertz devices, 10 nanometers or less, delivering a tera-instruction of performance." That's one trillion computer instructions per second.

But Gelsinger was wrong. Intel and its competitors are still making processors that top out at less than four gigahertz, and something around five gigahertz has come to be seen, at least for now, as the maximum feasible speed for silicon technology.

It's not as if Moore's Law--the idea that the number of transistors on a chip doubles every two years--has been repealed. Rather, unexpected problems with heat generation and power consumption have put a practical limit on processors' clock speeds, or the rate at which they can execute instructions. New technologies, such as spintronics (which uses the spin direction of a single electron to encode data) and quantum (or tunneling) transistors, may ultimately allow computers to run many times faster than they do now, while using much less power. But those technologies are at least a decade away from reaching the market, and they would require the replacement of semiconductor manufacturing lines that have cost many tens of billions of dollars to build.

So in order to make the most of the technologies at hand, chip makers are taking a different approach. The additional transistors predicted by Moore's Law are being used not to make individual processors run faster but to increase the number of processors inside a chip. Chips with two processors--or "cores"--are now the desktop standard, and four-core chips are increasingly common. In the long term, Intel envisions hundreds of cores per device.

But here's the thing: while the hardware problem of overheating chips lends itself nicely to the hardware solution of multicore computing, that solution gives rise in turn to a tricky software problem. How do you program for multiple processors? It's Anwar Ghuloum's job to figure that out, with the help of programming groups he manages in the United States and China.

Microprocessor companies take a huge risk in adopting the multicore strategy. If they can't find easy ways to write software for the new chips, they could lose the support of software developers. This is why Sony's multicore PlayStation 3 game machine was late to market and still has fewer game titles than its competitors.

The Problem with Silicon
For the first 30 years of microprocessor development, the way to increase performance was to make chips that had smaller and smaller features and ran at higher and higher clock speeds. The original Apple II computer of 1977 used an eight-bit processor that ran at one megahertz. The PC standard today is a 64-bit chip running at 3.6 gigahertz--effectively, 28,800 times as fast. But that's where this trajectory seems to end. By around 2002, the smallest features that could be etched on a chip using photolithography had shrunk to 90 nanometers--a scale at which unforeseen effects caused much of the electricity pumped into each chip to simply leak out, making heat but doing no work at all. Meanwhile, transistors were crammed so tightly on chips that the heat they generated couldn't be absorbed and carried away. By the time clock speeds reached five gigahertz, the chip makers realized, chips would get so hot that without elaborate cooling systems, the silicon from which they were made would melt. The industry needed a different way to improve performance.

Because of the complex designs that high-speed single-core chips now require, multiple cores can deliver the same amount of processing power while consuming less electricity. Less electricity generates less heat. What's more, the use of multiple cores spreads out whatever heat there is.

Most computer programs, however, weren't designed with multiple cores in mind. Their instructions are executed in a linear sequence, with nothing happening in parallel. If your computer seems to be doing more than one thing at a time, that's because the processor switches between activities more quickly than you can comprehend. The easiest way to use multiple cores has thus been through a division of labor--for example, running the operating system on one core and an application on another. That doesn't require a whole new programming model, and it may work for today's chips, which have two or four cores. But what about tomorrow's, which may have 64 cores or more?

Revisiting Old Work
Fortunately, says Leslie Valiant, a professor of computer science and applied mathematics at Harvard University, the fundamentals of parallelism were worked out decades ago in the field of high-performance computing--which is to say, with supercomputers. "The challenge now," says Valiant, "is to find a way to make that old work useful."

The supercomputers that inspired multicore computing were second-generation devices of the 1980s, made by companies like Thinking Machines and Kendall Square Research. Those computers used off-the-shelf processors by the hundreds or even thousands, running them in parallel. Some were commissioned by the U.S. Defense Advanced Research Projects Agency as a cheaper alternative to Cray supercomputers. The lessons learned in programming these computers are a guide to making multicore programming work today. So Grand Theft Auto might soon benefit from software research done two decades ago to aid the design of hydrogen bombs.

In the 1980s, it became clear that the key problem of parallel computing is this: it's hard to tear software apart, so that it can be processed in parallel by hundreds of processors, and then put it back together in the proper sequence without allowing the intended result to be corrupted or lost. Computer scientists discovered that while some problems could easily be parallelized, others could not. Even when problems could be parallelized, the results might still be returned out of order, in what was called a "race condition." Imagine two operations running in parallel, one of which needs to finish before the other for the overall result to be correct. How do you ensure that the right one wins the race? Now imagine two thousand or two million such processes.
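To make that concrete, here's a minimal sketch (an invented example, not one from the article) of the simplest sort of race in Java: two threads increment a shared counter with no synchronization, interleaved read-modify-write updates get lost, and the final result is corrupted.

// Minimal race-condition demo: two threads update a shared counter with no
// synchronization, so increments interleave and some of them are lost.
public class RaceDemo {
    static int counter = 0; // shared, unsynchronized state

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 1_000_000; i++) {
                counter++; // read-modify-write: not atomic
            }
        };
        Thread a = new Thread(work);
        Thread b = new Thread(work);
        a.start();
        b.start();
        a.join();
        b.join();
        // Expected 2,000,000; in practice usually far less.
        System.out.println("counter = " + counter);
    }
}

Guarding the counter with a lock, or using java.util.concurrent.atomic.AtomicInteger, restores the intended result at the cost of serializing the updates.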

"What we learned from this earlier work in high-performance computing is that there are problems that lend themselves to parallelism, but that parallel applications are not easy to write," says Marc Snir, codirector of the Universal Parallel Computing Research Center (UPCRC) at the University of Illinois at Urbana-Champaign. Normally, programmers use specialized programming languages and tools to write instructions for the computer in terms that are easier for humans to understand than the 1s and 0s of binary code. But those languages were designed to represent linear sequences of operations; it's hard to organize thousands of parallel processes through a linear series of commands. To create parallel programs from scratch, what's needed are languages that allow programmers to write code without thinking about how to make it parallel--to program as usual while the software figures out how to distribute the instructions effectively across processors. "There aren't good tools yet to hide the parallelism or to make it obvious [how to achieve it]," Snir says.

To help solve such problems, companies have called back to service some graybeards of 1980s supercomputing. David Kuck, for example, is a University of Illinois professor emeritus well known as a developer of tools for parallel programming. Now he works on multicore programming for Intel. So does an entire team hired from the former Digital Equipment Corporation; in a previous professional life, it developed Digital's implementation of the message passing interface (MPI), the dominant software standard for multimachine supercomputing today.

In one sense, these old players have it easier than they did the last time around. That's because many of today's multicore applications are very different from those imagined by the legendary mainframe designer Gene Amdahl, who theorized that the gain in speed achievable by using multiple processors was limited by the degree to which a given program could be parallelized.

Computers are handling larger volumes of data than ever before, but their processing tasks are so ideally suited to parallelization that the constraints of Amdahl's Law--described in 1967--are beginning to feel like no constraints at all. The simplest example of a massively parallel task is the brute-force determination of an unknown password by trying all possible character combinations. Dividing the potential solutions among 1,000 processors can't help but be 1,000 times faster. The same goes for today's processor-intensive applications for encoding video and audio data. Compressing movie frames in parallel is almost perfectly efficient. But if parallel processing is easier to find uses for today, it's not necessarily much easier to do. Making it easier will require a concerted effort from chip makers, software developers, and academic computer scientists. Indeed, Illinois's UPCRC is funded by Microsoft and Intel--the two companies that have the most to gain if multicore computing succeeds, and the most to lose if it fails.
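For a rough sense of those constraints: Amdahl's Law says that if a fraction p of a program's work can be parallelized, the best possible speedup on n processors is 1 / ((1 - p) + p/n). A small Java sketch (the 95 percent figure below is an illustrative assumption, not a number from the article):

// Amdahl's Law: speedup on n processors when a fraction p of the work is parallel.
public class Amdahl {
    static double speedup(double p, int n) {
        return 1.0 / ((1.0 - p) + p / n);
    }

    public static void main(String[] args) {
        // A 95%-parallel program never gets much past 20x no matter how many
        // cores are added; a brute-force password search (p close to 1.0)
        // scales almost linearly instead.
        for (int n : new int[] {2, 4, 64, 1000}) {
            System.out.printf("p=0.95, n=%d -> %.1fx%n", n, speedup(0.95, n));
        }
    }
}

With p near 1.0, as in the password search, the formula collapses to roughly n; with p = 0.95 it never gets much past 20 no matter how many cores are added.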

Inventing New Tools
If software keeps getting more complex, it's not just because more features are being added to it; it's also because the code is built on more and more layers of abstraction that hide the complexity of what programmers are really doing. This is not mere bloat: programmers need abstractions in order to make basic binary code do the ever more advanced work we want it to do. When it comes to writing for parallel processors, though, programmers are using tools so rudimentary that James Larus, director of software architecture for the Data Center Futures project at Microsoft Research, likens them to the lowest-level and most difficult language a programmer can use.

"We couldn't imagine writing today's software in assembly language," he says. "But for some reason we think we can write parallel software of equal sophistication with the new and critical pieces written in what amounts to parallel assembly language. We can't."

That's why Microsoft is releasing parallel-programming tools as fast as it can. F#, for example, is Microsoft's parallel version of the general-purpose ML programming language. Not only does it parallelize certain functions, but it prevents them from interacting improperly, so parallel software becomes easier to write.

Intel, meanwhile, is sending Ghuloum abroad one week per month to talk with software developers about multicore architecture and parallel-programming models. "We've taken the philosophy that the parallel-programming 'problem' won't be solved in the next year or two and will require many incremental improvements--and a small number of leaps--to existing languages," Ghuloum says. "I also tend to think we can't do this in a vacuum; that is, without significant programmer feedback, we will undoubtedly end up with the wrong thing in some way."

In both the commercial and the open-source markets, other new languages and tools either tap the power of multicore processing or mask its complexity. Among these are Google's MapReduce framework, which makes it easier to run parallel computations over clusters of computers, and Hadoop, an open-source implementation of MapReduce that can distribute applications across thousands of nodes. New programming languages like Clojure and Erlang were designed from the ground up for parallel computing. The popular Facebook chat application was written partly in Erlang.

Meanwhile, MIT spinoff Cilk Arts can break programs written in the established language C++ into "threads" that can be executed in parallel on multiple cores. And St. Louis-based Appistry claims that its Enterprise Application Fabric automatically distributes applications for Microsoft's .Net programming framework across thousands of servers without requiring programmers to change a single line of their original code.

The Limits of Multicore Computing

Just as Intel's dream of 10- and 30-gigahertz chips gave way to the pursuit of multicore computing, however, multicore itself might be around for a matter of years rather than decades. The efficiency of parallel systems declines with each added processor, as cores vie for the same data; there will come a point at which adding an additional core to a chip will actually slow it down. That may well set a practical limit on the multicore strategy long before we start buying hundred-core PCs.

Does it matter, though? While there may be applications that demand the power of many cores, most people aren't using those applications. Other than hard-core gamers, few people are complaining that their PCs are too slow. In fact, Microsoft has emphasized that Windows 7, the successor to the troubled Windows Vista, will use less processing power and memory than Vista--a move made necessary by the popularity of lower-power mobile computing platforms and the expected migration of PC applications to Internet-based servers. A cynic might say that the quest for ever-increasing processing power is strictly commercial--that semiconductor and computer companies, software vendors, and makers of mobile phones need us to buy new gizmos.

So what's the downside if multicore computing fails? What is the likely impact on our culture if we take a technical zig that should have been a zag and suddenly aren't capable of using all 64 processor cores in our future notebook computers?

"I can't wait!" says Steve Wozniak, the inventor of the Apple II. "The repeal of Moore's Law would create a renaissance for software development," he claims. "Only then will we finally be able to create software that will run on a stable and enduring platform."

"In schools," says Woz, "the life span of a desk is 25 years, a textbook is 10 years, and a computer is three years, tops. Which of these devices costs the most to buy and operate? Why, the PC, of course. Which has residual value when its useful life is over? Not the PC--it costs money to dispose of. At least books can be burned for heat. Until technology slows down enough for computing platforms to last long enough to be economically viable, they won't be truly intrinsic to education. So the end of Moore's Law, while it may look bad, would actually be very good."

Robert X. Cringely has written about technology for 30 years. He is the author of Accidental Empires: How the Boys of Silicon Valley Make Their Millions, Battle Foreign Competition, and Still Can't Get a Date.
link
"There are very few problems that cannot be solved by the suitable application of photon torpedoes
User avatar
Elessar
Padawan Learner
Posts: 281
Joined: 2004-10-06 02:56pm
Location: Toronto, ON

Re: nice article on parallel processor limitations

Post by Elessar »

It's a pretty good article that introduces the issues with concurrent programming to a layman audience. I don't like how the article blurs performance with speed though, and this:
Rather, unexpected problems with heat generation and power consumption have put a practical limit on processors' clock speeds, or the rate at which they can execute instructions.
It's a terrible misconception that shouldn't be made in an article of this nature.

Clock speed and instruction throughput aren't directly related in such simple terms. The vast majority of an instruction's processing time is spent waiting on memory: the further the data sits from the processor's registers (its internal memory), the longer a single instruction takes to finish. Each level of the memory hierarchy (L1 cache, L2 cache, main memory and, ultimately, disk) takes increasingly longer to retrieve the information an instruction needs to do its work.

So most processors use this wait time to work on other instructions. Combine this with the way the circuitry works (flip-flops propagating stored signals) and the result was a pipeline, where instructions are broken down into very tiny chunks and the same circuitry works on multiple instructions at the same time. While a single instruction could take a long time to finish, the overall number of instructions completed in any given time was very high, since the CPU's idle time was minimized. This led to all sorts of cool computer science, like branch prediction and prefetching (guessing which instructions and data are most likely to be needed next), so that the CPU spends as little time as possible stalled waiting on memory.

The longer the pipeline, the shorter each stage, the faster the clock speed. Yet the number of instructions executed per clock is much less than 1.

I suppose I can't expect too much technical detail from a pundit like Cringely. Hell, I'm handwaving and simplifying a whole lot in my explanation above... yet still, it irks me to have someone suggest that FASTER CLOCK = MORE INSTRUCTIONS. You'd think the megahertz myth would have been busted by now.
User avatar
Dave
Jedi Knight
Posts: 901
Joined: 2004-02-06 11:55pm
Location: Kansas City, MO

Re: nice article on parallel processor limitations

Post by Dave »

"I can't wait!" says Steve Wozniak, the inventor of the Apple II. "The repeal of Moore's Law would create a renaissance for software development," he claims. "Only then will we finally be able to create software that will run on a stable and enduring platform."
I've been thinking this for a while now. Why can't we get back to writing good programs instead of brute forcing everything because we can? Why can't we take a year off from the "HURRAH PROGRESS" of coming out with a new version, and just squash bugs? I'll gladly forgo a year of software progress to see major improvements in stability, security, compatibility, etc. across the board. Heck, that might be a good idea for society as a whole: once a decade, sit down and clear out all the crud and driftwood and junk in all kinds of systems, from political to corporate to economic and everything else.
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Re: nice article on parallel processor limitations

Post by Starglider »

dragon wrote:Intel and its competitors are still making processors that top out at less than four gigahertz,
Untrue. IBM's top-end POWER6 595 processor runs at 5.0 GHz. However, it is an in-order core.
something around five gigahertz has come to be seen, at least for now, as the maximum feasible speed for silicon technology.
Overclockers go faster - that doesn't guarantee that it will be practical for stock speeds to go that high, but frankly 5 GHz strikes me as rather conservative. 10 GHz I'd agree with; it doesn't look like we're going to get past that in a general-purpose CPU with current silicon technology.
But here's the thing: while the hardware problem of overheating chips lends itself nicely to the hardware solution of multicore computing, that solution gives rise in turn to a tricky software problem. How do you program for multiple processors?
The current answer is that you hire decent programmers who actually understand concurrency. Unfortunately these tend to be expensive and implementing custom parallelism solutions tends to make the project take longer to develop and debug. So really the question is 'how can we move to pervasively parallel applications without escalating development costs'.
"There aren't good tools yet to hide the parallelism or to make it obvious [how to achieve it]," Snir says.
Oh I don't know, there's Erlang, which is actually rather good, plus Occam was pretty cool in its day. Of course those are too hard to learn for the VB/PHP crowd.
When it comes to writing for parallel processors, though, programmers are using tools so rudimentary that James Larus, director of software architecture for the Data Center Futures project at Microsoft Research, likens them to the lowest-level and most difficult language a programmer can use.
They aren't that bad. Threads are already a fair abstraction and decent languages (e.g. Java) come with a whole raft of prebuilt concurrency primitives. It's certainly true that some more syntactic sugar over fork/join semantics could help, and that a lot of libraries could be more threaded and thread-friendly.
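As a minimal sketch of what those prebuilt primitives look like in practice (the array-sum workload here is made up purely for illustration): a fixed thread pool from java.util.concurrent farms chunks out across the available cores, and Futures join the partial results.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sum an array in parallel: each core gets one chunk, and the partial sums
// are joined through Futures. No hand-rolled thread management required.
public class PoolSum {
    public static void main(String[] args) throws Exception {
        int[] data = new int[1 << 20];
        java.util.Arrays.fill(data, 1);

        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);
        int chunk = data.length / cores;

        List<Future<Long>> parts = new ArrayList<>();
        for (int c = 0; c < cores; c++) {
            final int lo = c * chunk;
            final int hi = (c == cores - 1) ? data.length : lo + chunk;
            parts.add(pool.submit(() -> {
                long s = 0;
                for (int i = lo; i < hi; i++) s += data[i];
                return s;
            }));
        }

        long total = 0;
        for (Future<Long> f : parts) total += f.get(); // join the partial sums
        pool.shutdown();
        System.out.println("total = " + total); // 1048576
    }
}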

You can probably guess what I think the best solution would be, but that requires analysis of possible system state paths at the global level, which is generally computationally infeasible for standard compiler technology. Current auto-parallelisation only works on code that's nearly or purely functional, which is great at the local level but actually inhibits parallelism at the system level (I love to tease functional evangelists about the 'stack state bottleneck').
New programming languages like Clojure and Erlang were designed from the ground up for parallel computing. The popular Facebook chat application was written partly in Erlang.
New? Erlang was developed in the mid-80s.
The efficiency of parallel systems declines with each added processor, as cores vie for the same data; there will come a point at which adding an additional core to a chip will actually slow it down. That may well set a practical limit on the multicore strategy long before we start buying hundred-core PCs.
Well, it's a cost benefit thing. At some point the benefit of the extra cores will be so marginal that it will actually make economic sense to start putting artificial diamond substrates, nanotube interconnects, on-chip optronics etc into production. Those technologies have the potential to restart the clock speed race.
"I can't wait!" says Steve Wozniak, the inventor of the Apple II. "The repeal of Moore's Law would create a renaissance for software development," he claims. "Only then will we finally be able to create software that will run on a stable and enduring platform."
Bah. Pussy :)
Until technology slows down enough for computing platforms to last long enough to be economically viable, they won't be truly intrinsic to education. So the end of Moore's Law, while it may look bad, would actually be very good."
The problem here isn't Moore's law. It's bloated applications and planned obsolescence that make computers obsolete despite a 1 GHz Athlon being quite sufficient for 90% of educational and office applications. Linux based solutions are already addressing this.
User avatar
Xon
Sith Acolyte
Posts: 6206
Joined: 2002-07-16 06:12am
Location: Western Australia

Re: nice article on parallel processor limitations

Post by Xon »

The major problem with multi-thread/process programming is that there are many things which are fundamentally linear. Without radically redesigning the theory you are using, it is simply impossible to extract more than trivial parallelization. Embarrassingly parallel problems are largely worthless for desktop computing.

Modern OSs and CPUs already extract what is basically the maximum amount of automatic general-purpose parallelization by supporting pre-emptive multitasking: offloading IO tasks and allowing IO devices Direct Memory Access into other threads, automatically offloading checksum/SSL computations to hardware (if available), etc.

Simply hiring better programmers is not all that is required. Parallel computing is a fundamentally unsolved problem domain; the last 30-50 years of computing research really have not made any significant breakthroughs, and it looks to stay that way for a while to come.
"Okay, I'll have the truth with a side order of clarity." ~ Dr. Daniel Jackson.
"Reality has a well-known liberal bias." ~ Stephen Colbert
"One Drive, One Partition, the One True Path" ~ ars technica forums - warrens - on hhd partitioning schemes.
User avatar
sketerpot
Jedi Council Member
Posts: 1723
Joined: 2004-03-06 12:40pm
Location: San Francisco

Re: nice article on parallel processor limitations

Post by sketerpot »

It's instructive to consider how much speedup you can get with an infinite number of processors (disregarding communications bandwidth and latency issues). Take comparison-based sorting, for example. On a single processor, there's no way to sort an array of n elements in less than O(n lg n) time in general. Given an infinite number of processors, you can do it in O(lg n) time -- but no better.

Or consider calculating a fractal, where you parallelize at the level of the pixel. You farm each pixel out to a processor, and then wait for the results to come back. The run-time of this algorithm is the time it takes to calculate the slowest pixel. There's a big speedup, sure, but it's finite.
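A rough Java sketch of that per-pixel farm-out (a Mandelbrot set with made-up image dimensions): every pixel is independent, so a parallel stream can spread them across however many cores exist, and the wall-clock time is bounded by the slowest pixel plus scheduling overhead.

import java.util.stream.IntStream;

// Per-pixel parallelism: every pixel's escape time is independent, so the
// pixels can all be computed in parallel; the overall run-time is set by the
// slowest pixel (plus scheduling overhead).
public class MandelbrotPixels {
    static int escapeTime(double cr, double ci, int maxIter) {
        double zr = 0, zi = 0;
        int n = 0;
        while (zr * zr + zi * zi <= 4.0 && n < maxIter) {
            double t = zr * zr - zi * zi + cr;
            zi = 2 * zr * zi + ci;
            zr = t;
            n++;
        }
        return n;
    }

    public static void main(String[] args) {
        int w = 800, h = 600, maxIter = 1000;
        int[] image = new int[w * h];
        // Each index is one pixel; .parallel() farms them out across all cores.
        IntStream.range(0, w * h).parallel().forEach(p -> {
            double cr = -2.5 + 3.5 * (p % w) / w;
            double ci = -1.0 + 2.0 * (p / w) / (double) h;
            image[p] = escapeTime(cr, ci, maxIter);
        });
        System.out.println("centre pixel iterations: " + image[(h / 2) * w + w / 2]);
    }
}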

Check out Amdahl's law for more detailed information.
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Re: nice article on parallel processor limitations

Post by Starglider »

Xon wrote:The major problem with multi-thread/process programming is that there are many things which are fundamentally linear. Without radically redesigning the theory you are using, it is simply impossible to extract more than trivial parallelization. Embarrassingly parallel problems are largely worthless for desktop computing.
I disagree with that. Firstly, most 'desktop computation' is already fast enough; for stuff like office software we're at the point of deliberately searching for CPU-intensive bells and whistles to sell new hardware. Secondly, for the kind of things that actually do stress CPUs and take a frustratingly long time to complete, there is still plenty of parallelisation potential. We're not talking about farming everything out to thousands of minimalist GPU processors here - that's still a niche - we're talking about getting decent utilisation out of 4 to 16 general-purpose cores. In my experience this is mostly a programmer skill and project scope issue, and better automation to help with that will be difficult but not impossible.
Parallel computing is a fundamentally unsolved problem domain; the last 30-50 years of computing research really have not made any significant breakthroughs, and it looks to stay that way for a while to come.
To be fair, very little of that effort has been spent on thread-level parallelisation within applications. A huge amount of research went into developing reliable multitasking operating systems, out-of-order processors, vector-capable ISAs and the various special-purpose low-level tricks you mentioned. Threading didn't really get any attention until the 70s and even then it was a tiny niche that grew slowly through the 80s and 90s. Major effort didn't start to get thrown at the within-process medium-grained parallelism problems until the second half of the 90s, and it didn't start to become relevant to consumer applications until just a few years ago.
User avatar
Akkleptos
Jedi Knight
Posts: 643
Joined: 2008-12-17 02:14am
Location: Between grenades and H1N1.
Contact:

Re: nice article on parallel processor limitations

Post by Akkleptos »

Regarding the problem with core multiplicity, why can't each core host a particular process or application, if we're going to have a bunch of them? I'm far from being any kind of expert, but I'm thinking that would probably help stability a lot, especially if processes or applications use a delimited RAM space rather than having them all haphazardly share all of it. That way, current instruction processing would more easily retain its "linear" characteristics, and the need for highly trained parallelist programmers would be largely avoided.

Also, do we really need OS's with 3D desktop views, windows, animations, fancy visual effects and the like, when in most cases we could do pretty much the same with a text-only screen + strictly necessary graphics? Are we all so addicted to the cosmetics of an OS?

Hey, how much better and faster would even our bloody cellphones work if they had close-to-plain-text interfaces?

And...
Starglider wrote:The problem here isn't Moore's law. It's bloated applications and planned obsolescence that make computers obsolete despite a 1 GHz Athlon being quite sufficient for 90% of educational and office applications. Linux based solutions are already addressing this.
Rightly so! Really, why can't we have commercial software applications that are carefully and efficiently programmed, instead of having big companies periodically forcing upon us newer versions so bloated (by nearly useless features or just shoddy programming) that they end up eating up whatever RAM, processor cycles -and resources in general- your machine may have (no matter how fast the processor or big the RAM)? Why?

I agree with many here, in that I think we should do without the superfluous and keep from unnecessarily escalating computing power by not throwing more stuff at our computers just because they can handle their current workload well.
Life in Commodore 64:
10 OPEN "EYES",1,1
20 GET UP$:IF UP$="" THEN 20
30 GOTO BATHROOM
...
GENERATION 29
Don't like what I'm saying?
Take it up with my representative:
User avatar
General Zod
Never Shuts Up
Posts: 29211
Joined: 2003-11-18 03:08pm
Location: The Clearance Rack
Contact:

Re: nice article on parallel processor limitations

Post by General Zod »

Akkleptos wrote: Also, do we really need OS's with 3D desktop views, windows, animations, fancy visual effects and the like, when in most cases we could do pretty much the same with a text-only screen + strictly necessary graphics? Are we all so addicted to the cosmetics of an OS?
Text only interfaces are not accessible to the average person, and making things easier to use is one of the benefits of having a graphical interface. Besides which, arguing that we don't "need" a few aesthetic niceties in an operating system when there's literally thousands of applications that depend on 3d algorithms and modeling is ridiculous. They require minimal processing power to add, so why not include them if they make the interface more friendly to work with?
"It's you Americans. There's something about nipples you hate. If this were Germany, we'd be romping around naked on the stage here."
User avatar
Sarevok
The Fearless One
Posts: 10681
Joined: 2002-12-24 07:29am
Location: The Covenants last and final line of defense

Re: nice article on parallel processor limitations

Post by Sarevok »

General Zod wrote:
Akkleptos wrote: Also, do we really need OS's with 3D desktop views, windows, animations, fancy visual effects and the like, when in most cases we could do pretty much the same with a text-only screen + strictly necessary graphics? Are we all so addicted to the cosmetics of an OS?
Text only interfaces are not accessible to the average person, and making things easier to use is one of the benefits of having a graphical interface. Besides which, arguing that we don't "need" a few aesthetic niceties in an operating system when there's literally thousands of applications that depend on 3d algorithms and modeling is ridiculous. They require minimal processing power to add, so why not include them if they make the interface more friendly to work with?
While I agree that a text-only user interface would be clumsy, he had a point about over-the-top cosmetic adjustments to GUIs. Take Vista's GUI for example. It is just unnecessary overkill for people who just use computers for school or office work. Why would people who don't play games require a 3D graphics accelerator card with tons of memory and fancy shader model support just to run typical spreadsheets and wordprocessors ?
I have to tell you something everything I wrote above is a lie.
User avatar
DaveJB
Jedi Council Member
Posts: 1917
Joined: 2003-10-06 05:37pm
Location: Leeds, UK

Re: nice article on parallel processor limitations

Post by DaveJB »

Sarevok wrote:Why would people who don't play games require a 3D graphics accelerator card with tons of memory and fancy shader model support just to run typical spreadsheets and wordprocessors ?
Um... they don't. Aero3D runs practically as well on 2005-vintage Intel integrated graphics as it does on a Radeon 4870, and there's no real performance overheads involved. Not to mention that Vista Home Basic doesn't include Aero3D at all, and the other versions can easily step back to a traditional 2D UI. I agree that some elements of the Vista UI are superfluous - though hopefully Windows 7 is going to address this - but it's not exactly like we're at the stage where we need masses of horsepower just to run the UI yet (admittedly Vista is a performance hog, but that's not really related to the UI).
User avatar
Mad
Jedi Council Member
Posts: 1923
Joined: 2002-07-04 01:32am
Location: North Carolina, USA
Contact:

Re: nice article on parallel processor limitations

Post by Mad »

Akkleptos wrote:Regarding the problem with core multiplicity, why can't each core host a particular process or application, if we're going to have a bunch of them? I'm far from being any kind of expert, but I'm thinking that would probably help stability a lot, especially if processes or applications use a delimited RAM space rather than having them all haphazzardly share all of it.
It would offer no gains and would be less efficient than what we have now. Modern operating systems manage memory and processing cycles, and do it well.

Modern operating systems do not allow applications to access memory outside what they've been assigned, so there would be no stability gain and all kinds of wasted memory and other problems. Most programs use dynamic memory allocation, and memory use can vary drastically from one run to another. A graphics program, for example, isn't going to use nearly as much memory for editing a thumbnail as it would for editing an 8 megapixel photograph. That just can't be preallocated for each run of the software.

If you were to force an invalid write to land within the boundaries of the application's designated memory, in an attempt to avoid crashing the application and losing your unsaved work, then you've just increased the chance that your data becomes corrupted, then saved to disk, and then your saved work is lost anyway. Either way, the application has a bug and something bad could happen.
That way, current instruction processing would more easily retain its "linear" characteristics, and the need for highly trained parallellist programmers would be significantly avoided.
That would be effectively no different than writing single-threaded programs on multi-core systems, as can be done today. The problem is making a single application run faster, which increasingly means utilizing more than 1 processor or core.
Also, do we really need OS's with 3D desktop views, windows, animations, fancy visual effects and the like, when in most cases we could do pretty much the same with a text-only screen + strictly necessary graphics? Are we all so addicted to the cosmetics of an OS?
No, we don't need most of them. However, the graphics card should be doing most of the work there, so it doesn't really affect the performance of the machine.

Fancy visual effects also have the potential to increase productivity if done properly. I'm not saying we're there yet, but some things, like thumbnail images of applications when Alt-Tabbing or hovering over a taskbar item are helpful.
Hey, how much better and faster even our bloody cellphones would work if they had close-to plain text interfaces?
Probably not very much. How often are you utilizing 100% of your mobile phone's processor while fancy interface elements are going off? What real benefit would you gain?
Later...
User avatar
General Zod
Never Shuts Up
Posts: 29211
Joined: 2003-11-18 03:08pm
Location: The Clearance Rack
Contact:

Re: nice article on parallel processor limitations

Post by General Zod »

Sarevok wrote: While I agree that a text-only user interface would be clumsy, he had a point about over-the-top cosmetic adjustments to GUIs. Take Vista's GUI for example. It is just unnecessary overkill for people who just use computers for school or office work. Why would people who don't play games require a 3D graphics accelerator card with tons of memory and fancy shader model support just to run typical spreadsheets and wordprocessors ?
Who said anything about graphics cards? There's a reason Vista comes in different flavors, and if you don't play games or use any type of 3d based program (people who think gaming is the only purpose for high-end graphics cards are idiots), then there's hundreds of computers with integrated graphics sets out there that are far cheaper. Honestly, this is a ridiculous argument when you consider how customizable computers really are. Besides, past a certain threshold it doesn't matter how much you strip down the OS to its bare essentials, if it's not making use of all the memory you have then it's a waste to have that much memory in there.
"It's you Americans. There's something about nipples you hate. If this were Germany, we'd be romping around naked on the stage here."
User avatar
Sarevok
The Fearless One
Posts: 10681
Joined: 2002-12-24 07:29am
Location: The Covenants last and final line of defense

Re: nice article on parallel processor limitations

Post by Sarevok »

DaveJB wrote:
Sarevok wrote:Why would people who don't play games require a 3D graphics accelerator card with tons of memory and fancy shader model support just to run typical spreadsheets and wordprocessors ?
Um... they don't. Aero3D runs practically as well on 2005-vintage Intel integrated graphics as it does on a Radeon 4870, and there's no real performance overheads involved. Not to mention that Vista Home Basic doesn't include Aero3D at all, and the other versions can easily step back to a traditional 2D UI. I agree that some elements of the Vista UI are superfluous - though hopefully Windows 7 is going to address this - but it's not exactly like we're at the stage where we need masses of horsepower just to run the UI yet (admittedly Vista is a performance hog, but that's not really related to the UI).
A 2005 "vintage" graphics chip is still quite a monster. Win 95 had perfectly usable GUI even on 2 MB of VRAM. Why would you increase the requirments just because more sophisticated hardware exists ?
Who said anything about graphics cards? There's a reason Vista comes in different flavors, and if you don't play games or use any type of 3d based program (people who think gaming is the only purpose for high-end graphics cards are idiots), then there's hundreds of computers with integrated graphics sets out there that are far cheaper. Honestly, this is a ridiculous argument when you consider how customizable computers really are. Besides, past a certain threshold it doesn't matter how much you strip down the OS to its bare essentials, if it's not making use of all the memory you have then it's a waste to have that much memory in there.
No, you are failing to grasp the point. The point is that if you wanted to, you could make a tic-tac-toe game that needs 256 MB of RAM to run, given a suitable amount of bloatware. Then 2 years down the line you release Tic Tac Toe version 2.0, which needs 512 MB of RAM. The new version has even more shinies and cosmetic improvements, but you are still playing the same Tic Tac Toe that could work with even 4 kilobytes of memory. This is what is happening with software. They are adding more sparklies every year and making software more inefficient. So even for typing or sending an email you need to upgrade every few years. Software is getting slower while computers are getting faster. I am not saying this is bad. If you are happy with your uber quad corez blazing through bloatware coded by monstrously inefficient methods, then good for you.
I have to tell you something everything I wrote above is a lie.
User avatar
General Zod
Never Shuts Up
Posts: 29211
Joined: 2003-11-18 03:08pm
Location: The Clearance Rack
Contact:

Re: nice article on parallel processor limitations

Post by General Zod »

Sarevok wrote: No, you are failing to grasp the point. The point is that if you wanted to, you could make a tic-tac-toe game that needs 256 MB of RAM to run, given a suitable amount of bloatware. Then 2 years down the line you release Tic Tac Toe version 2.0, which needs 512 MB of RAM. The new version has even more shinies and cosmetic improvements, but you are still playing the same Tic Tac Toe that could work with even 4 kilobytes of memory. This is what is happening with software. They are adding more sparklies every year and making software more inefficient. So even for typing or sending an email you need to upgrade every few years. Software is getting slower while computers are getting faster. I am not saying this is bad. If you are happy with your uber quad corez blazing through bloatware coded by monstrously inefficient methods, then good for you.
This argument is mind numbingly stupid. What the fuck is the point in having all this memory if it's just sitting there and not doing anything? Ram that isn't being used is being wasted, and your argument relies on the assumption that the only improvements made to a given program are visual, which is grossly ignorant. The simple fact is that operating systems have to be designed to handle more than a few specialty apps with room for growth and exploiting future hardware advancement, which generally means they will have features that not everyone is going to use. But so what? It's not as if there's a lack of choice out there, or that everyone has to buy a top of the line gaming system. Your whole point reeks of an appeal to incredulity.
"It's you Americans. There's something about nipples you hate. If this were Germany, we'd be romping around naked on the stage here."
User avatar
DaveJB
Jedi Council Member
Posts: 1917
Joined: 2003-10-06 05:37pm
Location: Leeds, UK

Re: nice article on parallel processor limitations

Post by DaveJB »

Sarevok wrote:A 2005 "vintage" graphics chip is still quite a monster.
Not a 2005-era Intel graphics chip (more specifically, a 945G), which is what I was talking about. The only way such a chip could be described as a "monster" would be if it were being described by someone who's never seen anything more powerful than a Voodoo 2.
Win 95 had a perfectly usable GUI even on 2 MB of VRAM. Why would you increase the requirements just because more sophisticated hardware exists?
If Aero3D was the only GUI that Vista offered, then you would have a point. But it isn't, and you don't. You can step back to a 2D Aero-esque desktop, or even a Win2K style desktop, which would probably function just fine with a 2MB graphics card.
User avatar
Xon
Sith Acolyte
Posts: 6206
Joined: 2002-07-16 06:12am
Location: Western Australia

Re: nice article on parallel processor limitations

Post by Xon »

Sarevok wrote: A 2005 "vintage" graphics chip is still quite a monster. Win 95 had a perfectly usable GUI even on 2 MB of VRAM.
No it didn't. Windows 95 didn't even use VRAM for anything, so the difference between none and 2 MB is a waste of time. The overall minimum system requirement was 4 MB of RAM, and powerful machines at the time had 16 MB, but even that was crippling if you actually tried to do more than one thing at a time.

The concessions made to get the entire thing working were an utter mess which compromised the OS considerably. There is a reason Win95 was an unstable OS, and it was not because Microsoft were incompetent.
Why would you increase the requirements just because more sophisticated hardware exists?
Because the newer more sophisticated hardware is cheaper, and will be supported longer into the future than the older slower hardware.
"Okay, I'll have the truth with a side order of clarity." ~ Dr. Daniel Jackson.
"Reality has a well-known liberal bias." ~ Stephen Colbert
"One Drive, One Partition, the One True Path" ~ ars technica forums - warrens - on hhd partitioning schemes.
User avatar
Akkleptos
Jedi Knight
Posts: 643
Joined: 2008-12-17 02:14am
Location: Between grenades and H1N1.
Contact:

Re: nice article on parallel processor limitations

Post by Akkleptos »

Mad wrote:Modern operating systems manage memory and processing cycles, and do it well. Modern operating systems do not allow applications to access memory outside what they've been assigned, so there would be no stability gain and all kinds of wasted memory and other problems. Most programs use dynamic memory allocation, and memory use can vary drastically from one run to another. <snip>
If you were to force an invalid write to land within the boundaries of the application's designated memory, in an attempt to avoid crashing the application and losing your unsaved work, then you've just increased the chance that your data becomes corrupted, then saved to disk, and then your saved work is lost anyway. Either way, the application has a bug and something bad could happen.
Thanks for the explanation. I had always thought application-delimited memory would be a good idea. I hadn't imagined it would have that many undesirable consequences.
Mad wrote:Fancy visual effects also have the potential to increase productivity if done properly. I'm not saying we're there yet, but some things, like thumbnail images of applications when Alt-Tabbing or hovering over a taskbar item are helpful.
Indeed. Those I'd definitely go for. But not a dancing, swishing, dashing animated window upon maximising or minimising.
General Zod wrote:Text only interfaces are not accessible to the average person, and making things easier to use is one of the benefits of having a graphical interface.
True, but even GEOS for the Commodore 64 was quite a usable GUI, and it ran on a computer with only 64 KB of memory (yes, kilobytes!). MacOS and Windows have offered usable GUI solutions since the days of 386 processors and 32 MB of RAM. The fact is one doesn't need 3D desktops (a-la-Ubuntu, not that Aero-crap), animated-everything, and so on, if you're going to write documents, read and send emails, browse the web or play a handful of light games.
General Zod wrote:Besides which, arguing that we don't "need" a few aesthetic niceties in an operating system when there's literally thousands of applications that depend on 3d algorithms and modeling is ridiculous. They require minimal processing power to add, so why not include them if they make the interface more friendly to work with?
I beg to differ. It's not ridiculous in that if I don't do 3D modelling, FX, CG art, digital film editing and production, heavy gaming, complex mathematical graphics visualisation, etc., then I don't really need my machine to handle -and devote resources to- lots of graphics just to show the GUI. Besides, lots of animations, shades and flashes can make it confusing for many people.
Sarevok wrote:The point is that if you wanted to, you could make a tic-tac-toe game that needs 256 MB of RAM to run, given a suitable amount of bloatware. Then 2 years down the line you release Tic Tac Toe version 2.0, which needs 512 MB of RAM. The new version has even more shinies and cosmetic improvements, but you are still playing the same Tic Tac Toe that could work with even 4 kilobytes of memory. This is what is happening with software. They are adding more sparklies every year and making software more inefficient. So even for typing or sending an email you need to upgrade every few years. Software is getting slower while computers are getting faster.
Exactly. That's the point. But then:
General Zod wrote:This argument is mind numbingly stupid. What the fuck is the point in having all this memory if it's just sitting there and not doing anything? Ram that isn't being used is being wasted, and your argument relies on the assumption that the only improvements made to a given program are visual, which is grossly ignorant.
So, if I for some reason I were to have 10 spare tyres for my car, does that mean I should try and find a way to roll on all 10 of them at once, so as to not be wasteful? By such standards, Sarevok's hypothetical noughts-and-crosses (tic-tac-toe) 2.0 and its later upgrades would be just the thing for you. I do want my system to have more than enough resources for when I want to, say, edit and produce digital video, but I don't want every-single-bloody application to take over almost all of the memory and processor cycles and shoot my machine's performance to hell.

I'm the kind of bloke who likes to keep a bunch of application windows open at once, and Alt-Tab back and forth as needed. I've been doing that since, say, Win95 (hell! 3.11!). Every now and then, I opened an app-too-many and the thing crashed, of course. But it worked fine most of the time. But if I try to do that on my current system, I just know it WILL crash, eventually. I've been told I should get a new processor, more RAM, etc. And I understand. However, If I'm doing essentially the same stuff with my machine, then... why? Really, why? Now it's getting harder and harder to do the same, on computers that run at speeds an order of magnitude faster and with 32 times the RAM than what we had back then. That's why I ask.
Sarevok wrote:Why would you increase the requirements just because more sophisticated hardware exists?
I'd say because progress and advancement are desirable in computer games, CAD, film editing, FX and other graphics-intensive programs, but of course, one should have the choice whether to get and pay for the computing power and hardware to do that, if one needs it, or not.
Xon wrote:Because the newer more sophisticated hardware is cheaper, and will be supported longer into the future than the older slower hardware.
Well, yes, in principle. Of course, there's always the possibility of getting stuck with a piece of hardware that nobody in a few years will care about -let alone develop software or hardware for it (e.g. IBM's MCA, PC/1 ports, RS232, light pens, etc.)
Life in Commodore 64:
10 OPEN "EYES",1,1
20 GET UP$:IF UP$="" THEN 20
30 GOTO BATHROOM
...
GENERATION 29
Don't like what I'm saying?
Take it up with my representative:
User avatar
General Zod
Never Shuts Up
Posts: 29211
Joined: 2003-11-18 03:08pm
Location: The Clearance Rack
Contact:

Re: nice article on parallel processor limitations

Post by General Zod »

Akkleptos wrote: I beg to differ. It's not ridiculous in that if I don't do 3D modelling, FX, CG art, digital film editing and production, heavy gaming, complex mathematical graphics visualisation, etc., then I don't really need my machine to handle -and devote resources to- lots of graphics just to show the GUI. Besides, lots of animations, shades and flashes can make it confusing for many people.

If you don't need it, then fine. But don't try and pretend that everyone else wants, needs or can get by with the same thing. I'd also like to see an actual source for this bizarre claim of yours that lots of animations and shades can make things confusing for a lot of people.
So, if I for some reason I were to have 10 spare tyres for my car, does that mean I should try and find a way to roll on all 10 of them at once, so as to not be wasteful? By such standards, Sarevok's hypothetical noughts-and-crosses (tic-tac-toe) 2.0 and its later upgrades would be just the thing for you. I do want my system to have more than enough resources for when I want to, say, edit and produce digital video, but I don't want every-single-bloody application to take over almost all of the memory and processor cycles and shoot my machine's performance to hell.

This is quite possibly the most mind numbingly stupid analogy I've heard all week. Ram in your machine that's not doing anything is NOT analogous to spare parts. It would be more equivalent to having an engine that doesn't make the most out of its mileage because the owner is too stupid or too cheap to install a good fuel injection system.
I'm the kind of bloke who likes to keep a bunch of application windows open at once, and Alt-Tab back and forth as needed. I've been doing that since, say, Win95 (hell! 3.11!). Every now and then, I opened an app-too-many and the thing crashed, of course. But it worked fine most of the time. But if I try to do that on my current system, I just know it WILL crash, eventually. I've been told I should get a new processor, more RAM, etc. And I understand. However, If I'm doing essentially the same stuff with my machine, then... why? Really, why? Now it's getting harder and harder to do the same, on computers that run at speeds an order of magnitude faster and with 32 times the RAM than what we had back then. That's why I ask.

I'm not sure what your point is here, except you seem to think that we should completely cease all development on computers entirely just so you don't have to upgrade your hardware ever. Quite frankly your argument is nonsensical. I just bought a fairly decent laptop a few days ago, and I've had absolutely zero problem running every high-end app or game I could throw at it without crashing whatsoever. So the insane thought that adding more things makes it more likely to crash is simply bizarre and reeks of someone who doesn't know what the hell he's talking about. Unless you're trying to make high end programs run on outdated and ancient hardware; in which case what else did you expect?
I'd say because progress and advance is desirable in computer games, CAD, film editing, FX and other graphic-intensive programs, but of course, one should have the choice whether to get and pay for the computing power and hardware to do that, if one needs it, or not.
Either you're being intentionally dishonest or you're incredibly retarded. You already have this choice. The sheer fact that there's hundreds of distros of Linux suitable for all sorts of operating systems in addition to hundreds of pre-built PCs suitable for any kind of Windows environment out there for relatively little money makes this a complete and total non argument.
Well, yes, in principle. Of course, there's always the possibility of getting stuck with a piece of hardware that nobody in a few years will care about -let alone develop software or hardware for it (e.g. IBM's MCA, PC/1 ports, RS232, light pens, etc.)

So what? Most people are able to cope with this and have no trouble buying new components to replace them. Unless they're either cheapskates or so poor as to be unable to afford it, in which case you probably have more important things to worry about than replacing computer components.
"It's you Americans. There's something about nipples you hate. If this were Germany, we'd be romping around naked on the stage here."
User avatar
Akkleptos
Jedi Knight
Posts: 643
Joined: 2008-12-17 02:14am
Location: Between grenades and H1N1.
Contact:

Re: nice article on parallel processor limitations

Post by Akkleptos »

Akkleptos wrote: I beg to differ. It's not ridiculous in that if I don't do 3D modelling, FX, CG art, digital film editing and production, heavy gaming, complex mathematical graphics visualisation, etc., then I don't really need my machine to handle -and devote resources to- lots of graphics just to show the GUI. Besides, lots of animations, shades and flashes can make it confusing for many people.
General Zod wrote: If you don't need it, then fine. But don't try and pretend that everyone else wants, needs or can get by with the same thing. I'd also like to see an actual source for this bizarre claim of yours that lots of animations and shades can make things confusing for a lot of people.
Of course not. Some people do want or need the extra visual doodads. And regarding all the animations and shades, I should probably have said "distracting" rather than "confusing".
Akkleptos wrote:So, if I for some reason I were to have 10 spare tyres for my car, does that mean I should try and find a way to roll on all 10 of them at once, so as to not be wasteful? By such standards, Sarevok's hypothetical noughts-and-crosses (tic-tac-toe) 2.0 and its later upgrades would be just the thing for you. I do want my system to have more than enough resources for when I want to, say, edit and produce digital video, but I don't want every-single-bloody application to take over almost all of the memory and processor cycles and shoot my machine's performance to hell.
General Zod wrote: This is quite possibly the most mind-numbingly stupid analogy I've heard all week. RAM in your machine that's not doing anything is NOT analogous to spare parts. It would be more like having an engine that doesn't make the most of its mileage because the owner is too stupid or too cheap to install a good fuel injection system.
No, it's not. Okay, how about we swap the tyres in the analogy for HP? Should I run my car at breakneck speed at all times just because otherwise I'd be "wasting" precious HP that's already there, sitting under the bonnet of my car? Should I put stuff in all of my trousers' pockets because otherwise I would be "wasting" the valuable space?
Akkleptos wrote: I'm the kind of bloke who likes to keep a bunch of application windows open at once and Alt-Tab back and forth as needed. I've been doing that since, say, Win95 (hell, 3.11!). Every now and then I opened an app too many and the thing crashed, of course. But it worked fine most of the time. If I try to do that on my current system, though, I just know it WILL crash, eventually. I've been told I should get a new processor, more RAM, etc. And I understand. However, if I'm doing essentially the same stuff with my machine, then... why? Really, why? It's getting harder and harder to do the same things on computers that run at speeds an order of magnitude faster and with 32 times the RAM we had back then. That's why I ask.
General Zod wrote: I'm not sure what your point is here, except that you seem to think we should cease all development on computers entirely just so you never have to upgrade your hardware. Quite frankly, your argument is nonsensical. I just bought a fairly decent laptop a few days ago, and I've had absolutely zero problem running every high-end app or game I could throw at it, without a single crash. So the notion that adding more things makes it more likely to crash is simply bizarre and reeks of someone who doesn't know what the hell he's talking about. Unless you're trying to make high-end programs run on outdated, ancient hardware, in which case what else did you expect?
Not at all. My point there was precisely that if I do pretty much the same things with my system as I did some 10 years ago, why do the browser, the word processor, the MP3 player, the spreadsheet, the notebook, the JPG viewer, etc. have to guzzle so many more resources than they did back then? I'd understand it from the browser (after all, the web has more bells and whistles now), but the rest of them? And no, I don't want to have to buy a recent-model computer just to do things that a Pentium MMX at 200MHz did nicely. That's the point, right there.
General Zod wrote: Either you're being intentionally dishonest or you're incredibly retarded. You already have this choice. The sheer fact that there are hundreds of Linux distros suited to all sorts of systems, plus hundreds of pre-built PCs suitable for any kind of Windows environment, all available for relatively little money, makes this a complete and total non-argument.
Sure. Like most users know how to install and use Linux, and can tell just from reading the specs which system is better suited to their needs as accountants, students, real-estate agents, etc.
Akkleptos wrote: Well, yes, in principle. Of course, there's always the possibility of getting stuck with a piece of hardware that nobody will care about in a few years, let alone develop software or hardware for (e.g. IBM's MCA, PC/1 ports, RS-232, light pens, etc.).
General Zod wrote:So what? Most people are able to cope with this and have no trouble buying new components to replace them. Unless they're either cheapskates or so poor as to be unable to afford it, in which case you probably have more important things to worry about than replacing computer components.
I'm assuming that by "most people" you actually mean "most people in the US" or "most of the people I know". What about students? Kids who don't get enough of an allowance to save up for decent components? NGOs trying to provide remote villages with computers and internet access? Governments of poor countries wanting to provide schools with relatively modern, usable equipment? The cheap bastards! Even so, for companies it can be a great loss if they buy certain components by the thousands only to have them become obsolete later. That has never got anyone fired, ever... right?
Life in Commodore 64:
10 OPEN "EYES",1,1
20 GET UP$:IF UP$="" THEN 20
30 GOTO BATHROOM
...
GENERATION 29
Don't like what I'm saying?
Take it up with my representative:
User avatar
General Zod
Never Shuts Up
Posts: 29211
Joined: 2003-11-18 03:08pm
Location: The Clearance Rack
Contact:

Re: nice article on parallel processor limitations

Post by General Zod »

Akkleptos wrote: No, it's not. Okay, how about we swap the tyres in the analogy for HP? Should I run my car at breakneck speed at all times just because otherwise I'd be "wasting" precious HP that's already there, sitting under the bonnet of my car? Should I put stuff in all of my trousers' pockets because otherwise I would be "wasting" the valuable space?
I already gave a better comparison. If you can't read it's not my problem.
Akkleptos wrote: Not at all. My point there was precisely that if I do pretty much the same things with my system as I did some 10 years ago, why do the browser, the word processor, the MP3 player, the spreadsheet, the notebook, the JPG viewer, etc. have to guzzle so many more resources than they did back then? I'd understand it from the browser (after all, the web has more bells and whistles now), but the rest of them? And no, I don't want to have to buy a recent-model computer just to do things that a Pentium MMX at 200MHz did nicely. That's the point, right there.
Have you seen a modern spreadsheet or word processing program in the last 5 years? The sheer amount of functionality added to them is staggering compared to the basic setup you'd find in, say, 1990. Just because you don't use all of those features doesn't mean there aren't plenty of people who do.
Akkleptos wrote: Sure. Like most users know how to install and use Linux, and can tell just from reading the specs which system is better suited to their needs as accountants, students, real-estate agents, etc.
You do understand there's a reason that most companies tend to consider people who customize their own PCs as "advanced users", yes? The fact that many functional machines for these purposes can be bought pre-built for a few hundred dollars makes me think you're largely ignorant of how things are actually done in reality.
Akkleptos wrote: I'm assuming that by "most people" you actually mean "most people in the US" or "most of the people I know". What about students? Kids who don't get enough of an allowance to save up for decent components? NGOs trying to provide remote villages with computers and internet access? Governments of poor countries wanting to provide schools with relatively modern, usable equipment? The cheap bastards! Even so, for companies it can be a great loss if they buy certain components by the thousands only to have them become obsolete later. That has never got anyone fired, ever... right?
By "most people" I mean "anyone with a job that pays more than minimum wage". If you fall into categories where you have to choose between upgrading your components or, say, paying the rent, then I already addressed this. Chances are you have more substantial things to be worrying about than whether or not you can afford to buy new computer equipment. Given that many companies will offer bulk discounts to governments or educational facilities, your argument is even more nonsensical.
"It's you Americans. There's something about nipples you hate. If this were Germany, we'd be romping around naked on the stage here."
TempestSong
Youngling
Posts: 67
Joined: 2008-12-29 05:26pm

Re: nice article on parallel processor limitations

Post by TempestSong »

Akkleptos wrote: Not at all. My point there was precisely that if I do pretty much the same things with my system as I did some 10 years ago, why do the browser, the word processor, the MP3 player, the spreadsheet, the notebook, the JPG viewer, etc. have to guzzle so many more resources than they did back then? I'd understand it from the browser (after all, the web has more bells and whistles now), but the rest of them? And no, I don't want to have to buy a recent-model computer just to do things that a Pentium MMX at 200MHz did nicely. That's the point, right there.
I think you've forgotten just how slow computers were back in the day. Back in 1997 when I ran my old IBM with a 200MHz K6 and 24MB of RAM, it took about 15 seconds to load up a browser instance, and that was either AOL's internal browser or Internet Explorer 3.0. Nowadays, I can open Firefox in just under 1 second on my desktop; on my 900MHz Celeron Mobile EEE netbook with 1GB RAM, about 3-5 seconds, maybe slightly more if there's something else going on in the background. In addition, Microsoft Word 95 took about 20-30 seconds to fully load; on my EEE, it takes 5-10 seconds to load Office Word 2007.

It's easy to get spoiled by current technology, but to say that general requirements have gone up is only partially true. As Zod said, functionality has been added, but not enough to make computers from a relatively recent generation obsolete (my EEE is weaker than my desktop from 2000, but as mentioned it runs Word 2007 just fine).
User avatar
Xon
Sith Acolyte
Posts: 6206
Joined: 2002-07-16 06:12am
Location: Western Australia

Re: nice article on parallel processor limitations

Post by Xon »

Akkleptos wrote: So, if for some reason I were to have 10 spare tyres for my car, does that mean I should try and find a way to roll on all 10 of them at once, so as not to be wasteful? By such standards, Sarevok's hypothetical noughts-and-crosses (tic-tac-toe) 2.0 and its later upgrades would be just the thing for you. I do want my system to have more than enough resources for when I want to, say, edit and produce digital video, but I don't want every single bloody application to take over almost all of the memory and processor cycles and shoot my machine's performance to hell.
RAM does not work that way. There is no performance penalty, or even an energy cost, to having usable data sitting in memory compared to having nothing there. The CPU does have memory bandwidth limits, but those are on the order of gigabytes per second even for quite old DDR1 RAM, compared to roughly 4-8 GB/s for DDR2 and ~8-17 GB/s for DDR3.
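If you want a rough number for your own box, here's a minimal Python sketch (assuming NumPy is installed) that estimates effective copy bandwidth by timing a copy of a buffer much bigger than the CPU caches; the figure it prints will vary with your RAM generation and channel setup:

import time
import numpy as np

# Copy a buffer much larger than the CPU caches and time it. The copy reads
# and writes every byte, so effective traffic is roughly twice the buffer size.
N = 256 * 1024 * 1024              # 256 MB source buffer
src = np.ones(N, dtype=np.uint8)
dst = np.empty_like(src)

start = time.time()
np.copyto(dst, src)
elapsed = time.time() - start

print("~%.1f GB/s effective copy bandwidth" % (2 * N / elapsed / 1e9))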
Akkleptos wrote: I'm the kind of bloke who likes to keep a bunch of application windows open at once and Alt-Tab back and forth as needed. I've been doing that since, say, Win95 (hell, 3.11!). Every now and then I opened an app too many and the thing crashed, of course. But it worked fine most of the time. If I try to do that on my current system, though, I just know it WILL crash, eventually. I've been told I should get a new processor, more RAM, etc. And I understand. However, if I'm doing essentially the same stuff with my machine, then... why? Really, why? It's getting harder and harder to do the same things on computers that run at speeds an order of magnitude faster and with 32 times the RAM we had back then. That's why I ask.
The Win2k/WinXP/Vista line does not crash if you open several hundred or even thousands of applications. The limitation is purely virtual memory, which is the sum of your page file and physical memory. Win9x just falls over. Applications tend to fall over if they suddenly can't get more memory, which is one of the reasons you should always have a page file.
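As a quick sanity check, here's a small Python sketch (using the third-party psutil package, assumed installed) that prints the rough ceiling applications can allocate against, i.e. physical RAM plus page file:

import psutil  # third-party package, assumed installed

ram = psutil.virtual_memory().total   # physical RAM in bytes
swap = psutil.swap_memory().total     # page file / swap in bytes

print("Physical RAM : %.1f GiB" % (ram / 2.0**30))
print("Page file    : %.1f GiB" % (swap / 2.0**30))
print("Approx. commit ceiling: %.1f GiB" % ((ram + swap) / 2.0**30))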

Hell, my desktop Vista box has something like 54 processes and ~800 threads running right now. My fileserver has ~80 processes and just over 1,000 threads, and that isn't even counting the virtual machines it has running on it.
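If you want to count those on your own machine, a short sketch with the same psutil package (assumed installed, reasonably recent version) does it:

import psutil  # third-party package, assumed installed

# Walk every process the OS will let us see and total up their thread counts.
procs = list(psutil.process_iter(['num_threads']))
threads = sum(p.info['num_threads'] or 0 for p in procs)
print("%d processes, ~%d threads" % (len(procs), threads))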

Also, the answer is the hard disk. Hard disks simply have not advanced as fast as the rest of the computer and have always been its slowest component. If you really want to improve multitasking performance, throw more spindles at it. Separate hard disks for the OS, applications and data will give massive performance improvements.

No, multiple partitions on the same disk do not count. In fact, they make the problem worse.
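As a rough illustration of why seek-heavy multitasking hammers a single spindle, here's a Python sketch comparing sequential and random 4 KiB reads against a hypothetical scratch file (scratch.bin); the numbers only mean much if the file is too big to sit entirely in the OS cache:

import os, random, time

# Hypothetical scratch file; bump BLOCKS up on machines with lots of RAM so
# the file can't be served entirely from the OS cache.
PATH, BLOCK, BLOCKS, COUNT = "scratch.bin", 4096, 32768, 4096   # 128 MB file

with open(PATH, "wb") as f:          # create the scratch file once
    f.write(os.urandom(BLOCK * BLOCKS))

def timed_reads(offsets):
    # Read one 4 KiB block at each offset and return the elapsed time.
    with open(PATH, "rb") as f:
        start = time.time()
        for off in offsets:
            f.seek(off)
            f.read(BLOCK)
        return time.time() - start

sequential = timed_reads(range(0, BLOCK * COUNT, BLOCK))
scattered = timed_reads(random.sample(range(0, BLOCK * BLOCKS, BLOCK), COUNT))
print("sequential: %.2fs   random: %.2fs" % (sequential, scattered))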
Akkleptos wrote: Well, yes, in principle. Of course, there's always the possibility of getting stuck with a piece of hardware that nobody will care about in a few years, let alone develop software or hardware for (e.g. IBM's MCA, PC/1 ports, RS-232, light pens, etc.).
You completely miss the point. Newer versions of the same thing are simply faster/better versions as far as the OS knows or cares.
"Okay, I'll have the truth with a side order of clarity." ~ Dr. Daniel Jackson.
"Reality has a well-known liberal bias." ~ Stephen Colbert
"One Drive, One Partition, the One True Path" ~ ars technica forums - warrens - on hhd partitioning schemes.
User avatar
Sarevok
The Fearless One
Posts: 10681
Joined: 2002-12-24 07:29am
Location: The Covenants last and final line of defense

Re: nice article on parallel processor limitations

Post by Sarevok »

General Zod wrote: This argument is mind-numbingly stupid. What the fuck is the point in having all this memory if it's just sitting there and not doing anything? RAM that isn't being used is being wasted, and your argument relies on the assumption that the only improvements made to a given program are visual, which is grossly ignorant. The simple fact is that operating systems have to be designed to handle more than a few specialty apps, with room for growth and for exploiting future hardware advances, which generally means they will have features that not everyone is going to use. But so what? It's not as if there's a lack of choice out there, or that everyone has to buy a top-of-the-line gaming system. Your whole point reeks of an appeal to incredulity.
If you write wasteful programs you keep feeding the MOAR monster. The IT industry will never be satisfied with how much faster hardware becomes available. I don't wanna buy some 16-core 7.2 GHz machine ten years from now just to send an email. But neither do I want to stick with rusting computers. I want my future PC to respond way faster than current ones. Most computers I've used still slow down or freeze while just doing the "casual computer user" thingy. Opening a few Firefox, MS Word and MSN windows without the computer slowing down - is that a sin to ask?
I have to tell you something everything I wrote above is a lie.
User avatar
General Zod
Never Shuts Up
Posts: 29211
Joined: 2003-11-18 03:08pm
Location: The Clearance Rack
Contact:

Re: nice article on parallel processor limitations

Post by General Zod »

Sarevok wrote: If you write wasteful programs you keep feeding the MOAR monster. The IT industry will never be satisfied with how much faster hardware becomes available. I don't wanna buy some 16-core 7.2 GHz machine ten years from now just to send an email. But neither do I want to stick with rusting computers. I want my future PC to respond way faster than current ones. Most computers I've used still slow down or freeze while just doing the "casual computer user" thingy. Opening a few Firefox, MS Word and MSN windows without the computer slowing down - is that a sin to ask?
Just because a newer program will not run smoothly on your hardware does not mean it is being wasteful. This is backwards reasoning.
"It's you Americans. There's something about nipples you hate. If this were Germany, we'd be romping around naked on the stage here."