Akkleptos wrote:Regarding the problem with core multiplicity, why can't each core host a particular process or application, if we're going to have a bunch of them?
It can. Good operating systems try to maintain a stable core/process mapping (Vista doesn't, but Windows always lags Unix in implementing core OS features). This usually helps with performance, because it means the core-local cache is more likely to contain useful data.
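If you're curious what that looks like from the program's side: on Linux a process can even pin itself to a particular core with sched_setaffinity. The scheduler normally handles affinity for you, so treat this as a minimal Linux-specific sketch of the mechanism rather than something most applications should actually do:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);               /* ask for core 0 only */

    /* pid 0 = the calling process; after this, its working set tends
       to stay warm in core 0's local cache */
    if (sched_setaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_setaffinity");
        return 1;
    }
    printf("pinned to core 0\n");
    return 0;
}

Windows offers SetProcessAffinityMask for the same purpose.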
I'm far from being any kind of expert, but I'm thinking that would probably help stability a lot,
It has no effect on stability.
especially if processes or applications use a delimited RAM space rather than having them all haphazardly share all of it.
Virtual memory has been a 'solved problem' for decades and is, for all practical purposes, 100% effective at isolating user-space applications from each other's memory usage.
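You can see the isolation for yourself with a tiny Unix sketch: parent and child end up with the same virtual address for a variable, but a write in one never shows up in the other, because they're backed by different physical pages.

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int value = 42;                 /* lives at some virtual address in this process */

    if (fork() == 0) {
        /* child: same virtual address, but its own private copy */
        value = 99;
        printf("child:  %d at %p\n", value, (void *)&value);
        exit(0);
    }

    wait(NULL);
    /* parent still sees 42 -- the child's write never touched our memory */
    printf("parent: %d at %p\n", value, (void *)&value);
    return 0;
}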
That way, current instruction processing would more easily retain its "linear" characteristics, and the need for highly trained parallelist programmers would be largely avoided.
This is a level confusion. The current problem is harnessing parallelism within a particular application. The multi-app parallelism issues are pretty much solved and have been for at least a decade (even for behind-the-curve stuff like Windows).
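To make the distinction concrete: the hard part is splitting one program's own work into pieces that can run at the same time. Here's a deliberately trivial POSIX-threads sketch of that - real workloads almost never partition this cleanly, which is exactly why it's hard:

#include <pthread.h>
#include <stdio.h>

#define THREADS 4
#define CHUNK   1000000LL

static long long partial[THREADS];

static void *sum_chunk(void *arg)
{
    long id = (long)arg;
    long long s = 0;
    /* each thread sums its own slice of the range */
    for (long long i = id * CHUNK; i < (id + 1) * CHUNK; i++)
        s += i;
    partial[id] = s;
    return NULL;
}

int main(void)
{
    pthread_t t[THREADS];
    for (long i = 0; i < THREADS; i++)
        pthread_create(&t[i], NULL, sum_chunk, (void *)i);

    long long total = 0;
    for (int i = 0; i < THREADS; i++) {
        pthread_join(t[i], NULL);
        total += partial[i];
    }
    printf("total = %lld\n", total);
    return 0;
}

The OS already spreads separate applications across separate cores without any of this; that's the multi-app case that's been solved for ages.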
Also, do we really need OSes with 3D desktop views, windows, animations, fancy visual effects and the like, when in most cases we could do pretty much the same with a text-only screen + strictly necessary graphics? Are we all so addicted to the cosmetics of an OS?
The cost of UI sugar is negligible in a new PC. The 3D hardware is there anyway; you might as well use it. I agree that it's annoying when a shiny 3D interface is mandatory for no good reason, but there's no reason to avoid one either (except when it sucks up dev time better spent on the actual program).
Hey, how much better and faster even our bloody cellphones would work if they had close-to-plain-text interfaces?
Plain text interfaces confuse most normal humans. We use GUIs for a reason.
Really, why can't we have commercial software applications that are carefully and efficiently programmed, instead of having big companies periodically forcing upon us newer versions
Because no one wants to pay extra for the former, whereas people will pay regularly for the latter.
Mad wrote:It would offer no gains and would be less efficient than what we have now. Modern operating systems manage memory and processing cycles, and do it well.
With the exception of Vista's stupid random process/core bouncing, I'd agree. Of course work is ongoing in all major kernels to tweak this further, and there are some areas such as virtualisation and large page usage that are still improving fast, but mostly it's a solved problem.
Sarevok wrote:The point is that if you want a tic-tac-toe game that needs 256 MB of RAM to run, you can make one with suitable amounts of bloatware. Then 2 years down the line you release Tic Tac Toe version 2.0 that needs 512 MB of RAM. The new version has even more shinies and cosmetic improvements, but you are still playing the same Tic Tac Toe that could work with even 4 kilobytes of memory. This is what is happening with software. They are adding more sparklies every year and making software more inefficient.
The kind of bloat you're talking about is usually library and runtime bloat. The application programmers themselves aren't churning out exponentially more code each year, but they're linking in ever more crap. I personally use as few libraries and as few layers as possible, but I am constantly fighting (in the professional sense) people whose entire thought process seems to be 'MOAR DESIGN PATTERNS == GOOD, MOAR ABSTRACTION LAYERS == GOOD, MOAR OPEN SOURCE LIBRARIES USED == GOOD'. At least half the Apache foundation's software seems to be designed on those principles. Of course one solution is the automated programming tech I'm working on.
Akkleptos wrote:True, but even GEOS for the Commodore 64 was quite a usable GUI
Umm no. It was impressive that the programmers got it to work at all, but it was pretty much useless for any practical purpose. The Mac (original) and Amiga GUIs were ok, certainly better than Windows 3.1, but frankly Win95+Office95 really did establish a baseline for decent usable computer GUIs (of course some Unix/X based systems had been there for years, but those were rarities with small user bases).
So, if for some reason I were to have 10 spare tyres for my car, does that mean I should try and find a way to roll on all 10 of them at once, so as to not be wasteful?
Why the hell would you be carrying 10 tires around with no possible use in the first place?
Yes, if memory is there, it should be used if possible. Otherwise what would be the point of installing it in the computer? The idea of writing efficient programs is that the memory can be used for useful things rather than wasteful things, but it definitely should be used. It's not like using more of your memory takes more power or wears it out somehow.
Every now and then I opened one app too many and the thing crashed, of course. But it worked fine most of the time. But if I try to do that on my current system, I just know it WILL crash, eventually.
No it won't, unless you run out of space in your swap/page file, which frankly you have to try really hard to do these days. It will just slow down, but if you're only actually using one or two applications at once the swapping should be minimal. It's not like we still have only a few MB of memory and are constantly thrashing the disk; I don't think I've ever even used the swap on my workstation (to be fair, it does have 12 GB of RAM).
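If you don't believe me, check for yourself. On Linux the numbers are sitting in /proc/meminfo; a quick-and-dirty Linux-only sketch:

#include <stdio.h>
#include <string.h>

/* print total vs. free swap; on most modern desktops SwapFree
   stays almost equal to SwapTotal */
int main(void)
{
    FILE *f = fopen("/proc/meminfo", "r");
    if (!f) {
        perror("/proc/meminfo");
        return 1;
    }
    char line[128];
    while (fgets(line, sizeof line, f))
        if (!strncmp(line, "SwapTotal:", 10) || !strncmp(line, "SwapFree:", 9))
            fputs(line, stdout);
    fclose(f);
    return 0;
}

On Windows, Task Manager's commit charge tells you much the same story.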
General Zod wrote:I'd also like to see an actual source for this bizarre claim of yours that lots of animations and shades can make things confusing for a lot of people.
That's a real issue, which you'll cover if you take an HCI (Human-Computer Interaction) course. I can't think of any real OS that suffers from that problem though - unsurprising, as they all have huge amounts of testing these days. Usually you only see it from idiot web site designers, particularly in those all-Flash sites that were popular a few years back.
Should I run my car at breakneck speed at all times just because otherwise I would be "wasting" precious HP that is already there, sitting under the bonnet of my car?
Please stop making idiotic car analogies. Yes, if there were no safety or fuel consumption issues, then I would drive my car at its top speed all the time, because then I'd get where I wanted to go faster. Everyone else would too. Anyway, most of these analogies do more harm than good.
Not at all. My point there was precisely that if I do pretty much the same things with my system as I did some 10 years ago, why do the browser, the word processor, the MP3 player, the spreadsheet, the notebook, the JPG viewer etc. have to guzzle up so many more resources than they did back then?
Because most people are using more powerful computers, and those resources are used a) to deliver an incrementally better experience and b) to make people feel like they got some value out of buying a new PC. If you don't like it, use a stripped-down version of OpenOffice. Or hell, just run Office 2000; last time I checked it still opened virtually every filetype of interest.
General Zod wrote:Have you seen a modern spreadsheet or word processing program in the last 5 years? The sheer amount of functionality added to them is staggering compared to the basic setup you'd find in, say, 1990. Just because you don't use all of those features doesn't mean there isn't a wide variety of people who do.
To be fair, the in-memory footprint of most modern large programs could be improved by better modularisation (Java, C#, etc. give you a lot of free help with this over C++, but there's still manual work to do). This isn't usually done (or when it is done, load-on-demand isn't properly implemented, e.g. in PDF viewers) because it isn't a good use of developer time compared to more pressing requirements. Hardly any mainstream users actually care about memory requirements these days.
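For reference, the usual load-on-demand mechanism on Unix-likes is dlopen(); the sketch below is roughly the textbook example, pulling in libm only when it's first wanted. The same idea, done properly, is how something like a PDF-viewer component should avoid sitting in memory before you've actually opened a PDF. (libm.so.6 is the glibc name - adjust for your platform, and link with -ldl.)

#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    /* nothing from libm is mapped into the process until this point */
    void *lib = dlopen("libm.so.6", RTLD_LAZY);
    if (!lib) {
        fprintf(stderr, "%s\n", dlerror());
        return 1;
    }

    double (*cosine)(double) = (double (*)(double))dlsym(lib, "cos");
    if (cosine)
        printf("cos(0) = %f\n", cosine(0.0));

    dlclose(lib);
    return 0;
}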
TempestSong wrote:I think you've forgotten just how slow computers were back in the day. Back in 1997 when I ran my old IBM with a 200MHz K6 and 24MB of RAM, it took about 15 seconds to load up a browser instance, and that was either AOL's internal browser or Internet Explorer 3.0. Nowadays, I can open Firefox in just under 1 second on my desktop
A lot of that will be the hard disk; modern hard disks have a transfer rate about 100 times faster than mid-90s ones. Storage performance really has improved very fast, in every dimension except latency (and even that has improved quite a bit).
Xon wrote:To use another horrible car analogy, cars require maintenance. So do fucking computers. Except computers are easily orders of magnitude more complex.
Not necessarily. A lot of fixed-function servers are maintenance-free. A lot of home users never have to perform any maintenance on their systems, if they never uninstall anything, don't install junk and don't catch viruses. There is no real reason for computers to require maintenance; most cases where they do are lingering software reliability issues that the industry just hasn't gotten around to fixing yet. SSDs will remove one major source of mechanical failure, and if consumers can be satisfied with CPUs and PSUs low-power enough to cool passively, then there will be literally no moving parts.