
Seemingly ignorant MS blogger

Posted: 2003-12-21 01:33am
by Pu-239
Not stupid, just ignorant about the latest developments in *nix - he appears to be an MS developer, and knows more than I do. Interesting though.

http://mikedimmick.blogspot.com/

Linux guru: Move quickly to new kernel (via Ian)

I'm not going to comment heavily on this, but basically the intention is to immediately shift the current stable version of the Linux kernel into security-patch-only mode, as soon as the new version 2.6.0 is released. That immediately puts users of the older version at a disadvantage. Also, historically, the first few releases of a new kernel have been poor - I recall that early 2.2 releases had terrible problems with disk corruption on some IDE drives.

Of course, you've got the source - you could back-port changes from the new kernel to the old one. If you're a programmer. And you're familiar with the kernel. And you have the time and the inclination. Oh, and the kernel developers haven't completely changed the interface to that part of the kernel. And you'd have to do that every time the main kernel got updated. No apt-get or rpm for you.

Now, OK, Microsoft hasn't released a full Service Pack for Windows NT 4.0 since October 1999, a full four months before Windows 2000 was released. But the software is still supported, and fixes are still being produced for it, more than seven years after initial release. It's just about to go out of mainstream support.

Windows 2000 has already had a service pack released after the release of Windows Server 2003 (SP4, released in June 2003), and it appears that a service pack 5 is planned (although no release date has been announced). We might expect SP5 to include some of the same security measures as XP Service Pack 2, although that could be wishful thinking on my part.

CNet reports that the reason this information came to light is that Silicon Graphics wanted to include their XFS journalled file system in 2.4, but it's only just completed. The original decision was that it wouldn't be included - after all, there are already three journalled file systems in Linux.

The trouble is, two of them - ext3 and ReiserFS - are widely regarded as a joke - they tend to lose data, or still require a lengthy fsck when rebooting. Keenspot lost many days of comics - particularly from Keenspace - due to ReiserFS on Linux 2.5. They also lost months of forum posts.

I'll admit I hadn't heard of IBM's JFS until reading this article. Maybe it actually works.

I'll just note here that if you want to add a new file system to Windows, you can get hold of the Installable File System Kit, which currently costs $899. Microsoft isn't yet guaranteeing that file systems written now will work on Longhorn, but I believe that third-party file systems written for NT 4.0 work all the way up to Windows Server 2003 using the same binary. If you want to add a new device driver, the Windows Driver Developers' Kit is free, apart from handling charges.

Oh yeah, and Windows NT has had a journalled file system since the beginning (NTFS).
Um... that's why, if you're looking for reliability, you fetch backports from your distro :roll: - and the commercial distros aren't switching over for a while anyway. Besides, 2.6.0 has undergone extensive testing. And Keenspot was stupid to keep important stuff on a DEVELOPMENT kernel anyway. :roll:
Keeping in the vein of the last post (sorta): eWeek: 2004: The Year Linux Grows Up (or Blows Up) (hey, I like my title better)

Dear gods, I hope not.

To me, Linux represents stagnation – an inability for the computer market to see past Unix. For many h4xx0rs, Unix is venerated in the same way that the Founding Fathers are venerated in the US: They (It) Can Do No Wrong.

The thing is, Unix was designed for systems where all the hardware was known and available at boot time, and recompiling your kernel to add a driver was acceptable. That simply isn't true now. Files could never be bigger than 4GB, 2038 was 60 years in the future, and handling 1 transaction a second was fine. Those assumptions don't hold true either.

I've been using Windows XP family systems for about four years now, and I don't see any need to, or have incentive to, change. I have tried various releases of Linux and found them uniformly awful.

I don't believe that Microsoft is capable of locking in by extending the server and client simultaneously (if that's even what they're doing, and even if they are trying to do so). I believe that the history of the software market bears this out - quality products succeed, even if priced higher than lower-quality products. Trying to undercut Microsoft is normally an exercise in futility - not because they're predatory or aggressive, or have more resources (though that helps), but because their product has succeeded because it meets a customer need. The only way you can make that work is if you reduce your costs of development and shipping...

...which is where we came in. I believe that the Open Source model can never equal the best of the commercial developers (which some teams at MS are). But that argument will have to wait for another day.

For myself, I can cope with supporting my Mum on Windows; I found it hard to support other CS/EE students on Linux.
Erm... the guy appears never to have heard of kernel modules (yes, sometimes you have to recompile, but not if you stick w/ the RH or SuSE default kernels, and the drivers can still be closed-source, like nVidia's). Also, I remembered that FAT32 has a max file size of 4GB (if he wants to use obsolete file systems as a basis for pointing out flaws) - this was a problem when I was trying to make a massive loopback device to extend my space-limited ReiserFS partition (not an issue anymore, since I resized the whole partition). And max file size on modern kernels is measured in TERAbytes. :roll: So I can say the windows/dos developers had no foresight :roll: . Doesn't Linux also have the ability to use more than 4GB of memory on 32-bit systems (individual apps can't though, right? and the limit includes swap, right?), while Windows is still constrained to 4?

Reading back through one of my entries on 1 December, I realise that I've talked about journalled file systems, but didn't explain what one is.

A journalled file system is one which records (on disk) the changes it's about to make before doing so. If the power fails, or an error occurs, while it's making the changes, it can then either reverse the changes made, or read forwards through the log to complete the changes. This allows the system to ensure that its changes are consistent.
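The idea above can be sketched in a few lines of toy Python - a made-up in-memory store to illustrate the log-then-apply pattern, not how any real filesystem is implemented:

```python
# Toy write-ahead log: record each change in a journal before applying it,
# so after a "crash" we can replay complete batches and drop incomplete ones.
# Purely illustrative -- all names here are invented for this sketch.

class JournalledStore:
    def __init__(self):
        self.data = {}      # the "on-disk" structures
        self.journal = []   # changes are recorded here first

    def begin(self):
        self.journal.append(("BEGIN",))

    def write(self, key, value):
        # Log the intended change; nothing touches self.data yet.
        self.journal.append(("SET", key, value))

    def commit(self):
        # Mark the batch complete, then apply it.
        self.journal.append(("COMMIT",))
        self.recover()

    def recover(self):
        # Replay the log: apply only batches that reached COMMIT;
        # an incomplete batch (power failed mid-write) is simply dropped,
        # leaving the structures consistent.
        batch, applied = [], []
        for rec in self.journal:
            if rec[0] == "BEGIN":
                batch = []
            elif rec[0] == "SET":
                batch.append(rec[1:])
            elif rec[0] == "COMMIT":
                applied.extend(batch)
                batch = []
        for key, value in applied:
            self.data[key] = value
        self.journal = []
```

A crash between `begin()` and `commit()` leaves the journal without a COMMIT record, so `recover()` rolls the whole batch back - which is exactly the consistency guarantee described above.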

These features allow the file system to be both fast, caching writes until a lot of changes can be made at once, and also reliable. Classic UNIX file systems cache writes and rely on a checking tool, fsck (File System ChecKer), to fix the mistakes that happen. Windows 98 FAT and FAT32 work in the same way, relying on scandisk to sort out the corrupted disk. VMS and other reliable systems use serialised access to file system structures: only one process can modify disk structures at a time, but this is s-l-o-w.

One thing that isn't often explained about JFSs is that only the disk structure is journalled. User data is not necessarily preserved. The file will be the right length, but might not have the right contents. If you need transactional behaviour (operations are either completely performed or completely rolled back) you need to implement this yourself. Windows NT does this for its registry, which is why you're less likely to get a trashed registry on this system (and indeed, this is safer than multiple individual configuration files).
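One common way applications get that "completely performed or completely rolled back" behaviour themselves is the write-temp-then-rename trick - a sketch of the general user-level technique (nothing specific to NT or any particular journalled FS; the function name is made up):

```python
# Atomic-replace sketch: write the new contents to a temp file in the
# same directory, force the data to disk, then rename over the original.
# Readers see either the old file or the new one, never a half-written mix.
import os
import tempfile

def atomic_write(path, data: bytes):
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory)
    try:
        os.write(fd, data)
        os.fsync(fd)          # force the file *data* to disk, not just metadata
    finally:
        os.close(fd)
    os.replace(tmp, path)     # atomic rename over any existing file
```

The rename is the commit point: if the machine dies before it, the original file is untouched and only a stray temp file is left behind.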

According to the big honking Longhorn architecture chart, Longhorn will gain built-in support for transactional access to files, and general transaction support.

Users of Pocket PCs might be surprised to learn that the file system implemented in the device's RAM is also transactional, as is access to the device's registry and property databases. AFAIK, Palm is not. However, the Pocket PC implementation has a 'feature' - changes to a property database appear not to be committed until you close the last handle to it. The CEDB OLE DB provider (which most PPC developers know as 'Pocket Access') doesn't close handles until you've Released the last Connection reference. If the device is suspended with an outstanding Connection open, the device rolls back and loses all your changes.

Unsurprisingly, we don't use ADOCE at work.
Ext3 mounted w/ data=journal provides the best integrity, but people don't use it because it's slow - all data is written to the journal before the main filesystem. Running sync periodically w/ other filesystems helps. Supposedly BSD is better at this.
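Running `sync` flushes everything system-wide; an individual app can do the same per-file with flush() + fsync() - a sketch of forcing the data (not just the metadata) to disk, which is exactly the gap metadata-only journalling leaves open:

```python
# Durable append: after write(), the data may still sit in Python's buffer
# or the kernel's page cache. flush() + fsync() pushes it to the disk
# before we return, so a crash immediately afterwards can't lose it.
import os

def durable_append(path, text):
    with open(path, "a") as f:
        f.write(text)
        f.flush()             # push Python's userspace buffer to the kernel
        os.fsync(f.fileno())  # ask the kernel to push it to the disk
```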