Contents:

¶: Greetings From Heather Stern
(?)How can I turn one pc into two (effectively)?
(?)H/W detection in Debian? --or--
There's More Than One Way To Detect It
TMT1WTDI: not just for perl hackers anymore
(?)Regarding the paper entitled "Maintainability of the Linux Kernel" --or--
Linux Kernel Maintainability: Bees Can't Fly
but a Hurd of them might go Mach speed...
(?)routing to internet from home. Kernel 2.4

(¶) Greetings from Heather Stern

Whew! One thing I can say: there was a lot of good stuff this month. There are so many good things to say and I just can't edit them all.

But don't you worry. We've got something for everyone this month. Newbies can enjoy a list of apps designed to make setup a little more fun (or at minimum, a little less of a headache). The intelligentsia can see what the Gang thinks of some academic notions about the future of kernels. And everyone hungering for more about routing has something keen to get their teeth into. Experimenters... nice trick with two monitors, here.

In the world of Linux there's more to politicking than just the DMCA guys trying to get us to stop ever looking at "their" precious copyrighted works again. Among the Linux kernel folk there are snatches here and there of an ongoing debate about source code control systems. You see, BitKeeper has the power to do grand things... but for people who have not decided that they hate CVS, it's a bit of a pain to pull out small patches. For people who don't qualify to use BitKeeper under its only-almost-free license (you can't use it if you work for someone who produces a competing source code control system, if I read things right ... thus anyone who works for RH shouldn't, et al.) this is a bad thing.

For that matter I'm a bit of a programmer myself, but if I'm going to even glance in the kernel's direction, I need much smaller pieces to chew on, and I really didn't want to spend the better part of a month learning yet another source system. (Not being paid to do so was a guiding factor in this case.) I had to thrash around the net quite a bit to get a look at a much smaller portion of the whole.

So some of the kernel gang wrote some scripts to help them with using the somewhat friendly web interface (folks, these definitions of "friendly" still need a lot of work), and Larry threatened to close down bkweb if the bandwidth hit got too high. In my opinion, that was just about the worst thing he could have said at that moment - it highlights why people are trying to escape proprietary protocols. They want features, but Linux folk, having tasted the clean air of freedom, don't want to be locked indoors just because a roof over their code's head is good to have at times.

Don't get me wrong. Giant public mirrors of giant public projects are great things, and as far as I can tell Bitkeeper is still committed to a friendly hosting of the 2.5.x kernel source tree, among a huge number of other projects. Likewise Sourceforge. But we also need ways to be sure that the projects themselves can outlast the birth and death of companies, friendships, or the interest of any given individual to be a part of the project. The immortality of software depends on the right to copy it as much as you need to and store it anywhere or in any form you like. If the software you are using isn't immortal in this sense then neither are the documents, plans, hopes, or dreams that you store in it. More than the "viral freedom" clauses in the GPL or the "use it anywhere, just indemnify us for your dumb mistakes" nature of the MIT and BSDish licenses, this is the nature of the current free software movement. And you can quote me on that.

Readers, if you have any tales of your own escapes from proprietary environments into Linux native software, especially any where it has made your life a little more fun, then by all means, we'd love to see your articles and comments. Thank you, and have a great springtime.


(?) How can I turn one pc into two (effectively)?

From Chris Gibbs

Answered By Jimmy O'Regan, Jim Dennis

(?) Hi ya,

I have a dual headed system. I am not really happy with xinerama because having a different resolution on each monitor does not make sense for me, and having two separate Desktops for a single X session seems limiting. Neither solution works well for apps like kwintv.

But this is Linux! I don't just want to have my cake and eat it - I want the factory that makes it! What I really want is to have a PS/2 mouse and keyboard associated with one monitor, and a USB mouse and keyboard associated with the other monitor, and the ability not just to run X from each, but to have text mode available also.

The idea also being that I could have a text mode session and an X session at the same time; that way I can have kwintv fullscreen and play advmame in svga mode fullscreen at the same time ;-)

So how do I initialise the second video card (one PCI, one AGP) so I can make it the tty2 monitor or similar?

(!) [Jimmy] Google
http://www.google.com/linux?hl=en&lr=&ie=UTF-8&oe=utf-8&q=two+keyboards+two+mice+two+keyboards&btnG=Google+Search
came up with these links: http://www.ssc.com/pipermail/linux-list/1999-November/028191.html http://www.linuxplanet.com/linuxplanet/tutorials/3100/1

(?) Am I greedy or wot?

(!) [Jimmy] Nah, cost effective. "Able to maximise the potential of sparse resources". Some good CV-grade B.S.

These links are to articles about X. I already know I can have X however I want it across the monitors. That's easy...

What I want is separate text mode consoles. So, at the risk of repeating myself: how do I initialise the second video card for text mode (not for X), and how do I associate it with specific ttys?

(!) [Jimmy] Well, you could set up the first set for the console and use the second for X. Okay, not what you asked :) So, to your actual question.
The device should be /dev/fb1, or /dev/vcs1 and /dev/vcsa1 on older kernels. You should have better luck with a kernel with Framebuffer support - according to the Linux Console Project (http://linuxconsole.sourceforge.net) there's hotplug support & multiple monitor support. The Framebuffer HOWTO has a section on setting up two consoles (http://www.tldp.org/HOWTO/Framebuffer-HOWTO-14.html). The example focuses on setting up dual headed X again, but it should contain what you need - "an example command would be "con2fb /dev/fb1 /dev/tty6" to move virtual console number six over to the second monitor. Use Ctrl-Alt-F6 to move over to that console and see that it does indeed show up on the second monitor."
(!) [JimD] It's serendipitous that you should ask this question, since I just came across a slightly dated article on how to do this:
http://www.linuxplanet.com/linuxplanet/tutorials/3100/1
Some of the steps in this process might be unnecessary in newer versions of XFree86 and the kernel. I can't tell you for sure as I haven't tried this. Heck, I haven't even gotten around to configuring a dual headed Xinerama system, yet.
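Pulling Jimmy's framebuffer pointers together, here is a sketch of the second-console steps. This assumes a framebuffer-enabled kernel and the con2fb utility from the Framebuffer HOWTO; the module name is only an example (for a Matrox second card) - use whichever driver matches your hardware.

```shell
# Load a framebuffer driver for the second card (example module name):
modprobe matroxfb

# Verify that a second framebuffer device appeared:
ls -l /dev/fb1

# Move virtual console 6 onto the second monitor:
con2fb /dev/fb1 /dev/tty6

# Ctrl-Alt-F6 should now show up on the second head.
```

All of this needs root, and the exact device names depend on which card the kernel probes first.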

(?) There's More Than One Way To Detect It

TMT1WTDI: not just for perl hackers anymore

From Joydeep Bakshi

Answered By Rick Moen, Dave Bechtel, Heather Stern

(!) [Heather] All this is in response to last month's Help Wanted #1

(?) 1) kudzu is the DEFAULT H/W detection tool in RH, and harddrake in MDK. Is there anything in Debian?

(!) [Rick] As usual, the Debian answer is "Sure, which ones do you want?"
discover
Hardware identification system (thank you, Progeny Systems, Inc.), for various PCI, PCMCIA, and USB devices.
(!) [Dave]
apt-get update; apt-get install discover

(output of 'apt-cache search discover':)
discover - hardware identification system
discover-data - hardware lists for libdiscover1
libdiscover-dev - hardware identification library development files
libdiscover1 - hardware identification library
(!) [Heather] Worthwhile to also search on the words "detect" and "config" and "cfg" since many of the configurators or their helper apps have those words in their package names.

(?) discover only detects the h/w, but kudzu does one extra task: it also configures the h/w. Do you have any info on whether the latest version of discover does this auto-configuration? (I am on Debian 3.0.)

(!) [Rick] I'm unclear on what you mean by "configure the hardware". Discover scans the PCI, USB, IDE, PCMCIA, and SCSI buses. (Optionally, it scans ISA devices, and the parallel and serial ports.) It looks (by default) for all of these hardware types at boot time: bridge cdrom disk ethernet ide scsi sound usb video. Based on those probes, it does appropriate insmods and resetting of some device symlinks.
What problem are you trying to solve?
(!) [Heather] For many people there's a bit of a difference between "the machine notices the hardware" and "my apps which want to use a given piece of hardware work without me having to touch them." In fact, finishing up the magic that makes the second part happen is the province of various apps that help configure XFree86 (SaX2/SuSE, Xconfigurator/RedHat, XF86Setup and their kindred) - some of which are better at having that magical "just works" feeling than others. Others are surely called on by the fancier installation systems too. Thus Rick has a considerable list below.
For ide, scsi, cdrom it all seems rather simple; either the drives work, or they don't. I haven't seen any distros auto-detect that I have a cd burner and do any extra work for that, though.
PCMCIA and USB are both environments that are well aware of the hot swapping uses they're put to - generally once your cardbus bridge and usb hub types are detected, everything else goes well - or your device is too new to have a driver for its part of the puzzle. You must load up (or have automatically loaded by runlevels) the userland half of the support, though. (package names: pcmcia-cs, usbmgr)
There are apps to configure X, and one can hope that svgalib "just works" on its own, since it has some video detection effort built in. If you don't like what you get, try using a framebuffer-enabled kernel, then tell X to use the framebuffer device - slower, but darn near guaranteed to work. svgalib will spot your framebuffer and use it. My favorite svgalib app is zgv, and there are some games that use it too.
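As a sketch of that framebuffer fallback, the relevant XF86Config-4 fragment might look like this (XFree86 4.x syntax; the Identifier names are made up, and the rest of the file still needs its usual ServerLayout, Monitor, and input sections):

Section "Device"
    Identifier "fbcard"
    Driver     "fbdev"
EndSection

Section "Screen"
    Identifier "fbscreen"
    Device     "fbcard"
    DefaultDepth 16
EndSection

The "fbdev" driver simply reuses whatever mode the kernel framebuffer console already set up, which is why it is so hard to break.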
I know of no app which is sufficiently telepathic to decide what your network addresses should be, the first time through. However, if you're a mobile user, there are a number of apps that you can train to look for your few favorite hosting gateways and configure the rest magically from there, using data you gave them ahead of time. PCMCIA schemes can also be used to handle this.
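On Debian, that "train it for a few favorite locations" approach can be sketched with ifupdown's mapping stanza in /etc/network/interfaces. The chooser script and logical interface names here are hypothetical; the script receives the physical interface name and must print the chosen logical name on stdout:

mapping eth0
    script /usr/local/sbin/pick-location.sh
    map home
    map office

iface home inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1

iface office inet dhcp

With this in place, "ifup eth0" runs the script and brings the interface up as either "home" or "office", whichever the script decides.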
(!) [Rick]
kudzu, kudzu-vesa
Hardware-probing tool (thank you, Red Hat Software, Inc.) intended to be run at boot time. Requires hwdata package. kudzu-vesa is the VBE/DDC stuff for autodetecting monitor characteristics.
mdetect
Mouse device autodetection tool. If present, it will be used to aid XFree86 configuration tools.
printtool
Autodetection of printers and PPD support, via an enhanced version of Red Hat Software's Tk-based printtool. Requires the pconf-detect command-line utility for detecting parallel-port, USB, and network-connected printers (which can be installed separately as package pconf-detect).
read-edid
Hardware information-gathering tool for VESA PnP monitors. If present, it will be used to aid XFree86 configuration tools.
(!) [Heather] Used alone, it's an extremely weird way to ask the monitor what its preferred modelines are. Provided your monitor is bright enough to respond with an EDID block, the results can then be used to prepare an optimum X configuration. I say "be used" for this purpose because the results are very raw and you really want one of the apps that configure X to deal with this headache for you. Trust me - I've used it directly a few times.
(!) [Rick]
sndconfig
Sound configuration (thank you, Red Hat Software, Inc.), using isapnp detection. Requires kernel with OSS sound modules. Uses kudzu, aumix, and sox.
(!) [Dave] BTW, Knoppix also has excellent detection, and is also free and Debian-based: ftp://ftp.uni-kl.de/pub/linux/knoppix
(!) [Heather] Personally I found its sound configuration to be the best I've encountered; SuSE does a pretty good job if your card is supported under ALSA.
When you decide to roll your own kernel, it's critical to doublecheck which of the three available methods for sound setup you're using, so that you can compile the right modules in - ALSA, OSS, or kernel-native drivers. Debian's make-kpkg facility makes it easy to keep extra packages that depend directly on kernel parts - like pcmcia and alsa - in sync with your customizations, by letting you prepare a modules .deb file to go with your new kernel.
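A sketch of that make-kpkg workflow, assuming kernel source in /usr/src/linux and module sources such as alsa-source already unpacked under /usr/src/modules (the revision string is arbitrary):

```shell
cd /usr/src/linux
make menuconfig                              # pick your custom options
make-kpkg clean
make-kpkg --revision=custom.1.0 kernel_image
make-kpkg --revision=custom.1.0 modules_image
dpkg -i ../kernel-image-*.deb ../*-modules-*.deb
```

The modules_image target rebuilds everything under /usr/src/modules against the kernel tree you just configured, so the module .debs always match the kernel .deb.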
(!) [Rick]
hotplug
USB/PCI device hotplugging support, and network autoconfig.
nictools-nopci
Diagnostic and setup tools for many non-PCI ethernet cards
nictools-pci
Diagnostic and setup tools for many PCI ethernet cards.
mii-diag
"A little tool to manipulate network cards" (examines and sets the MII registers of network cards).

(?) 2) I have installed kudzu in Debian 3.0, but it is not running as a service. I need to execute the kudzu command manually.

(!) [Rick] No, pretty much the same thing in both cases. You're just used to seeing it run automatically via a System V init script in Red Hat. If you'd like it to be done likewise in Debian, copy /etc/init.d/skeleton to /etc/init.d/kudzu and modify it to do kudzu stuff. Then, use update-rc.d to populate the /etc/rc?.d/ runlevel directories.
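A minimal sketch of that recipe (run as root; what the script's start) case actually runs is up to you):

```shell
cp /etc/init.d/skeleton /etc/init.d/kudzu
# Edit /etc/init.d/kudzu so that its start) case runs /usr/sbin/kudzu.
chmod +x /etc/init.d/kudzu
# Create the S/K symlinks in the /etc/rc?.d directories:
update-rc.d kudzu defaults
```

"defaults" gives you start links in runlevels 2-5 and stop links in 0, 1, and 6, which is usually what a boot-time probe wants.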

(?) Finally, the exact solution. I was searching for this for so long. Rick, I can't figure out how to thank you enough. Take care.

(?) Moreover, it couldn't detect my Epson C21SX printer, but under MDK 9.0 kudzu detected the printer.

(!) [Heather] Perhaps it helpfully informed you what it used to get the printer going? Many of the rpm based systems are using CUPS as their print spooler; it's a little smoother under CUPS than some of its competitors to have it auto-configure printers by determining what weird driver they need under the hood. My own fancy Epson color printer needed gimp-print, which I linked into my boring little lpd environment happily using the linuxprinting.org "foomatic" entries. Some printers are supported directly by ghostscript... which you will need anyway, since many GUI apps produce postscript within their "print" or "print to file" features.
(!) [Rick] Would that be an Epson Stylus C21SX? I can't find anything quite like that name listed at:
http://www.linuxprinting.org/printer_list.cgi?make=Epson
I would guess this must be a really new, low-end inkjet printer.
The version of kudzu (and hwdata) you have in Debian's stable branch (3.0) is probably a bit old. That's an inherent part of what you always get on the stable branch. If you want versions that are a bit closer to the cutting edge, you might want to switch to the "testing" branch, which is currently the one named "sarge". To do that, edit /etc/apt/sources.list like this:
deb http://http.us.debian.org/debian testing main contrib non-free
deb http://non-us.debian.org/debian-non-US testing/non-US main contrib non-free
deb http://security.debian.org testing/updates main contrib non-free
deb http://security.debian.org stable/updates main contrib non-free
Then, do "apt-get update && apt-get dist-upgrade". Hilarity ensues. ;->
(OK, I'll be nice: This takes you off Debian-stable and onto a branch with a lower commitment on behalf of the Debian project to keep everything rock-solid, let alone security-updated. But you might like it.)

(?) a nice discussion. thanks a lot.

(!) [Rick] All of those information items are now in my cumulative Debian Tips collection, http://linuxmafia.com/debian/tips . (Pardon the dust.)

(?) OK, thanks a lot. You have clarified everything very well. Now I shouldn't have any problems regarding auto-detection in Debian.

Great site !

(!) [Heather] For anyone looking at this and thinking "Oy, I don't already have Debian installed, can I avoid this headache?" - Yes, you probably can, for a price. While Debian users from both commercial and homegrown computing environments alike get the great upgrade system, this is where getting one of the commercial variants of Debian can be worth the bucks for some people. Note that commercial distros usually come with a bunch of software which is definitely not free - and not legal to copy for your pals. How easy they make it to separate out what you could freely tweak, rewrite, or give away varies widely.
Libranet
http://www.libranet.com
Canadian company; text based installer based on, but just a little more tuned up than, the generic Debian one. Installs about a 600 MB "base" that's very usable, then offers to add some worthwhile software kits on your first boot.
Xandros
http://www.xandros.com
The current bearer of the torch that Corel Linux first lit. Reviews about it sing its newbie-friendly praises.
Lindows
http://www.lindows.com
Mostly arriving pre-installed in really cheap Linux machines near you in stores that you just wouldn't think of as computer shops. But it runs MSwin software out of the box too.
Progeny
http://www.progenylinux.com
More into offering professional services for your corporate or perhaps even industrial Linux needs than a distribution per se these days, they committed their installer program to the auspices of the Debian project. So it should be possible for someone to whip up install discs that use it instead of the usual geek-friendly text-menu installer.
If you find any old Corel Linux or Stormix discs lying around, they'll make an okay installer, provided your video card setup is old enough for them to deal with. After they succeed you'll want to poke around, see what they autodetected, take some notes, then upgrade the poor beasties to current Debian.
In a slightly less commercial vein,
Knoppix
http://www.knopper.net/knoppix
[Base page in German, multiple languages available] While not strictly designed as a distro for people to install, this runs-from-CD distribution has great hardware detection of its own accord, and a crude installer program is available. At minimum, you can boot from its CD, play around a bit, and take notes once it has detected and configured itself. If you can't take the hit from downloading a 700 MB CD all at once - it takes hours and hours on my link, and I'm faster than most modems - he lists a few places that will sell a recent disc and ship it to you.
Good-Day GNU-Linux
http://ggl.good-day.net
LWN's pointer went stale but this is where it moved to; the company produced sylpheed and has some interesting things bundled in this. It also looks like they preload notebooks, but I can't read Japanese to tell you more.
And of course the usual Debian installer discs.
Anytime you can ask a manufacturer to preload Linux - even if you plan to replace it with another flavor - let them. You will tell them that you're a Linux and not a Windows user, and you'll get to look at the preconfiguration they put in. If they had to write any custom drivers, you can preserve them for your new installation. Likewise whatever time they put into the config files.
There's a stack more at the LWN Distributions page (http://old.lwn.net/Distributions) if you search on the word Debian, although many are localized, some are specialty distros, and a few are based on older forms of the distro.

(?) Linux Kernel Maintainability: Bees Can't Fly

but a Hurd of them might go Mach speed...

From Beth Richardson

Answered By Jim Dennis, Jason Creigton, Benjamin A. Okopnik, Kapil Hari Paranjape, Dan Wilder, Pradeep Padala, Heather Stern

Hello,

I am a Linux fan and user (although a newbie). Recently I read the paper entitled "Maintainability of the Linux Kernel" (http://www.isse.gmu.edu/faculty/ofut/rsrch/papers/linux-maint.pdf) in a course I am enrolled in at Colorado State University. The paper is essentially saying that the Linux kernel is growing linearly, but that common coupling (if, like me, you cannot remember which kind of coupling is which: think global variables here) is increasing at an exponential rate. Side note, for what it is worth: the paper was published in what I have been told is one of the "most respected" software journals.

I have searched around on the web and have been unable to find any kind of a reply to this paper from a knowledgeable Linux supporter. I would be very interested in hearing the viewpoint from the "other side" of this issue!

Thanks for your time, Beth Richardson

(!) [JimD] Basically it sounds like they're trying to prove that bees can't fly.
(Traditional aerodynamic theories and the Bernoulli principle can't be used to explain how bees and houseflies remain aloft; this is actually proof of some limitations in those theories. In reality the weight of a bee or a fly relative to air density means the insect can do something a little closer to "swimming" through the air --- their mass makes air relatively viscous to them. Traditional aerodynamic formulae are written to cover the case where the mass of the aircraft is so high vs. air density that some factors can be ignored.)
I glanced at the article, which is written in typically opaque academic style. In other words, it's hard to read. I'll admit that I didn't have the time to analyze (decipher) it; and I don't have the stature of any of these researchers. However, you've asked me, so I'll give my unqualified opinion.
Basically they're predicting that maintenance of the Linux kernel will grow increasingly difficult over time because a large number of new developments (modules, device drivers, etc.) are "coupled" to (depend on) a core set of global variables.

(?) [Jason] Wouldn't this affect any OS? I view modules/device drivers depending on a core as a good thing, when compared to the alternative, which is depending on a wide range of variables. (Or perhaps the writers have a different idea in mind. But what other alternative to depending on a core would there be, other than depending on a lot of things?)

(!) [Ben] You said it yourself further down; "micro-kernel". It seems to be the favorite rant of the ivory-tower CS academic (by their maunderings shall ye know them...), although proof of this world-shattering marvel seems to be long in coming. Hurd's Mach kernel's been out, what, a year and more?
(!) [Kapil] Here comes a Hurd of skeletons out of my closet! Being a very marginal Hurd hacker myself, I couldn't let some of the remarks about the Hurd pass. Most of the things below have been better written of elsewhere by more competent people (the Hurd Wiki for example, http://hurd.gnufans.org) but here goes anyway...
The Mach micro-kernel is what the Hurd runs on the top of. In some ways Hurd/Mach is more like Apache/Linux. Mach is not a part of the Hurd. The newer Hurd runs over the top of a version of Mach built using Utah's "oskit". Others have run the "Hurd" over "L-4" and other micro-kernels.
The lack of hardware and other support in the existing micro-kernels is certainly one of the things that is holding back the common acceptance of the Hurd. (For example, neither "mach" nor "oskit" has support for my video card--i810--for which support came late in the Linux 2.2 series.)
Now, if only Linux were written in a sufficiently "de-coupled" way to allow the stripping away of the file-system and execution system, we would have a good micro-kernel already! The way things are, the "oskit" guys are perennially playing catch-up to incorporate Linux kernel drivers. Since these drivers are not sufficiently de-coupled, they are harder to incorporate.
(!) [JimD] This suggests that the programming models are too divergent in some ways. For each class of device there are a small number of operations (fops, bops, ioctl()s) that have to be supported (open, seek, close, read, write, etc). There are relatively few interactions with the rest of the kernel for most of this (which is why simple device driver coding is in a different class from other forms of kernel hacking).
The hardest part of device driver coding is getting enough information from a vendor to actually implement each required operation. In some cases there are significant complications for some very complex devices (particularly in the case of video drivers; which, under Linux sans framebuffer drivers, are often implemented in user space by XFree86.)
It's hard to imagine that any one device driver would be that difficult to port from Linux to any other reasonable OS. Of course the fact that there are THOUSANDS of device drivers and variants within each device driver does make it more difficult. It suggests the HURD needs thousands (or at least hundreds) of people working on the porting. Obviously, if five hundred HURD hackers could crank out a device driver every 2 months for about a year --- they'd probably be caught up with Linux device driver support.
Of course I've only written one device driver for Linux (and that was a dirt simple watchdog driver for a NAS system motherboard) and helped on a couple more (MTD/flash drivers, same hardware). It's not so much "writing a driver" as plugging a few new values into someone else's driver, and reworking a few bits here or there.
One wonders if many device drivers could be consolidated into some form of very clever table-driven code. (Undoubtedly what the UDI movement of a few years ago was trying to foist on everyone.)
(!) [Kapil] On the other side, Linux "interferes too much" with user processes, making Hurd/Linux quite hard and probably impossible---but one can dream...
(!) [JimD] Linux was running on Mach (mkLinux) about 5 years ago. I seem to recall that someone was running a port of Linux (or mkLinux) on an L4 microkernel about 4 years ago (on a PA RISC system if I recall correctly).
It's still not HURD/Linux --- but, as you say, it could (eventually) be.
Linux isn't really monolithic, but it certainly isn't a microkernel. This bothers purists; but it works.
Future releases of Linux might focus considerably more on restructuring the code, providing greater modularity and massively increasing the number of build-time configuration options. Normal users (server and workstation) don't want more kernel configuration options. However, embedded systems and hardware engineers (especially for the big NUMA machines and clustering systems) need them. So the toolchain and build environment for the Linux kernel will have to be refined.
As for features we don't have yet (in the mainstream Linux kernel): translucent/overlay/union filesystems, transparent process checkpoint and restore, true swapping (in addition to paging; it might come naturally out of checkpointing), network console, SSI (single system image) HA clustering (something like VAX clusters would be nice, from what I hear), and the crashdump, interactive debuggers, trace toolkit, dprobes and other stuff that was "left out" of 2.5 in the later stages before the feature freeze last year.
I'm sure there are things I'm forgetting and others that I've never even thought of. With all the journaling, EAs and ACLs, and the LSM hooks and various MAC (mandatory access control) mechanisms in LIDS, SELinux, LOMAC, RSBAC and other patches, we aren't missing much that was ever available in other forms of UNIX or other server operating systems. (The new IPSec and crypto code will also need considerable refinement.)
After that, maybe Linux really will settle down to maintenance: to optimization, restructuring, consolidation, and dead code removal. Linus might well find that stuff terminally boring and move on to some new project.

(?) [JimD] What else is there to add to the kernel?

(!) [Pradeep] Like my advisor says: everything that has never been thought of before. :-) A lot of people feel the same about systems research. I am planning to specialize in systems. What do you guys think about systems research? Is it as pessimistic as Rob Pike sounds? http://www.cs.bell-labs.com/who/rob/utah2000.pdf
(!) [Dan] Some would say, "streams". (he ducks!)
(!) [JimD] LiS is there for those that really need it. It'll probably never be in the mainstream kernel. However, I envision something like a cross between the Debian APT system and the FreeBSD ports system (or LNX-BBC's Gar or Gentoo's source/package systems) for the kernel.
In this case some niche, non-mainstream kernel patches would not be included in Linus' tarball, but hooks would be found in a vendor augmented kbuild (and/or Makefile collection) that could present options for many additional patches (like the FOLK/WOLK {Fully,Working} OverLoaded Kernel). If you selected any of these enhancements then the appropriate set of patches would be automatically fetched and applied, and any submenus to the configuration dialog would appear.
Such a system would have the benefit of allowing Linus to keep working exactly as he does now, keeping pristine kernels, while making it vastly easier for sysadmins and developers to incorporate those patches that they want to try.
If it was done right it would be part of UnitedLinux, Red Hat, and Debian. There would be a small independent group that would maintain the augmented build system.
The biggest technical hurdle would be patch ordering. In some cases portions of some patches might have to be consolidated into one or more patches that exist solely to prevent unintended dependency loops. We see this among Debian GNU/Linux patches fairly often --- though those are binary package dependencies rather than source code patch dependencies. We'd never want a case where you had to include LiS patches because the patch maintainer applied it first in his/her sequence and one of its changes became the context for another patch --- some patch that didn't functionally depend on LiS but only seemed to for context.
I think something like this was part of Eric S. Raymond's vision for his ill-fated CML2. However, ESR's work wasn't in vain; a kbuild system in C was written and will be refined over time. Eventually it may develop into something with the same features that he wanted to see (though it will take longer).
As examples of more radical changes that some niches might need or want in their kernels: there used to be a suite of 'ps' utilities that worked without needing /proc. The traditional ps utils worked by walking through /dev/kmem, traversing a couple of data structures there. I even remember seeing another "devps" suite, which provided a simple device interface alternative to proc. The purpose of this was to allow deeply embedded, tightly memory constrained kernels to work in a smaller footprint. These run applications that have little or no need for some of the introspection that is provided by /proc trees, and have only the most minimal process control needs. It may be that /proc has become so interwoven into the Linux internals that a kernel simply can't be built without it (that the build option simply affects whether /proc is visible from userspace). These embedded systems engineers might still want to introduce a fair number of #defines to optionally trim out large parts of /proc. Another example is the patch I read about that effectively redefines the printk macro as a C comment, thus making a megabyte (uncompressed) of printk() calls disappear in the pre-processor pass.
These are things that normal users (general purpose servers and workstations) should NOT mess with. Things that would break a variety of general purpose programs. However, they can be vital to some niches. I doubt we'll ever see Linux compete with eCOS on the low end; but having a healthy overlap would be good.

(?) [JimD] Are there any major 32 or 64 bit processors to which Linux has not been ported?

(!) [Ben] I don't mean to denigrate the effort of the folks that wrote Hurd, but... so what? Linux serenely rolls on (though how something as horrible, antiquated, and useless as a monolithic kernel can hold its head up given the existence of The One True Kernel is a puzzle), and cooked spaghetti still sticks to the ceiling. All is (amazingly) still right with the world.
(!) [Jason] You know, every time I get to thinking about what the Linux kernel should have, I find out it's in 2.5. Really. I was thinking, Linux is great but it needs better security, more than just standard Linux permissions. Then I look at 2.5: Linux Security Modules. Well, we need a generic way to assign attributes to files, other than the permission bits. 2.5 has extended attributes (name:value pairs at the inode level) and extended POSIX ACLs.
(!) [Ben] That's the key, AFAIC; a 99% solution that's being worked on by thousands of people is miles better than a 100% solution that's still under development. It's one of the things I love most about Linux; the amazing roiling, boiling cauldron of creative ideas I see implemented in each new kernel and presented on Freshmeat. <grin> The damn thing's alive, I tell ya.
(!) [Kapil] What you are saying is true and is (according to me) the reason why people will be running the Hurd a few years from now!
The point is that many features of micro-kernels (such as a user-process running its own filesystem and execution system a la user-mode-linux) are becoming features of Linux. At some point folks will say "Wait a minute! I'm only using the (micro) kernel part of Linux as root. Why don't I move all the other stuff into user space?" At this point they will be running the Hurd/Linux or something like it.
Think of the situation in '89-'91 when folks on DOS or Minix were jumping through hoops in order to make their boxes run gcc and emacs. Suddenly, the hoops could be removed because of Linux. In the same way, the "coupled" parts of Linux are preventing some people from doing things they would like to do with their systems. As more people are obstructed by those parts --- voila, Linux becomes (or gives way to) a micro-kernel based system.
Didn't someone say "... and the state shall wither away"?
(!) [Heather] Yes, but it's been said:
"Do not confuse the assignment of blame with the solution to the problem. In space, it is far more vital to fix your air leak than to find the man with the pin." - Fiona L. Zimmer
Problems as experienced by sysadmins and users are not solely the fault of designs or languages selected to write our code in.
...and also:
"Established technology tends to persist in the face of new technology." - G. Blaauw, one of the designers of System 360
...not coincidentally, at least in our world, likely to persist inside "new" technology, as well, possibly in the form of "intuitive" keystrokes and "standard" protocols which would not be the results if designs were started fresh. Of course truly fresh implementations take a while to complete, which brings us back to the case of the partially completed Hurd environment very neatly.
(!) [JimD] Thus any change to the core requires an explosion of changes to all the modules which depended upon it. They are correct (to a degree). However they gloss over a few points (lying with statistics).
First point: no one said that maintaining and developing kernels should be easy. It is recognized as one of the most difficult undertakings in programming (whether it's an operating system kernel or an RDBMS "engine" --- kernel). "Difficult" is subjective. It falls far short of "impossible."
Second point: They accept it as already proven that "common" coupling leads to increasing numbers of regression faults (giving references to other documents that allege to prove this) and then they provide metrics about what they are calling common coupling. Nowhere do they give an example of one variable that is "common coupled" and explain how different things are coupled to it. Nor do they show an example of how the kernel might be "restructured with common coupling reduced to a bare minimum" (p.13).
So, it's a research paper that was funded by the NSF (National Science Foundation). I'm sure the authors got good grades on it. However, like too much academic "work" it is of little consequence to the rest of us. They fail to show a practical alternative and fail to enlighten us.
Mostly this paper sounds like the periodic whining that used to come up on the kernel mailing list: "Linux should be re-written in C++ and should be based on an object-oriented design." The usual response amounts to: go to it; come back when you want to show us a working prototype.

(?) [Jason] Couldn't parts of the kernel be written in C, and others in C++? (okay, technically it would probably all be C++ if such a shift did occur, but you can write C in a C++ compiler just fine. Right? Or maybe I just don't know what I'm talking about.)

(!) [Pradeep] There are many view points to this. But why would you want to rewrite parts of it in C++?
The popular answer is: C++ is object-oriented; it has polymorphism, inheritance, etc. Umm, I can do all that in C, and kernel folks have used those methods extensively. Function pointers and gotos may not be as clean as real virtual functions and exception handling, but those C++ features come with a price. The compilers haven't progressed enough to deliver performance equivalent to hand-written C code.
(!) [Dan] At one point, oh, maybe it was in the 1.3 kernel days, Linus proposed moving kernel development to C++.
The developer community roundly shot down the idea. What you say about C++ compilers was true in spades with respect to the g++ of those days.
(!) [Pradeep] What is the status of g++ today? I still see a performance hit when I compile my C programs with g++. Compilation time is also a major factor; g++ takes a lot of time to compile, especially with templates.
(!) [JimD] I'm sure that the authors would argue that "better programming and design techniques" (undoubtedly on the agenda for their next NSF grant proposal) would result in less of this "common" coupling and more of the "elite" coupling. (Personally I have no problem coupling with commoners --- just do it safely!)
As for writing "parts" of Linux in C++ --- there is the rather major issue of identifier mangling. In order to support polymorphism and especially function overloading and over-riding, C++ compilers have to modify the identifiers in their symbol tables in ways that C compilers never have to do. As a consequence of this it is very difficult to link C and C++ modules. Remember, loadable modules in Linux are linkable .o files. It just so happens that they are dynamically loaded (a little like some .so files in user space, through the dlopen() API --- but different because this is kernel space and you can't use dlopen() or anything like it).
I can only guess about how bad this issue would be, but a quick perusal of the first C++ FAQ that I could find on the topic:
http://users.utu.fi/sisasa/oasis/cppfaq/mixing-c-and-cpp.html
... doesn't sound promising.
(!) [JimD] I'm also disappointed that the only quotations in this paper were the ones of Ken Thompson claiming that Linux will "not be very successful in the long run" (repeated TWICE in their 15 page paper) and that Linux is less reliable (in his experience) than MS Windows.

(?) [Jason] I'm reminded of a quote: "Linux is obsolete" -- Andrew Tanenbaum. He said this in the (now) famous flame-war between himself and Linus Torvalds. His main argument was that micro-kernels are better than monolithic kernels and thus Linux was terribly outdated. (His other point was that Linux wasn't portable.) BTW, I plan to get my hands on some Debian/hurd (Or is that "GNU/hurd"? :-) ) CDs so I can see for myself what the fuss over micro-kernels is all about.

(!) [JimD] Run MacOS X --- it's a BSD 4.4 personality over a Mach microkernel.
(And is more mature than HURD --- in part because a significant amount of the underpinnings of MacOS X are NeXT Step which was first released in the late '80s even before Linux).
(!) [Ben] To quote Debian's Hurd page,

...............

The Hurd is under active development, but does not provide the performance and stability you would expect from a production system. Also, only about every second Debian package has been ported to the GNU/Hurd. There is a lot of work to do before we can make a release.

...............

Do toss out a few bytes of info if you do download and install it. I'm not against micro-kernels at all; I'm just slightly annoyed by people whose credentials don't include the Hard Knocks U. screaming "Your kernel sucks! You should stab yourself with a plastic fork!" My approach is sorta like the one reported in c.o.l.: "Let's see the significant benefits."
(!) [JimD] These were anecdotal comments in a press interview --- they were not intended to be delivered with scholastic rigor. I think it weakens the paper considerably (for reasons quite apart from my disagreement with the statements themselves).
What is "the long run?" Unix is a hair over 30 years old. The entire field of electronic computing is about 50 or 60 years old. Linux is about 12 years old. Linux is still growing rapidly and probably won't peak in marketshare for at least 5 to 10 years. Thus Linux could easily last longer than proprietary forms of UNIX did. (This is not to say that Linux is the ultimate operating system. In 5 to 10 years there is likely to be an emerging contender like EROS (http://www.eros-os.org) or something I've never heard of. In 15 to 20 years we might be discussing a paper that quotes Linus Torvalds as saying: "I've read some of the EROS code, and it's not going to be a success in the long run.")
(We won't even get into the criteria for "success" in Ken Thompson's comment --- because I think that Linux's current status is already a huge success by the standards of its advocates and to the chagrin of its detractors. By many accounts Linux is already more "successful" than UNIX --- having been installed on more systems than all UNIX predecessors combined --- an installation base that is only recently rivaled by MacOS X in the UNIX world.)

(?) routing to internet from home . Kernel 2.4

From Jose Avalis

Answered By Faber Fedor, Jason Creighton, Benjamin A. Okopnik, John Karns

Hi guys, and thanks in advance for your time. I'm Joe, from Toronto.

I have this scenario at home.


3 WS with Winxx
1 Linux redhat 7.3
1 DSL Connection (Bell / Sympatico)

I would like to use the Linux machine as a router for the internal PCs. Could you help me with that, please?

(!) [Ben] OK, I'll give it a shot. You have read and followed the advice in the IP-Masquerade HOWTO, right? If not, it's always available at the Linux Documentation Project <http://www.linuxdoc.org>, or possibly on your own system under /usr/doc/HOWTO or /usr/share/doc/HOWTO.

(?) The Linux machine has 2 NICs: eth0 (10.15.1.10/16) is connected to the internal net (hub), while the other, eth1 (10.16.1.10/16), is connected to the DSL modem.

(!) [Ben] You have private IPs on both interfaces. Given a DSL modem on one of them, it would usually have an Internet-valid address, either one that you automatically get via DHCP or a static one that you get from your ISP (that's become unusual for non-commercial accounts.) Looks like you have a PPPoE setup - so you're not actually going to be hooking eth0 to eth1, but eth0 to ppp0.

(?) As you can see in the following text, everything is up and running, and I can access the internet from the Linux machine.

(!) [Jason] This may seem like a stupid question, but do the internal PCs have valid internet addresses? (i.e., those outside the 10.*.*.*, 172.16.*.*-172.31.*.* or 192.168.*.* ranges) If they don't, you need to do IP masquerading. This is not all that hard; I could give a quick & dirty answer as to how to do it (or you could look at the IP-Masquerade-HOWTO for the long answer), but I'd like to know if that's your situation first. Yes, I am that lazy. :-)

(?) ifconfig says

See attached jose.ifconfig-before.txt

See attached jose.ping-before.txt

The problem is that when I try to access the internet from the internal LAN, I can't.

(!) [Ben] Yep, that's what it is. That MTU of 1492 is a good hint: that's the correct setting for PPPoE, and that's your only interface with a Net-valid IP.
(!) [John] The adjusted MTU for PPPoE (from the usual 1500 to 1492) is necessary, but can cause problems with the other machines on the LAN unless they too are adjusted for MTU.
(!) [Ben] Right - although not quite as bad as the gateway's MTU (that one can chase its own tail forever - looks like there's no connection!)
(!) [John] I've been stuck with using PPPoE for about a month now, and have found the Roaring Penguin pkg (http://www.roaringpenguin.com) to work quite well, once it's configured. I seem to remember reading that it does the MTU adjustment internally, and alleviates the headache of having to adjust the rest of the machines on the LAN to use the PPPoE gateway (see the ifconfig output below).
(!) [Ben] Oh, _sweet._ I'm not sure how you'd do that "internally", but I'm no network-programming guru, and that would save a bunch of headaches.
(!) [John] Especially nice if one of the LAN nodes is a laptop that gets carried around to different LAN environments - would be a real PITA to have to reset the MTU all the time.
# ifconfig eth1

eth1      Link encap:Ethernet  HWaddr 00:40:F4:6D:AA:3F
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:21257 errors:0 dropped:0 overruns:0 frame:0
          TX packets:14201 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:100
          RX bytes:4568502 (4.3 Mb)  TX bytes:1093173 (1.0 Mb)
          Interrupt:11 Base address:0xcc00
Then I just tacked on the firewall / masq script I've been using right along, with the only change being the external interface from eth0 to ppp0. PPPoE is also a freak in that the NIC that connects to the modem doesn't get an assigned IP.
(!) [Ben] Yep, that's what got me thinking "PPPoE" in the first place. Two RFC-1918 addresses - huh? An MTU of 1492 for ppp0 and reasonably short ping times to the Net - oh. :)

(?) all the PCs in the net have as Default gateway 10.15.1.10 (Linux internal NIC )

(!) [Ben] That part is OK.

(?) Linux's default gateway is the ppp0 adapter

[root@linuxrh root]# netstat -nr
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
64.229.190.1    0.0.0.0         255.255.255.255 UH       40 0          0 ppp0
10.16.0.0       0.0.0.0         255.255.0.0     U        40 0          0 eth1
10.15.0.0       0.0.0.0         255.255.0.0     U        40 0          0 eth0
127.0.0.0       0.0.0.0         255.0.0.0       U        40 0          0 lo
0.0.0.0         64.229.190.1    0.0.0.0         UG       40 0          0 ppp0
[root@linuxrh root]#
(!) [Ben] Yep, that's what "netstat" says. I've never done masquerading with PPP-to-Ethernet, but it should work just fine, provided you do the masquerading correctly.
(?) Can you guys give me some clues as to what my problem is?
I don't have any firewall installed.
Thanks a lot. JOE
(!) [Ben] That's probably the problem. Seriously - a firewall is nothing more than a set of routing rules; in order to do masquerading, you need - guess what? - some routing rules (as well as having it enabled in the kernel.) Here are the steps in brief - detailed in the Masquerading HOWTO:
  1. Make sure that your kernel supports masquerading; reconfigure and recompile it if necessary.
  2. Load the "ip_masq" module if necessary.
  3. Enable IP forwarding (ensure that /proc/sys/net/ipv4/ip_forward is set to 1).
  4. Set up the rule set (the HOWTO has good examples.)
That's the whole story. If you're missing any part of it, go thou and fix it until it cries "Lo, I surrender!" If you run into problems while following the advice in the HOWTO, feel free to ask here.
(!) [Faber] One thing you didn't mention doing is turning on forwarding between the NICs; you have to tell Linux to forward packets from one NIC to the other. To see if it is turned on, do this:
cat /proc/sys/net/ipv4/ip_forward
If it says "0", then it's not turned on. To turn it on, type
echo "1" > /proc/sys/net/ipv4/ip_forward
And see if your Win boxen can see the internet.
If that is your problem, once you reboot the Linux box you'll lose the setting. There are two ways not to lose the setting. One is to put the echo command above into your /etc/rc.local file. The second and Approved Red Hat Way is to put the line
net.ipv4.ip_forward = 1
in your /etc/sysctl.conf file. I don't have any Red Hat 7.3 boxes lying around, so I don't know if Red Hat changed the syntax between 7.x and 8.x. One way to check is to run
/sbin/sysctl -a | grep forward
and see which one looks most like what I have.

(?) Hey Faber in NJ ... thanks for your clues. In fact it was 0; I changed it to 1 and restarted the box, and it is still 1 now; but it is still not working.

(!) [Faber] Well, that's a start. There's no way it would have worked with it being 0!

(?) First of all, am I right with this setup method? I mean, using Linux as a router only? Or should I set up masquerading and use the NAT facility so all my internal addresses can reach the Internet?

(!) [Faber] Whoops! Forgot that piece! Yes, you'll have to do masquerading/NAT (I can never keep the two distinct in my head).
(!) [Jason] It seems to me that you would want the DSL modem (eth1) to be the default route to the internet, not the modem (ppp0).

(?) Because maybe the problem is that I'm trying to route my internal net to the DSL net and the Internet, and maybe that is not a valid procedure.

(!) [Faber] Well, it can be done, that's for sure. We just have to get all the t's dotted and the i's crossed. :-)
(!) [Jason] IP-Masquerading. Here's the HOWTO:
http://www.tldp.org/HOWTO/IP-Masquerade-HOWTO
And here's a script that's supposed (I've never used it) to just be a "fill in the blanks and go":
http://www.tldp.org/HOWTO/IP-Masquerade-HOWTO/firewall-examples.html#RC.FIREWALL-2.4.X
Note this is in the HOWTO, it's just later on after explaining all the gory details of NATing.

(?) Hey, thanks for your mail; the thing is working now. I didn't know that the NAT functions in Linux are called masquerading.

(!) [Ben] Yeah, that's an odd one.
Masquerading is only a specific case (one-to-many) of NAT. As an example of other stuff that NAT can do, IBM had an ad for the Olympics a while back (their equipment handled all the traffic for the website); they did "many-to-many" NAT to split up the load.

(?) Thanks again for your help; since I'm new to Linux, it took me a while to learn the terminology on this platform.

Too many NOSes in my head.

I have everything working now, including the firewall. I had to compile the kernel again, but it was OK.

C U.

(!) [Ben] <grin> You're welcome! Glad we could help.


Copyright © 2003
Copying license http://www.linuxgazette.com/copying.html
Published in Issue 88 of Linux Gazette, March 2003
HTML script maintained by Heather Stern of Starshine Technical Services, http://www.starshine.org/