
Amateur Fortress Building in Linux

by Sander Plomp



Amateur Fortress Building in Linux, Part 1


Contents

tcpserver
at cr.yp.to
X
printing
mail and news
file sharing

I installed Linux on my home system, and since it's connected to the Internet I had to secure it. The average distro comes with a nice set of security holes you've got to plug first. You know the routine: edit inetd.conf and comment out all the services you don't need....

I got bored with articles telling you to edit inetd.conf too. So I'm not going to do it the official way - I'm going to do it my way. I'm an amateur at this, so I don't know how good my way is. It might be wonderful or it might be awful, but at least it's uncommon.

The first step is flushing inetd down the drain and replacing it with tcpserver.

Before going on to the obvious 'why', I think it's only fair to warn you that this is beyond editing some config files after admonishing you to back them up first. Proceed at your own risk.

It's going to get a lot worse than just replacing inetd.

Tcpserver

Why replace inetd with tcpserver? Tcpserver gives you roughly the same functionality as inetd, although it's configured quite differently. To be honest, I prefer inetd.conf's simple table format, but tcpserver has one feature that inetd is missing, and that I really want. It allows you to specify on which port to run a service.

Before you say, "um, I'm pretty sure inetd runs fingerd on port 79, FTP on 21 and so on", the question is: which port 79 would that be? For example, for a system connected to both a private network (say, as 10.11.12.13) and a public one (as 210.211.212.213), there is both 10.11.12.13:79 and 210.211.212.213:79, and they are different ports. There is also 127.0.0.1:79, which is the local machine and cannot be accessed from the outside.

Inetd, like most daemons, uses INADDR_ANY as the local IP address, which means it listens on all of them at the same time. Tcpserver allows you to specify the local address, and that means it can run e.g. fingerd on the local net only, simply by running it on 10.11.12.13:79 rather than on *:79.

I'm weird. If I want to keep the bad guys away from services I've intended for local use only, I don't want to do it by having a firewall shooting off the incoming packets or by a source host validation mechanism kicking them out after the connection has been made. I want to do it by simply not listening on public ports in the first place. There will be a firewall in front and access control after the connection is made, but they are extras, not the main mechanism.

Note that it is the destination address of the incoming packet rather than the source address that is used for access control. A server listening on 10.11.12.13 will not respond to a connection attempt made to 210.211.212.213.

The idea behind this is that if you're only listening on a private network address it becomes rather hard for an attacker to send anything to that service. The public Internet cannot route packets directly to 10.11.12.13; such packets are dropped at the routers. An attacker could possibly use source routing to get the packet on your doorstep; /proc/sys/net/ipv4/conf/*/accept_source_route should be set to 0 to reject such attempts. This is the default in most Linux setups.

This method is complementary to the usual checks based on the source address of a connection, such as the checks done by TCP wrappers. Both methods can (and probably should) be used at the same time. Tcpserver conveniently has such a mechanism built in, using the -x option. For source checking to work, /proc/sys/net/ipv4/conf/*/rp_filter should be set to 2 (it is off by default!) so that the kernel checks whether a packet arrives on the interface it would expect packets with that source address to arrive on. It won't prevent all spoofing, but at least, for most setups, something coming from the public Internet can't pretend to originate in your private network.
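To make that concrete, here's a minimal sketch of setting those two kernel parameters by hand; the paths are for 2.2/2.4 series kernels, and on most distros you'd put the equivalent in an init or sysctl script rather than type it each boot:

# reject source-routed packets on every interface
for f in /proc/sys/net/ipv4/conf/*/accept_source_route; do echo 0 > $f; done
# turn on reverse path filtering (any non-zero value enables it on older kernels;
# newer ones distinguish 1 = strict from 2 = loose)
for f in /proc/sys/net/ipv4/conf/*/rp_filter; do echo 2 > $f; done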

As you may have guessed by now, I'm limiting myself to very simple setups: small, private networks connected to the big bad Internet through a gateway machine. It's a typical home network setup and it's the case I had to deal with. I'm not trying to secure amazon.com, they should hire professional fortress builders.

How useful is it to listen only on a specific network? When I started working on this, script kiddies were taking over DSL and cable modem connected Linux boxes by the score, using a root exploit in the named daemon. The obvious question (after "Why didn't the victims just use their ISP's name servers?", "Why does named have to run as root the whole time?", "Why doesn't the default configuration run in a chroot jail?", and a few other questions) is: "Why is named accepting connections from the Internet at large?". For most home users, if they even knew they were running a name server, it was used as a simple caching name server, with no need to provide services to the outside world. For a single host, it could have been listening on 127.0.0.1 and would have worked just fine for the user; for our small example network it would at most need to service net 10.0.0.0. Set up like that, a port scan from the outside wouldn't find anything on port 53, and it could not be attacked from the outside. Many other services are similarly intended for local use only and shouldn't be listening on outside ports.

So listening on the private network only would be quite useful, although named doesn't actually run from inetd. In fact DNS is mostly a UDP protocol, so here this example falls apart completely. But as I'm writing this, most people have upgraded BIND to the next version and wu_ftpd is the new exploit du jour. It does run from inetd.

Let's install tcpserver first. We will deal with named later.

Using Tcpserver

The place to get tcpserver is cr.yp.to, the author's web site. The author is Dan Bernstein, best known for Qmail. The tcpserver program is part of a package called ucspi-tcp, which also contains a 'tcpclient' and a bunch of little helper programs. I'm not going into details on all the options and how to use them, just download the documentation and read it. The only hint I'm giving you here is that when testing, use -RHl0 among the options of tcpserver, otherwise you get mysterious pauses while the program tries to use DNS and identd to get details on the remote connection.

While tcpserver and inetd implement roughly the same functionality, they each have a completely different floor plan. I'll try to give a high level view of the differences, assuming the reader is familiar with inetd.

Inetd uses a configuration file (inetd.conf) which tells it on which service ports to listen, and which program to start for each port. Normally a single inetd process is running that splits off child processes to handle each incoming connection.

Tcpserver listens on only a single service port. Normally there is one tcpserver process for each individual service port. Tcpserver does not use a configuration file; instead, command line options and environment variables are used to control it. For example, to change to a different user after the connection is made you set the environment variables $UID and $GID to the numerical values for that user and group, and give tcpserver the -U option telling it to use those variables. To make it easier to set those variables a helper program called envuidgid is included in the package. It will set $UID and $GID to those of a given account name, and then exec another program. So you get invocations like:

envuidgid httpaccount tcpserver -URHl0 10.11.12.13 80 myhttpdaemon

where envuidgid sets those variables to the values for user httpaccount and calls tcpserver, which waits for a connection on 10.11.12.13:80, switches to user httpaccount and invokes myhttpdaemon to handle the connection. This may seem rather contrived, but in many ways it's keeping in style with the classic UNIX way of small programs strung together by the user. There are several little helper programs that, in much the same way, set up something and then run another program in that environment. It takes getting used to.

Normally inetd is paired with TCP wrappers; inetd itself doesn't care who connects to it, but the 'wrapper' checks hosts.allow and hosts.deny to see if the connection should be allowed. There is no reason why you couldn't use TCP wrappers with tcpserver, but it has a similar mechanism built into it: the -x option. Rather than a global pair of hosts.allow and hosts.deny files that contain the rules for all services, each tcpserver instance has its own database of rules. These databases are in a binary format created from a text file by the tcprules program.
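For instance, to restrict a service to the private net, the rules text file (call it rules.txt; the addresses and the fingerd path here are illustrative assumptions) might contain:

10.11.12.:allow
:deny

That file is compiled into the binary database, and tcpserver is pointed at the result with -x:

tcprules rules.cdb rules.tmp < rules.txt
tcpserver -x rules.cdb -RHl0 10.11.12.13 79 /usr/sbin/in.fingerd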

In the end, inetd and tcpserver give you roughly the same functionality; they're just organized completely differently. This makes switching from one to the other quite a bit of work. For one thing you need to disable inetd and add a series of tcpserver startups to whatever mechanism you use to start services. Then, for each individual service, you have to figure out how it's set up in inetd and construct the equivalent for tcpserver. Note that tcpserver only handles TCP services; if you use any UDP or RPC based services in inetd.conf you will have to keep inetd around or find some other alternative.

In the end, what does this all achieve?

The output of netstat -an --inet for a system running some inetd services is shown below. They all use 0.0.0.0 as the local address.

----

Active Internet connections (only servers)

Proto Recv-Q Send-Q Local Address Foreign Address State

tcp 0 0 0.0.0.0:113 0.0.0.0:* LISTEN

tcp 0 0 0.0.0.0:79 0.0.0.0:* LISTEN

tcp 0 0 0.0.0.0:110 0.0.0.0:* LISTEN

tcp 0 0 0.0.0.0:23 0.0.0.0:* LISTEN

tcp 0 0 0.0.0.0:21 0.0.0.0:* LISTEN

raw 0 0 0.0.0.0:1 0.0.0.0:* 7

raw 0 0 0.0.0.0:6 0.0.0.0:* 7

------

With tcpserver they can get a different local address. In the example below three services are configured to listen on 10.11.12.13 only. Note that there is a second http server listening on 127.0.0.42. All 127.x.x.x addresses are on the local machine. Both servers can use port 80 since they listen on different addresses. There is no reason why they can't be different programs; this allows you, for example, to run a very secure one on ports exposed to the public Internet and a full featured (but more difficult to secure) one on the internal network.

Active Internet connections (servers and established)

Proto Recv-Q Send-Q Local Address Foreign Address State

tcp 0 0 127.0.0.42:80 0.0.0.0:* LISTEN

tcp 0 0 10.11.12.13:80 0.0.0.0:* LISTEN

tcp 0 0 10.11.12.13:21 0.0.0.0:* LISTEN

tcp 0 0 10.11.12.13:79 0.0.0.0:* LISTEN

raw 0 0 0.0.0.0:1 0.0.0.0:* 7

raw 0 0 0.0.0.0:6 0.0.0.0:* 7

Enter Dan Bernstein

When I started getting serious about security I skimmed a three month backlog of comp.os.linux.security. The three most common subjects were:

1. There's some weird files on my system and named isn't working properly.

2. Help! My Apache runs as 'nobody'! Does this mean I've been cracked?

3. @#$!^&^%$ ipchains *&^@$!*&^

The answer to (1) is: "Latest BIND root exploit, disconnect, wipe, reinstall, upgrade BIND, stand in line with all the others..."

The answer to (2) is that Apache is the only one that's got a clue.

(3) is one of the reasons I don't believe in firewalls.

But I'm getting ahead of myself - we're still at cr.yp.to and we might as well look around. You younger kids pay attention: this is a text-only web site, super fast even over slow links. No banner ads, no eye candy - just text and links. You don't see that much these days.

It's fun looking around. As it turns out, Mr. Bernstein has decided to rewrite most Internet related standard software. There are replacements for, among other things, DNS, HTTP and FTP software, and of course Qmail. The nice thing about it (apart from the fact that he has mastered the electronic equivalent of writing nasty comments in the margin) is that all of it is designed with security in mind.

Most Internet software on your computer goes back a long time. It was conceived in a period when the Internet was a small, friendly, village community rather than the porn and violence soaked world wide ghetto it is today. Over time, that software was improved, extended, ported, converted and just about anything else except rewritten from scratch. From a technical point of view it often has basic design flaws that we've learned to work around. From a security point of view, it contains serious design flaws that can't be worked around.

This is the reason for item (2) above. Way too many daemons run as root, and can only work as root. Others require the use of suid root programs to work. This is nasty. It turns every minor bug into a potential root exploit. If Apache can run as 'nobody', why can't the others? A DNS server is just another simple database application, a mail program is just something shoveling messages around. Does it really require godlike powers to send something to the printer?

With a bit of care, most of these tasks can be organized in a way that requires little or no super user powers. Usually this involves giving the service its own user account, and giving that account access rights to all data and services needed for the task. For various reasons (typically, to access ports below 1024) some root access is required; with good design this can be done in a way that super user rights are dropped early in the program. There are great advantages to this: when there is a bug in a program, only that particular service is compromised. Although you don't want anyone compromising any of your services, it's not as bad as them taking over the entire system.

Dan Bernstein's software takes this approach very far. Often a single service uses multiple accounts, e.g. one for the actual service and one for logging - even if an intruder gets into the service she can't wipe the logs. If possible the data for the service is kept in a subtree, and the service chroots into that subtree before it starts servicing requests. This puts yet another barrier between a compromised service and the rest of the system. Perhaps most importantly, the programs are generally small, performing a single well defined task - the purpose is not to have any bugs in the program to begin with.

Remember item (1) on my list - at one time BIND (the named program) compromised just about every DSL or cable modem connected Linux box in the world, not so much because it had a bug but because it runs as root and every bug is deadly. So I decided to install Bernstein's dnscache program. I mean, it's not like it can get any worse.

In its default setup, dnscache runs as user dnscache. It chroot()s into its own subdirectory before it starts servicing requests. It listens on 127.0.0.1:53, and even if you configure it to listen on outside ports you still have to explicitly tell it which hosts and networks it should provide service to. All in all it makes it fairly easy to hide it from the outside world, and to limit the damage it can do when compromised. And this is the default setup we're talking about. I'm sure you can use at least some of these tricks to protect BIND, but few people ever get around to doing that.

One of the other features of dnscache is that it does only one thing - cache. It's part of a package that contains several other programs, which implement different kinds of DNS servers. You can have several of them running at the same time, each listening on a different port 53, each in its own chroot jail under its own user account. If you need just a simple caching server, you need not worry about bugs in the other programs.
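Setting up dnscache is mostly a matter of running its configuration tool. A sketch along the lines of the djbdns documentation follows; the account names, directory and network prefix are the customary examples from those docs, not requirements:

# two dedicated accounts: one for the cache itself, one for its logger
useradd -s /bin/false dnscache
useradd -s /bin/false dnslog
# build a service directory that listens on the internal address only
dnscache-conf dnscache dnslog /etc/dnscache 10.11.12.13
# allow clients from the private net to use it
touch /etc/dnscache/root/ip/10.11.12
# hook it into daemontools (more on that package below)
ln -s /etc/dnscache /service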

There's much more interesting stuff at cr.yp.to. The publicfile package contains FTP and HTTP daemons. They're limited: the FTP daemon only offers anonymous read-only access, and the http daemon just serves static pages. But if that is all you need, they're nice secure daemons that run under tcpserver and use dedicated accounts and chroot jails to protect the system they're running on. Given that serious exploits have been found very recently in both wu_ftpd and ProFTPD, they suddenly look very attractive.

Another package is daemontools, a system for automatically starting daemons, keeping them running, and controlling and logging them. It's familiar functionality implemented in a completely different way. It works using a /service directory that contains symlinks to directories for each service you want to have running. These directories have certain standard files and scripts that are used to start and control the daemon. I don't think there's any particular security benefit to it, just something to look into if the Snn and Knn files start coming out of your nose.

Then there's Qmail, a replacement for sendmail. Normally sendmail runs as a root daemon, and if invoked by a user it's suid root. Qmail does neither, except for a bit of setup to listen on port 25. There are several other mailers that similarly avoid running as root as much as possible. Pretty much any of them will be safer than sendmail, because not every exploit automatically becomes a root exploit.

X

What ports are actually listening on a typical system? Quite a lot actually. Since most distros automatically activate every service the user has installed it is common to find sendmail, bind, linuxconf, FTP, apache and telnet running all at the same time, despite the fact that the user isn't actually using them remotely. All of them are significant security risks. Every Linux install should be followed by a nice round of disabling servers.

The name of the game is "netstat -nl --inet" and the purpose is to get its output as empty as possible without losing functionality you actually use. I play my own version of this game: nothing should have 0.0.0.0 as the local IP address unless it really is intended to listen to connections from the whole world. Internal stuff should be on 127.0.0.1, the private net on 10.11.12.13, you get the picture.

Disabling unused services is easy. It's the ones you are using that give the most trouble. Like port 6000. It's a large, bug ridden server running as root: X. Problem is, many people are actually using X.

The designers of X made it a real network windowing system. A lot of effort went into making things so you could run your program on BigDumbBox, your display manager on LittleSmartBox while using the display on ShinyNewBox. Unfortunately, very little effort went into preventing this from actually happening.

I've never actually used X over the network. In fact I'm pretty sure that if the opportunity ever comes up it will be easier to find out how to do without it than to find out how to make it work. Still, X is listening on port 6000 for anyone in the world who'd like to have a chat with it. The display manager (xdm or some relative) opens up its own UDP port 1024, in case someone somewhere in the world would like to have her X terminal managed by your home computer.

X has its own mysterious access control mechanism. It'd better work: if not, every spammer can flash his ads in full color right in your face. It'd better have no buffer overflow bugs either.

Let's cut this short. One gruesome night of crawling through man pages, newsgroups, deeply hidden scripts and obscure configuration files reveals that in fact the only one not using port 6000 is the local user. On the local machine, Unix domain sockets are used. Even better, X's TCP port can be banished, if you just know how. Ditto for xdm's UDP port.

The magic words are to add -nolisten tcp to the incantation that actually starts the X server (you can add gamma correction here too, btw). This will close port 6000. The xdm port is closed by adding -udpPort 0 to its startup command. Now you only have to find those commands. You'll get an interesting tour of the file system while doing so, since X is a collection of aliases that call scripts that call managers that call other scripts that activate daemons that start servers, all under the control of various configuration files.

In my case I found the command that starts the X server in /etc/X11/xdm/Xservers, and xdm/kdm/gdm is actually started from /etc/inittab.
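For reference, the edited lines end up looking roughly like the following; the exact binary paths and runlevel differ per distribution, so treat these as illustrations rather than something to copy verbatim:

:0 local /usr/X11R6/bin/X -nolisten tcp
(in /etc/X11/xdm/Xservers - the local server line, now with TCP listening disabled)

x:5:respawn:/usr/X11R6/bin/xdm -nodaemon -udpPort 0
(in /etc/inittab - xdm kept running by init, with its XDMCP UDP port shut off)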

Lpd

The next thing you'll find that has an open port, but that you can't just shut off, is lpd, the line printer daemon. Behold lpd. It runs as root. It listens on 0.0.0.0:515, that is, to anyone in the world. It will happily invoke programs like ghostscript and sendmail, with input depending on the print job. It allows the other end to determine the names of the files used to queue the job. Basically, it's deja-vu all over again.

To protect the print queue, lpd will only accept connections with source ports in the range 721-731. This guarantees the other end of the connection is under the control of 'root' on that machine. This made sense in the early days of the Internet, when there were few Unix machines and they were all under the control of trustworthy system administrators. These days it offers very little protection, and in fact it horribly backfires: because of it, any program that communicates with the printer daemon has to be suid root, and itself becomes another security risk.

The only serious protection comes from /etc/hosts.lpd, the list of hosts lpd is willing to communicate with. A big task for a little file. Yes, there have been remote root exploits in lpd. Quite recently, actually.

Lpd has to go. There are several potential replacements, most of them striving to be bigger, better and more powerful. It's much harder to find a reasonably secure one.

I've decided to switch to a little known print system called PDQ. It's not particularly sophisticated, but it's got the funniest web site of the lot. As before, I won't go into detail on installing it; that's what documentation is for. I will, however, try to explain how it works, since it is very different from the way lpd works.

Lpd takes its work seriously. Through system crashes and reboots, if you give it a print job it will print it, or perish in the attempt. The way pdq sees it, the only thing worse than your jobs mysteriously dying en route to the printer is to have them unexpectedly come back from the grave three days later. PDQ makes an attempt to print. If it can't get it printed soon it just gives up.

Lpd's approach made a lot of sense when many people shared a single printer that rattled for hours on end to work its way through a large backlog of print jobs. But that situation is rare these days. On most home networks printers are mostly idle. It's a decent guess that if a job couldn't be printed in half an hour, then by that time the user has either printed it again, figured out where the problem is, or decided to live without the printout. In such an environment pdq's approach makes a lot of sense, and avoids accidentally printing the same thing six times over.

Pdq doesn't use a daemon. Instead each print job becomes its own process, which tries for a while to get printed and then gives up. To give someone the right to print to the local printer you have to give them rights to /dev/lp0. Now that I come to think of it, this makes an incredible amount of sense. Simply create a printer group, let it group-own /dev/lp0, and put everyone who may print in that group.
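A quick sketch of that idea; the group and user names are made up for the example:

groupadd printer
chgrp printer /dev/lp0
chmod 660 /dev/lp0            # root and group 'printer' may use the device, nobody else
gpasswd -a alice printer      # add a user who is allowed to print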

The only security risk in pdq is a set of suid-root programs it uses to be able to print to remote lpd systems. Whatever you do, remove the suid bit from those programs. Once that is done, pdq is reasonably secure. It has no daemons, doesn't listen on the network, and has no other suid programs.
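Something along these lines will do it; the install prefix is an assumption, so check where your copy actually put its binaries:

find /usr/local/pdq -type f -perm -4000 -exec chmod u-s {} \;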

Mail and News

Traditional UNIX systems are multi user oriented. Their email and news services are designed with the assumption that the machine has a large number of users. Typically this involves a news or email service running on the local machine, providing service to a large number of users. Communication between machines is traditionally peer to peer. For example, any machine could decide to send mail to any other machine.

The typical MS windows 95/98 setup is quite different. A single user per computer is assumed. If you really want you can do sort of a multi user setup, but so much software only allows a per system rather than a per user setup that this is usually more trouble than it's worth. The communication between machines is traditionally much more client-server like. Machines collect their mail from the mail server or send out mail to a mail server. They typically do not send mail directly to each other.

The current Internet infrastructure is based on the typical home user having a Windows setup and the ISP providing the necessary servers for them. I'm talking about mail and news here; various instant messaging systems, for example, do use direct communication between individual machines.

Security-wise the Windows setup has one major advantage: all communication for mail and news is initiated from the home PC, and it does not need to make any service available to the outside world. Any service that isn't there can't be used to attack you, to relay spam or for other malicious tricks. The disadvantage is of course that your email hanging around on your ISP is no safer than their systems are.

The funny thing is that graphical desktops like Gnome and KDE have email and news programs that typically follow the Windows model of using a remote server, while at the same time many distros also install the traditional local email and news servers. These servers aren't actually used by the owner of the box, but enjoy wide popularity with spammers and script kiddies.

For a home network, there are three ways to put some method to this madness.

1. Make the traditional UNIX model work, and point the graphical email and news clients to servers on the local network.

2. Shut down the servers on the local network and simply work as a client of the ISP's servers.

3. Use a hybrid method: a traditional UNIX setup on the local network, communicating with the outside world in the Windows style.

Option 1 requires a permanent Internet connection and a static IP address to be practical, as well as your own domain name. Without these, the outside world can't reach your mail server. Many home oriented ISPs aren't willing to provide this, or only at inflated business rates. If you can get it, this option is workable, but you'll have to secure things yourself, e.g. install a safe mail server like Postfix or Qmail, protect yourself from spammers, and keep an eye on things.

The advantage is you can be very flexible in how email and news is handled on your network. With the flexibility comes complexity and more work to get things going.

Option 2 has the advantage of being very simple and secure. It's simple because for your ISP it's just like another windows box and many graphical email clients are configured just like their windows counterparts. It's secure because you don't need any servers running on your computer to make it work.

The disadvantage is that you lose a lot of flexibility. To keep things simple you have to always read email on the same machine, for example.

Option 3 uses programs like fetchmail or getmail to collect mail from the ISP and then hand it over to the local mail system on the network. To the ISP, you look like just another windows box picking up mail and news from time to time.
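As an illustration of the collection side of option 3, a minimal ~/.fetchmailrc might look like this; the server and account names are placeholders, and the file should be readable by you only:

poll pop.example-isp.net protocol pop3
    username "joe" password "secret"
    smtphost localhost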

For news, I found it easy enough to set up leafnode to act as the local news server, serving only the local network. For mail, things can get really complicated. In a perfect world, you'd have an IMAP or POP3 server running on one of your machines and you'd read mail online from that. You could then access your email from any machine on the network - even Windows machines. You'd also need your own SMTP server to handle outgoing mail from the local network. You really need to know what you're doing here, which can be a major hurdle for people like me who basically don't.

The advantage is that you can make things as flexible as you want. It's also quite secure, since all servers only serve the local network, and with tcpserver you can keep them invisible to the outside world. The disadvantage is that there really is a lot of stuff to set up and configure. After spending a few hours trying to decide what the ideal mail setup for my home network would look like, I took two aspirins and decided that, until it becomes a problem, I'd let my mail reader pick up my mail from my ISP's POP3 server and send out mail through my ISP's mail server, just like the rest of the world.

File sharing

The standard way of sharing files is NFS. It's in all major Linux distributions, it's the de facto standard, and it goes a long way back. I've heard some things about NFS security, none of it very pretty. NFS does not give me fuzzy feelings. It's too much a case of make it first and try to secure it later.

Worse, NFS seems to depend on a whole series of other acronyms, such as RPC and NIS. All these come with their own history of serious security problems. I'd like to avoid them, and security isn't even the main thing. It's yet another thing to figure out, track down the documentation for, secure, and administer. I have no other services that need it, and if I could flush it down the toilet I'd be a happy man.

No NFS for me. So what are the alternatives? There are several other network file systems, such as Coda and the Andrew File System. But the question is, do I want a real network file system for my home system? Is it worth spending another three evenings doing web searches, hunting down documentation, figuring out what works and what doesn't, what's secure and how to install it? After all, all of my machines dual boot, all of them are regularly switched off, rebooted or reconfigured, and hardware surgery is not unheard of. In such a setup you want systems to be as autonomous as possible. In this type of environment, for the purpose of data exchange, the common 3.5 inch floppy will typically outperform network file systems designed to meet the needs of larger networks.

Which brings us to Samba. After all, Windows networking was designed as a simplistic peer to peer network between a few PCs on a local network. Every few years the enterprise class operating system from Redmond discovers yet again that they took the wrong exit on the road ahead, and adds yet another new authentication mechanism (Shares. Servers. Domains. Active Directory.) Samba has many advantages, not the least of which is that you can use it from Windows machines too. Also, unlike NFS, the Samba on my distribution came with a significant amount of documentation. While Samba works well, the need to interoperate between the Unix and Windows environments, and the continuous changes MS makes to the protocols, make setting it up neither easy nor quick. Another alternative is to use NetWare style networking, but it faces the same kinds of problems and isn't trivial to set up either.

I've taken it one step further. Many modern file browsers can access HTTP and FTP sites as easily as local disks. Command line utilities like wget also provide easy access to remote data. If all you need to do is transfer the occasional file, rather than mounting home directories, bin directories and so on, then an FTP or web server providing read-only access to the local network is enough for me. I simply put my files in a public directory on one machine and read them from the other.

Note that this approach has many security problems. FTP is a fundamentally broken protocol using clear text passwords. It's really only suited for unauthenticated read-only access. I found that I very rarely need to move secret data around on my home network, and when I do, I use encryption. I use publicfile as the ftp/http server; it provides only read access and goes through a lot of trouble to keep access limited to a particular directory subtree. No passwords are ever sent, since the FTP server only provides anonymous access. Both the server and the user's client should run under non-root user ids, and the servers should listen on the local network only.
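The way the servers get started is roughly the following. This is only a sketch modeled on the run scripts in the publicfile documentation; the install prefix, the account name and the document root are assumptions about my own setup:

envuidgid pubfile tcpserver -RHl0 10.11.12.13 80 /usr/local/publicfile/bin/httpd /public/file &
envuidgid pubfile tcpserver -RHl0 10.11.12.13 21 /usr/local/publicfile/bin/ftpd /public/file &

The publicfile daemons chroot into the directory given as their argument and drop to the uid and gid that envuidgid put in the environment, so nothing is left running as root once a connection is being serviced.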

I realize this approach is highly specific to my particular situation. I do not care that much about the security of the data transferred. I mostly care about the risks of someone abusing the communication mechanism for an attack. Since file servers typically use mechanisms at the operating system level, they provide more possibilities for an attack than a service running in userland with limited privileges. This is why I prefer this setup. For other people this might be quite different, especially if you have a good reason to mount a filesystem on a remote machine, or if you have sensitive data you want to protect.

Part 2

Contents

bugs and patches
firewalls
trojans and traitors
suid, sgid
what did I learn

Bugs and Patches

Trying to get my Linux system secured the way I like it, I found out I'm actually working by a simple rule. I'm trying to avoid a single point of failure.

A single point of failure means that a single mistake, bug or error lets an attacker get sufficient control over the host to do serious damage. A firewall is of limited use if various system daemons, running as root, peek through it, waiting for the next buffer overflow attack. Similarly, if your firewall is all that stands between the script kiddies and highly vulnerable network services, you're putting a lot of trust in your ability to build the perfect firewall.

Of course, deep down there is always some potential for a catastrophic security hole - in the TCP stack, the kernel, whatever. There is no alternative to accepting that, at some time, the worst happens and the only way out is to get things patched as quickly as possible. I can live with that. I just don't want it to be a biweekly event.

Many security experts believe in patches. I don't. It is not acceptable for system security to depend on the owner being able to get the patches for it quicker than the kiddies can get the exploit. It just leads to stupidities like "my vendor patches quicker than yours". That doesn't mean you shouldn't patch known security holes. It means you shouldn't trust your ability to do it quickly enough.

Claiming that people should be up to date on their patches any waking moment or else it's their own fault is just lame. Really lame. Almost as lame as, say, making all your document formats scriptable, encouraging people in every way to send them by email to each other, putting a single mouse click between the user and executing anything with whatever program that might be vaguely associated with a file extension, giving it total control of the machine without any form of sandboxing, adding an active policy of hiding file extensions whenever the system feels like it, making sure there is no obvious way to examine a mail attachment without activating it and saying things like: "users should be educated not to open attachments if they are not sure it is safe".

So I have my standards. Anything that does interaction with the outside world should either be very trivial, very well audited, or be sufficiently sandboxed that possible damage is limited. Preferably it should be all of them. Many classic programs cannot be set up this way and should not be exposed to the public Internet.

In Linux, sandboxing is typically done by running under a special account, one that has just enough rights to do what it needs to do. If possible, chroot() is used to limit the part of the file system that is visible. In the future, once capabilities get straightened out, it should probably also include shedding unnecessary capabilities when appropriate.

The combination of not letting local services listen on public networks, sandboxing services properly and keeping up to date on patches should keep the risk of remote exploits under control. Of course, there are more dangers than just remote exploits...

Firewalls

I don't believe in firewalls. That's more or less a lie, of course, but I'm trying to get something across here. Most security experts REALLY believe in firewalls. They expect that, when they die, they'll arrive at the great firewall in the sky, where Saint Peter is running a default policy of REJECT. I've got much bleaker expectations.

Current Internet firewalls work on IP packets. This is a fairly low level; there are no 'connections' or 'services', just 'packets', 'protocols', 'ports', and 'addresses'. It's possible to recognize the higher protocol levels in the packet flow, but it isn't the natural level for a firewall.

A firewall makes sense for a lot of things:

Since the natural level is the IP packet, it makes sense for the firewall to detect among them the strange, the weird, the malicious and the unusual, and take action. This means filtering for spoofed packets, obviously misrouted ones, anything with suspicious options set, and extremely long or extremely short packets. It makes sense to have such filtering as a separate stage, away from the actual processing. This makes for a clean separation of functionality, and makes it easy to make the filtering step highly configurable. It even makes sense to keep 'state', which means that higher level protocols are partly emulated so you can detect attacks that involve protocol violations.

Often the firewall is the only connection between a trusted and an untrusted network (or two equally untrusted networks) and as such, it sees all that goes in and all that goes out. This puts it in an excellent position to do logging and to detect suspicious activity.

On the IP level, there are many filtering operations that make sense. You can stop all communication with sites known to be hostile. There is also traffic which we know should not occur between the networks. Often packets that violate those rules are easily identified based on protocol number, port number, direction of flow and so on. When such a packet tries to get into the trusted network the firewall should of course stop it and possibly log it. When such a packet tries to get out, the firewall should stop it, log it and scream bloody murder. The screaming part is important: a bad packet trying to get in just means someone is attempting to attack the trusted network. When you see a bad packet go out, it means they were successful.

My problem with firewalls is that many people only see their ability to stop a lot of different kinds of bad traffic, and that's precisely what they use them for. The firewall becomes a true cure-all for security woes. It can stop detectable spoofed packets. It can stop things like the ping of death attack. It can shield you from known attack sources and protect services you don't want to be accessed by outsiders. It hides much about the network behind it. It also becomes the ultimate single point of failure.

If the firewall goes down, suddenly a lot of stuff is exposed. Vulnerable services. Information about the network. Protocol weaknesses. Anything that truly depends on the firewall to protect it is in trouble. If a lot of stuff depends on it, then firewall trouble means bad trouble.

A frightening number of people don't even use it as a cure-all. They use it to solve one specific problem, protecting vulnerable ports, and see this as the main function of the firewall. They don't even bother to check outgoing traffic, since that has little to do with stopping a remote attack.

In general, firewall implementations are quite solid. That is, they don't go down very often. They are quite easily misconfigured, so in reality they are not always as solid as you would think. In fact, there is a whole list of problems with setting up firewalls:

It's fundamentally a subtractive method. No matter how much everyone hammers on using a default DENY policy, if, say, the firewall didn't get started when it should have, things are wide open.

Writing good firewall rules is non-trivial. It's easy to make a mistake and accidentally leave a dangerous hole.

Testing is problematic. Where for example netstat will tell you immediately which ports are currently listening, you need to do a full port scan from the outside to see what your firewall will let through.

In my world, any application or service should be able to fend for itself, security-wise. If it's not available to the public Internet it shouldn't be listening on it. If it should be accessible by a limited number of hosts, it had better be able to enforce that by itself. There should be no weaknesses that must be masked by the firewall to make it safe. Basically, things should be safe even if there were no firewall.

For me the firewall is just an extra layer of protection, and a very powerful one as it protects against so many things. At the same time, when I check that at least two problems must occur before some feature has a serious security hole I'm really hesitant to count the firewall. Before you know it a very large number of things would depend on it, and "a firewall problem plus just about any other problem" doesn't add up to two for me.

Proxying firewalls

By now, everyone should be booing me, because I've made the assumption that all firewalls are packet filtering firewalls. There is another way to set up firewalls: as proxying firewalls. Where a packet filtering firewall routes packets between two networks and filters out the bad ones, a proxying firewall keeps both networks completely separate. Only traffic for which a proxy is running will get through such a firewall. Because proxies are specific to an application protocol very precise filtering is possible.

For me the main advantages are:

It's not a subtractive method. The only thing that will get through are the things actively being proxied. If the proxies don't get started or fail the door is closed.

It's easy to tell what's getting through simply by looking at which proxies are running.

From a security standpoint one disadvantage is that the proxies themselves are potential targets for an attack, and must themselves be protected. On the good side, proxies can also do various kinds of filtering and protection for the internal network. Proxying firewalls are often considered safer than packet filtering, but less practical to use. You need the right proxies, clients that can use a proxy in the first place, and you need to configure each client to use a proxy.

My setup

A good firewall should run on its own dedicated machine. Such a machine doesn't run any services, doesn't have users, and is optimized for security. As long as this machine is not compromised it will keep doing its firewalling properly. The thing is, I don't want another ugly beige box grinding its fan bearings out to clutter up my house. I don't care if they come for free, I care that they come small and silent. Until the day they sell cheap palm pilots with firewall software on them I probably won't have a dedicated firewall machine. So, in violation of yet another security commandment I have a gateway box to the net that's also used to work on.

I try to turn this into an advantage. As long as the gateway box has proxies for the most important services (such as http), it's no problem for me if certain services can be used from the gateway machine only. If there is no proxy possibility for some esoteric service, bad luck; then there's just one machine that can use it on the Internet. Luckily, proxying software is available for most commonly used protocols. Some even provide useful extra functionality; after using junkbuster for a week I couldn't live without it. Many other programs can use a generic SOCKS proxy server. You do have to make sure outsiders can't use your proxies to relay their traffic. Proxies should service the internal network only.

The gateway box has its own packet filtering firewall using ipchains. This box does not masquerade. Instead it runs proxies and servers for the local network. Forwarding is off; computers on the local network can only reach the Internet by going through some proxy. The packet filter checks both incoming and outgoing traffic, on the Internet side as well as on the local network.
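To give the flavor of it, here is a fragment of the kind of rules involved - not a complete rule set, and eth0 as the Internet interface plus 10.0.0.0/8 as the private range are assumptions about my setup:

ipchains -P forward DENY                               # no forwarding: internal boxes must go through the proxies
ipchains -A input -i eth0 -s 10.0.0.0/8 -l -j DENY     # private source address arriving from the Internet: spoofed, log it
ipchains -A output -i eth0 -d 10.0.0.0/8 -l -j DENY    # private destination headed out to the Internet: leaked, log it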

So why, in spite of my attitude that it isn't done right if it isn't done different, do I not use iptables? That's because iptables is just too tantalizing. Once I start on that I will most likely go completely overboard in the packet filtering department. I can't read the documentation without thinking of all kinds of interesting extensions that could be added to it. It would suck up all my spare time. If I had started on that, this article would never have been finished. I'm not the only one; e.g. some guys found a way to use iptables to change the fingerprint of outgoing packets to fool OS fingerprinting software. (This must be the ultimate in Linux network security: you can pretend to be OpenBSD.)

Trojans and Traitors

Once I'm done with messing up your Linux box it's perfectly protected, of course. If anyone breaks into it, it's not because I made a mistake, but because you did. You were 20 minutes behind on your security patches. You opened an email attachment that claimed your house was on fire. You accepted a document or program over the Internet and used it without sterilizing it first. You're probably familiar with this set of lame excuses, euphemistically known as 'user education' or, in the Redmond area, as 'best practices'. These are real problems and they need to be solved, not denied. There is essentially no way you can avoid that a user, educated or not, gets tricked into running a trojan. It's just a matter of social engineering. However, it should take more than an email message with a catchy title to completely obliterate your system. This is what eventually led me to completely lose confidence in Windows 95 and its offspring. No matter how many hours you spend on configuring and securing it, it takes a trojan about 10 msec to wreck the system so completely that reinstallation is the only option. All it needs is a single chance.

It is possible, for a determined and talented attacker, to get you to log in as root and start ripping out system daemons and replacing them, and do other major reconfiguration work. However, this requires a level of personal service and attention to detail that is seldom found these days. What is more realistically possible is to trick a user into accidentally running some trojaned macro, script or piece of code. The task is to limit the damage.

I don't think this problem has really been solved yet. Solving it would require the cooperation of applications that bring in untrusted data (email, web browsers) to make sure users are unlikely to accidentally invoke dangerous content. It would involve sandboxing code that works with untrusted data to protect the user from malicious actions. It becomes far more difficult when users actually transfer data from a remote source into their own data collection. That would involve things like trust levels associated with data and danger levels with applications. Applications would have to be sensible in the presence of untrusted data; they wouldn't run scripts or macros in it, and wouldn't invoke dangerous programs on it. Virus scanners could be used as tools to raise trust levels (they'd make sure their database is up to date and trusted). Possibly the operating system itself would get involved to keep track of trust and danger in the user's files. There are interesting research topics waiting here, but this isn't the moment.

Some things can be done without getting a PhD in the process. Things like mailcap can be configured not to start any dangerous program on mail, but to use safe viewers only. Some mail transfer agents can be configured to scan for dangerous content and do something about it. Sometimes proxies can be used to provide some protection; e.g. there is a patch to let junkbuster selectively disable javascript in the web pages viewed.

My main line of defense is to give each user two accounts: one to handle untrusted data and a real account. The real account is for programming and other serious work. The other one (the 'Internet' account) is used for Internet access and other activities involving potentially dangerous data. Users can, of course, transfer data between both accounts; the 'real' account has rights to access some parts of the Internet account's home directory. I'm fully aware that it's virtually impossible to build a truly strong barrier between the two accounts if users can freely move data between them. Nevertheless, it provides a bit of protection for things the user wants to protect. A trojan will have to sneak through several stages to get to the real account, which is difficult for a simple 'exploit web browser bug' type of trojan.

The untrusted accounts have their home directories on a partition that is mounted with the options 'noexec,nosuid,nodev'. This means you cannot execute files in their home directories. There also cannot be any suid programs and no devices, but that's a minor issue. The main purpose is to prevent an attacker from smuggling executable code onto the system and tricking a user into running it. At the same time, if these Internet accounts are the only ones on the system with access to e.g. /dev/modem, with some tweaking things can be set up so that only those accounts can use dialup Internet connections.

The effect of our 'noexec' mount is completely destroyed by /tmp, which normally has rights like 'drwxrwxrwt'. This means anyone can do anything in this directory except delete other people's files. Hence /tmp must be mounted with the same 'noexec,nosuid,nodev' options, or be a symlink to such a partition. Doing so causes problems; for example, midnight commander uses shell scripts generated in /tmp to implement some functionality. Faced with this, I've decided that such programs need to be fixed; no executable code allowed in my /tmp directory.
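In /etc/fstab that comes down to entries along these lines; the devices and the mount point for the Internet accounts' home directories are just examples from my own layout:

/dev/hda7   /home/inet   ext2   defaults,nosuid,noexec,nodev   1 2
/dev/hda8   /tmp         ext2   defaults,nosuid,noexec,nodev   1 2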

Of course, is /tmp the only directory with this problem? Nope, there's at least half a dozen other directories deep in /var that have the same access rights. This pisses me off to no end. It turns out my system has a whole bunch of hidden nooks and crannies that I didn't know about, places where any user can freely hide their favorite exploit code, even if they've just stolen an extremely restricted daemon account that normally doesn't have any write rights at all. How many administrators ever check e.g. the metafont directory to see if there is anything suspicious in there? The /tmp directory is a historical anomaly that's more or less implicitly a security violation (there's no easy way to stop anyone from using your disks). I'd appreciate it if its offspring didn't take over unrelated parts of the directory tree. Now I've got to disable the ones I don't use and move the rest over to an easily observed and possibly protected place. Thanks a lot.

This is the part of my setup that I'm least happy about. It's relatively complex and contrived, yet not very powerful. If an attacker can trick you into running a trojan, she can probably trick you into copying it to your real account first. This setup wouldn't have stopped any of the email and macro viruses that plague the Windows world. Those problems are in application programs (and for years I've been enjoying complete safety from them since I can't afford the necessary bloatware). Viruses are notorious for being able to get past obstacles, simply by patiently waiting for the right opportunity. It also breaks some programs; for example, I cannot print from the 'Internet' account because PDQ produces shell scripts in the user's home directory tree to control printing.

In the end, it achieves only two things. The first is to provide each user with a somewhat protected space that is less vulnerable to mail, news or web based trojans. The second one is that viruses and trojans that rely on executable code are likely to be thwarted because they cannot use files to store and copy themselves. Given the amount of damage a single malicious trojan could do I'm willing to go through quite a bit of trouble to implement even this very limited protection.

Suid programs

Every article on unix security tells you to remove (or disable) unnecessary or dangerous suid programs. This article is no exception. Suid programs are dangerous. Suid root programs are really, really dangerous, even more so than root owned daemons.

In fact not all root owned daemons are that dangerous. The danger is in daemons that interact with untrusted users. By carefully crafting the input, a user can make the daemon misbehave. The best known way of doing that is buffer overflows, but there are many other ways. Clever input might trick a daemon into reading or writing files it shouldn't access, starting other programs, or going into an infinite loop. Daemons that have very little or no interaction with untrusted users provide few opportunities to do so. The biggest danger comes from daemons that have extensive conversations with clients over the Internet and perform complex tasks. Such daemons should never run as root.

Things like inetd and tcpserver do often start as root. However, they only accept the connection; they have little or no direct interaction with the other side. Instead they split off a child process that switches as soon as possible to a specified (and hopefully safer) userid. Once interaction with the untrusted user really starts, the daemon should be stripped of all dangerous powers.

The reason suid programs are so dangerous is that interaction with the untrusted user begins before the program is even started. The caller controls the environment in which the suid program will run. You can do really weird things, like closing stdout before starting a suid program. If the program then opens a file, and the operating system reuses the lowest free file descriptor, that file ends up on stdout. Anything the program writes to stdout will be written to the file - something the programmer most likely did not expect...
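From the shell, that trick is a one-liner; the program name is of course hypothetical:

./some-suid-program >&-      # run it with file descriptor 1 (stdout) already closed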

There are many other ways to confuse the program, using things like environment variables, signals, or anything you want. A suid program must very, very carefully sanitize its surroundings to avoid attacks. There is no 'safe' stage in which to prepare for user interaction. This is the curse placed on all that is suid root, and there is no escape.

Sometimes systems ship with hundreds of suid programs, the vast majority of which is never actually used - they're just dormant security risks. Other suid programs might get used - but not by everyone. Most daemons do not call suid root programs unless someone has cracked them and wishes to upgrade to root. Many installations have a classic staff/student type population: a small set of users that may do system administration tasks while other users never need to (nor should) do that.

Suid programs can be made accessible only to those who need to use them. The usual way is to create a special group for each class of suid programs you can identify, and make each user a member of the groups they need. Programs like sudo provide other ways to control who may use what program.

From a philosophical point of view, suid root is often misused as a way to run with security disabled. Complex programs should not run with 'security disabled' just because the programmer couldn't be bothered to do the task in a more careful manner. The power behind suid programs is to run a program with two ids active: that of the invoker and one provided by the system. This means that, for example, a program to put messages in the mail queue should use the userid of the mail system; this should be enough for that task. Most modern mail programs indeed avoid suid root programs for that purpose. In fact such system accounts, like daemon accounts, are often much more restricted than the typical user account. Suid root should be reserved for when security truly must be bypassed (programs like su, for example).

At this point we can't put it off any longer: the thing to do is to get a list of all suid programs on the system and start the boring task of going through them, examining each one. Not just to see if we actually need the program, but also to see what alternatives there are. In the end you should be able to tell what each program is used for and why it can't be eliminated. The questions I ask myself are:

Do I need this program? Could I just strip off the suid bit and sleep better at night? Like all those suid root games I never play.

Are there more appropriate alternatives? For example, mount is suid root on many systems just so users can mount floppies and CDs. Often sudo would be a better solution for that kind of thing (see the sudoers sketch after this list).

How about removing the suid bit and making the user su to the right account before using it? YMMV on this; some people prefer to make things like ping and traceroute normal programs so only root can use them. This is a tradeoff: if you overdo it, people will have to su to root far more often, which is itself a security risk. I prefer ping to be suid root, but in many server environments only root would ever use such programs, so what's the point?

If the program is suid root, is this a program that really needs to disable security? Many people leave things like sendmail and lpr alone when pruning suid programs since these are essential system services. It's worth looking into alternatives that do not need suid root programs.

Who needs to use this program? Frequently the answer is "only real users, not daemons", "only users that do system administration", or "only users over 18". In such cases access controls should be set up that restrict access to these programs.
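
As promised above, a sketch of the sudo alternative for the mount example; the group name and device are assumptions about a typical desktop setup:

    # /etc/sudoers fragment (edit with visudo): members of group 'floppy' may
    # mount and unmount the floppy drive without ever becoming root themselves
    %floppy ALL = (root) NOPASSWD: /bin/mount /dev/fd0, /bin/umount /dev/fd0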

Unfortunately, there are many programs that need to be suid root, even though you really wouldn't want them to be. Most common are programs that need to access a privileged port number or some other privileged network feature. For example, bringing up a dialup PPP link requires superuser rights because you're messing with IP and interface configuration. We can only hope that in the future capabilities and other alternative mechanisms will put a halt to full-scale suid root programs popping up all over the place.

On my system I counted 49 suid programs, 47 of which were suid root. There were also 28 sgid programs, 5 of which were sgid root.
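
If you want to make the same count on your own system, something like this (assuming GNU find; -xdev stays on one filesystem, so repeat it per mounted filesystem) produces the two lists:

    find / -xdev -type f -perm -4000 -ls    # all suid files
    find / -xdev -type f -perm -2000 -ls    # all sgid files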

Of the 47 suid root programs, I found I could disable the suid bit on 35 of them. 20 were programs that were never used - either because they provide a service I don't use, or because they require hardware I don't have. For 8 it seemed appropriate to restrict them to root use only (e.g. dump and restore), and I could eliminate 7 more by switching to alternatives that do not require suid root programs. This eliminates about three quarters of all suid programs.

Of the remaining 12, 7 had a legitimate reason to be suid root (like su) while the other 5 needed to be suid root for practical reasons, even though there really should be a better way of doing this. Examples are ping and Xwrapper. All twelve remaining programs had one thing in common: normally they would only be started by humans. No system account needs to use them. I therefore put them all into a directory that only the accounts corresponding to real users can access. If an attacker breaks into a system account he cannot reach any suid root program. For six of the twelve I added extra requirements - a number of special group IDs control who may access these.
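
The directory trick itself is simple; here is a sketch with made-up names ('realusers' being a group that every human account belongs to, and passwd standing in for whichever suid binaries survive the pruning):

    mkdir /usr/suid
    chgrp realusers /usr/suid
    chmod 750 /usr/suid                        # system accounts can't even look inside
    mv /usr/bin/passwd /usr/suid/              # move the surviving suid binaries here
    ln -s /usr/suid/passwd /usr/bin/passwd     # keep the old path working for real users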

There were 2 non-root suid programs. Both appeared to use suid to provide controlled access to files for a service. Normally sgid is used for this, but there may be occasions where there is a good reason to use suid instead. In any case, since I use neither of them I disabled both.

There were also 28 sgid files. Sgid is a somewhat more pleasant mechanism, because the user's true identity is never camouflaged in any way. Only 5 are sgid root, which doesn't mean much since group root is not in any way special. In fact sgid root is a major stupidity alert to me; most programs use a group id that identifies the service they're working for ('news', 'man'), which helps a lot if you're trying to audit your system. Worse, since all important root owned files are also of group root, an sgid root program might accidentally get more privileges than usual on some of these unrelated files. This should not happen on a correctly configured system, but it's no fun having to check for it. I found most sgid root programs were sgid root simply because they were installed incorrectly...

The most common sgid group is games, which appears 12 times. These are all games using this mechanism to e.g. write to a shared high score file. Personally, I couldn't care less if someone gained group games rights and cheated on his pacman scores. The other 11 are assorted system accounts, used for things like mail, news, and other services. For each of these, it would be undesirable for someone to gain access and mess with that service. However, an exploit in these programs would not directly threaten the whole system; only one particular service or subsystem gets compromised. Note that in my setup, none of these sgid programs will gain you access rights to suid root executables. They cannot be used, even if compromised, as a stepping stone from an unprivileged system account to suid root programs.

All in all, I found an interesting dichotomy here. On one hand, there are the suid root programs, each of them potentially dangerous. On the other hand, there is a set of sgid non-root programs, relatively harmless if things are setup correctly. For both, you'll be able to eliminate many and limit access to others - which should enhance security considerably. From a practical point of view, no suid root program should be world executable. There simply are none that need it.

It's interesting to note that, for the standard lpr printing system, no less than three suid root programs are needed for normal users (lpr, lpq and lprm), while administering the printing system requires only a single sgid lp program (lpc).

What did I learn?

I learned a lot. Most of all I learned that the average Linux distribution is made with almost no serious thought about security. If you take a look at a default setup you will find, within 20 minutes, at least half a dozen ways to improve it. Unfortunately most of these will take many hours to implement.

A cause of many security problems is that most distributions don't differentiate between installing something and activating it. Most people will install everything they think they might want to use some time. You quickly find out it's better to err on the side of installing too much: after digging up the CD for the third time because yet another package was needed, you stop trying to figure out what everything is and just install it unless you're really, really sure you'll never need it. For me, that amounts to installing about everything except Klingon fonts and emacs. No wait, I might need Klingon fonts some day.

The typical distro will not just install the stuff but also activate it. In the case of the Klingon fonts that is hardly a problem. When the package contains suid root programs or public servers, there is a lot to be said for keeping them disabled until someone actually starts using them.

A piece of software on its way from a shiny CD-ROM to being actually useful goes through four steps:

Installation, the task of transferring it from CD-ROM to the right place on the hard drive.

Configuration, setting it up the way you want it. It might vary from setting just a few options to providing complex content, but very few programs require no configuration at all.

Activation. Services are started, suid programs get their suid bit set.

Exposure, trying to set things up so that those you want to use it can access it while others are excluded as much as possible.

Distributions think they do people a favor by short-circuiting these steps: upon installation they load some half-baked default configuration, activate everything and expose it to as much of the world as possible. The results vary from useless (what's the point of activating an http server before anyone has put up any content for it?) to completely insane (putting linuxconf on a public port). Given that 99% of these computers will connect to the Internet some way or another, this leaves the poor user with the task of shutting down or securing ten services she doesn't need for every one that is desired. Gee, thanks a lot.

There is no reason why all but the most essential suid programs couldn't be installed with the suid bit off, as long as a simple way is provided to activate them when needed. In the same vein, there is no need for servers to be running by default if a simple method exists to activate the ones that are needed. Most stuff has to be configured anyway and can be activated as part of the configuration process. Activating a few things would certainly be a lot less work than trying to find what's active, determining what it is and whether it's needed, and shutting it down if not.

But it isn't just the distributions that cut corners on security. It's been clear for years that Internet security would improve greatly if all ISPs checked the source addresses of the IP packets they forward for their users. This would make spoofing much harder, which not only stops certain attacks but also makes it more difficult for attackers to cover their tracks. But for many ISPs the only check ever made is whether there is a valid credit card number on file; configuring routers is too difficult, too much work, and a load of other stupid excuses.

Billions have been spent on writing Y2K readiness statements, but configuring routers to stop spoofed packets is suddenly beyond the reach of mortals.

I won't even mention a certain competing OS family that not only has scripting everywhere and everything eager to follow any URL, but also invented ActiveX as the ultimate in destructive content.

So every user will have to take care of securing their own setup. I found out it is a lot of work. Most of it is spent reading documentation, trying to understand what's going on and fixing the more obvious problems. This article is much, much longer than I had originally intended, mostly because every time I found out enough to fix one problem I learned enough to identify at least two new ones. I don't mind doing some work on securing my machine. But I want it to be my own personal touch - tailoring things to my own situation and building in a few tricks of my own that'll snare the unsuspecting cracker. Not painfully tracking down a long list of stupid things like world-accessible X servers, and spending yet another few hours to find a decent fix for each.

There are many more things to do, things I originally wanted to do but didn't get around to because there were so many trivial things to fix first. Kernel patches, like LIDS and the openwall patches. Various integrity checking and intrusion detection methods. Someday.

Not everything is complicated. Sometimes the simplest fixes are the best ones; for example, the Bastille hardening script installs an immutable, root owned, empty .rhosts file in every home directory. A very simple fix that eliminates a classic weak spot. It's worth looking around a bit to see what others have done.
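
A sketch of that fix if you want to apply it by hand (assuming an ext2 filesystem, where chattr +i makes a file immutable; note the immutable flag must be set last, or the chown and chmod would fail):

    for home in /home/*; do
        : > "$home/.rhosts"              # create an empty file (or truncate an existing one)
        chown root:root "$home/.rhosts"
        chmod 000 "$home/.rhosts"
        chattr +i "$home/.rhosts"        # even root must clear the flag before changing it
    done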

Articles like this one tend to either end or begin with the author telling you there is no absolutely secure system unless it's off, encased in concrete and preferably on another planet. While technically true, the main reason they're telling you this is that mankind fundamentally just likes telling other people bad news. You can't have an absolutely secure system - but with some effort you can come up with a pretty good imitation. Linux security is now at the stage Linux installation was some years ago. Back then, installing Linux and getting X to run required quite a bit of knowledge and skill, as well as some serious documentation diving. Nowadays most bumps are smoothed out and you have comfortable graphical installers that help you with the few things that can't be done automatically. Hardening scripts and security options are getting more and more common in distributions. Once distributors figure out that these are no excuse for a sloppy default setup, and that setting things up carefully from the start is probably easier than securing them afterwards, we're likely to see some progress. Currently a good security setup requires skill, knowledge and time. If history has the decency to repeat itself, then in a few years any Linux distribution will be able to build a secure default setup with just a few hints from the user.

 