Interfaces should be minimal (as simple as possible), narrow (provide only the functions needed), and non-bypassable. Trust should be minimized. Applications and data viewers may be used to display files developed externally, so in general don't allow them to accept programs (including auto-executing macros) unless you're willing to do the extensive work necessary to create a secure sandbox.
As noted earlier, it is an important general principle that a program should have the minimal amount of privilege necessary to do its job. That way, if the program is broken, the damage it can do is limited. The most extreme application of this principle is to simply not write a secure (privileged) program at all; if this can be done, it usually should be.
In Linux, the primary determiner of a process's privileges is the set of ids associated with it: each process has a real, effective, filesystem, and saved id for both the user and group. Manipulating these values is critical to keeping permissions minimized.
Permissions should be minimized along several different dimensions:
If you must give a program root privileges, consider using the POSIX capability features available in Linux 2.2 and greater to minimize them immediately on program startup. By calling cap_set_proc(3) or the Linux-specific capsetp(3) routines immediately after starting, you can permanently reduce the abilities of your program to just those abilities it actually needs. Note that not all Unix-like systems implement POSIX capabilities. For more information on Linux's implementation of POSIX capabilities, see http://linux.kernel.org/pub/linux/libs/security/linux-privs.
Consider creating separate user or group accounts for different functions, so breaking into one system will not automatically allow damage to others.
You can use the chroot(2) system call so that the program has only a limited set of files available to it. This requires carefully setting up a directory (called the ``chroot jail''). A program with root permission can still break out (for example, by using mknod(2) to create a device file giving access to the raw disk or system memory), but otherwise such a jail can significantly improve a program's security.
Some operating systems have the concept of multiple layers of trust in a single process, e.g., Multics' rings. Standard Unix and Linux don't have a way of separating multiple levels of trust by function inside a single process like this; a call to the kernel increases permission, but otherwise a given process has a single level of trust. Linux and other Unix-like systems can sometimes simulate this ability by forking a process into multiple processes, each of which has different permissions. To do this, set up a secure communication channel (usually unnamed pipes are used), then fork into different processes and drop as many permissions as possible. Then use a simple protocol to allow the less trusted processes to request actions from more trusted processes, and ensure that the more trusted processes only support a limited set of requests.
This is one area where technologies like Java 2 and Fluke have an advantage. For example, Java 2 can specify fine-grained permissions such as the permission to only open a specific file. However, general-purpose operating systems do not typically have such abilities.
Each Linux process has two Linux-specific state values called the filesystem user id (fsuid) and filesystem group id (fsgid). These values are used when checking filesystem permissions. Programs with root privileges should consider changing just the fsuid and fsgid before accessing files on behalf of a normal user. The reason is that setting a process's euid allows the corresponding user to send a signal to that process, while setting just the fsuid does not. The disadvantage is that these calls are not portable to other POSIX systems.
On installation the program should deny all accesses until the user has had a chance to configure it. Installed files and directories should certainly not be world writable, and in fact it's best to make them unreadable by all but the trusted user. If there's a configuration language, the default should be to deny access until the user specifically grants it.
A secure program should always ``fail closed,'' that is, it should be designed so that if the program does fail, the program will deny all access (this is also called ``failing safe''). If there seems to be some sort of bad behavior (malformed input, reaching a ``can't get here'' state, and so on), then the program should immediately deny service. Don't try to ``figure out what the user wanted'': just deny the service. Sometimes this can decrease reliability or usability (from a user's perspective), but it increases security.
Secure programs must determine if a request should be granted, and if so, act on that request. There must be no way for an untrusted user to change anything used in this determination before the program acts on it.
This issue repeatedly comes up in the filesystem. Programs should generally avoid using access(2) to determine if a request should be granted, followed later by open(2), because users may be able to move files around between these calls. A secure program should instead set its effective id or filesystem id, then make the open call directly. It's possible to use access(2) securely, but only when a user cannot affect the file or any directory along its path from the filesystem root.
In general, do not trust results from untrustworthy channels.
In most computer networks (and certainly for the Internet at large), no unauthenticated transmission is trustworthy. For example, on the Internet arbitrary packets can be forged, including header values, so don't use their values as your primary criteria for security decisions unless you can authenticate them. In some cases you can assert that a packet claiming to come from the ``inside'' actually does, since the local firewall would prevent such spoofs from outside, but broken firewalls, alternative paths, and mobile code make even this assumption suspect. In a similar vein, do not assume that low port numbers (less than 1024) are trustworthy; in most networks such requests can be forged or the platform can be made to permit use of low-numbered ports.
If you're implementing a standard but inherently insecure protocol (e.g., ftp or rlogin), provide safe defaults and clearly document the security assumptions.
The Domain Name System (DNS) is widely used on the Internet to maintain mappings between the names of computers and their IP (numeric) addresses. The technique called ``reverse DNS'' eliminates some simple spoofing attacks, and is useful for determining a host's name. However, this technique is not trustworthy for authentication decisions. The problem is that a DNS request will eventually be sent to some remote system that may be controlled by an attacker. Therefore, treat DNS results as input that needs validation, and don't trust them for serious access control.
If asking for a password, try to set up a trusted path (e.g., require pressing an unforgeable key before login, or display an unforgeable pattern such as flashing LEDs). When handling a password, encrypt it between trusted endpoints.
Arbitrary email (including the ``from'' value of addresses) can be forged as well. Using digital signatures is a method to thwart many such attacks. A more easily thwarted approach is to require emailing back and forth with special randomly-created values, but for low-value transactions such as signing onto a public mailing list this is usually acceptable.
If you need a trustworthy channel over an untrusted network, you need some sort of cryptographic service (at the very least, a cryptographically strong hash); see the section below on cryptographic algorithms and protocols.
Note that in any client/server model, including CGI, the server must assume that the client can modify any value. For example, so-called ``hidden fields'' and cookie values can be changed by the client before being received by CGI programs. These cannot be trusted unless they are signed in a way the client cannot forge and the server checks the signature.
The routines getlogin(3) and ttyname(3) return information that can be controlled by a local user, so don't trust them for security purposes.
The program should check to ensure that its call arguments and basic state assumptions are valid. In C, macros such as assert(3) may be helpful in doing so.
In network daemons, shed or limit excessive loads. Set limit values (using setrlimit(2)) to limit the resources that will be used. At the least, use setrlimit(2) to disable creation of ``core'' files. Normally Linux will create a core file that saves all program memory if the program fails abnormally, but such a file might include passwords or other sensitive data.