distcc frequently asked questions

Question not answered here? Check the mailing list archives or send email to distcc (at) lists.samba.org

What compilers are supported?

gcc is fully supported, building C, C++, Objective C and Objective C++. Other gcc languages such as Java are not supported. All recent versions of gcc are thought to work, though later versions tend to work better.

Intel's icc compiler is somewhat compatible with gcc and works with distcc, but some problems have been reported.

Sun's proprietary C compiler is reported to work with distcc in C mode. It appears that Sun CC cannot compile C++ templates correctly when using a separate preprocessor, so it is generally not practical to compile C++ using Sun CC and distcc.

The main feature required by distcc is that the compiler must be able to run the preprocessor separately, and then compile the preprocessor output from a file. This was a basic part of the original design of C, but some compilers seem to have lost the ability to do this. However, this feature is not required if you use distcc's "pump" mode. Secondarily, distcc is currently hardcoded to suit gcc's behaviour and command-line syntax, so only compilers that act like gcc will work. This could in principle be changed.

How to build gcc with distcc?

gcc uses an unusual three-stage process to verify that the compiler can recompile itself with the same results. Because of bugs in the gcc integrated preprocessor, compiling locally can produce a program that is functionally identical to, but not byte-for-byte identical with, one compiled remotely.

This can apparently be fixed by specifying 127.0.0.1 in the host list, rather than localhost. This causes distcc to run the compilation "remotely" on the same machine, taking the same path as a genuinely remote compile.
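For example, the host list might look like this (the names other than 127.0.0.1 are made up):

  $ export DISTCC_HOSTS='127.0.0.1 buildhost1 buildhost2'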

You will also need to make sure that the build directory is mounted at the same location on all machines so that the new compiler can be located.

(More information or instructions here would be welcome.)

distcc gets slower when I add slow machines to the cluster

Make sure you put the preferred (fastest/closest/least loaded) machines at the start of the DISTCC_HOSTS list. This is particularly important when running ./configure scripts because all compilation will be done on the first machine listed. Normally this should be localhost, but if another machine is much faster then perhaps not.
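A minimal sketch, with made-up hostnames listed fastest first:

  $ export DISTCC_HOSTS='localhost fastbox mediumbox slowbox'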

To some extent this is still an open bug that I hope to address in a future version. It would be nice if distcc could automatically detect the best distribution, but it doesn't do that yet.

Restarting distccd on reboot

You may have distccd installed on a machine where you don't have root, and want it to restart when the machine reboots.

One way to do this, suggested by Shane McDaniel, is to put it in your per-user crontab. distccd will be started at regular intervals, but will exit if something is already listening on the port. Remember to set your PATH in the crontab so that distccd and all necessary compilers can be found.
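A sketch of such a crontab entry; the interval, PATH and network range are placeholders, and the command is run via the shell so $HOME is expanded there:

  */10 * * * *  PATH=$HOME/bin:/usr/bin:/bin distccd --daemon --allow 192.168.0.0/24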

Choosing a userid

It would be awesome if you could run the daemon as a specific user in daemon mode.

It is awesome! :-) From distcc 1.1, root can use the --user option to cause distccd to run as a particular user.

# distccd --user nobody

If distccd is started by root and no user is specified, it will change to the user distcc if possible, or otherwise to nobody. I recommend you create this user when installing the package.
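On a typical Linux system, something like this creates a suitable unprivileged account (the nologin path varies between distributions):

  # useradd --system --shell /sbin/nologin distcc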

If distccd is started by a non-root user then it will continue running as that user.

Why not use a distributed-make-style system?

A few people have extended make to allow it to distribute jobs across several machines. Projects I know of include Graydon Hoare's Doozer, Sun's dmake, pvmmake, and ppmake. GNU Make apparently has internal hooks to add distribution mechanisms, which I think is how ppmake and pvmmake work.

Unlike distcc, most of these programs have no special knowledge of C: they just schedule jobs remotely or locally. The advantage of this is that you can distribute all kinds of jobs, such as linking, building documentation, or compiling programs in other languages.

The disadvantage of this approach is that, because it relies on running tasks on any node, all relevant aspects of the nodes must be the same. This typically means that all machines must have a shared filesystem mounted at the same location, that they must all have exactly the same compiler, headers and libraries installed, that their clocks must be in sync, and usually that they must all have the same OS and CPU architecture.

In some situations, such as a lab of centrally managed machines, this is quite practical. However, many people have a less homogeneous environment: perhaps some machines run a different OS release, or developers are allowed to upgrade libraries on their own machines, or perhaps you just don't want to run NFS.

In this situation distcc is much easier to set up. You don't even need root on the volunteer machines, let alone a mandate to move /home onto NFS.

(You might get away with having slightly different headers or libraries, but the potential for confusion is so great that I think you'd be crazy to try.)

By Amdahl's law, a distributed make system could in principle be faster than distcc, because it can distribute many different jobs. In practice, however, for many projects compiling C or C++ takes over 80% of the time. Many of the other jobs, such as linking, cannot be parallelized anyhow.

I don't know how their performance compares but I would be interested to hear.

Has anybody yet thought of integrating distcc with ccache?

If you don't use distcc's "pump" mode, then they work pretty well as separate programs that call each other. You can either set CC='ccache distcc gcc', or arrange for both ccache and distcc to be "masqueraded" on the path. (See the manual for information on how to install this.)
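A minimal sketch of the first approach (adjust the compiler names and -j level to taste):

  $ make -j8 CC='ccache distcc gcc' CXX='ccache distcc g++'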

Normally it is better for ccache to be run before distcc.

This is very nearly as efficient as making them both part of a single program. The preprocessor will only run once and the preprocessed source will be passed from ccache straight to distcc. Having them separate allows them to be tested, debugged and released separately.

Unless there's a really strong argument to do otherwise, having two smaller programs is better practice, or at least more to my taste, than making one monolithic one.

However, all of that said, now that we have "pump" mode, the trade-offs have changed. Using ccache prevents the use of "pump" mode. It would make sense to integrate caching into distcc so that you can get distributed preprocessing, distributed compilation, and caching all at the same time. This would make a great project for someone...

Also, it would be nice to allow a cache of compiled files to be shared across several users or machines.

Joerg Beyer started a project called gecc to explore this architecture, but development appears to have stalled in 2002.

Temporary files in strange location?

distcc always tries to create subdirs in /root/tmp for its temp files. How do I get around this?

distcc respects the $TMPDIR environment variable when creating its scratch directory. I suspect you have that set in root's .profile. If you unset it in the shell script that launches xinetd, or set it to something not in root's path, then it should be fine.
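A sketch of the idea, in whatever script starts xinetd (the xinetd path is an example):

  unset TMPDIR          # or: export TMPDIR=/tmp
  exec /usr/sbin/xinetd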

Server dies after a few connections when run from inetd

I noticed that when I start a make process, the remote distccd receives 10-15 connections from my host (which is good) and those processes die after a while. This is a long compilation (kdelibs) and after the initial processes die, I cannot see any new ones coming in. Is this normal? It seems as if after a while (2-3 minutes) distcc stops working and only my local gcc is still compiling.

If you're running distccd from inetd, then it may be that inetd thinks that the service is "looping" because of all the rapid connections. You need to increase the maximum connection rate. See the inetd manual.

On traditional BSD inetd, you can do this by changing the word nowait in inetd.conf to something like nowait.1000.
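For reference, an inetd.conf entry might look roughly like this; the service name must be defined in /etc/services, the distccd path is an example, and the exact flags may vary between distcc versions:

  distcc  stream  tcp  nowait.1000  root  /usr/local/bin/distccd  distccd --inetd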

Alternatively, run distccd with --daemon, rather than from inetd.

Running distccd on a firewall?

Another machine I want to add to this "cluster" is my firewall. It's not a very powerful one, but it can help speed things up a little more ;) Are there any known security issues with distcc? How stupid would it be to run it on a server that acts as a firewall?

It depends on your security profile, but it's not completely unreasonable. Hopefully your firewall already has iptables and tcpwrappers protection against connections from the outside world. Just make sure that nobody else can connect to the distccd port.

What -j level to use?

When starting a compilation, the HOWTO says to use -j8. Is this optimal? Is there anything else I should be using for better performance?

For plain (that is, non-pump) mode, you should use about twice the total number of CPUs available, but it depends on your network, program being compiled, available memory, etc. Experiment with different values.
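For example, with a dual-CPU client and two single-CPU volunteers (4 CPUs in total), something like this is a reasonable starting point (hostnames are made up):

  $ export DISTCC_HOSTS='localhost wolf bear'
  $ make -j8 CC=distcc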

Client machines tend to be saturated at about 10-20 jobs, so using values above -j10 is rarely useful. (Perhaps higher levels would work well on clients with two or more CPUs and very fast network connections, memory and disk.)

The advice above is all for plain mode; for "pump" mode, you may see benefit from higher -j values. Please experiment and let us know what works best.

Should I include localhost in the host list?

Naturally, compiling on the machine that drives compilation has low overhead.

But, if a large number of machines are part of the distcc "farm", I suspect most of the driving machine's time would be better spent doing preprocessing and feeding only, increasing the chances of always having something ready to hand to the machines that finish their distcc compilation jobs.

I'm just hoping to collect some opinions on this matter. For a large project and a cluster of about 8 machines, is it actually better to dedicate the driving machine to preprocessing only (and so not include it in the hosts file)?

It will depend on your source tree, your network, your compiler, and your makefile (or alternative) just what fraction of the work must be done locally, which is the most important thing here. At about the 3-4 machine level it may be worth putting localhost last; at 8-10 machines it may be better to leave it out altogether.

Different gcc versions?

If the host machine is using gcc 3.2, can the other machines use an older version of gcc? 2.9.x, for example?

distcc doesn't care. However, in some circumstances, particularly for C++, object files compiled with one version of gcc are not compatible with those compiled by another. This is true even if they are built on the same machine.

It is usually best to make sure that every compiler name maps to a reasonably similar version on every machine. You can either make sure that gcc is the same everywhere, or use a version-qualified compiler name, such as gcc-3.2 or i386-redhat-linux-gcc-3.2.2.
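For example, to pin every machine to the same versioned driver (the names are illustrative):

  $ make CC='distcc gcc-3.2' CXX='distcc g++-3.2'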

In particular, object files can be incompatible because:

  1. Calling conventions or ABIs have changed, particularly for C++.
  2. Different versions optimize differently.
  3. The header files specifically detect the version of gcc being used, to change optimization or to work around bugs. The Linux kernel does this.

In all these cases you would need to "make clean" if you upgraded your gcc.

It is hard to generalize, but using gcc versions which have the first two components the same (e.g. 3.2.1 and 3.2.2) is usually OK. To be safe, use the exact same release on all machines.

It is also a good idea to have the assembler versions be the same too, although the problems there are slightly less complex.

Shouldn't distcc check gcc versions?

It might be a good idea for distcc to check the version of gcc on all the volunteer machines. However, this turns out to be hard to do completely reliably, because two compilers which both call themselves "gcc 3.2.1" can behave differently, presumably because of vendor patches or because vendors have shipped pre-release code.

Since automatic detection would not be a reliable solution, for the moment we depend on the user to make sure they have compatible versions installed.

Using different platforms?

I have a mixed network environment. MacOS X, Linux, and Windows (with cygwin). Seeing distcc really piqued my interest. Using gcc on all of my environments, can I set up distcc to span across this variety of operating systems?

It should be reasonably straightforward. Of course you will need to either install or build appropriate cross compilers for each machine.

For example, on each volunteer machine, build an x86-linux cross compiler and (this is important) install it as "i386-redhat-linux-gcc-3.2.2" or something similar, using the appropriate gcc configuration options to set the name. You also need to make a link to that name on the Linux machine.

Then from Linux, run "distcc i386-redhat-linux-gcc-3.2.2".
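In a makefile-driven build, that might look like this (a sketch, using the example compiler name above):

  $ make CC='distcc i386-redhat-linux-gcc-3.2.2'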

Repeat as appropriate for every combination you want to use.

On Windows, you need to use the Cygwin (or perhaps Mingw?) software, which provides a Unix-like environment for running gcc.

The easiest way to build a cross compiler is to use Dan Kegel's crosstool.

Brian Mosher reports that Mumit Khan's description of building a Cygwin-hosted Linux toolchain works well. Mike Santy suggests a slightly different approach.

Compiling between different i386 Unixes?

Do I need to install cross-compilers to distribute builds between different operating systems on the same CPU architecture? For example, OpenBSD i386 and NetBSD i386.

In general, yes you do. Even on the same system, there can be incompatibilities between the output of compilers, such as:

  • Different object file formats, or variations of the format. (OpenBSD uses a.out, but most Unixes use ELF.)
  • Different patches applied to the compiler, even if it claims to be the same version.

What's hard about synchronizing clocks?

Why do you say it's a feature of distcc that the machine clocks don't need to be synchronized? It's easy to do: you just install an NTP client.

It's not terribly hard, I agree. But it's not quite as trivial as you might think:

  1. Installing an NTP client requires root access on all the machines, not just your own workstation. (There's no way around it, because it needs to change the machine's clock.)
  2. You need to be able to reach a reliable timeserver, which is sometimes a problem with firewalls.
  3. If one of the machines does get out of sync, then builds may be incorrect or you may get errors. You may not notice until after problems have occurred. You can't fix it without intervention from the administrator, who may be on holidays.

What's not there can't break.

What does "listening on 0.0.0.0:3632" mean?

This message means it's listening for connections on a wildcard IP address, so clients coming through any network interface should be able to connect, subject to --allow rules.

If you have more than one interface and want to accept connections on only one of them, use the --listen or --allow options, or both.
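For example (the addresses are placeholders):

  # distccd --daemon --listen 192.168.1.10 --allow 192.168.1.0/24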

How can I use SSH connection multiplexing?

Can I use ssh connection sharing to reduce the overhead of opening SSH connections?

Yes. Using SSH connection sharing can reduce the overhead of establishing SSH connections by a factor of 10.

Make sure you have a recent SSH (only on client side). I think you need OpenSSH 4 or later.

  $ ssh -V
  OpenSSH_4.6p1 Debian-5ubuntu0.1, OpenSSL 0.9.8e 23 Feb 2007

Create a file ~/.ssh/config, and add this:

  Host *
  ControlMaster auto
  ControlPath ~/.ssh_tmp/master-%r@%h:%p

Then create the master SSH connection:

  ssh -fMN hostname

Subsequent connections to that host will now fly!

For best results, create master SSH connections for each host in your distcc host list.
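For example, assuming your host list contains wolf, bear and otter, and remembering that the ControlPath directory above must exist:

  $ mkdir -p ~/.ssh_tmp
  $ for h in wolf bear otter; do ssh -fMN "$h"; done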

For more information on ssh connection sharing, see the ControlMaster and ControlPath entries in the ssh_config(5) manual page.

Will compiling with distcc magically make my program distributed?

All objects files were created , but when I issue command for execution as ./executable then program takes so much time as without distcc. Kindly tell me that how this execution time can be reduced. All things work Properly , But all processors of systems are not utilized during EXECUTION of programs.

(This has to be the weirdest question in the list, but it really was sent.)

distcc makes building the program take less time. The program that is produced is the same as for a local build.

If you want your program to be distributed at run time across several machines, you need to design in parallelism and distribution, possibly using a framework such as MPI or OpenMOSIX.

What you seem to be asking for is to mechanically transform an arbitrary C program into a distributed parallel program. distcc doesn't do anything like that. It is perhaps not quite impossible (people have written parallelizing compilers) but it's very very hard.

make-kpkg with distcc?

How to compile the debian kernel with make-kpkg using distcc?

Use the CONCURRENCY_LEVEL environment variable, e.g. CONCURRENCY_LEVEL=4 make-kpkg kernel_image, after installing distcc into a masquerade directory.
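A sketch, assuming the masquerade directory is /usr/lib/distcc/bin (the location varies by distribution):

  $ export PATH=/usr/lib/distcc/bin:$PATH
  $ CONCURRENCY_LEVEL=4 make-kpkg kernel_image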

How to use an SSH TCP port other than 22?

I want to use two machines over ssh. One machine uses a port other than 22, but I can't set a port in the host specifications for ssh.

Use something like this in your ~/.ssh/config:

  Host bertie
  Port 2202
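With that in place, refer to the host by name in your distcc host list, using distcc's @host syntax for ssh hosts:

  $ export DISTCC_HOSTS='localhost @bertie'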

How do I build a cross compiler to Mac OS X?

Dara Hazeghi says:

Basically, you have to download the same source version compiler as the one on your OS X box, build it on your Linux PC, etc. The assembler now works on Linux as well, so that other issue you had should be moot.

That is, you should compile the Apple source on your non-Apple machine. Apple have some patches that are not in the upstream gcc release so a compiler built from the gnu.org source will not be compatible.

Randomly patched kernel crashes/hangs

I applied some random unstable kernel patches, and now my machine hangs/crashes/corrupts data when I run distcc!

If your kernel crashes, it is by definition a kernel bug.

If you have applied any patches that are not in the kernel.org stable release, your first step should be to back them out and see if the problem is still reproducible.

distccd sometimes unable to find gcc

distccd inherits its PATH from whichever process starts it, and uses this to find compilers. You may need to make sure the PATH is set appropriately by the script that starts distccd. You can also set DISTCCD_PATH, which overrides PATH and bypasses checks for distcc masquerade directories. The daemon's path is logged when --verbose is given.
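For example, in the script that starts the daemon (the path and network are placeholders):

  # DISTCCD_PATH=/usr/bin:/bin distccd --daemon --verbose --allow 10.0.0.0/8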

Error about "jobserver unavailable"

This indicates a problem with your Makefile or shell. See the explanation of this error from GNU make.

This can also happen when make is unable to create the fifo it uses to communicate among parallel processes, because TMPDIR is not accessible. Check that the variable is set properly (or unset), and that the directory has the right permissions.

Files written to NFS filesystems are corrupt

From a mailing list post: Writing object files from distcc to an NFS directory can cause corrupt output: object files will be full of zeros.

This is a bug in the Linux kernel NFS client interaction between mmap and rename. It can be avoided by using the no_subtree_check export option on the NFS server. distcc 2.18 no longer uses mmap to receive files and may not suffer this bug so strongly, but setting no_subtree_check is still recommended.
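On the NFS server, the option goes in /etc/exports, roughly like this (the path and network are examples); re-export with exportfs -ra afterwards:

  /srv/build  192.168.1.0/24(rw,no_subtree_check)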

distccmon doesn't work on NFS

When I set DISTCC_DIR to a directory on an NFS server, or have HOME on an NFS server, the monitors give errors or just don't work.

The monitor checks that the processes recorded in the state files are actually running (using kill -0). If the processes aren't running on the same machine, this doesn't work.

In any case, having DISTCC_DIR on NFS is likely to cause problems with locking. Please set DISTCC_DIR to a local directory instead.
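For example (the directory name is arbitrary):

  $ export DISTCC_DIR=/tmp/distcc-$USER
  $ mkdir -p $DISTCC_DIR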

gcc's DEPENDENCIES_OUTPUT option is broken

Programs that use gcc's DEPENDENCIES_OUTPUT option don't work with ccache.

This should be fixed in ccache 2.3. There is no problem with distcc.

distcc fails to build on OS X

I'm trying to compile Distcc on OS X (gcc 3.1) and it fails with the following:

gcc -DHAVE_CONFIG_H -D_GNU_SOURCE -I./popt -I./src "-DSYSCONFDIR=\"/usr/local/etc\"" -g -O2 -W -Wall -W -Wimplicit -Wshadow -Wpointer-arith -Wcast-align -Wwrite-strings -Waggregate-return -Wstrict-prototypes -Wmissing-prototypes -Wnested-externs -o src/climasq.o -c src/climasq.c
src/climasq.c:69: only 1 arg to macro 'rs_log_error' (2 expected)
src/climasq.c:114: only 1 arg to macro 'rs_trace' (2 expected)
cpp-precomp: warning: errors during smart preprocessing, retrying in basic mode
make: *** [src/climasq.o] Error 1

There is a bug in some versions of Apple's gcc. Building with CFLAGS="-no-cpp-precomp" should fix it. It should be automatically corrected in distcc 2.8 and later.

Problems with gcc -MD

I'm using 0.12, and having trouble with dependency file creation. The makefiles that are set up to use gcc's -MD option aren't working. I'm getting some of the .d files in their proper directory, and not others.

This problem can only occur if you're using gcc 3.0 or later, have the source and object files in different directories or under different names, and you are using -MD or -MMD but not -MF.

The workaround is to change the Makefile to explicitly specify with -MF the file which should receive the dependency information. Many Makefiles already do this. Note that the -MF option is only available in gcc 3.0 and later.
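A sketch of the change, with hypothetical file names; the first form can misplace the dependency file under distcc, the second names it explicitly:

  gcc -MD -c src/foo.c -o obj/foo.o
  gcc -MD -MF obj/foo.d -c src/foo.c -o obj/foo.o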

Because the behaviour of -MD has changed from gcc2 to gcc3 there is no perfect solution available at the moment.

gdb can't find source files

When I try to debug with gdb an executable compiled with distcc, gdb doesn't find the source of the object to be debugged, unless that source is in the directory from which I start gdb.

Unfortunately this is caused by a bug in gcc, which I hope will be fixed in a future release. gcc embeds the directory where the compiler (cc1) was run, when it really ought to record the directory the source came from.

You can work around it for now by using the "directory" command in gdb to tell it where to find the source, or by passing an absolute file name when compiling.
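For example, inside gdb (the path is a placeholder):

  (gdb) directory ~/src/myproject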

Tim Janik has an unofficial patch for distcc which works around this but I think I won't merge it because it's better to fix it in gcc.

This is Debian #148957.

There was a discussion about this bug on the gcc-patches mailing list. This can affect other programs which rely on debug stabs, such as addr2line, and it results in object files not being byte-for-byte identical when they include the source directory. The same bug affects ccache.

TCP_CORK in linux-2.2

Linux 2.2 has a bug to do with TCP_CORK sockets getting stuck in the FIN_WAIT1 state. distcc 0.10 tries to work around it but it is not completely possible; if it causes trouble set DISTCC_CORK=0.

Hung sockets in Linux 2.5

distcc seems to produce a problem in Linux 2.5 where one machine thinks the socket is CLOSED, and the other thinks it is ESTABLISHED. As a result the transfer hangs.

That combination of states should *never* be reachable by a correct TCP implementation. distcc is triggering a bug in the 2.5 TCP stack, which was found and fixed in 2.5 in June 2003.

libtool and trouble building KDE

Trying to build KDE with make CC=distcc CXX=distcc -j5 fails with a libtool error.

The version of libtool included with some KDE releases is buggy.

The best workaround is to install distcc in "masquerade mode". Wayne Davison writes:

To get distcc going, just follow the instructions in the docs to setup a "masquerade" dir, add it to the start of your PATH, and then never fiddle with CC and/or CXX again (i.e. undefine them). Note that you need to be running a 2.x version of distcc for masquerade support to be integrated by default (e.g. 2.0.1). (Side note: if you're using a binary RPM of distcc, make sure that the maintainer has made a masquerade dir a part of the installed config. If not, ask them to do so, as it is the easiest way to make distcc compatible with the widest range of packages.)

To be explicit, do something very similar (or identical) to this:

# mkdir -p /usr/lib/distcc/bin
# cd /usr/lib/distcc/bin
# ln -s ../../../bin/distcc cc
# ln -s ../../../bin/distcc c++
# ln -s ../../../bin/distcc gcc
# ln -s ../../../bin/distcc g++

You can add links to any other compiler names you use on your system as well.

The other setup requirements remain unchanged: add DISTCC_HOSTS in your environment and run make with the -j5 (or whatever) option, perhaps by setting MAKEFLAGS in your environment.
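Putting it together, a typical session might look like this (hostnames are made up):

  $ export PATH=/usr/lib/distcc/bin:$PATH
  $ export DISTCC_HOSTS='localhost wolf bear'
  $ make -j5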

Linux 2.2 makefile strangeness

Linux 2.2.21 cannot be built with distcc because of bugs in the kernel's Makefile. ($CC is set to the compiler name plus the computed options.) It may be possible to use distcc in masquerade mode.

Compiler command line parsing problems?

distcc parses the compiler command line to work out what operation is being invoked, and what are the input and output files. The semantics of gcc command lines is fairly complex, and has changed slightly over time. There may be some valid command lines that distcc understands differently to gcc, though none are known at the moment. These ought to cause the command to be run locally, but it is possible that it would cause a failure. Either case should be reported as a bug.

Can't handle local file access

distcc can't handle compilers that need to read other files from the local filesystem. This might be a problem with such things as profile-directed optimizers. distcc tries to detect such commands and run them locally, but there may be cases which are not handled properly.

Huge files

distcc's protocol and file IO would probably have trouble with source or object files over 2GB in size. I've never heard of a .c or .o file that large, and I rather suspect gcc would not handle them well either.

KDE builds slowly with --enable-final

Bernardo Innocenti says

Using the --enable-final configure option of KDE makes distcc almost useless.

Frerich Raabe explains:

--enable-final makes the build system concatenate all sourcefiles in a directory (say, Konqueror's sourcefiles) into one big file.

Technically, this is achieved by creating a dummy file which simply includes every C++ sourcefile. The advantage of this is that the compile (a) takes less time, since there is much less scattered file opening involved, and (b) usually produces more optimized code, since the compiler can see more code at once and thus has more chances to optimize. Of course this eats a lot more memory, but that is not an issue nowadays.

Now, it's clear why this makes distcc useless: there is just one huge file per project, and outsourcing that file via distcc to another node will just delay the build, since the source code (and it's a lot) has to be transferred over the network, and there is no way to parallelize this.

To avoid this, configure with --disable-final.

C++ code that uses #pragma implementation doesn't seem to work properly.

That pragma can't work with distcc because it introduces dependencies between the source and local filenames. It is strongly deprecated in gcc and there are no plans to support it in distcc.

Data corruption in compiles

Typical symptoms include linker errors, weird syntax errors or compiler crashes, caused by corrupt object or source files.

This has occurred several times in the past, typically because of kernel bugs such as Gentoo #36320. See if you can reproduce the problem using a standard kernel.org kernel.

Linux kernel panic using gigabit ethernet

There is a bug in the Linux 2.4.26 kernel [Google groups thread] that is triggered by using distcc. It is timing-related and seems more likely to occur with a gigabit network adapter.

This patch may fix the problem. If it doesn't, please report your problem to the kernel mailing list.

You may be able to avoid it by setting these environment variables on both the client and the server:

DISTCC_MMAP=0 DISTCC_SENDFILE=0

This bug is present in the Fedora Core 1 kernel and is Red Hat bug #114192.

Sun Workshop CC does not work with distcc

I am trying to use distcc on a network of Solaris 2.7 workstations and I also ran into this problem. I modified the src just like you did but I still have problems compiling certain C++ files - specifically those that include STL headers.

Has anyone figured out how to prevent the re-inclusion of the headers?

P. Christeas says:

After some time, the use of distcc with SunWS seems an unresolved issue. Of course, I exclude any attempt to modify the STL headers (which in fact are Sun's version of STL headers). The correct statement is that _Sun's CC does not behave correctly when the preprocessor is involved as a separate process_.

This compiler should still be marked as 'unsupported'. The only way (I know of) to make some use of a compile farm is to exclusively mark the few files that *do* compile, and send those through distcc. Hint: it seems that templates and CORBA implementation source do generally break the precompiler.

So Sun CC will not work for C++ with distcc, only for C.

configure has trouble when hosts are down

I'm having a strange problem. When no distcc-server is available and distcc decides to run a task locally, it fails.

configure:5648: checking for i386-redhat-linux-gcc-3.3.2 option to produce PIC
configure:5825: result: -fPIC
configure:5833: checking if i386-redhat-linux-gcc-3.3.2 PIC flag -fPIC works
configure:5854: i386-redhat-linux-gcc-3.3.2 -c -O2 -march=i386
-mcpu=i686 -fPIC -DPIC conftest.c >&5
distcc[32203] (dcc_build_somewhere) Warning: failed to distribute, running locally instead
configure:5858: $? = 0
configure:5866: result: no

For some reason configure thinks -fPIC is not supported, although it works if a distcc-server is started. This causes some software to not work when built with distcc (but without distcc-servers). If built without distcc, it works (as -fPIC is supported).

What could be the cause? I'm running the latest distcc.

autoconf interprets any message from the compiler as a failure for some tests, even if compilation succeeds. This is fixed in distcc 2.18 by always building autoconf tests locally.

distcc only listens for IPv6 connections on BSD

I use distcc from NetBSD's pkgsrc, which sets --enable-rfc2553 by default. When distccd is run as "distccd --daemon --user nobody", distccd seems to *only* be listening on an IPv6 socket.

By default some BSD systems do not allow applications to accept both IPv4 and IPv6 connections through a single server socket. See this message.

There are several options (a sketch of options 2 and 3 follows the list):

  1. Don't build distcc with the --enable-rfc2553 option unless you need to support IPv6.
  2. Tell distcc to explicitly listen on either the IPv4 or IPv6 address, unless you need to accept connections over both protocols.
  3. Set the net.inet6.ip6.v6only sysctl to 0. (This affects all programs on the system, see the BSD manual for details.)
  4. Run two copies of distccd, one listening on the IPv4 address and one on the IPv6 address.
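A sketch of options 2 and 3; the address is a placeholder, and the sysctl affects the whole system as noted above:

  # distccd --daemon --listen 192.168.0.5 --allow 192.168.0.0/24
  # sysctl -w net.inet6.ip6.v6only=0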

distcc fails with "No locks available"

This can happen if your home directory is on an NFS filesystem and NFS locking is not working. Locking in NFS is supported by a separate protocol and daemon which can be down even if other file operations are working OK.

To fix this, either fix NFS locking on your system, or set DISTCC_DIR to point to a local disk. If your system doesn't have a local hard disk you can just use tmpfs or something similar, because none of the files in that directory need to persist across reboots.

How to stop distcc using a particular machine?

I want to stop my machine accepting distcc jobs because I need it for something else.

The easiest way is to just shut down the distccd server. When it comes back up, clients will start using it again.
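Assuming distccd was started standalone rather than from inetd, something like this (run as root or as the user who started it) will stop it:

  # pkill distccd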

distcc only builds on one machine at a time

If I set DISTCC_HOSTS="remote_host localhost" then the project seems to be compiled only on the remote host (I see traces in ethereal and compilation is slow).

You are probably not using the -j option to make or scons, so it's only compiling one file at a time. Only the first host in the list will be used in this case.

Errors with temporary files on Cygwin

Some Windows compilers can't handle the default TMPDIR setting when distcc is run under Cygwin. To fix this, put something like export TMPDIR=c:/temp in /etc/profile.

Server won't start on Windows 98 or ME

For some reason the server does not work properly on Windows 98 unless it is invoked with --no-detach.

Make fails with "file not found"

If Make works for non-parallel builds, but fails when you use -j and distcc, then there is probably a concurrency bug in your Makefile. It is similar to a threading bug in C, C++ or Java when a program is run on a multiprocessor system.

Sometimes it will work with -j without distcc, but fail when you use distcc. This is probably because distcc lets the compilation run faster and therefore more jobs run in parallel.

You just need to fix your Makefile, or avoid building that particular directory in parallel. It is very unlikely that this is a bug in GNU make or distcc.

Internet-wide distributed compilation

I'm a newbie with distcc and Linux. I have been able to bootstrap Gentoo (with lots of effort). I have only one computer on my LAN that I control, so I can't install distcc on the other hosts. I'm looking for a WAN of public distcc servers (I would grant access to mine also). I would like to have a vast distributed network of distcc hosts that would be public. It may use some ssh. I don't even know if this is possible.

Won't it be faster than a laptop and a desktop if we have N hosts? I have heard of sub-ether, which creates a virtual NIC, as far as I could understand. Anybody know if it works with distcc?

This could be done, but some preparatory work is needed:

This really has to be done across SSH or a VPN; sending commands across the public net unencrypted would be insanely insecure.

Connections across the public internet are slower than a LAN in both latency and bandwidth. It's probably only feasible if the machines involved are on the same continent (preferably in the same state) and have at least 512kbit connections.

The more serious problem is compiling on possibly untrusted machines. You need to assume that anyone who sends jobs can take over control of the user running distcc: if it's run within a UML instance or some similar sandbox, and each user has their own instance, that might be OK. Clients also need to implicitly trust all the servers not to give them corrupted or malicious data back.

There is also the organizational issue of just locating machines to use and the right public key to get into them.

Probably the best way to prototype this is to find a few friends who live in the same country. Set up ssh access into each others machines, and run distcc over that to try compilation. Please report your results to the distcc mailing list. If this works well, then we can look at scaling it up to run on larger groups or between potentially untrusted machines.
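A sketch of what such a host list might look like, with made-up hostnames and distcc's @host ssh syntax:

  $ export DISTCC_HOSTS='localhost @alice.example.org @bob.example.org'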

What is the relationship between Apple Xcode and distcc?

Apple's Xcode development kit includes a fork of an old version of distcc, plus Apple patches to locate machines using Rendezvous and a small GUI to configure it.

The source is available from opensource.apple.com.

How can I avoid compiling on workstations when they're in use?

This watch-ssaver script from DMUCS enables distcc only when the screensaver is running.