Every time you compile something from source, you go through the same 3 steps:
$ ./configure
$ make
$ make install
I understand that it may seem tedious, but there are good reasons for keeping these steps separate.
Firstly, ./configure doesn't always find everything that it needs, or in other cases it finds everything it requires but not everything it could use. In that case you would want to know about it (and your ./install.sh script would fail anyway!). The classic example of non-failure with unintended consequences, from my point of view, is compiling large applications like ffmpeg or mplayer. These will use libraries if they are available, but will compile anyway if they aren't, leaving some options disabled. The problem is that you only discover later that it was compiled without support for some format or another, and then you have to go back and redo it.
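If you want to avoid that surprise, it helps to check which optional features configure knows about before building. A rough sketch (the exact flag names vary from project to project, and the codec flag below is only an illustration):
./configure --help | less         # list the optional features and packages the script knows about
./configure --enable-libmp3lame   # then explicitly ask for the ones you actually need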
Another thing ./configure lets you do is customise where on the system the application will be installed. Different distributions/environments have different conventions, and you would probably want to stick to the convention on your system. Also, you might want to install it locally (solely for yourself). Traditionally, the ./configure and make steps aren't run as root, while make install (unless it is installed solely for yourself) has to be run as root.
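As a sketch of that local-install case (the prefix path is just an example):
./configure --prefix="$HOME/.local"   # everything will be installed under your home directory
make
make install                          # no root needed, since nothing is written outside $HOME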
Specific distributions often provide scripts that perform this ./install.sh functionality in a distribution-sensitive manner - for example, source RPMs + spec file + rpmbuild or slackbuilds.
(Footnote: that being said, I agree that ./configure; make; make install; can get extremely tedious.)
First, it should be ./configure && make && make install, since each step depends on the success of the previous one. Part of the reason is evolution and part of the reason is convenience for the development workflow.
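Using && matters because the shell then stops at the first failing step, whereas running the commands separately (or joining them with ;) keeps going regardless:
./configure; make; make install      # make and make install still run even if configure failed
./configure && make && make install  # each step runs only if the previous one succeeded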
Originally, most Makefiles would only contain the commands to compile a program, and installation was left to the user. An extra rule allows make install to place the compiled output in a place that is probably correct; there are still plenty of good reasons why you might not want to do this, including not being the system administrator, or not wanting to install it at all. Moreover, if I am developing the software, I probably don't want to install it. I want to make some changes and test the version sitting in my directory. This becomes even more salient if I'm going to have multiple versions lying around.
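A typical developer loop under those assumptions looks something like this (the binary name and path are made up for illustration):
./configure       # once, or rarely
make              # rebuild after each change
./src/myprogram   # test the freshly built, uninstalled copy straight from the source tree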
./configure goes and detects what is available in the environment and/or desired by the user in order to determine how to build the software. This is not something that needs to change very often and can often take some time. Again, if I am a developer, it's not worth the time to reconfigure constantly. More importantly, since make uses timestamps to rebuild modules, if I rerun configure there is a possibility that flags will change, and then some of the components in my build will be compiled with one set of flags and others with a different set, which might lead to different, incompatible behaviour. So long as I don't rerun configure, I know that my compilation environment remains the same even if I change my sources. If I do rerun configure, I should run make clean first, to remove any built objects and ensure everything is built uniformly.
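So a reconfigure, when it is really needed, looks something like this (the option shown is hypothetical):
./configure --enable-debug   # the configuration, and therefore the flags, change
make clean                   # throw away objects built with the old flags
make                         # rebuild everything uniformly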
The only case where the three commands are run in a row is when users install the program or a package is built (e.g., Debian's debuild or Red Hat's rpmbuild). And that assumes that the package can be given a plain configure, which is not usually the case for packaging, where at least --prefix=/usr is desired. Packagers are also likely to have to deal with fakeroots when doing the make install part. Since there are lots of exceptions, making ./configure && make && make install the rule would be inconvenient for a lot of people who do it on a far more frequent basis!
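For what it's worth, a staged packaging-style install usually looks roughly like this, assuming the Makefile honours the usual DESTDIR convention (automake-generated ones do) and the fakeroot tool is available; the pkgroot path is only an example:
./configure --prefix=/usr
make
fakeroot make DESTDIR="$PWD/pkgroot" install   # files land under ./pkgroot/usr/... without real root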
configure may fail if it finds that dependencies are missing.

make runs a default target, the first one listed in the Makefile. Often this target is all, but not always. So you could only run make all install if you knew that was the target.
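In other words, naming the targets explicitly is only safe if you already know what the Makefile calls them:
make all install      # works only if an "all" target actually exists
make && make install  # relies on the default target instead, whatever it happens to be called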
So ...
#!/bin/sh
if ./configure "$@"; then
    if make; then
        make install
    fi
fi
or:
./configure "$@" && make && make install
The "$@" is included because one often has to provide options to configure; quoting it as "$@" passes those options through to configure unchanged, even if they contain spaces.
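Saved as install.sh and made executable, the script above could then be used like this (the options shown are only examples):
chmod +x install.sh
./install.sh --prefix=/opt/foo --without-x   # everything after the script name goes straight to configure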
But why not just let people do it themselves? Is this really such a big win?
Because each step does different things
./configure
This script has lots of options that you may want to change, like --prefix or --with-dir=/foo. That means every system uses a different configuration. ./configure also checks for missing libraries that need to be installed; anything wrong here means your application won't build. That's why distros have packages that are installed in different places: every distro thinks it's better to install certain libraries and files in certain directories. The instructions may say just to run ./configure, but in practice you should almost always pass it options.
For example, have a look at the Arch Linux packages site. There you'll see that nearly every package uses different configure parameters (assuming they use autotools for the build system).
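For instance, the same source tree might be configured quite differently depending on whose conventions you are following (these invocations are only illustrative):
./configure --prefix=/usr --sysconfdir=/etc   # the sort of thing a distro package would use
./configure                                   # the upstream defaults, usually under /usr/local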
make
This is actually make all by default. And every make has different actions to perform: some do the building, some run tests after building, some check out code from external SCM repositories. Usually you don't have to give any parameters, but again, some packages run them differently.
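A couple of common variations, with the caveat that which targets exist depends entirely on the project:
make         # usually the same as "make all"
make check   # many autotools projects run their test suite under this name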
make install
This installs the package in the place specified with configure. If you want, you can tell ./configure to point to your home directory. However, lots of configure options point to /usr or /usr/local. That means you actually have to use sudo make install, because only root can copy files to /usr and /usr/local.
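So the typical system-wide sequence ends up needing root only for the very last step:
./configure --prefix=/usr/local
make
sudo make install   # only this step writes outside your home directory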
Now you see that each step is a prerequisite for the next one. Each step is a preparation to make things work in a smooth flow. Distros use this metaphor to build packages (like RPM, deb, etc.).
Here you'll see that each step is actually a different state. That's why package managers have different wrappers. Below is an example of a wrapper that lets you build the whole package in one step. But remember that each application has a different wrapper (these wrappers have names like spec, PKGBUILD, etc.):
def setup():
    ...  # use ./configure if autotools is used
def build():
    ...  # use make if autotools is used
def install():
    ...  # use make install if autotools is used
Here one can use autotools, which means ./configure, make, and make install. But another one can use SCons, a Python-based setup, or something different.
As you can see, splitting up each state makes things much easier for maintenance and deployment, especially for package maintainers and distros.