Index of construction environment methods: new (constructor); clone, copy, Install, InstallAs, Precious, AfterBuild, Command, Objects, Program, Library, Module, Depends, RuleSet, DefaultRules, Ignore, Salt, UseCache, SourcePath, ConsPath, SplitPath, DirPath, FilePath, and Help (methods).
Cons - A Software Construction System
A guide and reference for version 2.3.0
Copyright (c) 1996-2001 Free Software Foundation, Inc.
This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with this program; see the file COPYING. If not, write to the Free Software Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
Cons is a system for constructing, primarily, software, but is quite different from previous software construction systems. Cons was designed from the ground up to deal easily with the construction of software spread over multiple source directories. Cons makes it easy to create build scripts that are simple, understandable and maintainable. Cons ensures that complex software is easily and accurately reproducible.
Cons uses a number of techniques to accomplish all of this. Construction scripts are just Perl scripts, making them both easy to comprehend and very flexible. Global scoping of variables is replaced with an import/export mechanism for sharing information between scripts, significantly improving the readability and maintainability of each script. Construction environments are introduced: these are Perl objects that capture the information required for controlling the build process. Multiple environments are used when different semantics are required for generating products in the build tree. Cons implements automatic dependency analysis and uses this to globally sequence the entire build. Variant builds are easily produced from a single source tree. Intelligent build subsetting is possible, when working on localized changes. Overrides can be set up to easily override build instructions without modifying any scripts. MD5 cryptographic signatures are associated with derived files, and are used to accurately determine whether a given file needs to be rebuilt.
While offering all of the above, and more, Cons remains simple and easy to use. This will, hopefully, become clear as you read the remainder of this document.
Cons is a make replacement. In the following paragraphs, we look at a few of the undesirable characteristics of make--and typical build environments based on make--that motivated the development of Cons.
Traditional make-based systems of any size tend to become quite complex. The original make utility and its derivatives have contributed to this tendency in a number of ways. Make is not good at dealing with systems that are spread over multiple directories. Various work-arounds are used to overcome this difficulty; the usual choice is for make to invoke itself recursively for each sub-directory of a build. This leads to complicated code, in which it is often unclear how a variable is set, or what effect the setting of a variable will have on the build as a whole. The make scripting language has gradually been extended to provide more possibilities, but these have largely served to clutter an already overextended language. Often, builds are done in multiple passes in order to provide appropriate products from one directory to another directory. This represents a further increase in build complexity.
The bane of all makes has always been the correct handling of dependencies. Most often, an attempt is made to do a reasonable job of dependencies within a single directory, but no serious attempt is made to do the job between directories. Even when dependencies are working correctly, make's reliance on a simple time stamp comparison to determine whether a file is out of date with respect to its dependents is not, in general, adequate for determining when a file should be rederived. If an external library, for example, is rebuilt and then ``snapped'' into place, the timestamps on its newly created files may well be earlier than the last local build, since it was built before it became visible.
Make provides only limited facilities for handling variant builds. With the proliferation of hardware platforms and the need for debuggable vs. optimized code, the ability to easily create these variants is essential. More importantly, if variants are created, it is important to either be able to separate the variants or to be able to reproduce the original or variant at will. With make it is very difficult to separate the builds into multiple build directories, separate from the source. And if this technique isn't used, it's also virtually impossible to guarantee at any given time which variant is present in the tree, without resorting to a complete rebuild.
Make provides only limited support for building software from code that exists in a central repository directory structure. The VPATH feature of GNU make (and some other make implementations) is intended to provide this, but doesn't work as expected: it changes the path of the target file to the VPATH name too early in its analysis, and therefore searches for all dependencies in the VPATH directory. To ensure correct development builds, it is important to be able to create a file in a local build directory and have any files in a code repository (a VPATH directory, in make terms) that depend on the local file get rebuilt properly. This isn't possible with VPATH, without coding a lot of complex repository knowledge directly into the makefiles.
A few of the difficulties with make have been cited above. In this and subsequent sections, we shall introduce Cons and show how these issues are addressed.
Cons is Perl-based. That is, Cons scripts--Conscript and Construct files, the equivalent to Makefile or makefile--are all written in Perl. This provides an immediate benefit: the language for writing scripts is a familiar one. Even if you don't happen to be a Perl programmer, it helps to know that Perl is basically just a simple declarative language, with a well-defined flow of control, and familiar semantics. It has variables that behave basically the way you would expect them to, subroutines, flow of control, and so on. There is no special syntax introduced for Cons. The use of Perl as a scripting language simplifies the task of expressing the appropriate solution to the often complex requirements of a build.
To ground the following discussion, here's how you could build the Hello, World! C application with Cons:
$env = new cons();
Program $env 'hello', 'hello.c';
If you install this script in a directory, naming the script Construct, and create the hello.c source file in the same directory, then you can type cons hello to build the application:
% cons hello
cc -c hello.c -o hello.o
cc -o hello hello.o
A key simplification of Cons is the idea of a construction environment. A construction environment is an object characterized by a set of key/value pairs and a set of methods. In order to tell Cons how to build something, you invoke the appropriate method via an appropriate construction environment. Consider the following example:
$env = new cons( CC => 'gcc', LIBS => 'libworld.a' );
Program $env 'hello', 'hello.c';
In this case, rather than using the default construction environment, as is, we have overridden the value of CC so that the GNU C Compiler equivalent is used, instead. Since this version of Hello, World! requires a library, libworld.a, we have specified that any program linked in this environment should be linked with that library. If the library exists already, well and good, but if not, then we'll also have to include the statement:
Library $env 'libworld', 'world.c';
Now if you type cons hello, the library will be built before the program is linked, and, of course, gcc will be used to compile both modules:
% cons hello
gcc -c hello.c -o hello.o
gcc -c world.c -o world.o
ar r libworld.a world.o
ar: creating libworld.a
ranlib libworld.a
gcc -o hello hello.o libworld.a
With Cons, dependencies are handled automatically. Continuing the previous example, note that when we modify world.c, world.o is recompiled, libworld.a recreated, and hello relinked:
% vi world.c
[EDIT]
% cons hello
gcc -c world.c -o world.o
ar r libworld.a world.o
ar: creating libworld.a
ranlib libworld.a
gcc -o hello hello.o libworld.a
This is a relatively simple example: Cons ``knows'' world.o depends upon world.c, because the dependency is explicitly set up by the Library method. It also knows that libworld.a depends upon world.o and that hello depends upon libworld.a, all for similar reasons.
Now it turns out that hello.c also includes the interface definition file, world.h:
% emacs world.h
[EDIT]
% cons hello
gcc -c hello.c -o hello.o
gcc -o hello hello.o libworld.a
How does Cons know that hello.c includes world.h, and that hello.o must therefore be recompiled? For now, suffice it to say that when considering whether or not hello.o is up-to-date, Cons invokes a scanner for its dependency, hello.c. This scanner enumerates the files included by hello.c to come up with a list of further dependencies, beyond those made explicit by the Cons script. This process is recursive: any files included by included files will also be scanned.
Isn't this expensive? The answer is--it depends. If you do a full build of a large system, the scanning time is insignificant. If you do a rebuild of a large system, then Cons will spend a fair amount of time thinking about it before it decides that nothing has to be done (although not necessarily more time than make!). The good news is that Cons makes it very easy to intelligently subset your build, when you are working on localized changes.
Because Cons does full and accurate dependency analysis, and does this globally, for the entire build, Cons is able to use this information to take full control of the sequencing of the build. This sequencing is evident in the above examples, and is equivalent to what you would expect for make, given a full set of dependencies. With Cons, this extends trivially to larger, multi-directory builds. As a result, all of the complexity involved in making sure that a build is organized correctly--including multi-pass hierarchical builds--is eliminated. We'll discuss this further in the next sections.
A larger build, in Cons, is organized by creating a hierarchy of build scripts. At the top of the tree is a script called Construct. The rest of the scripts, by convention, are each called Conscript. These scripts are connected together, very simply, by the Build, Export, and Import commands.
The Build command takes a list of Conscript file names, and arranges for them to be included in the build. For example:
Build qw(
    drivers/display/Conscript
    drivers/mouse/Conscript
    parser/Conscript
    utilities/Conscript
);
This is a simple two-level hierarchy of build scripts: all the subsidiary Conscript files are mentioned in the top-level Construct file. Notice that not all directories in the tree necessarily have build scripts associated with them.
This could also be written as a multi-level script. For example, the Construct file might contain this command:
Build qw(
    parser/Conscript
    drivers/Conscript
    utilities/Conscript
);
and the Conscript file in the drivers directory might contain this:
Build qw( display/Conscript mouse/Conscript );
Experience has shown that the former model is a little easier to understand, since the whole construction tree is laid out in front of you, at the top-level. Hybrid schemes are also possible. A separately maintained component that needs to be incorporated into a build tree, for example, might hook into the build tree in one place, but define its own construction hierarchy.
By default, Cons does not change its working directory to the directory containing a subsidiary Conscript file it is including. This behavior can be enabled for a build by specifying, in the top-level Construct file:
Conscript_chdir 1;
When enabled, Cons will change to the subsidiary Conscript file's containing directory while reading in that file, and then change back to the top-level directory once the file has been processed.
It is expected that this behavior will become the default in some future version of Cons. To prepare for this transition, builds that expect Cons to remain at the top of the build while it reads in a subsidiary Conscript file should explicitly disable this feature as follows:
Conscript_chdir 0;
You may have noticed that the file names specified to the Build command are relative to the location of the script it is invoked from. This is generally true for other filename arguments to other commands, too, although we might as well mention here that if you begin a file name with a hash mark, ``#'', then that file is interpreted relative to the top-level directory (where the Construct file resides). And, not surprisingly, if you begin it with ``/'', then it is considered to be an absolute pathname. This is true even on systems which use a back slash rather than a forward slash to name absolute paths.
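A short hypothetical fragment (the file and environment names here are invented for illustration) shows the three path forms side by side:

```perl
# Hypothetical Conscript fragment illustrating the path conventions.
Program $env 'tool', 'tool.c';               # relative to this script's directory
Install $env '#export/include', 'tool.h';    # '#' is relative to the top-level directory
Install $env '/usr/local/bin', 'tool';       # leading '/' is an absolute path
```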
You may pull modules into each Conscript file using the normal Perl use or require statements:
use English;
require My::Module;
Each use or require only affects the one Conscript file in which it appears. To use a module in multiple Conscript files, you must put a use or require statement in each one that needs the module.
The top-level Construct file and all Conscript files begin life in a common, separate Perl package. Cons controls the symbol table for the package so that the symbol table for each script is empty, except for the Construct file, which gets some of the command line arguments. All of the variables that are set or used, therefore, are set by the script itself--not by some external script.
Variables can be explicitly imported by a script from its parent script. To import a variable, it must have been exported by the parent and initialized (otherwise an error will occur).
The Export command is used as in the following example:
$env = new cons();
$INCLUDE = "#export/include";
$LIB = "#export/lib";
Export qw( env INCLUDE LIB );
Build qw( util/Conscript );
The values of the simple variables mentioned in the Export list will be squirreled away by any subsequent Build commands. The Export command will only export Perl scalar variables, that is, variables whose name begins with $. Other variables, objects, etc. can be exported by reference--but all scripts will refer to the same object, and this object should be considered to be read-only by the subsidiary scripts and by the original exporting script. It's acceptable, however, to assign a new value to the exported scalar variable--that won't change the underlying variable referenced. This sequence, for example, is OK:
$env = new cons();
Export qw( env INCLUDE LIB );
Build qw( util/Conscript );
$env = new cons(CFLAGS => '-O');
Build qw( other/Conscript );
It doesn't matter whether the variable is set before or after the Export command. The important thing is the value of the variable at the time the Build command is executed. This is what gets squirreled away. Any subsequent Export commands, by the way, invalidate the first: you must mention all the variables you wish to export on each Export command.
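To make the point concrete, here is a hypothetical fragment (variable and file names invented) in which the second Export replaces, rather than augments, the first:

```perl
# Hypothetical fragment: each Export replaces the previous export list.
Export qw( env INCLUDE );        # exports $env and $INCLUDE
Export qw( env INCLUDE LIB );    # must repeat env and INCLUDE to keep exporting them
Build qw( sub/Conscript );       # sub/Conscript may now import all three
```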
Variables exported by the Export command can be imported into subsidiary scripts by the Import command. The subsidiary script always imports variables directly from the superior script. Consider this example:
Import qw( env INCLUDE );
This is only legal if the parent script exported both $env and $INCLUDE. It also must have given each of these variables values. It is OK for the subsidiary script to only import a subset of the exported variables (in this example, $LIB, which was exported by the previous example, is not imported).
All the imported variables are automatically re-exported, so the sequence:
Import qw ( env INCLUDE );
Build qw ( beneath-me/Conscript );
will supply both $env and $INCLUDE to the subsidiary file. If only $env is to be exported, then the following will suffice:
Import qw ( env INCLUDE );
Export qw ( env );
Build qw ( beneath-me/Conscript );
Needless to say, the variables may be modified locally before invoking Build on the subsidiary script.
The only constraint on the ordering of build scripts is that superior scripts are evaluated before their inferior scripts. The top-level Construct file, for instance, is evaluated first, followed by any inferior scripts. This is all you really need to know about the evaluation order, since order is generally irrelevant. Consider the following Build command:
Build qw(
    drivers/display/Conscript
    drivers/mouse/Conscript
    parser/Conscript
    utilities/Conscript
);
We've chosen to put the script names in alphabetical order, simply because that's the most convenient for maintenance purposes. Changing the order will make no difference to the build.
In any complex software system, a method for sharing build products needs to be established. We propose a simple set of conventions which are trivial to implement with Cons, but very effective.
The basic rule is to require that all build products which need to be shared between directories are shared via an intermediate directory. We have typically called this export, and, in a C environment, provided conventional sub-directories of this directory, such as include, lib, bin, etc.
These directories are defined by the top-level Construct file. A simple Construct file for a Hello, World! application, organized using multiple directories, might look like this:
# Construct file for Hello, World!
# Where to put all our shared products.
$EXPORT = '#export';
Export qw( CONS INCLUDE LIB BIN );
# Standard directories for sharing products.
$INCLUDE = "$EXPORT/include";
$LIB = "$EXPORT/lib";
$BIN = "$EXPORT/bin";
# A standard construction environment.
$CONS = new cons (
    CPPPATH => $INCLUDE,    # Include path for C Compilations
    LIBPATH => $LIB,        # Library path for linking programs
    LIBS => '-lworld',      # List of standard libraries
);
Build qw( hello/Conscript world/Conscript );
The world directory's Conscript file looks like this:
# Conscript file for directory world
Import qw( CONS INCLUDE LIB );
# Install the products of this directory
Install $CONS $LIB, 'libworld.a';
Install $CONS $INCLUDE, 'world.h';
# Internal products
Library $CONS 'libworld.a', 'world.c';
and the hello directory's Conscript file looks like this:
# Conscript file for directory hello
Import qw( CONS BIN );
# Exported products
Install $CONS $BIN, 'hello';
# Internal products
Program $CONS 'hello', 'hello.c';
To construct a Hello, World! program with this directory structure, go to the top-level directory, and invoke cons with the appropriate arguments. In the following example, we tell Cons to build the directory export. To build a directory, Cons recursively builds all known products within that directory (only if they need rebuilding, of course). If any of those products depend upon other products in other directories, then those will be built, too.
% cons export
Install world/world.h as export/include/world.h
cc -Iexport/include -c hello/hello.c -o hello/hello.o
cc -Iexport/include -c world/world.c -o world/world.o
ar r world/libworld.a world/world.o
ar: creating world/libworld.a
ranlib world/libworld.a
Install world/libworld.a as export/lib/libworld.a
cc -o hello/hello hello/hello.o -Lexport/lib -lworld
Install hello/hello as export/bin/hello
You'll note that the two Conscript files are very clean and to-the-point. They simply specify products of the directory and how to build those products. The build instructions are minimal: they specify which construction environment to use, the name of the product, and the name of the inputs. Note also that the scripts are location-independent: if you wish to reorganize your source tree, you are free to do so: you only have to change the Construct file (in this example), to specify the new locations of the Conscript files. The use of an export tree makes this goal easy.
Note, too, how Cons takes care of little details for you. All the export directories, for example, were made automatically. And the installed files were really hard-linked into the respective export directories, to save space and time. This attention to detail saves considerable work, and makes it even easier to produce simple, maintainable scripts.
It's often desirable to keep any derived files from the build completely separate from the source files. This makes it much easier to keep track of just what is a source file, and also makes it simpler to handle variant builds, especially if you want the variant builds to co-exist.
Cons provides a simple mechanism that handles all of these requirements. The Link command is invoked as in this example:
Link 'build' => 'src';
The specified directories are ``linked'' to the specified source directory. Let's suppose that you set up a source directory, src, with the sub-directories world and hello below it, as in the previous example. You could then substitute for the original build lines the following:
Build qw( build/world/Conscript build/hello/Conscript );
Notice that you treat the Conscript file as if it existed in the build directory. Now if you type the same command as before, you will get the following results:
% cons export
Install build/world/world.h as export/include/world.h
cc -Iexport/include -c build/hello/hello.c -o build/hello/hello.o
cc -Iexport/include -c build/world/world.c -o build/world/world.o
ar r build/world/libworld.a build/world/world.o
ar: creating build/world/libworld.a
ranlib build/world/libworld.a
Install build/world/libworld.a as export/lib/libworld.a
cc -o build/hello/hello build/hello/hello.o -Lexport/lib -lworld
Install build/hello/hello as export/bin/hello
Again, Cons has taken care of the details for you. In particular, you will notice that all the builds are done using source files and object files from the build directory. For example, build/world/world.o is compiled from build/world/world.c, and export/include/world.h is installed from build/world/world.h. This is accomplished on most systems by the simple expedient of ``hard'' linking the required files from each source directory into the appropriate build directory.
The links are maintained correctly by Cons, no matter what you do to the source directory. If you modify a source file, your editor may do this ``in place'' or it may rename it first and create a new file. In the latter case, any hard link will be lost. Cons will detect this condition the next time the source file is needed, and will relink it appropriately.
You'll also notice, by the way, that no changes were required to the underlying Conscript files. And we can go further, as we shall see in the next section.
Variant builds require just another simple extension. Let's take as an example a requirement to allow builds for both the baNaNa and peAcH operating systems. In this case, we are using a distributed file system, such as NFS, to access the particular system, and only one or the other of the systems has to be compiled for any given invocation of cons. Here's one way we could set up the Construct file for our Hello, World! application:
# Construct file for Hello, World!
die qq(OS must be specified) unless $OS = $ARG{OS};
die qq(OS must be "peach" or "banana")
    if $OS ne "peach" && $OS ne "banana";
# Where to put all our shared products.
$EXPORT = "#export/$OS";
Export qw( CONS INCLUDE LIB BIN );
# Standard directories for sharing products.
$INCLUDE = "$EXPORT/include";
$LIB = "$EXPORT/lib";
$BIN = "$EXPORT/bin";
# A standard construction environment.
$CONS = new cons (
    CPPPATH => $INCLUDE,    # Include path for C Compilations
    LIBPATH => $LIB,        # Library path for linking programs
    LIBS => '-lworld',      # List of standard libraries
);
# $BUILD is where we will derive everything.
$BUILD = "#build/$OS";
# Tell cons where the source files for $BUILD are.
Link $BUILD => 'src';
Build (
    "$BUILD/hello/Conscript",
    "$BUILD/world/Conscript",
);
Now if we login to a peAcH system, we can build our Hello, World! application for that platform:
% cons export OS=peach
Install build/peach/world/world.h as export/peach/include/world.h
cc -Iexport/peach/include -c build/peach/hello/hello.c -o build/peach/hello/hello.o
cc -Iexport/peach/include -c build/peach/world/world.c -o build/peach/world/world.o
ar r build/peach/world/libworld.a build/peach/world/world.o
ar: creating build/peach/world/libworld.a
ranlib build/peach/world/libworld.a
Install build/peach/world/libworld.a as export/peach/lib/libworld.a
cc -o build/peach/hello/hello build/peach/hello/hello.o -Lexport/peach/lib -lworld
Install build/peach/hello/hello as export/peach/bin/hello
Other variations of this model are possible. For example, you might decide that you want to separate out your include files into platform dependent and platform independent files. In this case, you'd have to define an alternative to $INCLUDE for platform-dependent files. Most Conscript files, generating purely platform-independent include files, would not have to change.
You might also want to be able to compile your whole system with debugging or profiling, for example, enabled. You could do this with appropriate command line options, such as DEBUG=on. This would then be translated into the appropriate platform-specific requirements to enable debugging (this might include turning off optimization, for example). You could optionally vary the name space for these different types of systems, but, as we'll see in the next section, it's not essential to do this, since Cons is pretty smart about rebuilding things when you change options.
Cons uses file signatures to decide if a derived file is out-of-date and needs rebuilding. In essence, if the contents of a file change, or the manner in which the file is built changes, the file's signature changes as well. This allows Cons to decide with certainty when a file needs rebuilding, because Cons can detect, quickly and reliably, whether any of its dependency files have been changed.
Cons uses the MD5 (Message Digest 5) algorithm to compute file signatures. The MD5 algorithm computes a strong cryptographic checksum for any given input string. Cons can, based on configuration, use two different MD5 signatures for a given file:
The content signature of a file is an MD5 checksum of the file's contents. Consequently, when the contents of a file change, its content signature changes as well.
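Conceptually, a content signature is just an MD5 digest of the file's bytes. As an illustrative sketch (this is not Cons's internal code), the standard Digest::MD5 module computes one like so:

```perl
use Digest::MD5;

# Sketch: compute a content signature for a source file (illustrative only).
open my $fh, '<', 'world.c' or die "cannot open world.c: $!";
binmode $fh;
my $content_sig = Digest::MD5->new->addfile($fh)->hexdigest;
close $fh;
# $content_sig changes exactly when the bytes of world.c change.
```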
The build signature of a file is a combined MD5 checksum of:
the signatures of all the input files used to build the file
the signatures of all dependency files discovered by source scanners (for example, .h files)
the signatures of all dependency files specified explicitly via the Depends method
the command-line string used to build the file
The build signature is, in effect, a digest of all the dependency information for the specified file. Consequently, a file's build signature changes whenever any part of its dependency information changes: a new file is added, the contents of a file on which it depends change, there's a change to the command line used to build the file (or any of its dependency files), etc.
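Schematically (a sketch of the idea, not Cons's actual implementation), a build signature digests the dependency signatures and the command line together:

```perl
use Digest::MD5 qw(md5_hex);

# Sketch: a build signature combines input signatures, scanned-dependency
# signatures, and the build command line (illustrative only).
my @input_sigs   = ('c712f77189307907f4189b5a7ab62ff3');   # e.g. world.c
my @scanned_sigs = ('d181712f2fdc07c1f05d97b16bfad904');   # e.g. world.h
my $command      = 'cc -c world.c -o world.o';
my $build_sig    = md5_hex(join "\0", @input_sigs, @scanned_sigs, $command);
# Any change to an input, a scanned header, or the command line
# yields a different $build_sig.
```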
For example, in the previous section, the build signature of the world.o file will include:
the signature of the world.c file
the signatures of any header files that Cons detects are included, directly or indirectly, by world.c
the text of the actual command line that was used to generate world.o
Similarly, the build signature of the libworld.a file will include all the signatures of its constituents (and hence, transitively, the signatures of their constituents), as well as the command line that created the file.
Note that there is no need for a derived file to depend upon any particular Construct or Conscript file. If changes to these files affect a file, then this will be automatically reflected in its build signature, since relevant parts of the command line are included in the signature. Unrelated Construct or Conscript changes will have no effect.
Before Cons exits, it stores the calculated signatures for all of the files it built or examined in .consign files, one per directory. Cons uses this stored information on later invocations to decide if derived files need to be rebuilt.
After the previous example was compiled, the .consign file in the build/peach/world directory looked like this:
world.h:985533370 - d181712f2fdc07c1f05d97b16bfad904
world.o:985533372 2a0f71e0766927c0532977b0d2158981
world.c:985533370 - c712f77189307907f4189b5a7ab62ff3
libworld.a:985533374 69e568fc5241d7d25be86d581e1fb6aa
After the file name and colon, the first number is a timestamp of the file's modification time (on UNIX systems, this is typically the number of seconds since January 1st, 1970). The second value is the build signature of the file (or ``-'' in the case of files with no build signature--that is, source files). The third value, if any, is the content signature of the file.
When Cons is deciding whether to build or rebuild a derived file, it first computes the file's current build signature. If the file doesn't exist, it must obviously be built.
If, however, the file already exists, Cons next compares the modification timestamp of the file against the timestamp value in the .consign file. If the timestamps match, Cons compares the newly-computed build signature against the build signature in the .consign file. If the timestamps do not match or the build signatures do not match, the derived file is rebuilt.
After the file is built or rebuilt, Cons arranges to store the newly-computed build signature in the .consign file when it exits.
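The decision procedure just described can be sketched as follows (a schematic outline, not the actual Cons source):

```perl
# Schematic sketch of the rebuild decision (not Cons's actual code).
sub needs_rebuild {
    my ($target, $stored_stamp, $stored_sig, $new_sig) = @_;
    return 1 unless -e $target;              # target missing: must build
    my $mtime = (stat $target)[9];
    return 1 if $mtime != $stored_stamp;     # timestamp differs from .consign entry
    return 1 if $new_sig ne $stored_sig;     # build signature has changed
    return 0;                                # up-to-date
}
```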
The use of these signatures is an extremely simple, efficient, and effective method of improving--dramatically--the reproducibility of a system.
We'll demonstrate this with a simple example:
# Simple "Hello, World!" Construct file
$CFLAGS = '-g' if $ARG{DEBUG} eq 'on';
$CONS = new cons(CFLAGS => $CFLAGS);
Program $CONS 'hello', 'hello.c';
Notice how Cons recompiles at the appropriate times:
% cons hello
cc -c hello.c -o hello.o
cc -o hello hello.o
% cons hello
cons: "hello" is up-to-date.
% cons DEBUG=on hello
cc -g -c hello.c -o hello.o
cc -o hello hello.o
% cons DEBUG=on hello
cons: "hello" is up-to-date.
% cons hello
cc -c hello.c -o hello.o
cc -o hello hello.o
Cons provides a SourceSignature method that allows you to configure how the signature should be calculated for any source file when its signature is being used to decide if a dependent file is up-to-date. The arguments to the SourceSignature method consist of one or more pairs of strings:
SourceSignature 'auto/*.c' => 'content',
                '*' => 'stored-content';
The first string in each pair is a pattern to match against source file path names. The pattern is a file-globbing pattern, not a Perl regular expression; the pattern *.l will match all Lex source files. The * wildcard will match across directory separators; the pattern foo/*.c would match all C source files in any subdirectory underneath the foo subdirectory.
The second string in each pair contains one of the following keywords to specify how signatures should be calculated for source files that match the pattern. The available keywords are content and stored-content:
The Cons default behavior of always calculating a source file's signature from the file's contents is equivalent to specifying:
SourceSignature '*' => 'content';
The '*' will match all source files. The content keyword specifies that Cons will read the contents of a source file to calculate its signature each time it is run.
A useful global performance optimization is:
SourceSignature '*' => 'stored-content';
This specifies that Cons will use pre-computed content signatures from .consign files, when available, rather than re-calculating a signature from the source file's contents each time Cons is run. In practice, this is safe for most build situations, and only a problem when source files are changed automatically (by scripts, for example). The Cons default, however, errs on the side of guaranteeing a correct build in all situations.
Cons tries to match source file path names against the patterns in the
order they are specified in the SourceSignature
arguments:
SourceSignature '/usr/repository/objects/*' => 'stored-content',
                '/usr/repository/*'         => 'content',
                '*.y'                       => 'content',
                '*'                         => 'stored-content';
In this example, all source files under the /usr/repository/objects
directory will use .consign file content signatures, source files
anywhere else underneath /usr/repository will not use .consign
signature values, all Yacc source files (*.y
) anywhere else will not
use .consign signature values, and any other source file will use
.consign signature values.
Cons provides a SIGNATURE
construction variable that allows you to
configure how signatures are calculated for any derived file when its
signature is being used to decide if a dependent file is up-to-date.
The value of the SIGNATURE
construction variable is a Perl array
reference that holds one or more pairs of strings, like the arguments to
the SourceSignature
method.
The first string in each pair is a pattern to match against derived file
path names. The pattern is a file-globbing pattern, not a Perl regular
expression; the pattern `*.obj' will match all (Win32) object files.
The *
wildcard will match across directory separators; the pattern
`foo/*.a' would match all (UNIX) library archives in any subdirectory
underneath the foo subdirectory.
The second string in each pair contains a keyword that specifies how
signatures should be calculated for derived files that match the
pattern. The available keywords are the same as for the
SourceSignature method, with an additional keyword, build, described
below.
The Cons default behavior (as previously described) for using derived-file signatures is equivalent to:
$env = new cons(SIGNATURE => ['*' => 'build']);
The *
will match all derived files. The build
keyword specifies
that all derived files' build signatures will be used when calculating
whether a dependent file is up-to-date.
A useful alternative default SIGNATURE
configuration for many sites:
$env = new cons(SIGNATURE => ['*' => 'content']);
In this configuration, derived files have their signatures calculated from the file contents. This adds slightly to Cons' workload, but has the useful effect of ``stopping'' further rebuilds if a derived file is rebuilt to exactly the same file contents as before, which usually outweighs the additional computation Cons must perform.
For example, changing a comment in a C file and recompiling should
generate the exact same object file (assuming the compiler doesn't
insert a timestamp in the object file's header). In that case,
specifying content
or stored-content
for the signature calculation
will cause Cons to recognize that the object file did not actually
change as a result of being rebuilt, and libraries or programs that
include the object file will not be rebuilt. When build
is
specified, however, Cons will only ``know'' that the object file was
rebuilt, and proceed to rebuild any additional files that include the
object file.
Note that Cons tries to match derived file path names against the
patterns in the order they are specified in the SIGNATURE
array
reference:
$env = new cons(SIGNATURE => ['foo/*.o' => 'build',
                              '*.o'     => 'content',
                              '*.a'     => 'cache-content',
                              '*'       => 'content']);
In this example, all object files underneath the foo subdirectory will use build signatures, all other object files (including object files underneath other subdirectories!) will use .consign file content signatures, libraries will use .consign file build signatures, and all other derived files will use content signatures.
Cons provides a -S
option that can be used to specify what internal
Perl package Cons should use to calculate signatures. The default Cons
behavior is equivalent to specifying -S md5
on the command line.
The only other package (currently) available is an md5::debug
package that prints out detailed information about the MD5 signature
calculations performed by Cons:
% cons -S md5::debug hello
sig::md5::srcsig(hello.c) => |52d891204c62fe93ecb95281e1571938|
sig::md5::collect(52d891204c62fe93ecb95281e1571938) => |fb0660af4002c40461a2f01fbb5ffd03|
sig::md5::collect(52d891204c62fe93ecb95281e1571938,
                  fb0660af4002c40461a2f01fbb5ffd03,
                  cc -c %< -o %>) => |f7128da6c3fe3c377dc22ade70647b39|
sig::md5::current(|| eq |f7128da6c3fe3c377dc22ade70647b39|)
cc -c hello.c -o hello.o
sig::md5::collect() => |d41d8cd98f00b204e9800998ecf8427e|
sig::md5::collect(f7128da6c3fe3c377dc22ade70647b39,
                  d41d8cd98f00b204e9800998ecf8427e,
                  cc -o %> %<) => |a0bdce7fd09e0350e7efbbdb043a00b0|
sig::md5::current(|| eq |a0bdce7fd09e0350e7efbbdb043a00b0|)
cc -o hello hello.o
Many software development organizations will have one or more central repository directory trees containing the current source code for one or more projects, as well as the derived object files, libraries, and executables. In order to reduce unnecessary recompilation, it is useful to use files from the repository to build development software--assuming, of course, that no newer dependency file exists in the local build tree.
Cons provides a mechanism to specify a list of code repositories that will be searched, in-order, for source files and derived files not found in the local build directory tree.
The following lines in a Construct file will instruct Cons to look first under the /usr/experiment/repository directory and then under the /usr/product/repository directory:
Repository qw (
    /usr/experiment/repository
    /usr/product/repository
);
The repository directories specified may contain source files, derived files (objects, libraries and executables), or both. If there is no local file (source or derived) under the directory in which Cons is executed, then the first copy of a same-named file found under a repository directory will be used to build any local derived files.
Cons maintains one global list of repository directories. Cons will eliminate the current directory, and any non-existent directories, from the list.
Cons will also search for Construct and Conscript files in the
repository tree or trees. This leads to a chicken-and-egg situation,
though: how do you look in a repository tree for a Construct file if the
Construct file tells you where the repository is? To get around this,
repositories may be specified via -R
options on the command line:
% cons -R /usr/experiment/repository -R /usr/product/repository .
Any repository directories specified in the Construct or Conscript
files will be appended to the repository directories specified by
command-line -R
options.
If the source code (including the Conscript file) for the library version of the Hello, World! C application is in a repository (with no derived files), Cons will use the repository source files to create the local object files and executable file:
% cons -R /usr/src_only/repository hello
gcc -c /usr/src_only/repository/hello.c -o hello.o
gcc -c /usr/src_only/repository/world.c -o world.o
ar r libworld.a world.o
ar: creating libworld.a
ranlib libworld.a
gcc -o hello hello.o libworld.a
Creating a local source file will cause Cons to rebuild the appropriate derived file or files:
% pico world.c
    [EDIT]
% cons -R /usr/src_only/repository hello
gcc -c world.c -o world.o
ar r libworld.a world.o
ar: creating libworld.a
ranlib libworld.a
gcc -o hello hello.o libworld.a
And removing the local source file will cause Cons to revert back to building the derived files from the repository source:
% rm world.c
% cons -R /usr/src_only/repository hello
gcc -c /usr/src_only/repository/world.c -o world.o
ar r libworld.a world.o
ar: creating libworld.a
ranlib libworld.a
gcc -o hello hello.o libworld.a
If a repository tree contains derived files (usually object files, libraries, or executables), Cons will perform its normal signature calculation to decide whether the repository file is up-to-date or a derived file must be built locally. This means that, in order to ensure correct signature calculation, a repository tree must also contain the .consign files that were created by Cons when generating the derived files.
This would usually be accomplished by building the software in the repository (or, alternatively, in a build directory, and then copying the result to the repository):
% cd /usr/all/repository
% cons hello
gcc -c hello.c -o hello.o
gcc -c world.c -o world.o
ar r libworld.a world.o
ar: creating libworld.a
ranlib libworld.a
gcc -o hello hello.o libworld.a
(This is safe even if the Construct file lists the /usr/all/repository
directory in a Repository
command because Cons will remove the current
directory from the repository list.)
Now if we want to build a copy of the application with our own hello.c
file, we only need to create the one necessary source file, and use the
-R
option to have Cons use other files from the repository:
% mkdir $HOME/build1
% cd $HOME/build1
% ed hello.c
    [EDIT]
% cons -R /usr/all/repository hello
gcc -c hello.c -o hello.o
gcc -o hello hello.o /usr/all/repository/libworld.a
Notice that Cons has not bothered to recreate a local libworld.a library (or recompile the world.o module), but instead uses the already-compiled version from the repository.
Because the MD5 signatures that Cons puts in the .consign file contain timestamps for the derived files, the signature timestamps must match the file timestamps for a signature to be considered valid.
Some software systems may alter the timestamps on repository files (by copying them, e.g.), in which case Cons will, by default, assume the repository signatures are invalid and rebuild files unnecessarily. This behavior may be altered by specifying:
Repository_Sig_Times_OK 0;
This tells Cons to ignore timestamps when deciding whether a signature is valid. (Note that avoiding this sanity check means there must be proper control over the repository tree to ensure that the derived files cannot be modified without updating the .consign signature.)
If the repository tree contains the complete results of a build, and we try to build from the repository without any files in our local tree, something moderately surprising happens:
% mkdir $HOME/build2
% cd $HOME/build2
% cons -R /usr/all/repository hello
cons: "hello" is up-to-date.
Why does Cons say that the hello program is up-to-date when there is no hello program in the local build directory? Because the repository (not the local directory) contains the up-to-date hello program, and Cons correctly determines that nothing needs to be done to rebuild this up-to-date copy of the file.
There are, however, many times in which it is appropriate to ensure that a
local copy of a file always exists. A packaging or testing script, for
example, may assume that certain generated files exist locally. Instead of
making these subsidiary scripts aware of the repository directory, the
Local
command may be added to a Construct or Conscript file to
specify that a certain file or files must appear in the local build
directory:
Local qw( hello );
Then, if we re-run the same command, Cons will make a local copy of the program from the repository copy (telling you that it is doing so):
% cons -R /usr/all/repository hello
Local copy of hello from /usr/all/repository/hello
cons: "hello" is up-to-date.
Notice that, because the act of making the local copy is not considered a ``build'' of the hello file, Cons still reports that it is up-to-date.
Creating local copies is most useful for files that are being installed into
an intermediate directory (for sharing with other directories) via the
Install
command. Accompanying the Install
command for a file with a
companion Local
command is so common that Cons provides an Install_Local
command as a convenient way to do both:
Install_Local $env, '#export', 'hello';
is exactly equivalent to:
Install $env '#export', 'hello';
Local '#export/hello';
Both the Local
and Install_Local
commands update the local .consign
file with the appropriate file signatures, so that future builds are
performed correctly.
Due to its built-in scanning, Cons will search the specified repository trees for included .h files. Unless the compiler also knows about the repository trees, though, it will be unable to find .h files that only exist in a repository. If, for example, the hello.c file includes the hello.h file in its current directory:
% cons -R /usr/all/repository hello
gcc -c /usr/all/repository/hello.c -o hello.o
/usr/all/repository/hello.c:1: hello.h: No such file or directory
Solving this problem forces some requirements onto the way construction
environments are defined and onto the way the C #include
preprocessor
directive is used to include files.
In order to inform the compiler about the repository trees, Cons will add
appropriate -I
flags to the compilation commands. This means that the
CPPPATH
variable in the construction environment must explicitly specify
all subdirectories which are to be searched for included files, including the
current directory. Consequently, we can fix the above example by changing
the environment creation in the Construct file as follows:
$env = new cons(
    CC      => 'gcc',
    CPPPATH => '.',
    LIBS    => 'libworld.a',
);
Due to the definition of the CPPPATH
variable, this yields, when we
re-execute the command:
% cons -R /usr/all/repository hello
gcc -c -I. -I/usr/all/repository /usr/all/repository/hello.c -o hello.o
gcc -o hello hello.o /usr/all/repository/libworld.a
The order of the -I
flags replicates, for the C preprocessor, the same
repository-directory search path that Cons uses for its own dependency
analysis. If there are multiple repositories and multiple CPPPATH
directories, Cons will append the repository directories to the beginning of
each CPPPATH
directory, rapidly multiplying the number of -I
flags.
As an extreme example, a Construct file containing:
Repository qw( /u1 /u2 );
$env = new cons(
    CPPPATH => 'a:b:c',
);
Would yield a compilation command of:
cc -Ia -I/u1/a -I/u2/a -Ib -I/u1/b -I/u2/b -Ic -I/u1/c -I/u2/c -c hello.c -o hello.o
In order to shorten the command lines as much as possible, Cons will
remove -I
flags for any directories, locally or in the repositories,
which do not actually exist. (Note that the -I
flags are not included
in the MD5 signature calculation for the target file, so the target will
not be recompiled if the compilation command changes due to a directory
coming into existence.)
Because Cons relies on the compiler's -I
flags to communicate the
order in which repository directories must be searched, Cons' handling
of repository directories is fundamentally incompatible with using
double-quotes on the #include
directives in any C source code that
you plan to modify:
#include "file.h" /* DON'T USE DOUBLE-QUOTES LIKE THIS */
This is because most C preprocessors, when faced with such a directive, will
always first search the directory containing the source file. This
undermines the elaborate -I
options that Cons constructs to make the
preprocessor conform to its preferred search path.
Consequently, when using repository trees in Cons, always use angle-brackets for included files in any C source (.c or .h) files that you plan to modify locally:
#include <file.h> /* USE ANGLE-BRACKETS INSTEAD */
Code that will not change can still safely use double quotes on #include lines.
Cons provides a Repository_List
command to return a list of all
repository directories in their current search order. This can be used for
debugging, or to do more complex Perl stuff:
@list = Repository_List;
print join(' ', @list), "\n";
Cons' handling of repository trees interacts correctly with other Cons features--which is to say, it generally does what you would expect.
Most notably, repository trees interact correctly, and rather powerfully,
with the 'Link' command. A repository tree may contain one or more
subdirectories for version builds established via Link
to a source
subdirectory. Cons will search for derived files in the appropriate build
subdirectories under the repository tree.
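As a minimal sketch (the directory and repository names are hypothetical), a Construct file might establish a variant build directory with Link; when the repository tree contains the same build subdirectory, Cons will search it for derived files as well:

```perl
# Hypothetical Construct fragment: 'build' is a variant directory
# whose sources come from the 'src' subdirectory.
Link 'build' => 'src';

# With a repository configured, Cons will also look for derived
# files under /usr/all/repository/build (path is illustrative).
Repository '/usr/all/repository';
```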
Until now, we've demonstrated invoking Cons with an explicit target to build:
% cons hello
Normally, Cons does not build anything unless a target is specified, but specifying '.' (the current directory) will build everything:
% cons # does not build anything
% cons . # builds everything under the top-level directory
Adding the Default
method to any Construct or Conscript file will add
the specified targets to a list of default targets. Cons will build
these defaults if there are no targets specified on the command line.
So adding the following line to the top-level Construct file will mimic
Make's typical behavior of building everything by default:
Default '.';
The following would add the hello and goodbye commands (in the same directory as the Construct or Conscript file) to the default list:
Default qw( hello goodbye );
The Default
method may be used more than once to add targets to the
default list.
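For example (the target names here are illustrative), successive calls simply accumulate targets:

```perl
# Each Default call appends to the list of default targets.
Default qw( hello );
Default qw( goodbye );
# Running "cons" with no arguments now builds both hello and goodbye.
```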
Cons provides two methods for reducing the size of a given build: specifying targets on the command line, and pruning the build tree. We'll consider target specification first.
Like make, Cons allows the specification of ``targets'' on the command line. Cons targets may be either files or directories. When a directory is specified, this is simply a short-hand notation for every derivable product--that Cons knows about--in the specified directory and below. For example:
% cons build/hello/hello.o
means build hello.o and everything that hello.o might need. This is from a previous version of the Hello, World! program in which hello.o depended upon export/include/world.h. If that file is not up-to-date (because someone modified src/world/world.h), then it will be rebuilt, even though it is in a directory remote from build/hello.
In this example:
% cons build
Everything in the build directory is built, if necessary. Again, this may cause more files to be built. In particular, both export/include/world.h and export/lib/libworld.a are required by the build/hello directory, and so they will be built if they are out-of-date.
If we do, instead:
% cons export
then only the files that should be installed in the export directory will be
rebuilt, if necessary, and then installed there. Note that cons build
might build files that cons export
doesn't build, and vice-versa.
With Cons, make-style ``special'' targets are not required. The simplest analog with Cons is to use special export directories, instead. Let's suppose, for example, that you have a whole series of unit tests that are associated with your code. The tests live in the source directory near the code. Normally, however, you don't want to build these tests. One solution is to provide all the build instructions for creating the tests, and then to install the tests into a separate part of the tree. If we install the tests in a top-level directory called tests, then:
% cons tests
will build all the tests.
% cons export
will build the production version of the system (but not the tests), and:
% cons build
should probably be avoided (since it will compile tests unnecessarily).
If you want to build just a single test, then you could explicitly name the test (in either the tests directory or the build directory). You could also aggregate the tests into a convenient hierarchy within the tests directory. This hierarchy need not necessarily match the source hierarchy, in much the same manner that the include hierarchy probably doesn't match the source hierarchy (the include hierarchy is unlikely to be more than two levels deep, for C programs).
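A Conscript file for such a unit test might look like the following sketch (the program and file names are hypothetical):

```perl
# Build a test program alongside the production code...
Program $env 'test_world', 'test_world.c';

# ...but install it, with a local copy, only under the top-level
# tests directory, so "cons tests" builds it and "cons export" does not.
Install_Local $env, '#tests', 'test_world';
```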
If you want to build absolutely everything in the tree (subject to whatever options you select), you can use:
% cons .
This is not particularly efficient, since it will redundantly walk all the trees, including the source tree. The source tree, of course, may have buildable objects in it--nothing stops you from doing this, even if you normally build in a separate build tree.
In conjunction with target selection, build pruning can be used to reduce
the scope of the build. In the previous peAcH and baNaNa example, we have
already seen how script-driven build pruning can be used to make only half
of the potential build available for any given invocation of cons
. Cons
also provides, as a convenience, a command line convention that allows you
to specify which Conscript files actually get ``built''--that is,
incorporated into the build tree. For example:
% cons build +world
The +
argument introduces a Perl regular expression. This must, of
course, be quoted at the shell level if there are any shell meta-characters
within the expression. The expression is matched against each Conscript
file which has been mentioned in a Build
statement, and only those
scripts with matching names are actually incorporated into the build
tree. Multiple such arguments are allowed, in which case a match against any
of them is sufficient to cause a script to be included.
In the example, above, the hello program will not be built, since Cons will have no knowledge of the script hello/Conscript. The libworld.a archive will be built, however, if need be.
There are a couple of uses for build pruning via the command line. Perhaps the most useful is the ability to make local changes, and then, with sufficient knowledge of the consequences of those changes, restrict the size of the build tree in order to speed up the rebuild time. A second use for build pruning is to actively prevent the recompilation of certain files that you know will recompile due to, for example, a modified header file. You may know that either the changes to the header file are immaterial, or that the changes may be safely ignored for most of the tree, for testing purposes.

With Cons, the view is that it is pragmatic to admit this type of behavior, with the understanding that on the next full build everything that needs to be rebuilt will be. There is no equivalent to a ``make touch'' command, to mark files as permanently up-to-date, so any risk incurred by build pruning is mitigated. For release-quality work, obviously, we recommend that you do not use build pruning. (It's perfectly OK to use during integration, however, for checking compilation, etc. Just be sure to do an unconstrained build before committing the integration.)
Cons provides a very simple mechanism for overriding aspects of a build. The
essence is that you write an override file containing one or more
Override
commands, and you specify this on the command line, when you run
cons
:
% cons -o over export
will build the export directory, with all derived files subject to the
overrides present in the over file. If you leave out the -o
option,
then everything necessary to remove all overrides will be rebuilt.
The override file can contain two types of overrides. The first is incoming
environment variables. These are normally accessible by the Construct
file from the %ENV
hash variable. These can trivially be overridden in
the override file by setting the appropriate elements of %ENV
(these
could also be overridden in the user's environment, of course).
The second type of override is accomplished with the Override
command,
which looks like this:
Override <regexp>, <var1> => <value1>, <var2> => <value2>, ...;
The regular expression regexp is matched against every derived file that is a candidate for the build. If the derived file matches, then the variable/value pairs are used to override the values in the construction environment associated with the derived file.
Let's suppose that we have a construction environment like this:
$CONS = new cons(
    COPT   => '',
    CDBG   => '-g',
    CFLAGS => '%COPT %CDBG',
);
Then if we have an override file over containing this command:
Override '\.o$', COPT => '-O', CDBG => '';
then any cons
invocation with -o over
that creates .o files via
this environment will cause them to be compiled with -O
and no -g
. The
override could, of course, be restricted to a single directory by the
appropriate selection of a regular expression.
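For instance, an override file could confine the change to object files under one subdirectory (the directory name is hypothetical, and this assumes derived-file paths are matched from the top-level directory):

```perl
# Apply the optimization override only to .o files under libworld/.
Override '^libworld/.*\.o$', COPT => '-O', CDBG => '';
```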
Here's the original version of the Hello, World! program, built with this environment. Note that Cons rebuilds the appropriate pieces when the override is applied or removed:
% cons hello
cc -g -c hello.c -o hello.o
cc -o hello hello.o
% cons -o over hello
cc -O -c hello.c -o hello.o
cc -o hello hello.o
% cons -o over hello
cons: "hello" is up-to-date.
% cons hello
cc -g -c hello.c -o hello.o
cc -o hello hello.o
It's important that the Override
command only be used for temporary,
on-the-fly overrides necessary for development because the overrides are not
platform independent and because they rely too much on intimate knowledge of
the workings of the scripts. For temporary use, however, they are exactly
what you want.
Note that it is still useful to provide, say, the ability to create a fully optimized version of a system for production use--from the Construct and Conscript files. This way you can tailor the optimized system to the platform. Where optimizer trade-offs need to be made (particular files may not be compiled with full optimization, for example), then these can be recorded for posterity (and reproducibility) directly in the scripts.
As previously mentioned, a construction environment is an object that has a set of keyword/value pairs and a set of methods, and which is used to tell Cons how target files should be built. This section describes how Cons uses and expands construction environment values to control its build behavior.
Construction variables from a construction environment are expanded
by preceding the keyword with a %
(percent sign):
Construction variables: XYZZY => 'abracadabra',

The string:  "The magic word is: %XYZZY!"
expands to:  "The magic word is: abracadabra!"
A construction variable name may be surrounded by {
and }
(curly
braces), which are stripped as part of the expansion. This can
sometimes be necessary to separate a variable expansion from trailing
alphanumeric characters:
Construction variables: OPT => 'value1', OPTION => 'value2',

The string:  "%OPT %{OPT}ION %OPTION %{OPTION}"
expands to:  "value1 value1ION value2 value2"
Construction variable expansion is recursive--that is, a string
containing %-
expansions after substitution will be re-expanded until
no further substitutions can be made:
Construction variables: STRING => 'The result is: %FOO',
                        FOO    => '%BAR',
                        BAR    => 'final value',

The string:  "The string says: %STRING"
expands to:  "The string says: The result is: final value"
If a construction variable is not defined in an environment, then the null string is substituted:
Construction variables: FOO => 'value1', BAR => 'value2',

The string:  "%FOO <%NO_VARIABLE> %BAR"
expands to:  "value1 <> value2"
A doubled %%
will be replaced by a single %
:
The string:  "Here is a percent sign: %%"
expands to:  "Here is a percent sign: %"
When you specify no arguments when creating a new construction environment:
$env = new cons();
Cons creates a reference to a new, default construction environment. This contains a number of construction variables and some methods. At the present writing, the default construction variables on a UNIX system are:
CC            => 'cc',
CFLAGS        => '',
CCCOM         => '%CC %CFLAGS %_IFLAGS -c %< -o %>',
CXX           => '%CC',
CXXFLAGS      => '%CFLAGS',
CXXCOM        => '%CXX %CXXFLAGS %_IFLAGS -c %< -o %>',
INCDIRPREFIX  => '-I',
INCDIRSUFFIX  => '',
LINK          => '%CXX',
LINKCOM       => '%LINK %LDFLAGS -o %> %< %_LDIRS %LIBS',
LINKMODULECOM => '%LD -r -o %> %<',
LIBDIRPREFIX  => '-L',
LIBDIRSUFFIX  => '',
AR            => 'ar',
ARFLAGS       => 'r',
ARCOM         => ['%AR %ARFLAGS %> %<', '%RANLIB %>'],
RANLIB        => 'ranlib',
AS            => 'as',
ASFLAGS       => '',
ASCOM         => '%AS %ASFLAGS %< -o %>',
LD            => 'ld',
LDFLAGS       => '',
PREFLIB       => 'lib',
SUFLIB        => '.a',
SUFLIBS       => '.so:.a',
SUFOBJ        => '.o',
SIGNATURE     => [ '*' => 'build' ],
ENV           => { 'PATH' => '/bin:/usr/bin' },
And on a Win32 system (Windows NT), the default construction variables are (unless the default rule style is set using the DefaultRules method):
CC            => 'cl',
CFLAGS        => '/nologo',
CCCOM         => '%CC %CFLAGS %_IFLAGS /c %< /Fo%>',
CXXCOM        => '%CXX %CXXFLAGS %_IFLAGS /c %< /Fo%>',
INCDIRPREFIX  => '/I',
INCDIRSUFFIX  => '',
LINK          => 'link',
LINKCOM       => '%LINK %LDFLAGS /out:%> %< %_LDIRS %LIBS',
LINKMODULECOM => '%LD /r /o %> %<',
LIBDIRPREFIX  => '/LIBPATH:',
LIBDIRSUFFIX  => '',
AR            => 'lib',
ARFLAGS       => '/nologo ',
ARCOM         => "%AR %ARFLAGS /out:%> %<",
RANLIB        => '',
LD            => 'link',
LDFLAGS       => '/nologo ',
PREFLIB       => '',
SUFEXE        => '.exe',
SUFLIB        => '.lib',
SUFLIBS       => '.dll:.lib',
SUFOBJ        => '.obj',
SIGNATURE     => [ '*' => 'build' ],
These variables are used by the various methods associated with the
environment. In particular, any method that ultimately invokes an external
command will substitute these variables into the final command, as
appropriate. For example, the Objects
method takes a number of source
files and arranges to derive, if necessary, the corresponding object
files:
Objects $env 'foo.c', 'bar.c';
This will arrange to produce, if necessary, foo.o and bar.o. The
command invoked is simply %CCCOM
, which expands, through substitution,
to the appropriate external command required to build each object. The
substitution rules will be discussed in detail in the next section.
The construction variables are also used for other purposes. For example,
CPPPATH
is used to specify a colon-separated path of include
directories. These are intended to be passed to the C preprocessor and are
also used by the C-file scanning machinery to determine the dependencies
involved in a C compilation.
Variables beginning with underscore are created by various methods,
and should normally be considered ``internal'' variables. For example,
when a method is called which calls for the creation of an object from
a C source, the variable _IFLAGS
is created: this corresponds to the
-I
switches required by the C compiler to represent the directories
specified by CPPPATH
.
Note that, for any particular environment, the value of a variable is set
once and never reset. (To change a variable, you must create a new
environment; methods are provided for copying existing environments for
this purpose.) Some internal variables, such as _IFLAGS, are created on
demand, but once set, they remain fixed for the life of the environment.
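Since values cannot be reset, the way to vary a variable is to copy the environment; a sketch using the clone method (the variable values and target names are illustrative):

```perl
$debug = new cons(CFLAGS => '-g');

# clone copies the environment, overriding CFLAGS in the copy;
# $debug itself is unchanged.
$opt = $debug->clone(CFLAGS => '-O');

Program $debug 'hello-g', 'hello.c';
Program $opt   'hello-O', 'hello.c';
```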
The CFLAGS
, LDFLAGS
, and ARFLAGS
variables all supply a place
for passing options to the compiler, loader, and archiver, respectively.
The INCDIRPREFIX and INCDIRSUFFIX variables specify option strings to be
added at the beginning and end, respectively, of each include directory
so that the compiler knows where to find .h files. Similarly, the
LIBDIRPREFIX and LIBDIRSUFFIX variables specify the option strings to be
added at the beginning and end, respectively, of each directory that the
linker should search for libraries.
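As an illustration, a hypothetical compiler whose include-directory option takes the form -include:<dir> could be accommodated by overriding the prefix (the option syntax here is invented for the example):

```perl
# Hypothetical compiler option syntax: only the prefix changes, so
# '-include:foo' is generated for each directory in CPPPATH.
$env = new cons(
    INCDIRPREFIX => '-include:',
    INCDIRSUFFIX => '',
    CPPPATH      => '.:include',
);
```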
Another variable, ENV
, is used to determine the system environment during
the execution of an external command. By default, the only environment
variable that is set is PATH
, which is the execution path for a UNIX
command. For the utmost reproducibility, you should really arrange to set
your own execution path, in your top-level Construct file (or perhaps by
importing an appropriate construction package with the Perl use
command). The default variables are intended to get you off the ground.
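Setting your own execution path is just a matter of supplying an ENV hash when the environment is created (the path shown is only an example):

```perl
$env = new cons(
    ENV => {
        PATH => '/usr/local/bin:/bin:/usr/bin',  # example path only
    },
);
```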
Within a construction command, construction variables will be expanded according to the rules described above. In addition to normal variable expansion from the construction environment, construction commands also expand the following pseudo-variables to insert the specific input and output files in the command line that will be executed:
%>
expands to the name of the target file, and %<
expands to the full list of input file names. The pseudo-variables
%1
, %2
, and so on refer to the first, second, etc. input file
individually. If any input files are referenced individually elsewhere
on the command line (via %1
, %2
, etc.), then
those will be deleted from the list provided by %<
. Consider the
following command found in a Conscript file in the test directory:
Command $env 'tgt', qw(foo bar baz), qq(
    echo %< -i %1 > %>
    echo %< -i %2 >> %>
    echo %< -i %3 >> %>
);
If tgt needed to be updated, then this would result in the execution of the following commands, assuming that no remapping has been established for the test directory:
echo test/bar test/baz -i test/foo > test/tgt
echo test/foo test/baz -i test/bar >> test/tgt
echo test/foo test/bar -i test/baz >> test/tgt
Any of the above pseudo-variables may be followed immediately by one of the following suffixes to select a portion of the expanded path name:
:a    the absolute path to the file name
:b    the directory plus the file name stripped of any suffix
:d    the directory
:f    the file name
:s    the file name suffix
:F    the file name stripped of any suffix
Continuing with the above example, %<:f
would expand to foo bar baz
,
and %>:d
would expand to test
.
There are additional %
elements which affect the command line(s): %[
and %]
. These cause the construction variable named as the first word
enclosed in the brackets to be called as a Perl code reference; the
result of that call replaces the contents of the brackets in the
command line. For example, given an existing input file named tgt.in:
@keywords = qw(foo bar baz);
$env = new cons(X_COMMA => sub { join(",", @_) });
Command $env 'tgt', 'tgt.in', qq(
    echo '# Keywords: %[X_COMMA @keywords %]' > %>
    cat %< >> %>
);
This will execute:
echo '# Keywords: foo,bar,baz' > tgt
cat tgt.in >> tgt
The %(
and %)
elements enclose a portion of the command line that will be used
to execute the command as usual, but will be ignored for MD5 signature
calculation.
Internally, Cons uses %(
and %)
around include and library
directory options (-I
and -L
on UNIX systems, /I
and
/LIBPATH
on Windows NT) to avoid rebuilds just because the directory
list changes. Rebuilds occur only if the changed directory list causes
any included files to change, and a changed include file is detected
by the MD5 signature calculation on the actual file contents.
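As a sketch of the same mechanism in a user command (the command name mytool is hypothetical), a Conscript file can exempt its own include options from signature calculation:

    Command $env 'tgt', 'tgt.in', qq(
        mytool %( -Iinclude -I../include %) -o %> %<
    );

Changing the -I directory list alone would not, by itself, cause tgt to be rebuilt.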
Cons expands construction variables in the source and target file names passed to the various construction methods according to the expansion rules described above:
$env = new cons(
    DESTDIR => 'programs',
    SRCDIR  => 'src',
);
Program $env '%DESTDIR/hello', '%SRCDIR/hello.c';
This allows for flexible configuration, through the construction environment, of directory names, suffixes, etc.
Cons supports several types of build actions that can be performed to construct one or more target files. Usually, a build action is a construction command--that is, a command-line string that invokes an external command. Cons can also execute Perl code embedded in a command-line string, and even supports an experimental ability to build a target file by executing a Perl code reference directly.
A build action is usually specified as the value of a construction variable:
$env = new cons(
    CCCOM   => '%CC %CFLAGS %_IFLAGS -c %< -o %>',
    LINKCOM => '[perl] &link_executable("%>", "%<")',
    ARCOM   => sub {
        my($env, $target, @sources) = @_;
        # code to create an archive
    }
);
A build action may be associated directly with one or more target files
via the Command
method; see below.
A construction command goes through expansion of construction variables
and %-
pseudo-variables, as described above, to create the actual
command line that Cons will execute to generate the target file or
files.
After substitution occurs, strings of white space are converted into single blanks, and leading and trailing white space is eliminated. It is therefore currently not possible to introduce variable length white space in strings passed into a command.
If a multi-line command string is provided, the commands are executed sequentially. If any of the commands fails, then none of the rest are executed, and the target is not marked as updated, i.e. a new signature is not stored for the target.
Normally, if all the commands succeed, and return a zero status (or whatever platform-specific indication of success is required), then a new signature is stored for the target. If a command erroneously reports success even after a failure, then Cons will assume that the target file created by that command is accurate and up-to-date.
The first word of each command string, after expansion, is assumed to be an
executable command looked up on the PATH
environment variable (which is,
in turn, specified by the ENV
construction variable). If this command is
found on the path, then the target will depend upon it: the command will
therefore be automatically built, as necessary. It's possible to write
multi-part commands to some shells, separated by semi-colons. Only the first
command word will be depended upon, however, so if you write your command
strings this way, you must either explicitly set up a dependency (with the
Depends
method), or be sure that the command you are using is a system
command which is expected to be available. If it isn't available, you will,
of course, get an error.
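For example (the command names filter and bin/fixup are illustrative), a semicolon-separated action whose second command should also be tracked can declare the extra dependency explicitly:

    Command $env 'out', 'in', q(filter %< > %>; bin/fixup %>);
    Depends $env 'out', 'bin/fixup';

Only filter is depended upon automatically; the Depends call makes Cons rebuild out when bin/fixup changes as well.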
Cons normally prints a command before executing it. This behavior is
suppressed if the first character of the command is @
. Note that
you may need to separate the @
from the command name or escape it to
prevent @cmd
from looking like an array to Perl quote operators that
perform interpolation:
# The first command line is incorrect,
# because "@cp" looks like an array
# to the Perl qq// function.
# Use the second form instead.
Command $env 'foo', 'foo.in', qq(
    @cp %< tempfile
    @ cp tempfile %>
);
If there are shell meta characters anywhere in the expanded command line,
such as <
, >
, quotes, or semi-colon, then the command
will actually be executed by invoking a shell. This means that a command
such as:
cd foo
alone will typically fail, since there is no command cd
on the path. But
the command string:
cd %<:d; tar cf %>:f %<:f
when expanded will still contain the shell meta character semi-colon, and a
shell will be invoked to interpret the command. Since cd
is interpreted
by this sub-shell, the command will execute as expected.
If any command (even one within a multi-line command) begins with
[perl]
, the remainder of that command line will be evaluated by the
running Perl instead of being forked by the shell. If an error occurs
in parsing the Perl code, or if the Perl expression returns 0 or undef,
the command will be considered to have failed. For example, here is a
simple command which creates a file foo
directly from Perl:
$env = new cons();
Command $env 'foo',
    qq([perl] open(FOO,'>foo');print FOO "hi\\n"; close(FOO); 1);
Note that when the command is executed, you are in the same package as
when the Construct or Conscript file was read, so you can call
Perl functions you've defined in the same Construct or Conscript
file in which the Command
appears:
$env = new cons();
sub create_file {
    my $file = shift;
    open(FILE, ">$file");
    print FILE "hi\n";
    close(FILE);
    return 1;
}
Command $env 'foo', "[perl] &create_file('%>')";
The Perl string will be used to generate the signature for the derived
file, so if you change the string, the file will be rebuilt. The contents
of any subroutines you call, however, are not part of the signature,
so if you modify a called subroutine such as create_file
above,
the target will not be rebuilt. Caveat user.
Cons supports the ability to create a derived file by directly executing a Perl code reference. This feature is considered EXPERIMENTAL and subject to change in the future.
A code reference may either be a named subroutine referenced by the
usual \&
syntax:
sub build_output {
    my($env, $target, @sources) = @_;
    print "build_output building $target\n";
    open(OUT, ">$target");
    foreach $src (@sources) {
        if (! open(IN, "<$src")) {
            print STDERR "cannot open '$src': $!\n";
            return undef;
        }
        print OUT <IN>;
    }
    close(OUT);
    return 1;
}
Command $env 'output', \&build_output;
or the code reference may be an anonymous subroutine:
Command $env 'output', sub {
    my($env, $target, @sources) = @_;
    print "building $target\n";
    open(FILE, ">$target");
    print FILE "hello\n";
    close(FILE);
    return 1;
};
To build the target file, the referenced subroutine is passed, in order: the construction environment used to generate the target; the path name of the target itself; and the path names of all the source files necessary to build the target file.
The code reference is expected to generate the target file, of course,
but may manipulate the source and target files in any way it chooses.
The code reference must return a false value (undef
or 0
) if
the build of the file failed. Any true value indicates a successful
build of the target.
Building target files using code references is considered EXPERIMENTAL due to the following current limitations:
Cons does not print anything to indicate the code reference is being called to build the file. The only way to give the user any indication is to have the code reference explicitly print some sort of ``building'' message, as in the above examples.
Cons does not generate any signatures for code references, so if the code in the reference changes, the target will not be rebuilt.
Cons has no public method to allow a code reference to extract construction variables. This would be good to allow generalization of code references based on the current construction environment, but would also complicate the problem of generating meaningful signatures for code references.
Support for building targets via code references has been released in this version to encourage experimentation and the seeking of possible solutions to the above limitations.
The list of default construction methods includes the following:
new constructor

The new
method is a Perl object constructor. That is, it is not invoked
via a reference to an existing construction environment but, rather,
statically, using the name of the Perl package where the
constructor is defined. The method is invoked like this:
$env = new cons(<overrides>);
The environment you get back is blessed into the package cons
, which
means that it will have associated with it the default methods described
below. Individual construction variables can be overridden by providing
name/value pairs in an override list. Note that to override any command
environment variable (i.e. anything under ENV
), you will have to override
all of them. You can get around this difficulty by using the copy
method
on an existing construction environment.
clone method

The clone
method creates a clone of an existing construction environment,
and can be called as in the following example:
$env2 = $env1->clone(<overrides>);
You can provide overrides in the usual manner to create a different environment from the original. If you just want a new name for the same environment (which may be helpful when exporting environments to existing components), you can just use simple assignment.
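For example, a debugging variant of an existing environment might be created like this (the flag value is only an illustration):

    $dbg = $env->clone(CFLAGS => '-g');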
copy method

The copy
method extracts the externally defined construction variables
from an environment and returns them as a list of name/value
pairs. Overrides can also be provided, in which case the overridden values
will be returned, as appropriate. The returned list can be assigned to a
hash, as shown in the prototype below, but it can also be manipulated in
other ways:
%env = $env1->copy(<overrides>);
The value of ENV
, which is itself a hash, is also copied to a new hash,
so this may be changed without fear of affecting the original
environment. So, for example, if you really want to override just the
PATH
variable in the default environment, you could do the following:
%cons = new cons()->copy();
$cons{ENV}{PATH} = "<your path here>";
$cons = new cons(%cons);
This will leave anything else that might be in the default execution environment undisturbed.
Install method

The Install
method arranges for the specified files to be installed in
the specified directory. The installation is optimized: the file is not
copied if it can be linked. If this is not the desired behavior, you will
need to use a different method to install the file. It is called as follows:
Install $env <directory>, <names>;
Note that, while the files to be installed may be arbitrarily named, only the last component of each name is used for the installed target name. So, for example, if you arrange to install foo/bar in baz, this will create a bar file in the baz directory (not foo/bar).
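For example (the directory and file names are illustrative):

    Install $env 'baz', 'foo/bar';

creates the file baz/bar, not baz/foo/bar.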
InstallAs method

The InstallAs
method arranges for the specified source file(s) to be
installed as the specified target file(s). Multiple files should be
specified as a file list. The installation is optimized: the file is not
copied if it can be linked. If this is not the desired behavior, you will
need to use a different method to install the file.
InstallAs
works in two ways:
Single file install:
InstallAs $env TgtFile, SrcFile;
Multiple file install:
InstallAs $env ['tgt1', 'tgt2'], ['src1', 'src2'];
Or, even as:
@srcs = qw(src1 src2 src3);
@tgts = qw(tgt1 tgt2 tgt3);
InstallAs $env [@tgts], [@srcs];
The target and source lists should be the same length.
Precious method

The Precious
method asks cons not to delete the specified file or
list of files before building them again. It is invoked as:
Precious <files>;
This is especially useful for allowing incremental updates to libraries
or debug information files which are updated rather than rebuilt anew each
time. Cons will still delete the files when the -r
flag is specified.
AfterBuild method

The AfterBuild
method evaluates the specified perl string after
building the given file or files (or finding that they are up to date).
The eval will happen once per specified file. AfterBuild
is called
as follows:
AfterBuild $env 'foo.o', qq(print "foo.o is up to date!\n");
The perl string is evaluated in the script
package, and has access
to all variables and subroutines defined in the Conscript file in
which the AfterBuild
method is called.
Command method

The Command
method is a catchall method which can be used to arrange for
any build action to be executed to update the target. For this command, a
target file and list of inputs is provided. In addition, a build action
is specified as the last argument. The build action is typically a
command line or lines, but may also contain Perl code to be executed;
see the section above on build actions for details.
The Command
method is called as follows:
Command $env <target>, <inputs>, <build action>;
The target is made dependent upon the list of input files specified, and the inputs must be built successfully or Cons will not attempt to build the target.
To specify a command with multiple targets, you can specify a reference to a list of targets. In Perl, a list reference can be created by enclosing a list in square brackets. Hence the following command:
Command $env ['foo.h', 'foo.c'], 'foo.template', q( gen %1 );
could be used in a case where the command gen
creates two files, both
foo.h and foo.c.
Objects method

The Objects
method arranges to create the object files that correspond to
the specified source files. It is invoked as shown below:
@files = Objects $env <source or object files>;
Under Unix, source files ending in .s and .c are currently
supported, and will be compiled into an object file of the same name
ending in .o. By default, all files are created by invoking the external
command which results from expanding the CCCOM
construction variable,
with %<
and %>
set to the source and object files,
respectively. (See the section above on construction variable expansion
for details). The variable CPPPATH
is also used when scanning source
files for dependencies. This is a colon separated list of pathnames, and
is also used to create the construction variable _IFLAGS,
which will
contain the appropriate list of -I
options for the compilation. Any
relative pathnames in CPPPATH
are interpreted relative to the
directory in which the associated construction environment was created
(absolute and top-relative names may also be used). This variable is
used by CCCOM
. The behavior of this command can be modified by
changing any of the variables which are interpolated into CCCOM
, such
as CC
, CFLAGS
, and, indirectly, CPPPATH
. It's also possible
to replace the value of CCCOM
, itself. As a convenience, this method
returns the list of object filenames.
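A minimal sketch of typical usage (the source names are illustrative):

    @objs = Objects $env 'foo.c', 'bar.c';
    # @objs now holds the names of foo.o and bar.o,
    # suitable for passing to Program or Library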
Program method

The Program
method arranges to link the specified program with the
specified object files. It is invoked in the following manner:
Program $env <program name>, <source or object files>;
The program name will have the value of the SUFEXE
construction
variable appended (by default, .exe
on Win32 systems, nothing on Unix
systems) if the suffix is not already present.
Source files may be specified in place of object files--the Objects
method will be invoked to arrange the conversion of all the files into
object files, and hence all the observations about the Objects
method,
above, apply to this method also.
The actual linking of the program will be handled by an external command
which results from expanding the LINKCOM
construction variable, with
%<
set to the object files to be linked (in the order presented),
and %>
set to the target. (See the section above on construction
variable expansion for details.) The user may set additional variables
in the construction environment, including LINK
, to define which
program to use for linking, LIBPATH
, a colon-separated list of
library search paths, for use with library specifications of the form
-llib, and LIBS
, specifying the list of libraries to link against
(in either -llib form or just as pathnames). Relative pathnames in
both LIBPATH
and LIBS
are interpreted relative to the directory
in which the associated construction environment is created (absolute
and top-relative names may also be used). Cons automatically sets up
dependencies on any libraries mentioned in LIBS
: those libraries will
be built before the command is linked.
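A sketch of a typical invocation (the library and directory names are hypothetical):

    $env = new cons(
        LIBPATH => '#lib',      # search the top-level lib directory
        LIBS    => '-lworld',   # libworld is built first if it is a derivable target
    );
    Program $env 'hello', 'hello.c';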
Library method

The Library
method arranges to create the specified library from the
specified object files. It is invoked as follows:
Library $env <library name>, <source or object files>;
The library name will have the value of the SUFLIB
construction
variable appended (by default, .lib
on Win32 systems, .a
on Unix
systems) if the suffix is not already present.
Source files may be specified in place of object files--the Objects
method will be invoked to arrange the conversion of all the files into
object files, and hence all the observations about the Objects
method,
above, apply to this method also.
The actual creation of the library will be handled by an external
command which results from expanding the ARCOM
construction variable,
with %<
set to the library members (in the order presented),
and %>
to the library to be created. (See the section above
on construction variable expansion for details.) The user may set
variables in the construction environment which will affect the
operation of the command. These include AR
, the archive program
to use, ARFLAGS
, which can be used to modify the flags given to
the program specified by AR
, and RANLIB
, the name of an archive
index generation program, if needed (if the particular need does not
require the latter functionality, then ARCOM
must be redefined to not
reference RANLIB
).
The Library
method allows the same library to be specified in multiple
method invocations. All of the contributing objects from all the invocations
(which may be from different directories) are combined and generated by a
single archive command. Note, however, that if you prune a build so that
only part of a library is specified, then only that part of the library will
be generated (the rest will disappear!).
Module method

The Module
method is a combination of the Program
and Command
methods. Rather than generating an executable program directly, this command
allows you to specify your own command to actually generate a module. The
method is invoked as follows:
Module $env <module name>, <source or object files>, <construction command>;
This command is useful in instances where you wish to create, for example, dynamically loaded modules, or statically linked code libraries.
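For example, a dynamically loaded module might be built along these lines (the link command shown is an assumption for a UNIX-like toolchain, not a Cons default):

    Module $env 'mod.so', 'mod.c', q(
        %LD -shared %LDFLAGS %< -o %>
    );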
Depends method

The Depends
method allows you to specify additional dependencies for a
target. It is invoked as follows:
Depends $env <target>, <dependencies>;
This may be occasionally useful, especially in cases where no scanner exists (or can easily be written) for particular types of files. Normally, dependencies are calculated automatically from a combination of the explicit dependencies set up by the method invocation and by scanning source files.
A set of identical dependencies for multiple targets may be specified using a reference to a list of targets. In Perl, a list reference can be created by enclosing a list in square brackets. Hence the following command:
Depends $env ['foo', 'bar'], 'input_file_1', 'input_file_2';
specifies that both the foo and bar files depend on the listed input files.
RuleSet method

The RuleSet
method returns the construction variables for building
various components with one of the rule sets supported by Cons. The
currently supported rule sets are "msvc" (the Microsoft Visual C++ tool
chain) and "unix" (a UNIX-like tool chain such as gcc).
On systems with more than one available compiler suite, this allows you to easily create side-by-side environments for building software with multiple tools:
$msvcenv   = new cons(RuleSet("msvc"));
$cygnusenv = new cons(RuleSet("unix"));
In the future, this could also be extended to other platforms that have different default rule sets.
DefaultRules method

The DefaultRules
method sets the default construction variables that
will be returned by the new
method to the specified arguments:
DefaultRules(CC     => 'gcc',
             CFLAGS => '',
             CCCOM  => '%CC %CFLAGS %_IFLAGS -c %< -o %>');
$env = new cons();
# $env now contains *only* the CC, CFLAGS,
# and CCCOM construction variables
Combined with the RuleSet
method, this also provides an easy way
to set explicitly the default build environment to use some supported
toolset other than the Cons defaults:
# use a UNIX-like tool suite (like cygwin) on Win32
DefaultRules(RuleSet('unix'));
$env = new cons();
Note that the DefaultRules
method completely replaces the default
construction environment with the specified arguments; it does not
simply override the existing defaults. To override one or more
variables in a supported RuleSet
, append the variables and values:
DefaultRules(RuleSet('unix'), CFLAGS => '-O3');
$env1 = new cons();
$env2 = new cons();
# both $env1 and $env2 have 'unix' defaults
# with CFLAGS set to '-O3'
Ignore method

The Ignore
method allows you to explicitly ignore dependencies that
Cons infers on its own. It is invoked as follows:
Ignore <patterns>;
This can be used to avoid recompilations due to changes in system header files or utilities that are known to not affect the generated targets.
If, for example, a program is built in an NFS-mounted directory on
multiple systems that have different copies of stdio.h, the differences
will affect the signatures of all derived targets built from source files
that #include <stdio.h>
. This will cause all those targets to
be rebuilt when changing systems. If this is not desirable behavior, then
the following line will remove the dependencies on the stdio.h file:
Ignore '^/usr/include/stdio\.h$';
Note that the arguments to the Ignore
method are regular expressions,
so special characters must be escaped and you may wish to anchor the
beginning or end of the expression with ^
or $
characters.
Salt method

The Salt
method adds a constant value to the signature calculation
for every derived file. It is invoked as follows:
Salt $string;
Changing the Salt value will force a complete rebuild of every derived file. This can be used to force rebuilds in certain desired circumstances. For example,
Salt `uname -s`;
would force a complete rebuild of every derived file whenever the
operating system on which the build is performed (as reported by uname -s)
changes.
UseCache method

The UseCache
method instructs Cons to maintain a cache of derived
files, to be shared among separate build trees of the same project.
It is invoked as follows:
UseCache("cache/<buildname>") || warn("cache directory not found");
SourcePath method

The SourcePath
method returns the real source path name of a file,
as opposed to the path name within a build directory. It is invoked
as follows:
$path = SourcePath <buildpath>;
ConsPath method

The ConsPath
method returns true if the supplied path is a derivable
file, and returns undef (false) otherwise.
It is invoked as follows:
$result = ConsPath <path>;
SplitPath method

The SplitPath
method looks up multiple path names in a string separated
by the default path separator for the operating system (':' on UNIX
systems, ';' on Windows NT), and returns the fully-qualified names.
It is invoked as follows:
It is invoked as follows:
@paths = SplitPath <pathlist>;
The SplitPath
method will convert names prefixed '#' to the
appropriate top-level build name (without the '#') and will convert
relative names to top-level names.
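For example (the path list is illustrative):

    @paths = SplitPath 'include:#export/include';
    # returns fully-qualified names for both directories,
    # with the '#' prefix resolved to the top-level build name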
DirPath method

The DirPath
method returns the build path name(s) of a directory or
list of directories. It is invoked as follows:
$cwd = DirPath <paths>;
The most common use for the DirPath
method is:
$cwd = DirPath '.';
to fetch the path to the current directory of a subsidiary Conscript file.
FilePath method

The FilePath
method returns the build path name(s) of a file or
list of files. It is invoked as follows:
$file = FilePath <path>;
Help method

The Help
method specifies help text that will be displayed when the
user invokes cons -h
. This can be used to provide documentation
of specific targets, values, build options, etc. for the build tree.
It is invoked as follows:
Help <helptext>;
The Help
method may only be called once, and should typically be
specified in the top-level Construct file.
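For example (the help text is illustrative):

    Help qq(
    Usage: cons [options] [targets]
    Useful targets:
        all       build everything
        export    install products under the export directory
    );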
There are several ways of extending Cons, which vary in degree of
difficulty. The simplest method is to define your own construction
environment, based on the default environment, but modified to reflect your
particular needs. This will often suffice for C-based applications. You can
use the new
constructor, and the clone
and copy
methods to create
hybrid environments. These changes can be entirely transparent to the
underlying Conscript files.
For slightly more demanding changes, you may wish to add new methods to the
cons
package. Here's an example of a very simple extension,
InstallScript
, which installs a tcl script in a requested location, but
edits the script first to reflect a platform-dependent path that needs to be
installed in the script:
# cons::InstallScript - Create a platform dependent version of a shell
# script by replacing string ``#!your-path-here'' with platform specific
# path $BIN_DIR.
sub cons::InstallScript {
    my ($env, $dst, $src) = @_;
    Command $env $dst, $src, qq(
        sed s+your-path-here+$BIN_DIR+ %< > %>
        chmod oug+x %>
    );
}
Notice that this method is defined directly in the cons
package (by
prefixing the name with cons::
). A change made in this manner will be
globally visible to all environments, and could be called as in the
following example:
InstallScript $env "$BIN/foo", "foo.tcl";
For a small improvement in generality, the BIN_DIR
variable could be
passed in as an argument or taken from the construction environment--as
%BIN_DIR
.
Instead of adding the method to the cons
name space, you could define a
new package which inherits existing methods from the cons
package and
overrides or adds others. This can be done using Perl's inheritance
mechanisms.
The following example defines a new package cons::switch
which
overrides the standard Library
method. The overridden method builds
linked library modules, rather than library archives. A new
constructor is provided. Environments created with this constructor
will have the new library method; others won't.
package cons::switch;
BEGIN {@ISA = 'cons'}

sub new {
    shift;
    bless new cons(@_);
}

sub Library {
    my($env) = shift;
    my($lib) = shift;
    my(@objs) = Objects $env @_;
    Command $env $lib, @objs, q(
        %LD -r %LDFLAGS %< -o %>
    );
}
This functionality could be invoked as in the following example:
$env = new cons::switch(@overrides); ... Library $env 'lib.o', 'foo.c', 'bar.c';
The cons
command is usually invoked from the root of the build tree. A
Construct file must exist in that directory. If the -f
argument is
used, then an alternate Construct file may be used (and, possibly, an
alternate root, since cons
will cd to Construct file's containing
directory).
If cons
is invoked from a child of the root of the build tree with
the -t
argument, it will walk up the directory hierarchy looking for a
Construct file. (An alternate name may still be specified with -f
.)
The targets supplied on the command line will be modified to be relative
to the discovered Construct file. For example, from a directory
containing a top-level Construct file, the following invocation:
% cd libfoo/subdir
% cons -t target
is exactly equivalent to:
% cons libfoo/subdir/target
If there are any Default
targets specified in the directory hierarchy's
Construct or Conscript files, only the default targets at or below
the directory from which cons -t
was invoked will be built.
The command is invoked as follows:
cons <arguments> -- <construct-args>
where arguments can be any of the following, in any order:
+<pattern>
Limit the Conscript files considered to just those that match <pattern>. Multiple +
arguments are accepted.
<name>=<value>
Sets <name> to <value> in the ARG
hash passed to the top-level
Construct file.
-cc
-cd
-cr
-cs
-d
-f
<file>-h
-k
-o
<file>-p
-pa
-pw
-q
Multiple -q
options may be specified.
A single -q
option suppresses messages about Installing and Removing
targets.
Two -q
options suppress build command lines and target up-to-date
messages.
-r
-R
<repos>-S
<pkg>If the specified package ends in ::debug, signature debug information
will be printed to the file name specified in the CONS_SIG_DEBUG
environment variable, or to standard output if the environment variable
is not set.
-t
Internally, cons
will change its working directory to the directory
which contains the top-level Construct file and report:
cons: Entering directory `top-level-directory'
This message indicates to an invoking editor (such as emacs) or build
environment that Cons will now report all file names relative to the
top-level directory. This message cannot be suppressed with the -q
option.
-v
cons
version and continue processing.
-V
cons
version and exit.
-wf
<file>-x
And construct-args can be any arguments that you wish to process in the Construct file. Note that there should be a -- separating the arguments to cons and the arguments that you wish to process in the Construct file.
Processing of construct-args can be done by any standard package like Getopt or its variants, or any user defined package. cons will pass in the construct-args as @ARGV and will not attempt to interpret anything after the --.
% cons -R /usr/local/repository -d os=solaris +driver -- -c test -f DEBUG
would pass the following to cons
-R /usr/local/repository -d os=solaris +driver
and the following, to the top level Construct file as @ARGV
-c test -f DEBUG
Note that cons -r .
is equivalent to a full recursive make clean
,
but requires no support in the Construct file or any Conscript
files. This is most useful if you are compiling files into source
directories (if you separate the build and export directories,
then you can just remove the directories).
The options -p
, -pa
, and -pw
are extremely useful for use as an aid
in reading scripts or debugging them. If you want to know what script
installs export/include/foo.h, for example, just type:
% cons -pw export/include/foo.h
QuickScan allows simple target-independent scanners to be set up for source files. Only one QuickScan scanner may be associated with any given source file and environment, although the same scanner may (and should) be used for multiple files of a given type.
A QuickScan scanner is only ever invoked once for a given source file, and it is only invoked if the file is used by some target in the tree (i.e., there is a dependency on the source file).
QuickScan is invoked as follows:
QuickScan CONSENV CODEREF, FILENAME [, PATH]
The subroutine referenced by CODEREF is expected to return a list of filenames included directly by FILE. These filenames will, in turn, be scanned. The optional PATH argument supplies a lookup path for finding FILENAME and/or files returned by the user-supplied subroutine. The PATH may be a reference to an array of lookup-directory names, or a string of names separated by the system's separator character (':' on UNIX systems, ';' on Windows NT).
The subroutine is called once for each line in the file, with $_ set to the current line. If the subroutine needs to look at additional lines, or, for that matter, the entire file, then it may read them itself, from the filehandle SCAN. It may also terminate the loop, if it knows that no further include information is available, by closing the filehandle.
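These conventions can be sketched as follows. The file format assumed here, a C-style source whose #include lines all appear before any code, is an assumption made purely for illustration:

    # Called once per line with $_ set; returns any filename included on
    # that line.  In this assumed format, no includes can follow the first
    # real code line, so we close SCAN there to end the scan early.
    my $scanner = sub {
        return ($1) if /^\s*#include\s+"([^"]+)"/;
        close(SCAN) if /\S/;    # first non-blank, non-include line: stop
        return ();
    };
    QuickScan $env $scanner, 'main.c';
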
Whether or not a lookup path is provided, QuickScan first tries to look up the file relative to the current directory (for the top-level file supplied directly to QuickScan), or relative to the directory containing the file which referenced it. This is not very general, but seems good enough, especially if you have the luxury of writing your own utilities and can control the use of the search path in a standard way.
Here's a real example, taken from an actual Construct file:
sub cons::SMFgen {
    my($env, @tables) = @_;
    foreach $t (@tables) {
        $env->QuickScan(sub { /\b\S*?\.smf\b/g }, "$t.smf",
                        $env->{SMF_INCLUDE_PATH});
        $env->Command(["$t.smdb.cc", "$t.smdb.h", "$t.snmp.cc",
                       "$t.ami.cc", "$t.http.cc"], "$t.smf",
                      q(smfgen %( %SMF_INCLUDE_OPT %) %<));
    }
}
The subroutine above finds all names of the form <name>.smf in the file. It will return the names even if they're found within comments, but that's OK (the mechanism is forgiving of extra files; they're just ignored on the assumption that the missing file will be noticed when the program, in this example, smfgen, is actually invoked).
[NOTE that the forms $env->QuickScan ... and $env->Command ... should not be necessary but, for some reason, are required for this particular invocation. This appears to be a bug in Perl or a misunderstanding on my part; this invocation style does not always appear to be necessary.]
Here is another way to build the same scanner. This one uses an explicit code reference, and also (unnecessarily, in this case) reads the whole file itself:
sub myscan {
    my(@includes);
    do {
        push(@includes, /\b\S*?\.smf\b/g);
    } while <SCAN>;
    @includes
}
Note that the order of the loop is reversed, with the loop test at the end. This is because the first line has already been read for you. This scanner can be attached to a source file by:
QuickScan $env \&myscan, "$_.smf";
This final example, which scans a different type of input file, takes over the file scanning rather than being called for each input line:
$env->QuickScan(
    sub {
        my(@includes) = ();
        do {
            push(@includes, $3) if /^(#include|import)\s+(\")(.+)(\")/ && $3;
        } while <SCAN>;
        @includes
    },
    "$idlFileName",
    "$env->{CPPPATH};$BUILD/ActiveContext/ACSCLientInterfaces"
);
Cons is maintained by the user community. To subscribe to the mailing list, send mail to [email protected] with the body subscribe.
Please report any suggestions through the [email protected] mailing list.
There are sure to be some. Please report any bugs through the [email protected] mailing list.
Information about CONS can be obtained from the official cons web site http://www.dsmit.com/cons/ or its mirrors listed there.
The cons maintainers can be contacted by email at [email protected].
Originally by Bob Sidebotham. Then significantly enriched by the members of the Cons community [email protected].
The Cons community would like to thank Ulrich Pfeifer for the original pod documentation derived from the cons.html file. Cons documentation is now a part of the program itself.