Acovea 1.0.1 review
ACOVEA (Analysis of Compiler Options via Evolutionary Algorithm) implements a genetic algorithm to find the "best" options for compiling programs with the GNU Compiler Collection (GCC) C and C++ compilers.
"Best", in this context, is defined as those options that produce the fastest executable program from a given source code. Acovea is a C++ framework that can be extended to test other programming languages and non-GCC compilers.
I envision Acovea as an optimization tool, similar in purpose to profiling. Traditional function-level profiling identifies the algorithms most influential in a program's performance; Acovea is then applied to those algorithms to find the compiler flags and options that generate the fastest code.
Acovea is also useful for testing combinations of flags for pessimistic interactions, and for testing the reliability of the compiler.
Modern software is difficult to understand and verify by traditional means. Millions of lines of code produce applications containing intricate interactions, defying simple description or brute-force investigation.
A guided, deterministic approach to testing relies on human testers to envision every possible combination of actions -- an unrealistic proposition given software complexity. Yet, despite that complexity, we need answers to important questions about modern, large-scale software.
What sort of important questions? Consider the GNU Compiler Collection. I write articles that benchmark code generation, a task fraught with difficulties due to the myriad options provided by different compilers. For my benchmarks to have any meaning, I need to know which combination of options produces the fastest code for a given application.
Finding the "best" set of options sounds like a simple task, given the extent of GCC documentation and the conventional wisdom of the GCC developer community. Ah, if it were only so easy! The GCC documentation, while extensive, is also honestly imprecise.
I appreciate this style of documentation; unlike many commercial vendors, who make absolute statements about the "quality" of their products, GCC's documenters admit uncertainties in how various options alter code generation. Indeed, code generation is entirely dependent on the type of application being compiled and the target platform. An option that produces fast executable code for one source code may be detrimental to the performance of another program.
"Conventional wisdom" arrives in my inbox whenever I publish a new article. Ranging from the polite to the insistent to the rude, these e-mails contain contradictory suggestions for producing fast code.
In the vast majority of cases, such anecdotal assertions lack any formal proof of their validity, and, more often than not, the suggested "improvement" is ineffective or detrimental. It has become increasingly obvious that no one -- myself included -- knows precisely how all these GCC options work together in generating program code.
I seek the Holy Grail of Optimization -- but exactly what is optimization? Understanding the problem is the first step in finding a solution.
Optimization attempts to produce the "best" machine code from source code. "Best" means different things to different applications: a database shovels large chunks of information, a scientific application is concerned with fast and accurate results, and the first concern for an embedded system may be code size. And it is quite possible for small code to be fast, or for fast code to be accurate. Optimization is far from an exact science, given the diversity of hardware and software configurations.
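The search mechanism does not care which definition of "best" it is chasing; only the fitness measure has to change. A minimal variant of the earlier sketch (again assuming a placeholder bench.c and build line), selecting for the smallest binary rather than the fastest one:

    // Size-oriented fitness: same search, different definition of "best".
    // Requires C++17 for <filesystem>; bench.c is a placeholder benchmark.
    #include <cstdlib>
    #include <filesystem>
    #include <string>

    double size_fitness(const std::string& flags) {
        std::string cmd = "gcc -Os " + flags + " bench.c -o bench";
        if (std::system(cmd.c_str()) != 0) return 1e18;  // failed build
        // smaller executables score better
        return static_cast<double>(std::filesystem::file_size("bench"));
    }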
An optimization algorithm may be as simple as removing a loop invariant, or as complex as examining an entire program to eliminate global common sub-expressions. Many optimizations transform what the programmer wrote into a more efficient form that produces the same result while altering underlying details; other "optimizations" exploit specific characteristics of the underlying hardware, such as special instruction sets.
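The loop-invariant case is easy to see in source form. A small illustration (GCC performs this hoisting itself at -O1 and above, so the hand-written version is for exposition only):

    // Loop-invariant code motion: a * b never changes inside the loop,
    // so it can be computed once outside instead of on every iteration.
    void scale(double* v, int n, double a, double b) {
        for (int i = 0; i < n; ++i)
            v[i] *= a * b;            // invariant recomputed n times as written
    }

    void scale_hoisted(double* v, int n, double a, double b) {
        const double factor = a * b;  // hoisted: computed once
        for (int i = 0; i < n; ++i)
            v[i] *= factor;
    }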
Memory architectures, pipelines, on- and off-chip caches -- all affect code performance in ways that are not obvious to programmers using a high-level language. An optimization that seems certain to produce faster code may, in fact, create larger code that causes more cache misses, thus degrading performance.
Even the best hand-tuned C code contains areas of interpretation; there is no absolute, one-to-one correspondence between C statements and machine instructions. Almost any sequence of source code can be compiled into different -- but functionally equivalent -- machine instruction streams with different sizes and performance characteristics.
Inlining functions is a classic example of this phenomenon: replacing a call to a function with the function code itself may produce a faster program, but may also increase program size. Increased program size may, in turn, prevent an algorithm from fitting inside high-speed cache memory, thus slowing a program due to cache misses.
Notice my use of the weasel word "may" -- inlining small functions sometimes allows other optimization algorithms a chance to further improve code for local conditions, producing faster and smaller code.
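Here is the trade-off in miniature; the example is my own illustration, though -finline-functions is the ordinary GCC flag governing this behavior:

    // The inlining trade-off in miniature. With inlining enabled (e.g. at
    // -O2, or explicitly with -finline-functions), the call to clamp() can
    // be replaced by its body: no call overhead, and the constants 0 and
    // 255 can feed further optimization. The cost: every call site gets
    // its own copy of the code.
    static inline int clamp(int x, int lo, int hi) {
        return x < lo ? lo : (x > hi ? hi : x);
    }

    int brighten(int pixel) {
        return clamp(pixel + 16, 0, 255);  // likely a few instructions inline
    }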
Optimization is not simple or obvious, and combinations of algorithms can lead to unexpected results. Which brings me back to the question: For any given application, what are the most effective optimization options?
What's New in This Release:
Minor changes in the non-free license.
Support has been added for the latest versions of libcoyotl and libevocosm.