HOW TO DOWNLOAD IT

The latest source code of BenchErl is freely available on GitHub.

HOW TO BUILD IT

In order to build BenchErl, run the following commands:

$ cd bencherl
$ make

If you need to clean up from any previous builds, run the following command first:

$ make clean

If you want to build the web interface of BenchErl, run the following command:

$ make ui

HOW TO USE IT

Execute the bencherl script:

$ cd bencherl
$ ./bencherl

The bencherl script has the following options:

-h Display a short help message and exit.
-l List all the available benchmarks.
-m <MNEMONIC> Use MNEMONIC as the mnemonic name of this run (otherwise the current date and time will be used to construct the mnemonic name of this run).

HOW TO CONFIGURE IT

In order to specify what you want to run and how you want to run it, you can use the conf/run.conf file (which is essentially a Bash script). The variables that you can set in this file are described below.

CHECK_SANITY=[0|1] If set to 1, a sanity check will be performed on the results that each benchmark produced during its execution.
COOKIE=<Cookie> The cookie that will be set on all Erlang nodes used for running the benchmarks. The default cookie is cookie.
ERL_ARGS=<Alias1=Args1,Alias2=Args2,…> A comma-separated list of command-line argument sets to pass to the erl program. An alias must be specified for each argument set. The default value is DEF_ARGS= (an alias named DEF_ARGS with an empty argument set).
EXCLUDE_BENCH=<Bench1,Bench2,…> A comma-separated list of the benchmarks that you do not want to run. By default, no benchmark is excluded.
INCLUDE_BENCH=<Bench1,Bench2,…> A comma-separated list of the benchmarks that you want to run. By default, all benchmarks are executed.
ITERATIONS=<Num> A positive integer that controls how many times the execution of a benchmark in a specific runtime environment will be repeated. The default value is 1.
MASTER_NODE=<Name> The long or the short name for the master node. The default long name is master@`hostname -f`, whereas the default short name is master@`hostname`.
NUMBER_OF_SCHEDULERS=[Num1,Num2,…|Num1..Num2] How many schedulers to use for running each benchmark. The value can be either a comma-separated list of integers or a range of integers. The default value is the number of CPU cores of the system.
NUMBER_OF_SLAVE_NODES=[Num1,Num2,…|Num1..Num2] How many slave nodes to use for running each benchmark. The value can be either a comma-separated list of integers or a range of integers. The default value is 0.
OTPS=<Alias1=Path1,Alias2=Path2,…> A comma-separated list of Erlang/OTP versions to compile and run the benchmarks with. For each Erlang/OTP version, you must specify a unique alias and the path that leads to it. The default value is DEF_OTP= (an alias named DEF_OTP with an empty path).
PLOT=[0|1] If set to 1, time and speedup diagrams will be produced. The default value is 1.
SLAVE_NODES=<Name1,Name2,…> A comma-separated list of the long or the short names of the slave nodes that participate in the execution of the benchmarks.
USE_LONG_NAMES=[0|1] If set to 1, long node names will be used. The default value is 1.
VERSION=[short|intermediate|long] Which version of the benchmarks to run. The default value is short.
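The variables above can be combined in a small conf/run.conf. The sketch below is illustrative only: the OTP aliases and paths, the emulator flags, and the benchmark names are assumptions, not suite defaults.

```shell
# Hypothetical conf/run.conf (a Bash script). All aliases, paths and
# benchmark names below are made-up examples.

# Compile and run the benchmarks with two Erlang/OTP installations.
OTPS="R15B=/usr/local/otp_R15B,R16B=/usr/local/otp_R16B"

# Try the default emulator flags and a variant with 16 async threads.
ERL_ARGS="DEF_ARGS=,ASYNC=+A 16"

# Run only these two benchmarks, three times each, on 1, 2, 4 and 8 schedulers.
INCLUDE_BENCH="bang,big"
ITERATIONS=3
NUMBER_OF_SCHEDULERS=1,2,4,8

# Run the short version of each benchmark and produce time/speedup diagrams.
VERSION=short
PLOT=1
```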

HOW TO EXTEND IT

BenchErl can be enhanced with new benchmarks, both synthetic and real-world.

If the new benchmark is written for a real-world, open-source application, then place the application in a directory under the app/ directory.

Create a directory for the new benchmark under the bench/ directory.

In the benchmark directory, create an src/ directory. This is where the new benchmark handler must reside. A benchmark handler is a standard Erlang module that has the same name as the benchmark and exports the following functions:

%% Returns the arguments to use for running the specified version of the
%% benchmark under the specified configuration settings.
bench_args(Version, Conf) -> Args
    when
        Version :: short | intermediate | long,
        Conf    :: [{Key :: atom(), Val :: term()}, ...],
        Args    :: [[term()]].
%% Runs the benchmark using the specified arguments, the specified slave nodes
%% and the specified configuration settings.
run(Args, Slaves, Conf) -> ok | {error, Reason}
    when
        Args   :: [term()],
        Slaves :: [node()],
        Conf   :: [{Key :: atom(), Val :: term()}, ...],
        Reason :: term().

The benchmark directory can also contain a conf/ directory. If you want to specify configuration settings for the benchmark that override those of the suite, then create a bench.conf file in this directory. Only the following variables can be set in the bench.conf file:
DEPENDENCIES=<App1,App2,…> A comma-separated list of the internal applications that the benchmark depends on. By default, the benchmark depends on no application.
EXTRA_CODE_PATH=<Path1 Path2…> A space-separated list of directories to add to the code path when executing the benchmark. By default, no extra directories are added to the code path.
EXTRA_ERL_ARGS=<Arg1 Arg2…> A space-separated list of command-line arguments to pass to the erl program when running the benchmark. By default, no extra arguments are passed.
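Putting the three variables together, a bench.conf might look like the following sketch; the application name, code path, and emulator flag are assumptions for illustration, not values taken from the suite.

```shell
# Hypothetical conf/bench.conf for a benchmark that builds on an internal
# application. All names below are illustrative.
DEPENDENCIES="my_app"
EXTRA_CODE_PATH="app/my_app/ebin"
EXTRA_ERL_ARGS="+P 1000000"
```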

If you want to perform any actions before or after the execution of the benchmark, then create a pre_bench or a post_bench file, respectively, in the benchmark's conf/ directory.
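As a sketch, a pre_bench hook could prepare a scratch directory before the benchmark runs. Everything here (the scratch directory name and the echoed message) is an assumption for illustration, not part of BenchErl.

```shell
#!/bin/bash
# Hypothetical pre_bench hook: executed before the benchmark runs.
# Here it only prepares an empty scratch directory for the run.
SCRATCH_DIR="/tmp/bencherl_scratch"
rm -rf "$SCRATCH_DIR"
mkdir -p "$SCRATCH_DIR"
echo "pre_bench: prepared $SCRATCH_DIR"
```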

If the benchmark needs any external data, create a data/ directory in the benchmark directory and place the data files there.

If you consider your new benchmark worthy of inclusion in BenchErl, send an email to release AT softlab DOT ntua DOT gr, or issue a pull request on GitHub.