Setting up your own benchmarking experiments

The w-iLab.t testbed provides a set of tools to support benchmarking and repeatable experiments in general. Currently, these tools can be used separately or in conjunction to create a complete benchmarking workflow.

Repeatable Wi-Fi experiments

Repeatability is a major concern in wireless experiments. Through the iPlatform concept, the testbed provides repeatable execution of scripts on the iNodes. Using iPlatforms, a user defines a remote mount of the w-iLab.t fileserver on each node; an executable file on that mount, start_mount_script, is executed after the node boots.

By design there is no strict synchronization in the execution of these start scripts, but code that uses the shared directories for synchronization is available for download here. Using this or a method of your own choosing, it is possible to schedule your benchmarks, forget about them, and analyse the results afterwards.
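
The sketch below illustrates the shared-directory approach in Python: an iNode polls the remote mount for a flag file and only starts its measurement once that file appears, so all nodes begin at roughly the same moment. The paths and script names are hypothetical, and the downloadable synchronization code remains the reference implementation.

    #!/usr/bin/env python
    # Illustrative sketch only: wait for a shared "go" file before starting the experiment.
    # Mount point, flag file and benchmark script are hypothetical names.
    import os
    import subprocess
    import time

    SHARED_DIR = "/mnt/wilab_share/experiment"   # remote mount of the w-iLab.t fileserver (assumed path)
    GO_FILE = os.path.join(SHARED_DIR, "go")     # flag file created once all nodes may start

    def wait_for_go(poll_interval=1.0):
        # Poll the shared mount until the flag file appears.
        while not os.path.exists(GO_FILE):
            time.sleep(poll_interval)

    if __name__ == "__main__":
        wait_for_go()
        subprocess.call(["./run_benchmark.sh"])  # hypothetical measurement script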

For more information on using the iNodes for your experiments, please see the detailed iNode documentation.

Creating a repeatable environment

Part of the CREW benchmarking goals is the creation of repeatable environments. On the w-iLab.t testbed we can currently provide a repeatable home environment that can be customized to a user's needs; variations of this environment, as well as additional environments, will be released later. For a demonstration of this environment, please see the advanced tutorial.

To use the Home Environment in your experiments, download the iPlatform scripts here, unzip them in an iPlatform directory of your choosing, and give start_mount_code executable permissions. Before running, please take a look at the variables file, which contains all adjustable parameters for the experiment. The file is annotated, but the most important variables are listed in the following table; an illustrative set of values follows the table.

VARIABLE         Purpose
USERNAME         w-iLab.t database username
USERPASS         w-iLab.t database password
USERDB           w-iLab.t personal database (often equals username)
NCSERVERDIR      Your iPlatform directory
CHANNEL          The 802.11g channel used
TXPOWER          Transmission power
DURATION         Total runtime of the script
EMAILINTERVAL    Duration between email checks
DATAWAIT1        Start first data download after x seconds
DATADURATION1    First data download will take x seconds
DATAWAIT2        Start second data download x seconds after the first
DATADURATION2    Second data download will take x seconds
VIDEOWAIT        Start video stream after x seconds
VIDEODURATION    Video stream will take x seconds
VIDEOBW          UDP bandwidth used by the video stream in Mbps
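
As an illustration of how these variables fit together, the assignments below use Python syntax with made-up values; the annotated variables file in the download remains authoritative for the exact syntax, defaults and units (seconds are assumed for the timing variables, in line with the table above).

    # Illustrative values only; see the annotated variables file for the real format.
    USERNAME = "myuser"                      # w-iLab.t database username
    USERPASS = "secret"                      # w-iLab.t database password
    USERDB = "myuser"                        # personal database, often equal to the username
    NCSERVERDIR = "/users/myuser/iplatform"  # your iPlatform directory (hypothetical path)
    CHANNEL = 6          # 802.11g channel used by the home environment
    TXPOWER = 15         # transmission power
    DURATION = 600       # total runtime of the script
    EMAILINTERVAL = 60   # time between email checks
    DATAWAIT1 = 30       # first data download starts 30 s into the run ...
    DATADURATION1 = 120  # ... and lasts 120 s
    DATAWAIT2 = 60       # second data download starts 60 s after the first
    DATADURATION2 = 120  # ... and lasts 120 s
    VIDEOWAIT = 90       # video stream starts after 90 s ...
    VIDEODURATION = 300  # ... and runs for 300 s
    VIDEOBW = 2          # UDP bandwidth of the video stream, in Mbps

    # Derived timeline of the first data download, relative to the start of the run:
    print("first download: %d s - %d s" % (DATAWAIT1, DATAWAIT1 + DATADURATION1))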

We are currently transitioning to a new experimentation control framework for w-iLab.t (OMF), in which the experiments themselves can be parametrized, allowing a more generic approach to defining an environment. Once it is available, a detailed explanation of this new approach will be added here.

Benchmarking Wireless Sensor Networks

The w-iLab.t testbed provides several facilities for WSN protocol developers to benchmark their own code. The only requirement is that your code is compatible with the telosb mote. Any WSN code can be benchmarked using our repeatable environments if the variables that need to be varied are exposed as global variables in your WSN code (see how to change global variables at schedule time). In addition, a benchmarking API is provided that takes care of the repeatable execution of your WSN code and reports all logged data in a standardized format to the w-iLab.t database for quick visualization.

The API is implemented both as TinyOS modules, which should be included in your compiled TinyOS image, and for the IDRA framework, a networking-focused modular development framework. The IDRA version is actively maintained, follows the latest features of IDRA, and can be downloaded here. To learn more about IDRA, its purpose and how to configure it, please visit the official website, idraproject.net. The TinyOS modules are currently being updated to support the same features and will be available soon.

To schedule benchmarks using the provided API, w-iLab.t uses a BNF syntax to define parameters or parameter traversals. More information on the BNF system is available here; a conceptual sketch of a parameter traversal is given below.
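
Conceptually, a parameter traversal enumerates every combination of the values you want to benchmark, with each combination becoming one run. The Python sketch below only illustrates that idea; real traversals are written in the w-iLab.t BNF syntax documented at the link above, and the parameter names are taken from the API table that follows.

    # Conceptual sketch of a parameter traversal: every combination of the listed
    # values corresponds to one benchmark run. This is not the actual BNF syntax.
    from itertools import product

    traversal = {
        "send_interval": [500, 1000, 5000],  # packet interval in ms
        "data_size": [15, 50, 100],          # application payload in bytes
    }

    for send_interval, data_size in product(*traversal.values()):
        print("run with send_interval=%d ms, data_size=%d B" % (send_interval, data_size))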

The full benchmarking API is given in the following table. For IDRA, these variables are available as Benchmarking_P.<parameter_name>. A sketch of how the send-timing parameters can be interpreted follows the table.

Parameter name (default value)   Description                             Range
node_purpose (0)                 Send packets? 0: no, 1: yes             0 - 1
target (1)                       Node id of packet destination           0 - 65535
data_size (15)                   Size of application payload in bytes    6 - 255 (6 bytes needed for tracking)
send_interval (15000)            Packet Interval (PI) in ms              0 - 2^32-1
send_variation (15000)           Wait before first packet in ms          0 - 2^32-1
random (0)                       Random Packet Interval?                 0 - 1
random_mindelay (500)            Minimal random PI in ms                 0 - 65535
random_window (500)              Random PI window in ms                  0 - 65535
retry (0)                        Retry failed send attempt               0 - 1
retry_mindelay (150)             Minimal retry delay in ms               0 - 65535
retry_window (150)               Retry window in ms                      0 - 65535
anycast (0)                      Ignore destination at receiver          0 - 1
debug_mode (3)                   Logging method                          0: none, 1: aggregates, 2: only network info, 3: all
aggregation_interval (10000)     When to output aggregated logs          0 - 2^32-1
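
The sketch below gives one plausible reading of how the send-timing parameters interact (a fixed interval, or a random interval drawn between random_mindelay and random_mindelay + random_window); it is an illustration only, not the API's actual implementation.

    import random

    def next_packet_delay_ms(p):
        # One plausible interpretation of the send-timing parameters above;
        # the benchmarking API itself is the reference.
        if p["random"]:
            return p["random_mindelay"] + random.uniform(0, p["random_window"])
        return p["send_interval"]

    params = {"random": 1, "random_mindelay": 500, "random_window": 500, "send_interval": 15000}
    print(next_packet_delay_ms(params))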

Benchmarking analysis

The final step in the benchmarking process is also supported by the w-iLab.t testbed. When using the WSN API, all logs from a benchmark are automatically inserted into a separate database table using a fixed format. The table is not restricted to WSN results, but other data then has to be inserted following the logging format described below (an illustrative sketch follows the table); this is the only requirement for using the provided analysis tools.

Column name      Description                           Range
version          Versioning number                     0 - 255
type             Type of log message                   0 - 255
arg              Argument of message                   0 - 65535
msg_uid          Uid of logged packet                  0 - 65535
origin           Origin of logged packet               0 - 65535
other_node       Destination of logged packet          0 - 65535
msg_seq          Sequence no. of logged packet         0 - 65535
seqno            Sequence no. of log message           0 - 2^32-1
motelabMoteId    Node id (db generated)                0 - 65535
motelabSeqNo     Global sequence no. (db generated)    0 - 2^32-1
insert_time      Log time (db generated)               0 - 2^32-1
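
If you insert your own (non-WSN) results, they must follow this format. The sketch below uses sqlite3 purely as a stand-in for your personal w-iLab.t database, whose actual schema is created by the testbed; the inserted values are made up, and the db-generated columns are left out.

    # Sketch only: sqlite3 stands in for the personal w-iLab.t database.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE log (
            version    INTEGER,  -- versioning number (0-255)
            type       INTEGER,  -- type of log message (0-255)
            arg        INTEGER,  -- argument of message
            msg_uid    INTEGER,  -- uid of logged packet
            origin     INTEGER,  -- origin of logged packet
            other_node INTEGER,  -- destination of logged packet
            msg_seq    INTEGER,  -- sequence no. of logged packet
            seqno      INTEGER   -- sequence no. of log message
            -- motelabMoteId, motelabSeqNo and insert_time are generated by the
            -- testbed database and are omitted from this sketch.
        )
    """)
    # One made-up log row; the type/arg encoding is defined by the benchmarking API.
    conn.execute(
        "INSERT INTO log (version, type, arg, msg_uid, origin, other_node, msg_seq, seqno) "
        "VALUES (?, ?, ?, ?, ?, ?, ?, ?)",
        (1, 2, 0, 4711, 12, 3, 57, 1024),
    )
    conn.commit()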

The following events can currently be logged through the benchmarking API: node purpose, boot time, send attempt, send success, send failure, radio sleep duration, total sent packets, total received packets, and debug statistics (total debug messages/failures).

All log data is processed using SQL instructions and presented as a bar chart, a line chart or a 2D map of the testbed using the analyser and visualiser tools. More information on how to use and configure these tools can be found here.

The following metrics and visualisations are implemented and available for download here:

  • Reliability: calculated at the application level; every packet sent by a SUT should be received by the destination SUT for the sending SUT to reach 100% reliability. Available as analyser and visualiser tool (an illustrative SQL sketch follows this list).
  • Packets sent/received: the total number of packets sent or received by the radio adapter, also available as packets sent/received per second. Available as analyser and visualiser tool.
  • Radio sleep percentage: the fraction of the benchmark that the radio adapter spends sleeping; this is the primary energy-efficiency metric for WSNs and similar networks of embedded devices. Available as analyser and visualiser tool.
  • Wi-Fi throughput: visualises the network-wide Wi-Fi throughput as logged by the environment. Can only be used with one of the repeatable environments. Available as analyser.
  • Application-level events: the number of network-wide events visualised over time, including packet sending, receiving, errors and boot times. Available as analyser.
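
As a rough illustration of how the reliability metric can be derived from the log table with SQL, the query below counts, per sending node, how many of its logged packets also appear as received. The SENT and RECEIVED type codes are hypothetical placeholders; the downloadable analyser and visualiser tools are the reference implementation.

    # Sketch of a reliability query; SENT/RECEIVED are placeholder 'type' codes,
    # not the codes actually used by the benchmarking API.
    SENT, RECEIVED = 2, 3

    RELIABILITY_SQL = f"""
    SELECT s.origin AS sender,
           100.0 * COUNT(DISTINCT r.msg_uid) / COUNT(DISTINCT s.msg_uid) AS reliability_pct
    FROM log AS s
    LEFT JOIN log AS r
           ON r.msg_uid = s.msg_uid
          AND r.origin  = s.origin
          AND r.type    = {RECEIVED}
    WHERE s.type = {SENT}
    GROUP BY s.origin
    """

    print(RELIABILITY_SQL)  # run this query against your log table, e.g. with the sketch above
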
Attachments:
  • synchronization.zip (677 bytes)
  • IDRA_Benchmarking.zip (1.99 MB)
  • visualiser_analyser_config.zip (10.95 KB)