The HEADS project started on 1 October 2013.
The HEADS project is organized in four phases: Baseline, Innovation, Development and Consolidation. The Baseline phase ended in January 2014; during this period the use cases were defined, the initial requirements from each use case were derived, and state-of-the-art surveys were performed, reaching the first milestone, “MS1 Project initialization, baseline and requirements”. The Innovation phase ended in November 2014 with the second milestone, “MS2 Initial HEADS languages and techniques”, and the third milestone, “MS3 Initial HEADS IDE and method”: the initial HEADS languages and techniques were delivered and the initial HEADS IDE and method were gathered in a first integrated environment. Together, these milestones constitute the initial HEADS implementation. Towards the end of this phase, the “Use cases, requirements and validation” activities provided feedback and evaluation of the initial use case implementation to the technical activities, reaching the fourth milestone, “MS4 First HEADS evaluation by industrial case studies”.
The Development phase ended in December 2015 with the delivery of the consolidated HEADS implementation (MS5 “Consolidated HEADS languages and techniques” and MS6 “Consolidated HEADS IDE and method”), which includes the consolidated HEADS modelling language and code generation framework, the consolidated framework for resource-constrained devices and networks, the consolidated cloud-based platforms for testing and data management, and the consolidated release of the IDE for HD-services development. Moreover, the seventh milestone, “MS7 Second HEADS evaluation by industrial case studies”, has been reached and the intermediate evaluation has been delivered. The main baseline for this evaluation is the consolidated use case implementations: based on these implementations, the technical activities received evaluation feedback drawn from the experience of applying the HEADS technologies in the second increment of the use case implementation. This evaluation will be a main input for the final developments in HEADS, which will take place in the third and final period of the project.
In terms of dissemination and exploitation, updated status and plans are reported in “Dissemination, Exploitation and Open-Source Report and Plans - Period 2”. In particular, more concrete exploitation plans are provided, both as individual plans and as joint exploitations. For dissemination, particular focus has been placed on tutorials, hackathons and hands-on workshops, in order to also elicit feedback and evaluation from external users. Six such events have been organised by HEADS in the second period, with a total of more than 250 participants. Furthermore, four scientific publications have been published in this reporting period, including one journal article in IEEE Software. The consortium has also been active in disseminating HEADS results at significant events such as Mobile World Congress 2015 and Eclipse IoT Days 2015.
The project website was launched in November 2013 and is updated regularly. A Twitter account has also been created for the project and is used for news updates. Logos, clip-art, templates, a brochure and other dissemination materials have been produced and distributed.
Some highlights of the second period of the project (M12-M27) are:
- The Use cases have been refined to be better suited for the evaluation of the HEADS technologies
- Clarified goals and roadmap for the HEADS IDE in the deliverable “HEADS IDE: Workbench for developing HD-services”
- Provisioning of the consolidated version of the HEADS technologies
- Provisioning of the consolidated use case implementations applying HEADS technologies and reporting of the corresponding experiences and evaluation of the HEADS technologies
- Organization of six hands-on sessions (tutorials, hackathons and hands-on workshops) for dissemination and evaluation by external users, with more than 250 participants
- Paper publication in the IEEE Software journal
- One HEADS open-source component (the NodeJS package that gives a simpler API to npm install for programmatically installing "things") reaches between 1000 and 2000 downloads per day.
- Significant contributions of open-source software from external contributors (about 75 components)
- Initial interaction with open-source communities, in particular OW2, to prepare for the exploitation and sustainability of HEADS results
One of the main objectives of the project’s first year was the definition of the validation scenarios resulting from the two industrial use cases and the consolidation of requirements and user-level specifications for the HEADS methods and tools. Within this period, a common evaluation methodology and criteria were defined. The HEADS evaluation framework will enable the direct assessment of the HEADS technical results through the development of the use cases and against the identified requirements. Moreover, the approach for mapping technical objectives to user-based requirements is iterative.
Driven by the project objectives and considering the envisioned results from a contextual and industrial point of view, the use case providers, namely ATC and Tellu, have chosen two application domains (a media system and a personal security system) in order to cover different technical requirements that address all the challenges of building HD-Services. The media domain requires global aggregation and distribution of information from various sources and deals with high volumes of data, while the personal security system needs to communicate with a variety of heterogeneous devices and sensors and has stronger real-time requirements. Furthermore, this deliverable describes the business and technological domains of the case study owners as well as the development processes relevant to each case study. Finally, the scenarios resulted in the extraction and definition of an initial set of requirements from the use case providers that will act as the spearhead towards the implementation of the HEADS tools.
During the project’s first year the HEADS consortium clarified what will become the technical functionalities of the HEADS project. The interpretation of each user-based requirement by the HEADS technical team produced a list of detailed requirements that served as a guideline to define a full HEADS methodology. The functional analysis of the initial requirements performed by the HEADS consortium provided a consolidated version of the detailed requirements and established the validation methodology, which will enable evaluation of the HEADS platform in order to verify whether the project objectives are fulfilled.
HEADS evolves ThingML for modelling the behaviour of a network of devices and Kevoree for modelling and managing the deployment of distributed systems. A brief description of this baseline follows.
HEADS makes the future ThingML and Kevoree – modelling languages and transformations
ThingML is a domain-specific language and set of compilers for the Internet of Things, which provides concepts to define the implementation (white box) of the different nodes composing an HD-Service.
Kevoree is a models@runtime approach for the deployment and dynamic management (black box) of heterogeneous and distributed services.
For ThingML the recent developments achieved in HEADS are:
- A new ThingML-to-Java compiler, which generates plain Java SE 6 sources (for simple integration on Android), and which by default can compile to Java SE 8 Compact Profile 1 (the most constrained profile), to address a large range of Java devices.
- An automated testing framework to assess the consistency of the different ThingML compilers.
- An extensive set of bug fixes (raising test success from 61% to 85%) and improvements (notably the new ThingML-to-Java compiler)
- A complete re-design of the prototype ThingML compilers into the HEADS code generation framework, which will be implemented during the second year of HEADS.
- A set of applications of ThingML targeting the open-source community and involving open-source, open-hardware, EU-designed and EU-produced platforms: Arduino (Uno and Yun) and Raspberry Pi.
- A case study from ARTEMIS ArrowHead in collaboration with two Norwegian companies to develop a prototypical smart-home gateway based on the Z-Wave protocol.
For Kevoree the recent developments achieved in HEADS are:
- A complete refactoring of the Java platform to ease the development model (Kevoree V3)
- A refactoring of the Kevoree metamodel
- Several new components, channels and groups in the standard library for MQTT and CEP.
- A new design of the Kevoree script language
- A new Kevoree Eclipse plugin for the HEADS IDE
- A new Kevoree web model editor (editor.kevoree.org)
- Improvement in the documentation (a web-book is now online)
- A tutorial that has been presented at Middleware, CompArch and EJCP.
A proof-of-concept integration for the Java platform is now available (and will be consolidated and improved in the following days).
Programming languages and frameworks to interact with the physical world
- Reliability: small nodes cannot operate without being driven by a gateway, i.e. the overall service will fail when connectivity fails.
- Power consumption: as the small nodes need to communicate continuously with the gateway, they cannot sleep or turn off their communication chips and hence always consume energy.
Unlike previous approaches, the Eclipse M2M Koneki and Mihini projects propose to use Lua to ease the development of embedded and M2M systems, rather than directly using low-level C code. However, Lua cannot run on the most constrained nodes (such as Arduino), as it requires an operating system like Linux. This excludes a wide range of nodes of the computing continuum, which HEADS is addressing.
Development environment for the Internet-of-Things and M2M apps:
Modeling Languages: A common way to tame platform heterogeneity is to abstract all the business logic away from the platforms and rely on generative techniques to automatically target different platforms. However, modeling approaches often tend to re-model everything or migrate the legacy code into models, as the REMICS project proposes. The migration approach, while beneficial, is out of the scope of the HEADS project. Another challenge that modeling approaches face is their abstraction level. While abstraction makes it possible to “unify” different platforms in a single set of concepts, it should still be possible to compile the models to the different platforms. In practice, however, modeling approaches tend to abstract away most “implementation details” and do not provide a proper action language able to express fine-grained behavior. For instance, in UML, fine-grained behavior is usually written as opaque behavior (i.e. Java code written in a text field, with no tool support), with the following issues:
- The designer usually must have a good knowledge of how the rest of the model is compiled, so that the code they write does not conflict with the generated code and can actually interact with it;
- This fundamentally breaks the MDA philosophy: once Java code (for example) has been written in the model, it becomes difficult to target another platform, for example one based on C/C++.
ThingML Code Generation Framework
The experience acquired in Year 1 of the project has led to the proposal of an architecture for the HEADS code generation framework which allows platform experts to efficiently customize the existing code generators for their new platforms. This architecture and the different extension points of the code generation framework are described below. In the second period of the project, all the initial code generators will be re-engineered according to this architecture.
Figure: ThingML Code Generation Framework
The previous figure presents the 8 variation points of the ThingML code generation framework. These variation points are separated into two groups: those corresponding to the generation of code for "Things" and those corresponding to the generation of code for a Configuration (or application). In the ThingML metamodel, the coupling between these two items is through the instances of Things, which are contained in configurations. In the generated code, the idea is also to keep a separation between the reusable code generated for Things and the code generated to combine instances of Things into an application. During the second period of the project, the ThingML compiler will be evolved to provide explicit and easy-to-use extension mechanisms for these 8 variation points. The next paragraphs briefly describe each extension point.
(1) Actions / Expressions / Functions: This part of the code generator corresponds to the code generated for actions, expressions and functions contained in a Thing. The generated code mostly depends on the language supported by the target platform (C, Java, etc.), and the code generators should be quite reusable across different platforms supporting the same language. The implementation of this extension point consists of a visitor on the Actions and Expressions part of the ThingML metamodel. New code generators can be created by inheriting from that abstract visitor and implementing all its methods. Alternatively, if only a minor modification of an existing code generator is needed, it is possible to inherit from the existing visitor and only override a subset of its methods.
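As a sketch of this extension point (using hypothetical class names, not the actual ThingML framework API), the following shows an abstract visitor over a tiny expression metamodel, a concrete generator for a Java-like target, and a variant that inherits from it and overrides a single method:

```java
// Abstract visitor: one method per metamodel element; subclasses emit target code.
abstract class ExprVisitor {
    abstract String visitLiteral(int value);
    abstract String visitPlus(Expr left, Expr right);
    String generate(Expr e) { return e.accept(this); }
}

interface Expr { String accept(ExprVisitor v); }

class Literal implements Expr {
    final int value;
    Literal(int value) { this.value = value; }
    public String accept(ExprVisitor v) { return v.visitLiteral(value); }
}

class Plus implements Expr {
    final Expr left, right;
    Plus(Expr left, Expr right) { this.left = left; this.right = right; }
    public String accept(ExprVisitor v) { return v.visitPlus(left, right); }
}

// Concrete code generator for a Java-like target language.
class JavaExprGenerator extends ExprVisitor {
    String visitLiteral(int value) { return Integer.toString(value); }
    String visitPlus(Expr l, Expr r) {
        return "(" + generate(l) + " + " + generate(r) + ")";
    }
}

// Minor customization: inherit and override only the method that differs.
class VerboseJavaGenerator extends JavaExprGenerator {
    @Override String visitLiteral(int value) { return value + " /* literal */"; }
}
```

For example, `new JavaExprGenerator().generate(new Plus(new Literal(1), new Literal(2)))` yields `(1 + 2)`, while the verbose variant annotates every literal without touching the rest of the generator.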
(2) State machine implementation: This part of the code generator corresponds to the code generated from the state machine structures contained in Things. There are several strategies and frameworks available in the literature for implementing state machines. Depending on the capabilities, languages and libraries available on the target platform, the platform expert should have the flexibility of specifying how the ThingML state machines are mapped to executable code. In some cases the code generator can produce the entire code for the state machines, for example using a state machine design pattern in C++ or Java, while in other cases it might rely on an existing state machine framework available on the target platform. To allow for this flexibility, the ThingML code generation framework should provide a set of helpers to traverse the ThingML state machines and leave the freedom of creating new concrete state machine generators and/or customizing existing code generator templates. In order to check the "correctness" of a particular code generator with respect to the ThingML language semantics, a set of reusable test cases has been created and should pass on any customized code generator.
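As an illustration of one such strategy (a hand-written sketch, not actual ThingML compiler output), the following shows the kind of switch-based state machine code a generator might emit for a simple two-state "blink" thing:

```java
// Hypothetical generated code: a two-state machine driven by incoming messages.
class BlinkMachine {
    enum State { OFF, ON }
    private State state = State.OFF;
    private int switches = 0;

    // One "unit of execution": process a single message and fire one transition.
    void handle(String msg) {
        switch (state) {
            case OFF:
                if (msg.equals("toggle")) { state = State.ON; switches++; }
                break;
            case ON:
                if (msg.equals("toggle")) { state = State.OFF; switches++; }
                break;
        }
    }
    State state() { return state; }
    int switches() { return switches; }
}
```

Unrecognized messages are simply ignored by the current state, which matches the event-driven semantics described above.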
(3) Ports / Messages / Thing APIs: This part of the code generator corresponds to the wrapping of ThingML things into reusable components on the target platform. Depending on the target platform, the language and the context in which the application is deployed, the code generated for a ThingML "thing" can be tailored to generate either custom modules or to fit particular coding constraints or middleware to be used on the target platform. At this level, a Thing is a black box which should offer an API to send and receive messages through its ports. In practice this should be customized by the platform experts in order to fit the best practices and frameworks available on the target platform. As a best practice, the generated modules and APIs for things should be manually usable in case the rest of the system (or part of it) is written directly in the target language. For example, in object oriented languages, a facade and the observer pattern can be used to provide an easy to use API for the generated code. In C, a module with the proper header with structures and call-backs should be generated.
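The facade and observer idiom mentioned above can be sketched as follows (the thing, its ports and messages are hypothetical examples, not generated code):

```java
import java.util.ArrayList;
import java.util.List;

// Observer: clients register to be notified of messages leaving the thing's ports.
interface PortListener { void onMessage(String port, String msg); }

// Facade: the generated thing is a black box; clients only send and receive messages.
class ThermometerThing {
    private final List<PortListener> listeners = new ArrayList<>();

    public void addListener(PortListener l) { listeners.add(l); }

    // Deliver a message to a port; the internal behaviour stays hidden.
    public void receive(String port, String msg) {
        if (port.equals("ctrl") && msg.equals("read")) {
            publish("data", "temperature=21.5");
        }
    }

    private void publish(String port, String msg) {
        for (PortListener l : listeners) l.onMessage(port, msg);
    }
}
```

A hand-written part of the system can thus use the generated component directly, without knowing anything about its internal state machine.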
(4) Connectors / Channels: This part of the code generator is in charge of generating the code corresponding to the connectors and transporting messages from one Thing to the next. This is the client side of the APIs generated for the Things. In practice the connector can connect 2 things running in the same process on a single platform or things which are remotely connected through some sort of network (from a simple serial link to any point to point communication over a network stack). The way the code is generated should be tailored to the specific way messages should be serialized, transmitted and de-serialized. In order to customize this part of the code generator, the ThingML framework offers a set of helpers which allow listing all messages to be transported and pruning unused messages in order to generate only the necessary code. The dispatch and queuing of the messages has been separated out from the serialization and transport in order to allow for more flexibility.
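As a minimal sketch of the serialization side of a connector (the framing format below is purely illustrative and not a ThingML wire format), one side serializes a message and the other de-serializes it:

```java
// Hypothetical wire frame "<messageId>:<payload>" for transporting one message.
class Frame {
    final int messageId;
    final String payload;

    Frame(int messageId, String payload) {
        this.messageId = messageId;
        this.payload = payload;
    }

    // Sender side: serialize the message for the transport (serial link, socket, ...).
    String serialize() { return messageId + ":" + payload; }

    // Receiver side: rebuild the message from the wire representation.
    static Frame deserialize(String wire) {
        int sep = wire.indexOf(':');
        return new Frame(Integer.parseInt(wire.substring(0, sep)),
                         wire.substring(sep + 1));
    }
}
```

In the real framework this is exactly the part a platform expert would tailor, for instance replacing the text framing with a compact binary encoding for a serial link.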
(5) Message Queuing / FIFOs: This part of the generator is related to the connectors and channels but is specifically used to tailor how messages are handled when the connectors are between two things running on the same platform. When the connectors are between things separated by a network or some sort of inter-process communication, the asynchronous nature of ThingML messages is ensured by construction. However, inside a single process specific additional code should be generated in order to store messages in FIFOs and dispatch them asynchronously. Depending on the target platform, the platform expert might reuse existing message queues provided by the operating system or a specific framework. If no message queuing service is available, like on the Arduino platform for example, the code for the queues can be fully generated.
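On platforms without an OS queuing service, the generated FIFO can be as simple as a fixed-size ring buffer. The following sketch shows the idea in Java for readability; the actual output for Arduino would be C:

```java
// Fixed-capacity FIFO: the kind of queue a generator can emit when no
// message-queuing service exists on the target platform.
class MessageFifo {
    private final String[] buf;
    private int head = 0, tail = 0, size = 0;

    MessageFifo(int capacity) { buf = new String[capacity]; }

    boolean enqueue(String msg) {
        if (size == buf.length) return false;   // queue full: caller decides policy
        buf[tail] = msg;
        tail = (tail + 1) % buf.length;
        size++;
        return true;
    }

    String dequeue() {
        if (size == 0) return null;
        String msg = buf[head];
        head = (head + 1) % buf.length;
        size--;
        return msg;
    }

    boolean isEmpty() { return size == 0; }
}
```

The fixed capacity makes memory usage static and predictable, which matters on the most constrained nodes.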
(6) Scheduling / Dispatch: This part of the code generator is in charge of generating the code which orchestrates the set of Things running on one platform. The generated code should activate successively the state machines of each component and handle the dispatch of messages between the components using the channels and message queues. Depending on the target platform, the scheduling can be based on the use of operating system services, threads, an active object design pattern or any other suitable strategy. In ThingML the typical "unit of execution" is the processing of one message and the execution of a transition.
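A minimal round-robin dispatch loop matching this description could look as follows (a sketch with hypothetical names; each turn gives every component the chance to process one message, i.e. one unit of execution):

```java
import java.util.ArrayDeque;
import java.util.List;

// A component owns an inbox and processes one message per activation.
abstract class Component {
    final ArrayDeque<String> inbox = new ArrayDeque<>();
    abstract void process(String msg);   // run one transition of the state machine
}

class Scheduler {
    // Keep cycling through all components until no message is left to dispatch.
    static void run(List<? extends Component> components) {
        boolean progress = true;
        while (progress) {
            progress = false;
            for (Component c : components) {     // round-robin activation
                String msg = c.inbox.poll();
                if (msg != null) { c.process(msg); progress = true; }
            }
        }
    }
}

// Trivial component used to demonstrate the loop.
class Counter extends Component {
    int processed = 0;
    void process(String msg) { processed++; }
}
```

On a richer platform the same role would be played by OS threads or an active-object framework; only this part of the generated code needs to change.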
(7) Initialization and "Main": This part of the code generator is in charge of generating the entry point and initialization code in order to set up and start the generated application on the target platform. The ThingML framework provides some helpers to list the instances to be created, the connections to be made and the set of variables to be initialized together with their initial values.
(8) Project structure / build script: The last variation point is not generating code as such but the required file structure and build scripts in order to make the generated code better packaged and easy to compile and deploy on the target platform. The ThingML code generation framework provides access to all the buffers in which the code has been generated and allows creating the file structure which fits the particular target platform. For example, the Arduino compiler concatenates all the generated code into a single file which can be opened by the Arduino IDE. The Linux C code generator creates separate C modules with header files and generates a Makefile to compile the application. The Java and Scala code generators create Maven project and pom.xml files in order to allow compiling and deploying the generated code. The platform expert can customize the project structure and build scripts in order to fit the best practices of the target platform.
Resource-constrained devices and networks
In the first year of the project M2Mzone has set up a dedicated live server, based on the M2Mzone platform, specifically for the HEADS partners' integrations. The platform can be accessed at http://live.heads-project.eu. ATC and Tellu demonstrators will be added during year 2 of the project. M2Mzone has also installed a Z-Wave network and a serial-based network. The Z-Wave network has the following battery-operated sensors installed: 3 temperature and humidity sensors, 5 motion sensors, 1 energy meter, 7 remote light switches with energy monitoring, 1 remote door lock and 8 window and door sensors. M2Mzone used web services to implement a weather station demo. The serial-based network collects utilities information, including energy, water and oil usage, across 5 remote sites in Ireland. These sites also include serial-based air conditioning unit controls.
The main objective for the first project year has been to design and implement the initial version of the HEADS technology for resource-constrained devices. This task has a bottom-up approach; hence the work is based on the SINTEF Instrumentation physiological sensor device platform and the Arduino family of devices. Initial designs have been made for these platforms to support dynamic reconfiguration. The SINTEF physiological sensor design is enhanced with point-to-point communication channels for both internal and external communication, to facilitate modelling and reconfiguration. To enable functionality updates, over-the-air firmware upgrade will be added. Finally, a simple internal Complex Event Processor (CEP) will be added to enable distributed event processing all the way to the sensor modules. These methods will be implemented using the HEADS technologies ThingML and Kevoree; however, the initial implementations may be simple and manual. The Arduino platform will be used for ThingML modelling and code generation, and Kevoree for deployment tests, before applying these tools and methods to the physiological sensors.
Embedded Complex Event Processing
To address issues like battery operating time, the amount of data transmitted from the resource-constrained devices should be reduced to a minimum. To maintain the configurability of the HEADS architecture, however, an embedded CEP can be implemented in the device. The framework for such an embedded CEP has been defined and will be implemented later in the project using the HEADS tools.
The SINTEF physiological sensor units sample sensors at a constant rate in the range 0.01 Hz to 1 kHz. The result is a constant rate of Pulse Code Modulated (PCM) data. For high sample rates, the result is data with limited information, leading to only a few events. If this high-rate PCM data were fed directly to the CEP, it would result in a constantly high load on the embedded CEP engine, which combines sets of events and is fairly demanding on CPU and memory resources. Instead, low-overhead pre-processing of the high-frequency data can be done before data is sent to the CEP. This pre-processing, here called event extraction, runs at the high frequency of sensor data acquisition, while the CEP engine, handling more complex rules, can run at a much lower frequency.
The event extraction can be a set of efficient predefined functions handling data range checks, multi-zone classification, sliding-window storage and retrieval, or more advanced data processing functions.
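As a small illustration of such event extraction (an assumed design, not SINTEF's actual firmware), the following range-check stage turns a high-rate sample stream into sparse zone-change events, so that only events, not raw samples, reach the slower CEP engine:

```java
// Cheap per-sample pre-processing: emit an event only when the signal
// crosses the configured normal range, in either direction.
class EventExtractor {
    private final double low, high;
    private boolean inAlarm = false;

    EventExtractor(double low, double high) {
        this.low = low;
        this.high = high;
    }

    // Called at the sensor sampling rate; returns an event only on a zone change.
    String onSample(double value) {
        boolean out = value < low || value > high;
        if (out && !inAlarm) { inAlarm = true; return "ALARM"; }
        if (!out && inAlarm) { inAlarm = false; return "NORMAL"; }
        return null; // no event: nothing is forwarded to the CEP
    }
}
```

The per-sample cost is a couple of comparisons, so this stage can run at the full acquisition rate while the CEP only wakes up on the rare ALARM/NORMAL transitions.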
Event extraction and CEP must be implemented to fit with the limited resources in the embedded devices. The implementation is tightly linked with the communication channel and process architecture described in the previous chapter.
The embedded CEP architecture has not yet been implemented.
Cloud based platforms for testing and data management
The goal of the cloud-based platform for testing and data management was to define the foundation for simplifying the configuration of complex distributed and heterogeneous systems, in order to enable code generator testing and data management in the cloud. To achieve this goal, we first reported on the state of the art in cloud-based solutions for testing and data management, including techniques for managing deployment configurations in the cloud.
Based on this report, we built different platform implementations to support the automatic deployment and configuration of HEADS applications in the cloud. We also designed the architecture of a software architecture synthesiser for data management and code generator testing in the cloud. The main idea is to support the on-demand creation of a distributed complex event engine based on the complex event queries; the distribution of the CEP engine will ensure its scalability. The second part consists of synthesising a code generator testing engine for the HEADS platform: when a new code generator is written, we will support the creation of a code generator tester that compares the new implementation with existing code generators with respect to non-functional properties.
In the first 12 months, a first version of the baseline technologies has been provided.
As a key result, we achieved six internal milestones:
- Documentation for the basic tooling for the testing platforms and the HEADS Workbench was improved.
- A first sandbox to demonstrate the HEADS work was built. This sandbox, built with open-source hardware (Cubieboard, Cubietruck, Raspberry Pi, Arduino, BeagleBoard), represents an example of target platforms for HEADS.
- An initial virtual machine containing the results of this work package was provided to all the partners. It mainly contains a simple example of applications deployed on a set of Linux containers and the current version of the HEADS Workbench.
- A new implementation to drive system containers, in particular Docker containers (docker.io), from the Kevoree description language was provided.
- A first test generator for the ThingML code generators, using search-based techniques, was provided.
- An initial code recommender for complex event queries, using model-driven engineering techniques, was built.
Methodology and tool integration
In the first half-year of the project the baseline for the project was established. The main activity was to gather the state of the art for the main technologies and concepts to be used in HEADS. This state of the art serves not only the methodology work planned later in this work package but is also the groundwork for the technical work packages.
Additionally, the development infrastructure, together with guidelines for its use, has been developed. We relied here on established components already used by some of the partners.
In the second half-year of the project the focus was on developing the initial version of the HEADS IDE. It consists primarily of design-time tools available as Eclipse features. By offering correctly built Eclipse features on a commonly agreed Eclipse version, it was possible to create a HEADS IDE in which the chosen tools from three HEADS partners work successfully together in a common tool environment.
After finishing the deliverables, the technology partners identified available components for a first version of the HEADS IDE in the form of Eclipse features. First discussions on the methodology for HD-Service developers have also started.
In parallel to the development of the first tools for the HEADS IDE, discussions on the usage patterns of these tools led to a first version of the HEADS methodology, available online.
HEADS Safe@Home case study at Telenor EXPO (Dec 15)
The HEADS project consortium is setting up a stand of eHealth services related to the HEADS Safe@Home case study at...
HEADS upcoming meeting and tutorial, and Deutsche Welle presence (Nov 14)
On 22-24 November 2016, the HEADS project consortium is hosting its 11th plenary meeting at the...
HEADS paper accepted at GPCE 2016 (Sep 08)
The HEADS paper "Automatic Non-functional Testing of Code Generators Families" has been accepted for presentation at the 15th International Conference on Generative Programming: Concepts and Experiences (GPCE 2016).