
Virtualizing sensor networks: final video presentation


Author: Javier Lucio

The EU FP7 VITRO project (http://www.vitro-fp7.eu), devoted to the virtualization of sensor networks, is reaching its end. Its work has been covered in older posts: http://www.lacofa.es/index.php/general/english-virtualizing-sensor-network-vitro-practical-scenarios?lang=en and http://www.lacofa.es/index.php/general/english-virtualizing-sensor-networks?lang=en.

Now, you can see a video explaining the concept developed in the project through a practical use case:

Notification Server


Authors: Fernando Rodríguez Sela, Guillermo Lopez Leal, Ignacio Eliseo Barandalla Torregrosa


Today, mobile applications retrieve information asynchronously from multiple sites. Developers have two ways to retrieve it:

  • Poll: the client periodically queries the server for new information.
  • Push: the server sends the information to the client as soon as it becomes available.

The first method is strongly discouraged: most of the connections made to the server are needless, because no new information is available, and they waste time and resources.
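The difference between the two models can be sketched in a few lines of Python (the class and function names here are illustrative, not a real API):

```python
import queue

class PushServer:
    """Push model: delivers a message to subscribers only when one exists."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, message):
        for cb in self.subscribers:
            cb(message)  # one delivery per message, no wasted round trips

def poll(server_queue, attempts):
    """Poll model: most requests return nothing yet still cost a round trip."""
    wasted = 0
    received = []
    for _ in range(attempts):
        try:
            received.append(server_queue.get_nowait())
        except queue.Empty:
            wasted += 1  # a connection was made but no data was available
    return received, wasted

# Push: zero wasted requests for one message.
got = []
push = PushServer()
push.subscribe(got.append)
push.publish("update-1")

# Poll: ten requests for a single message, nine of them wasted.
q = queue.Queue()
q.put("update-1")
msgs, wasted = poll(q, attempts=10)
print(got, msgs, wasted)
```

The toy numbers make the point of the paragraph above: under polling, the cost grows with the polling rate, not with the amount of actual information.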

This is why Push methods are widely used for information retrieval. However, as currently implemented, Push platforms misuse mobile radio resources and consume too much battery.

This article aims to explain how to manage this kind of messaging, the problems with existing solutions and, finally, how Telefónica Digital, within the framework of the Firefox OS operating system, devised a new solution designed to be friendly to the network and light on mobile terminals' batteries.

State of the art

Historically, mobile operators have offered (and still offer) a true Push notification mechanism, known as WAP PUSH. This technology can "wake up" applications when the server side requires some action from them (without user interaction). WAP PUSH messages are sent in the circuit-switched domain, the same one used for voice and SMS, so the user does not need to establish a data connection. These messages work properly out of the box.

WAP PUSH works well when the user is registered on the mobile network, but a user who is out of coverage, or connected to a WiFi hotspot instead of a cellular network, cannot receive the messages.

Moreover, since sending these messages has an economic cost (each one is basically a normal SMS), the major smartphone operating systems (Apple iOS and Google Android) have implemented parallel solutions that work regardless of the mobile network the user belongs to and run smoothly over WiFi networks.

Issues with current solutions

These alternative solutions are based on a public server, accessible from the Internet, through which all Push messages for the mobile devices are channelled; it acts as an intermediary between third-party application servers and the devices.

These new Push messaging platforms had a big problem to solve: users' mobile devices usually sit inside private networks (not accessible from the Internet), and their communications pass through NAT servers and firewalls that block access to the phone from outside the private network, that is, from the Internet. To solve this, the operating systems open a connection from the device to their proprietary servers and keep it open, using this channel to communicate and receive Push notifications.

Both Apple and Google have taken this approach. But it forces the phone to keep a permanent connection open to their servers and, to prevent that connection from being closed (either by TCP's own timers or by the timers of the NATs it must pass through to reach the platform server), to send small data packets, known as keep-alives, which carry no useful data and exist only to signal that the connection is still in use.
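The keep-alive pattern just described can be sketched as a schedule; the timer values below are typical assumptions for illustration, not figures from any specific platform:

```python
# The client must send a tiny, contentless packet often enough that the
# NAT/TCP timers never expire. Both values here are assumptions.
NAT_TIMEOUT = 180          # seconds an idle NAT mapping survives (varies)
KEEPALIVE_INTERVAL = 120   # must be < NAT_TIMEOUT or the mapping is dropped

def keepalive_schedule(total_seconds):
    """Return the times (in seconds) at which keep-alives would be sent."""
    return list(range(KEEPALIVE_INTERVAL, total_seconds, KEEPALIVE_INTERVAL))

# Over one hour, the radio is forced out of its idle state 29 times just
# to transmit packets that carry no user data at all.
sends = keepalive_schedule(3600)
print(len(sends))
```

This is exactly the traffic the later sections show to be so expensive on cellular networks.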

This solution has some serious issues:

  • Keeping connections open on intermediate routers notably reduces the performance of those devices, causing big scalability problems on mobile networks. In Spain alone, for example, there are more than 15 million smartphones connected to the Internet.
  • Signaling storms: mobile networks use many signaling messages to manage a device's location, its status, the establishment of new connections, and so on. Each small data packet generates a large number of signaling events. Multiplying the number of smartphones by the signaling produced by each device, the networks become saturated by signaling messages alone, which lowers the quality of service they provide.
  • Finally, these solutions, which at first glance are valid on traditional networks (wired or home WiFi), are not valid on a cellular network, because of how a mobile phone's radio modem operates: it is designed to reduce battery usage when no data is being transmitted, but the Push solutions used by the main operating systems keep sending keep-alive messages, preventing the phone from entering that idle, low-power state.

A collateral effect of these solutions is that device batteries are drained quickly by redundant operations (such as sending small messages just to keep connections from being closed).

These problems get worse when we consider that many applications (including most of the top 10 in each app market) create their own connections instead of using the mechanism provided by the operating system, literally multiplying the problems with each new application.

So, what is happening on the mobile network?

As we said before, mobile networks do not work the same way as a WiFi network, and designing the previous solutions without first considering how these networks work leads to the problems described in the previous section.

So, to really understand the problem, we need to know the different states the radio modem can be in.

The 3GPP TS 23.060 specification defines the states of the GMM (GPRS Mobility Management) layer, which corresponds to the packet-switched domain on mobile devices. 3GPP TS 25.331 covers the RRC (Radio Resource Control) layer, where the radio states are defined.

Combining the radio states and the GPRS states tells us the actual state of the terminal.

NOTE: to simplify, the following table assumes there is no activity in the circuit-switched domain, i.e. no voice calls:




READY (2G) / PMM-CONNECTED (3G)

  • Cell_DCH: The phone is transmitting or receiving data over a dedicated channel or an HSPA shared channel. The Cell_DCH timers are very short, so if no data has been transmitted or received for the past few seconds, the timer moves the phone down to Cell_FACH. This timer is known as T1 and can vary between 5 and 20 seconds.

  • Cell_FACH: The phone transmitted or received data a few seconds ago and, due to inactivity (> T1), has been moved from Cell_DCH to Cell_FACH. If the inactivity lasts more than T2 seconds, the RRM orders the phone to move to Cell_PCH, URA_PCH or Radio Idle. It is also possible that the phone is exchanging small data packets in this state, such as pings, keep-alives or cell updates. The T2 timer usually lasts around 30 seconds.

  • Cell_PCH or URA_PCH: The phone was in Cell_FACH a few seconds ago and, due to inactivity (> T2), the RRM has moved it to Cell_PCH or URA_PCH. The signaling connection is still available, although no data is being sent right now. If new data must be sent over the connection, it is not necessary to recreate it; the existing one is reused.

STANDBY (2G) / PMM-IDLE (3G)

  • Cell_PCH or URA_PCH: The phone is not transmitting data and the signaling connection has been released; however, the PDP context is still active and the phone keeps a valid IP address. That is why this is one of the most interesting states to keep the phone in: battery usage is very low and the phone retains its IP address, so it can still receive data from the network. No resources are wasted in this state (network, battery or traffic), and yet the phone can send and receive data at any moment. As soon as a data link needs to be established, by the phone or by the network, the radio state changes from Cell_PCH or URA_PCH to Cell_FACH or Cell_DCH; this change takes less than half a second and consumes little signaling.

  • RRC Idle: The same as above, but with the radio in Idle mode. When the phone stays in Cell_PCH or URA_PCH with no activity for longer than T3, the RRM moves the radio from *_PCH to Idle. Re-establishing the radio link from this state takes more than 2 seconds and a lot of signaling.

IDLE (2G) / PMM-DETACHED (3G)

  • RRC Idle: The phone is not transmitting any data and there is no connection established, nor any signaling. The phone has no PDP context and therefore no IP address. If a phone has a PDP context, it will probably be closed automatically after 24 hours of inactivity (neither sending nor receiving anything).
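A toy model can make the demotion chain concrete. It uses mid-range values for the T1 and T2 timers mentioned above; the T3 value is an assumption added for illustration:

```python
# Inactivity timers (seconds) before each demotion. T1 and T2 use the
# mid-range figures from the text; T3 is an illustrative assumption.
T1, T2, T3 = 10, 30, 60

def rrc_state(idle_seconds):
    """Return the radio state after a given period with no data transfer."""
    if idle_seconds < T1:
        return "Cell_DCH"
    if idle_seconds < T1 + T2:
        return "Cell_FACH"
    if idle_seconds < T1 + T2 + T3:
        return "Cell_PCH/URA_PCH"
    return "RRC Idle"

for t in (0, 15, 50, 200):
    print(t, rrc_state(t))
```

Note how a keep-alive every two minutes would reset this clock before the phone ever reaches the cheap *_PCH or Idle states.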

Battery consumption in each of these states is:

  • RRC Idle – 1 relative unit of battery usage.
  • Cell_PCH – less than 2 relative units.
  • URA_PCH – less than Cell_PCH in mobility scenarios, and the same where there is no mobility.
  • Cell_FACH – 40 times the Idle consumption.
  • Cell_DCH – 100 times the Idle consumption.

It is easy to see how the previous solutions, with their keep-alive packets, prevent the device from staying long in the low-consumption Idle state.
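As a back-of-the-envelope illustration of those relative figures, a short calculation shows how much a chatty app costs compared with a quiet one. The keep-alive interval and the time spent in each state per wake-up are assumptions for the sake of the arithmetic:

```python
# Relative consumption per the list above: Idle = 1, Cell_FACH = 40,
# Cell_DCH = 100. Dwell times per wake-up are illustrative assumptions.
IDLE, FACH, DCH = 1, 40, 100

def hourly_cost(keepalive_interval_s, dch_s=2, fach_s=30):
    """Relative battery units per hour for a given keep-alive interval."""
    wakeups = 3600 // keepalive_interval_s
    active = wakeups * (dch_s * DCH + fach_s * FACH)
    idle = (3600 - wakeups * (dch_s + fach_s)) * IDLE
    return active + idle

quiet = hourly_cost(3600)   # one wake-up per hour
chatty = hourly_cost(120)   # keep-alive every 2 minutes
print(quiet, chatty, round(chatty / quiet, 1))
```

Under these assumptions, the app that pings every two minutes costs roughly nine times the battery of the quiet one, without delivering a single byte of useful data.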

Solution proposed by Telefónica

With all these problems on the table, and with the intention of building an operating system with new and innovative solutions, Telefónica Digital, in collaboration with Mozilla, has designed a new notification system that avoids keeping connections open inside the mobile network.

This new solution, implemented and distributed entirely as open source, defines not only how the notification server must communicate with the devices, but also the different APIs used to communicate with it.

To solve the previous problems, this solution keeps two different communication channels with the mobile devices. When the device is on a network not managed by the carrier (for example, home Wi-Fi), a connection is kept open, as other solutions do, using standard HTML5 WebSockets. When the device is on a mobile network managed by the carrier, the WebSocket connection is closed by the server, which will instead wake the device up when there are messages for it.
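A minimal sketch of this channel-selection rule, assuming a hypothetical flag that tells us whether the current data connection is on the carrier's own network:

```python
def choose_channel(on_carrier_network):
    """Pick the notification channel as described above.

    on_carrier_network is a hypothetical input; detecting it in a real
    client is outside the scope of this sketch.
    """
    if on_carrier_network:
        # The server closes the WebSocket; the phone can drop to a
        # low-power radio state and be woken up by a UDP packet later.
        return "udp-wakeup"
    # On WiFi or other unmanaged networks, keep the standard open
    # WebSocket connection, as other platforms do.
    return "websocket"

print(choose_channel(True), choose_channel(False))
```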

To wake up the phone, the notification platform relies on a simple but elegant mechanism:

  • We know that when the device establishes a data connection (a PDP context), its IP address (public or private) is kept by the carrier's servers (the GGSN) and not by the terminal itself; so even if the device enters a low-consumption mode, the IP address is not lost.
  • When the device is in Idle mode but has a data connection enabled, and the network needs to send it some data (the GGSN has received a TCP or UDP packet for the terminal's IP address), the network sends a signaling message, known as PAGING, used to "wake up" the phone and take it out of Idle mode. This PAGING message is similar to the one the cellular network uses to notify the phone of an incoming circuit-switched call (voice, SMS, etc.).
  • Given this behaviour of mobile networks, the only piece left to finish the puzzle is the ability to send a direct message to the phone; but since the phone is inside the mobile network (with a private IP), it is necessary to place a server inside each operator's mobile network.

This is the role of the WakeUp server inside the mobile network: it sends a small UDP packet to the phone's IP to "wake it up". Once the phone receives this message, it wakes up and connects to the notification server over the WebSocket-based channel to retrieve the pending messages.
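The WakeUp step itself is tiny. A hedged sketch in Python follows; the port and payload are illustrative placeholders, since the article does not specify the actual wire format:

```python
import socket

def send_wakeup(phone_ip, port=2442, payload=b"WAKEUP"):
    """Send one small UDP datagram toward the phone's (private) IP.

    Port and payload are illustrative assumptions. Delivery of this
    packet triggers the network's PAGING toward the idle phone; the
    datagram itself needs no reply.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        return s.sendto(payload, (phone_ip, port))

# Example, using loopback as a stand-in for a phone's private address:
sent = send_wakeup("127.0.0.1")
print(sent)  # number of bytes handed to the network stack
```

Because UDP is fire-and-forget, the WakeUp server stays trivially cheap: no state, no open connections, just one datagram per pending notification.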

Some examples with real apps

We cannot disclose the names of the apps analyzed here, nor the terminals on which they were tested. But we can say that they are very popular apps, which you probably have installed on your phone, running on the most popular phones in the market with the most widely used operating systems.

The following graphs show the data traffic these applications generate while idle, which is to say: doing nothing. Colors indicate different terminals.

We can observe that some applications make small, punctual transmissions, while others transmit constantly.

Study made by: Telefonica – NSN Smartphones lab

As we can see, the apps try to keep their connections open by sending "pings" at short, regular intervals. While the first graph shows an app that sends a message every 10 minutes, in the second and the last ones messages are sent continuously, keeping the phone permanently in the highest network state (remember the table and the relative consumption) and wasting resources just to say: "I'm alive".

With our solution, these regular "pings" are completely removed: a connection is made only when the phone receives a notification. This improves the use of the network, lowers signaling and also makes the phone's battery last longer, since the radio stays in less excited states.

Virtualizing sensor networks: VITRO practical scenarios


Author: Javier Lucio

The concept of sensor network virtualization was explained in an older post.

That post introduced and described the sensor network virtualization concept and the EU FP7 project VITRO.

Now, a set of six practical scenarios covering the virtualization concept is described here, for a better understanding and as an open door to conceptualizing new services, functionalities and business models in the development, deployment and use of sensor networks.

Each of the selected scenarios is intended to demonstrate relevant VITRO functionalities. Namely, we concentrate on service virtualization, resource virtualization, service composition, service provisioning and security, among other functionalities.

Scenario 1: Service virtualization

A virtual service is obtained by extending, merging and reconfiguring the behaviour of existing services and by composing involved resources in a transparent way with respect to the underlying technology.

A number of different sensor islands are connected. These heterogeneous islands are administered by different administrative domains and provide access to existing services and resources, e.g. energy-efficient smart schools and buildings in general. The end-user is able to monitor observations and measurements of room temperature, humidity and lighting for multiple buildings at disparate locations.

Each Wireless Sensor Island (WSI) is deployed so as to form a unique Virtual Sensor Network (VSN). Each WSI is equipped with different types of sensors, so information about a given observed metric may come from any one of the different WSIs.


Scenario 2: Service virtualization II

This scenario consists of the formation of two Virtual Sensor Networks (VSNs) over the same sensor network infrastructure. Two VSN instantiations are formed within the same Wireless Sensor Island (WSI). A sensor participating in the first VSN, e.g. providing one of its sensing capabilities, can also be part of the second VSN, providing a supporting service such as routing the second VSN's data packets.

The service virtualization scenario can be distinguished in two different cases: 1) two users requesting data from the same node, and 2) users requesting data from different nodes, where one or more of the traversed nodes serve as routers supporting both network instances.

This scenario could be applicable to use cases such as controlling traffic lights and managing traffic in a city.


Scenario 3: Resource virtualization

The third scenario demonstrates the concept of virtualizing node resources in order to provide relaying capabilities and form a Virtual Sensor Network (VSN) able to recover from network disconnections.

A Delay Tolerant Networking (DTN) scheme is used when a formerly connected Wireless Sensor Island (WSI) becomes disconnected for a long period of time and is therefore partitioned into a number of smaller WSIs. DTN basically relies on the mobility of some nodes to deliver data among these WSIs and establish end-to-end communication.

This scenario could be applicable to the management of disasters, or to security in public spaces where recovering the network connection is an issue. To that end, the scenario employs DTN mechanisms that support the delivery of data when end-to-end connectivity within a VSN is intermittent or unavailable.


Scenario 4: Security related virtualization

This scenario is based on trust-aware routing protocols, and could be applicable to services such as monitoring production in sales chains or security in public spaces.

The scenario deals with the case where a Wireless Sensor Island (WSI) contains some malicious nodes. The routing metrics utilized by the routing protocol to establish the routes along which data packets are forwarded will be shown to efficiently detect malicious nodes (e.g. grey or black nodes that drop part or all of their incoming traffic) and re-route the data traffic around them. This scenario also demonstrates the effectiveness of excluding a node that does not support encryption, providing robustness and attack resistance to the WSI.

Scenario 5: Service composition

This scenario consists of composing a new high-level service from a set of available primary services. For example:

  • Composition of a new service called "fire detection" from two primary services: CO2 measurements and temperature/light intensity measurements.
  • Composition of a “public security” service from a composed motion-detection and light intensity observation services.

In both cases, a wireless sensor network (WSN) is formed by a set of nodes (CO2 and light sensors, or motion-detection and light sensors) deployed to monitor a given area. The sensors run the CoAP protocol, so they are able both to periodically send data to a specific CoAP client that is querying the information and to answer specific interrogations.

The nodes are connected to a gateway, which acts as a bridge between the sensors and the service provider (SP) in charge of processing the data received from the WSN, combining the observations carried out by the WSN and, eventually, raising an alarm if a target event is detected. In the "fire detection" scenario, the SP aggregates the light intensity and temperature readings; when both values are too high, exceeding a given threshold, an alarm is raised. In the "public security" scenario, the behaviour of the SP is similar: the sensors publish their measurements and the SP processes the received data, notifying the end-user when an event is detected.
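The aggregation rule described for the SP can be sketched as a simple threshold check. The threshold values below are illustrative assumptions, not figures from the project:

```python
# Composite "fire detection" service: raise an alarm only when both
# primary services exceed their thresholds. Values are assumptions.
TEMP_THRESHOLD = 60.0    # degrees Celsius
CO2_THRESHOLD = 1000.0   # ppm

def fire_alarm(temperature, co2):
    """Combine two primary services into one high-level decision."""
    return temperature > TEMP_THRESHOLD and co2 > CO2_THRESHOLD

print(fire_alarm(75.0, 1500.0))  # True: both readings exceed thresholds
print(fire_alarm(75.0, 400.0))   # False: hot, but CO2 level is normal
```

Requiring both services to agree is what makes the composed service more robust than either primary service alone (a hot day does not trigger the alarm by itself).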

Scenario 6: Service provisioning

This last scenario is devoted to demonstrating the continuation of service provisioning under adverse circumstances. Let’s use as the example service a composite service providing average temperature measurements from a set of temperature sensors in a room or another predefined area. The scenario demonstrates the robust and continued provision of this service in the event that one of the sensors participating in the corresponding Virtual Sensor Network (VSN) goes “down” (for example, due to energy depletion or malfunction). The system shall be able to select another sensor (in the vicinity) or group of sensors that provides an equivalent operational service and integrate it to the VSN, thus allowing the continuation of service provisioning, in a transparent way to the final user.

An end-user requests from a Service Provider an average temperature monitoring service for one or more locations, additionally specifying the number of sensors that will be providing the measurements per location. For example, the user can define that it is required that the average temperature result is reliable only when it is calculated based on 5 measurements from different sensors in a location.

After the service is initiated and the end-user receives the first results for the average temperature at each location, some sensors involved in the formed VSN will become unavailable (e.g. because they have exhausted their power resources and shut down). In this case, the gateway should be able to detect, based on a "disappearing node" event, that the required number of sensors supporting the VSN is no longer met, and query the "Resource and Services Controller" for other nodes within its Wireless Sensor Island (WSI) that can also provide temperature measurements. Finally, the required number of these equivalent temperature-sensing nodes will be tasked to join the VSN, so that the service continues to be provided to the end-user uninterrupted.
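The gateway's recovery logic in this scenario can be sketched as follows; the names and the quorum value are illustrative placeholders, not taken from the VITRO implementation:

```python
# Per the example above, the user requires 5 sensors per location for a
# reliable average. This value is the user's requirement, not a constant
# of the system.
REQUIRED_SENSORS = 5

def maintain_vsn(vsn, available_spares):
    """Replace disappeared nodes so the averaging service keeps its quorum.

    `available_spares` stands in for the equivalent nodes the
    "Resource and Services Controller" would return.
    """
    while len(vsn) < REQUIRED_SENSORS and available_spares:
        vsn.append(available_spares.pop())  # task an equivalent sensor
    return len(vsn) >= REQUIRED_SENSORS

vsn = ["s1", "s2", "s3", "s4", "s5"]
vsn.remove("s3")                      # "disappearing node" event
ok = maintain_vsn(vsn, ["s6", "s7"])
print(ok, len(vsn))
```

The end-user never sees the substitution: the VSN membership changes, but the composite average-temperature service keeps reporting with the requested reliability.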

Fast, open and smart: Why Firefox OS is a game-changer


By Carlos Domingo, Director, Product Development & Innovation, Telefónica Digital

Originally posted in Telefonica Digital blog


The unveiling today of Mozilla’s Firefox OS is another big step forward – for smartphones, for HTML5 and for bringing the wonders of mobility to the entire world and not just the privileged few. Working with Mozilla, we showed off the technology at Mobile World Congress earlier this year and are proud to have played a part in what we believe will be a very compelling and useful new option for customers.

By focusing on HTML5, Mozilla has created a platform for very fast yet affordable devices based on open source technology.

Firefox OS combines HTML5 with a Linux kernel, allowing every function of the mobile phone to be developed as a web app and run through a special version of the Firefox web browser. Making a call, sending a text, taking a photo would all be done by an HTML5 app.

The result is a better experience and better performance, even at low-end price points. And this is crucial if we're going to get smartphones into the hands of more and more customers in the developing world. Take Brazil, where current smartphone penetration is approximately 14% and only a small percentage of people have the money to buy top-end smartphones from the mega-brands. Yes, there are low-end Android devices for $100, but they tend to use older versions of the OS, which rapidly become out of date, and their performance is not that great.

Firefox OS can deliver a far better experience at these price points and that's why we are so keen to bring it to markets like Brazil.

The early signs for Firefox OS are positive: strong carrier and device support. But also, the openness of the architecture bodes well for the third leg of the stool: developer support. The history of technology suggests that being open and offering great value are common ingredients of success – just look at how Mozilla drove innovation in web browsing through Firefox.

And that’s why we’re bullish on the prospects for Firefox OS.

Follow Telefonica Digital on Twitter @tefdigital for more updates

FINSENY: Future Internet Technologies for the Smart Energy (II)


FINSENY Mission & Strategy:

The FINSENY mission and strategy define what the project plans to do and how it will be accomplished:

« Demonstrate, by 2015, how open Future Internet Technologies can enable the European energy system to combine adaptive intelligence with reliability and cost-efficiency to meet, sustainably, the demands of an increasingly complex and dynamic energy landscape. »

FINSENY is specifying use cases, ICT requirements and architectures in the Smart Energy domain for five scenarios identified as strongly benefiting from Future Internet technologies:

  • Distribution Networks,
  • Microgrids,
  • Smart Buildings,
  • Electric Mobility and
  • Electronic Marketplace for Energy.

Distribution System (DS):

Problem Statements

  • The increasing amount of volatile DER generation in the Distribution System will change today's well-planned operation, based on standard load profiles, into a dynamic demand-response approach based on actual status information from the medium- down to the low-voltage parts of the grid.
  • Energy flows are increasingly bidirectional, intensifying personal safety and grid overload issues and demanding increased efficiency in energy distribution.
  • New smart energy applications are to be incorporated which handle the need for varying generation and demand levels to be balanced while optimally utilising grid resources.
  • Ubiquitous ICT solutions need to be defined, designed, financed, built, operated & maintained (see design principles in the above section).


  • “Design a future ICT solution for Distribution System automation & control to increase energy quality, reliability, robustness and safety and to ease integration of Distributed Energy Resources.”


  • Automated fault restoration, power analysis & control, grid maintenance.

Customers and Benefits

  • DSOs will get the solutions to optimally handle grid capacity and energy flows.
  • Service providers will get the interfaces to provide innovative energy services.
  • Prosumers can optimise their generation and consumption based on a stable distribution grid.

Key factors to judge the quality of the outputs

  • Reliability, safety, security and cost-efficiency of the solutions.

Key Features and Technologies

  • Decentralised operation, connectivity and control by scalable ICT solutions.

Outputs Critical Factors

  • Interoperability and integration (with legacy systems), scalability, allowing both centralised and de-centralised control, open & secure ICT solutions.


Problem Statements

  • How to operate local low voltage or even mid-voltage distribution systems with Distributed Energy Resources (DERs) and storage devices to satisfy demand of energy consumers in an autonomous (islanding mode) or semi-autonomous (interconnected to the main grid) way.
  • How to build the Microgrid platform on Future Internet technologies to be more cost-efficient, flexible, secure, scalable, reliable, robust and modular.


  • “Design a reliable and cost-efficient Microgrid platform which ensures flexibility, scalability and robustness. The design will be modular and applications/services will be loosely coupled. Devices in or at the edge of the grid (e.g. DERs) will be easily integrated and control/communication networks will be managed to ensure the right level of QoS.”


  • Microgrid Control Centre and interface to the control and communication network to operate the system and integrate prosumers with e.g. DERs or Smart Buildings.
  • Configuration, monitoring & control, data management & processing.

Customers and Benefits

  • Microgrid operator has a flexible Microgrid platform to deploy in his environment.
  • Prosumers have an aggregation platform to include their DERs and flexible demand.

Key factors to judge the quality of the outputs

  • Internet of Things technologies for device & resource management, connectivity services at the interface to networks, data management and security.

Key Features and Technologies

  • Decentralised operation, connectivity and control by scalable ICT solutions.

Outputs Critical Factors

  • Regulatory hurdles; however, the approach is already aligned with governmental goals for an increased share of renewable energy and higher reliability, while providing cost-efficiency.

Smart Buildings:

Problem Statements

Comprehensive building energy management:

  • With the combined goals of building-scale optimisation (local source-load-storage balancing and efficiency) and grid-scale optimisation (demand-response).
  • Under constraints of scalability, separation of concerns and auto-configuration.


  • “Design of future comprehensive Building Energy Management Systems as flexible edge of the Smart Energy system and as key element for shared Future Internet platforms.”


  • Make it possible to monitor and control all energy-relevant building subsystems, appliances and other physical entities operating on top of a shared platform: a building "operating system" provided to all building applications.
  • Managing all entities through common interfaces based on a generic model akin to that of a peripheral driver in a computer operating system.

Customers and Benefits

  • Building owners and stakeholders
  • Facilities managers and building services providers
  • Building end-users
  • All shall benefit from a horizontal building energy management system that interoperates fully with other building automation systems operating on top of the same shared building operating system.

Key factors to judge the quality of the outputs

  • Use a smart building "operating system" providing an interface to the building's physical entities and a common service layer shared by all building applications.

Key Features and Technologies

  • Provide information models and interfaces that encompass all energy-relevant legacy building hardware and equipment. Demonstrate corresponding monitoring and control interfaces for key types of such equipment.
  • Provide information models and interfaces that make it possible to interoperate with existing building ICT systems. Demonstrate corresponding monitoring and control interfaces for such systems.
  • Specify application layer that combines local and global energy optimisation.

Electric Vehicles:

Problem Statements

  • As the number of electric vehicles on our roads increases, charging them will become a major load on the electricity grid; its management poses challenges to the energy system, while also offering a contribution to balancing the volatility of energy generation from renewable sources.
  • The provision of a seamless infrastructure for charging electric vehicles in Europe poses challenges to the transport, energy and payment infrastructure owners, as well as to regulators. At the same time, electric vehicles, if connected wirelessly to the transport infrastructures, offer the potential to support multi-modal transport solutions.


  • “Design Smart Energy solutions so that electric vehicles will be an integrated part of the energy infrastructure, maximising their benefits to the energy infrastructure.”


  • Defining the role that electric vehicles can play in the Smart Energy infrastructure.

Customers and Benefits

  • Energy stakeholders are provided with scenarios for integrating electric vehicles into their plans for evolution towards Smart Energy solutions.
  • Energy stakeholders are given an overview of the ICT requirements and functional architecture issues from the perspective of electrical vehicle usage.
  • Users can charge in a user friendly way and can use electric vehicles as part of multi-modal transport solutions.
  • Energy stakeholders could possibly use the control of charging times for the vehicles to assist in energy grid management.

Key factors to judge the quality of the outputs

  • Defined ICT requirements and functional architecture enabling the integration of electric vehicles into the energy infrastructure.

Key Features and Technologies

  • Scalable solutions as the number of vehicles grows; access to services wherever the user is, via cloud computing and defined network interfaces; wireless and fixed converged networks; infrastructure as a service.

Outputs Critical Factors

  • Open interfaces and secure ICT solutions are needed. The rapid pace of market change as the commercial side of electric vehicles develops is already evident, creating timing issues for the introduction of common solutions. Regulatory issues will play a key role in this emerging market.

Electronic Market Place for Smart Energy:

Problem Statements

  • Those who are going to participate more actively in the energy supply, such as DSM customers, Prosumers, Microgrids, DERs, need a kind of electronic market place (for information and services). The services should be offered via the Future Internet. This electronic market place could in particular give information useful for balancing supply and demand and to check grid restrictions.


  • “Design ICT systems to extend web based energy information, demand shaping and energy trading services for the emerging energy market players.”


  • ICT systems to enhance contract negotiation, competitive price awareness and energy trading also at regional-level.

Customers and Benefits

  • Final Energy customers should be more aware and have a broader choice of Energy supply; energy trading at micro levels will be available for new prosumers, as well as better management of grid stability and planning for utilities.

Key factors to judge the quality of the outputs

  • Very flexible and secure web based energy services.

Key Features and Technologies

  • Large-scale data gathering and management via the web; an Internet of Energy linking objects and customer-prosumers.

Outputs Critical Factors

  • Perceived marketplace trustworthiness from energy stakeholders will be fundamental together with the user engagement via the Internet in Smart Energy services promoted in FINSENY.


FINSENY is a Future Internet (FI) project studying innovative new FI technologies to apply them to the Smart Energy landscape. The need for more ICT as described in this paper is widely agreed in the Smart Energy community to accomplish the challenges of the envisioned energy system. Future Internet technologies offer several opportunities for Smart Energy solutions, including connectivity, management, service enablement, distributed intelligence as well as security and privacy.

In the FINSENY project, key players from the ICT and energy sectors teamed-up to analyse the most relevant Smart Energy scenarios, identify prominent ICT requirements, develop reference architectures and prepare pan-European trials. As part of the FI-PPP, FINSENY will demonstrate over the three phases of the programme that Smart Energy needs can be fulfilled through an ecosystem of generic and Smart Energy specific ICT enablers running on top of an open Future Internet platform.

FINSENY will shape the European Future Internet ICT platform(s) to support the emerging European Smart Energy era. The growing Smart Grid Stakeholder Group will provide broad visibility of the on-going project work in the energy community, enhancing the acceptability of the project results and facilitating the development of the Smart Energy landscape.

FINSENY: Future Internet Technologies for the Smart Energy (I)


Author: Javier Lucio

Planning of the future energy supply means defining optimum trade-offs between reliability, sustainability and costs. The increasing use of renewable energy sources is creating new challenges for energy providers. Peaks in energy generation are happening more frequently and require new solutions to maintain the reliability of the supply. At the same time, users are being empowered to take an active role in the energy arena as prosumers and operators of micro-grids.

Future Internet technologies will play a critical role in the development of Smart Energy infrastructures, enabling new functionality while reducing costs.

Telefonica I+D is participating in the FINSENY project, where key actors from the ICT and energy sectors team-up to identify the ICT requirements of Smart Energy Systems. This will lead to the definition of new solutions and standards, verified in a large scale pan-European Smart Energy trial. Project results will contribute to the emergence of a sustainable Smart Energy infrastructure, based on new products and services, to the benefit of all European citizens and the environment.

As part of the FI-PPP programme, FINSENY will analyse energy-specific requirements, develop solutions to address these requirements, and prepare for a Smart Energy trial in phase two of the programme.

FINSENY vision:

« A sustainable Smart Energy system in Europe, combining critical infrastructure reliability and security with adaptive intelligence, enabled by open Future Internet Technologies »

This Vision statement provides the context for the FINSENY project. It is based on the likely and potential evolution of both Smart Energy and the Future Internet, as well as their interaction.

How is the Smart Energy landscape likely to evolve?

The evolution of the energy system in Europe follows three different paths: its information system, its provision system and its market system. The broad trends are from centralised to decentralised control and generation, in parallel with market liberalisation and open energy markets.



The evolution of the Smart Energy landscape will differ from country to country, and is subject to a level of uncertainty which increases as one looks further ahead in time. With the focus on ICT for Smart Energy, it is beyond the scope of FINSENY either to predict the likelihood of each evolutionary step or to evaluate the different combinations. However, it seems clear that the Smart Energy system of the future will include the following critical features:

  • Reliability – minimal interruptions to supply at all customer levels.
  • Safety – all members of society will be protected from dangerous occurrences.
  • Security – ensure compliant use of information and protect the network from unwanted intrusions, whether physical or cyber.
  • Adaptability – be capable of operating with a wide mix of different energy sources and be self-healing through decision-making on a local level.
  • Utilisation – improved utilisation of assets through monitoring and control.
  • Intelligence – the gathering and management of information relating to customers and assets throughout the network and using such information to deliver the features above.

The evolutionary process will begin with local smart solutions to specific problems relating to the mix of energy sources, demand management, electric vehicles (EVs) and/or the operation of Microgrids. Solutions capable of supporting local solutions must be scalable in order that the local solutions join together to form larger and larger smart networks. The speed of such growth will be governed by many factors relating to the policy within an energy company, regulatory conditions nationally and across Europe, growth and cost of renewable energy, proliferation of EVs and above all, by choices customers will make which the utility cannot influence but to which it must respond in order to deliver on the critical features identified above.

Taking this a step further, the customer, whether large or small, will be a major player in the Smart Energy networks of the future. This is considered in the FINSENY scenarios and use cases.

How is the Future Internet likely to evolve?

The ICT landscape exhibits very short innovation cycles and is continuously evolving. A number of new trends are observable and already shaping upcoming ICT industry solutions. These developments can be summarized by the term “Future Internet”:

  • Evolution of communication networks – LTE (4G) in the wireless domain, as well as new wired technologies (e.g. Fibre-to-the-X), offer not only increased bandwidth but also different Classes of Service approaching real-time requirements. Furthermore, with innovations in network virtualisation, new flexibility for network control emerges.
  • Internet of Things – new mechanisms are being developed to easily manage huge numbers of interconnected devices which interact with their environment. Sensor data can be collected, aggregated, processed and analysed to derive contextual awareness and improved control decisions.
  • Internet of Services – facilitates the establishment of complex business relationships between multiple stakeholders, paving the way for innovative business applications.
  • Cloud Computing – private or public, supports a transition of business models towards the “as a service” paradigm.

Additionally, with steadily increasing volumes of data, a combination of these technologies supports the exchange, processing and analysis of massive amounts of data to derive useful information. Finally, the Future Internet includes the seamless integration of this plethora of new technologies to realise secure solutions for systems of increasing complexity.

How can the Future Internet enable Smart Energy?

The ICT challenge of Smart Energy is to exchange information across multiple domains, among devices and between subsystems of diverse complexity. In addition to interoperable and standardised communications between such elements, future Smart Energy systems will rely on the availability of access to, and correct configuration of, systems across ownership and management boundaries, such as those between energy management systems, energy markets and electricity distribution with distributed resources. Interactive customers with smart meters, building energy management systems, intelligent appliances and electric vehicles have to be integrated. Future Internet technologies offer several opportunities for Smart Energy solutions, including:

Connectivity – The Future Internet will bring end-to-end connectivity between a large variety of grid elements, including distributed energy resources, building energy management systems and active resources such as electric vehicles. For a general and cost-effective approach, the use of common and public communication infrastructures has to be targeted. While current 2G/3G networks are sufficient as access technology for first-generation Smart Grid applications such as metering, LTE, IPv6 and other Future Internet communication technologies offer the capabilities for demanding and delay-sensitive applications. For certain Smart Energy applications, real-time communication plays an important role in fulfilling the requirements for synchronisation and guaranteed reaction times of control actions. Advanced Future Internet forwarding and control plane solutions have to be considered in order to fulfil these requirements. Network virtualisation techniques can provide the means to run dedicated Smart Energy communication networks, e.g. for mission-critical communication, on top of public infrastructures.

Management – Smart Energy introduces many new managed elements with dramatically increased data volumes in the network and data centres, resulting in additional management burden, complexity and cost. There are opportunities to utilise elements of Future Internet architectures and concepts in Smart Energy management: (i) device management and flexible object registries / repositories support mass provisioning and software maintenance; (ii) flexible, secure data management: aggregation, correlation and mediation; (iii) network management evolves towards scalable multi-tenant (cloud) operations where organisations can manage their own objects as required; (iv) service-enabling platforms will simplify service application development; (v) local management and decentralised data processing solutions will support Microgrids and islanding operation modes; (vi) telecom billing and rating solutions already support huge capacities and flexibility for the various post- and prepaid business models, and dynamic load- and time-based pricing of energy can easily be added.

Service enablement – The Future Internet provides novel technologies for instant collaboration between suppliers, network operators and prosumers. Timely, reliable and highly confidential information on the status of the grid will become accessible to all relevant stakeholders. But beyond the monitoring currently performed with smart meters, the Future Internet enables new web services based on bi-directional communication and interaction between suppliers and consumers: demand response, balancing and ancillary services, dynamic pricing, and the buying and selling of power are just a few of the promising future applications that will be enabled by advanced ICT solutions.

Decentralised and distributed intelligence of the grid – Future Internet technologies will introduce new techniques in hardware, and even more in software, effectively injecting intelligence into the grid. The electricity system that we inherited from the 19th and 20th centuries has been a reliable but centrally coordinated system. With the liberalisation of European markets and the spread of local, distributed and intermittent renewable energy resources, top-down central control of the grid no longer meets modern requirements. Tomorrow's grid needs decentralised means of information, coordination and control to serve the customer. ICT is essential to achieving these innovations, as we have already seen in today's networks – telecommunication networks and the Internet itself being noteworthy examples.

Security & privacy – Electricity grids are a critical public infrastructure, so it is very important to underline the importance of security and trust, expressed as both reliability and privacy. Future Internet technologies will provide new and improved means to support security and privacy. Authentication and integrity protection of the control communication and data exchange play an important role in Smart Grid operation. The highest security standards have to be applied, especially for mission-critical infrastructure. The privacy of European Union citizens' personal lives will be considered in depth, as any privacy loophole will strongly impact public acceptance of Smart Energy services and solutions. Future Internet privacy solutions have to be provided especially for the Smart Building and Electric Mobility scenarios, and whenever user data is handled.

The Future Internet will enable Smart Energy systems by fulfilling the stringent requirements of the energy system and by providing high quality: reliability, scalability and security.


Design principles for ICT to enable Smart Energy

The FINSENY vision makes it clear that future Smart Energy systems will be guided by a set of design principles, which in turn impose a set of design principles on the corresponding ICT (and FI) architecture.

The list of the prominent Smart Energy design principles is:

  • Openness to new service providers and business models.
  • Flexibility to incorporate new customers (generation and consumption) as well as mobile loads.
  • Decentralisation of control to support distributed generation and consumption.
  • Introduction of autonomous energy cells (e.g. Microgrids, Smart Buildings).
  • Security and safety mechanisms on all customer-, system- and service-levels.
  • Automation of critical control processes.
  • High reliability and availability of all systems and services.
  • Cost efficiency.

These future Smart Energy design principles can be translated into those for supporting the future ICT infrastructures. These deduced ICT and Future Internet design principles will help the developers of enablers to properly tackle the right priorities while conceptualising the functional blocks of the solutions envisaged within the FINSENY domains. A selective set of ICT and Future Internet design principles is:

  • Open (and standardised) Interfaces guarantee the compatibility and extensibility of the system.
  • Simplicity limits system complexity and improves maintainability.
  • Flexibility allows for adaptability and loosely coupled systems.
  • Scalability ensures that the system continues to function under changes in e.g. volume or size without affecting its performance.
  • Modularity promotes vertical encapsulation of functions.
  • Maintainability and Upgradability lead to suitably manageable and sustainable designs.
  • Security & Privacy by Design comprises the complete system development process.
  • Support of Heterogeneity is a major design principle.
  • Reasonable Dimensioning to optimise cost-efficiency without compromising overall performance.
  • Robustness ensures that systems survive failures.
  • Locality guides the design of self-healing and robust logical systems.
  • Encapsulation/Isolation of faults supports the concept of Locality.
  • Auto-configuration supports the concept of Plug&Play.
  • Quality of Service (QoS) classes guarantee well-defined performance metrics.
  • The Decentralisation of Control Structures supports scalability, performance and locality.
  • The Decentralisation of Processing demands distributed data storage and processing units, and may introduce hierarchies.
  • End-to-end Connectivity ensures that a network provides a general transport service that is transparent to applications.
  • The Networks of Networks principle supports the decomposition into a collection of autonomous networked sub-systems.

“Super” Wi-Fi? Not even the Wi-Fi Alliance believes it


The confusion surrounding the badly misnamed “Super Wi-Fi” keeps growing. News has appeared of a deployment of one of these whatever-they-are Wi-Fis, and the details described are questionable to say the least. The story is:

SuperWiFi launched in Canada, but it’s not U.S. Super Wi-Fi

It concerns a network to be deployed by a company called Navigata Communications 2009, through its subsidiary kuMobile, using a licence they hold in the 3.5 GHz band. 3.5 GHz?! But that is not television spectrum; it is not the White Space left unused by TV broadcasters. It is spectrum that was licensed for, among other things, WiMAX. If that sounds odd, what comes next is even more striking. The story says:

“The kuMobile SuperWiFi network will support the 802.11n standard delivering throughput up to 160 Mbps. SuperWiFi users will be able to access the Internet throughout the city from any standard WiFi-enabled device. Voice, video, data and web browsing will all be supported on a high-speed, low-latency network”

Let's pick the story apart:

  1. If this “whatever-it-is Wi-Fi” operates in the 3.5 GHz band, it is not standard 802.11n.
  2. This “whatever-it-is Wi-Fi” cannot deliver 160 Mbit/s over a wide coverage area. We have already explained why in other posts on this blog. What cannot be, cannot be, and is moreover impossible. If 802.11af, an 11n Wi-Fi “tuned” to work over wide areas, cannot do it, a plain 11n certainly cannot.
  3. And how is a user supposed to connect a standard Wi-Fi 11n device to this “whatever-it-is Wi-Fi”? They do not even operate in the same frequency band! Standard 11n works in the 2.4 and 5 GHz bands, not at 3.5 GHz.

This kind of news has even managed to “irritate” the Wi-Fi Alliance, guardian of the standard Wi-Fi implementation and certifier of every product bearing that name. They have issued a very enlightening official statement:

Wi-Fi Alliance® statement regarding “Super Wi-Fi”

Inaccurate moniker will lead to consumer confusion

The two main points of their statement are:

  1. The technology touted as “Super Wi-Fi” does not interoperate with the billions of Wi-Fi devices in use today
  2. Today’s deployments in Television White Spaces do not deliver the same user experience as is available in Wi-Fi hotspots and home networks

The word of the Wi-Fi Alliance.


“Super”? Wi-Fi: let's not be fooled by marketing names


Taking the news of the first county in the U.S. to install a so-called “Super Wi-Fi” as a starting point, I will try to pin down just how super this Super Wi-Fi really is. I am afraid it is a flashy commercial name that can be misleading.

The proponents of this new (though really not that new) Wi-Fi gave it the “super” prefix because of its long range. That much is true. TV white spaces in the VHF/UHF bands allow greater coverage than the traditional 2.4 and 5 GHz bands used by conventional Wi-Fi. It is also true that the permitted transmit powers, up to 4 watts of equivalent radiated power, are higher than those of normal Wi-Fi. These two facts can deliver substantial coverage, provided white-space spectrum is available, which is not so evident, as we discussed in another post on this blog.

Beyond that, this Super Wi-Fi has little that is super about it, at least compared with a good outdoor radio standard. As we said yesterday:

“The 802.11af standard is basically an 11n Wi-Fi with some adjustments to work (somewhat better) in large outdoor environments, where there are many propagation paths”

Let's dig deeper.

The Wi-Fi standard (802.11 and its variants) was designed as a wireless extension of the LAN (IEEE 802) for indoor operation. Indoors, the following holds:

  • The delay spread between the different propagation paths is small, so inter-symbol interference (for example, between the data symbols carried by an OFDM subcarrier) is moderate.
  • For similar reasons, when the signal fades, the fade affects its entire bandwidth (for example, all of its OFDM subcarriers). This is known as flat fading.

For this reason, OFDM-based Wi-Fi (11g and 11n) was designed with relatively few subcarriers for the available bandwidth, each carrying a high data rate (no problem: there is little inter-symbol interference), with few pilot subcarriers per frequency (why bother, the fading is flat), and when a terminal seizes a Wi-Fi frame at the MAC layer it uses it entirely (“if I grab it, nobody else gets it, no matter whether other users are in better radio conditions”). The Super Wi-Fi 802.11af does essentially the same.

Wonderful indoors, but what can happen outdoors, where propagation distances are large? Let's look at the possible misadventures of “Super” Wi-Fi:

  • There will be more inter-symbol interference on each subcarrier. To mitigate it, the Wi-Fi will have to use a lower modulation order per subcarrier (e.g. drop from 64-QAM to 16-QAM), reducing the available data rate. It is no coincidence that radio interfaces designed expressly for outdoor operation (such as DVB-T or LTE) have many more, more closely spaced subcarriers.
  • The fading will not be flat: some subcarriers will reach the receiver well and others will not. Since the Wi-Fi transmitter has no idea which ones arrive well, sending all subcarriers with the same modulation, data rate and power wastes radio resources; data is sent on subcarriers that can never be received correctly. LTE “fills” only the subcarriers the receiver reports it will receive well.
  • If a “Super Wi-Fi” device seizes a frame, it will use it alone for that whole time, no matter whether other devices are in better conditions and could use part of that spectrum at the same time. In Wi-Fi “nobody keeps order” in distributing the radio resources, nor can a time slot be shared by several users at once. In LTE all of this is possible.
In short, IEEE 802.11af (no longer super, even in quotation marks) reuses the physical layer and part of the MAC of a very cheap standard, which is a good thing and allows much existing Wi-Fi technology to be reused (also a good thing, particularly for the IPR owners), but other radio interfaces are more “super” outdoors.

A Super LTE in white spaces will work better.

Cross-Platform Mobile Development Tools Analysis


Author: Alberto Crespell

Nowadays there are several mobile operating systems on the market, which hinders the creation of applications since each system uses a different programming language. To make an application reach as many users as possible, to avoid the problems fragmentation causes in the resources needed to develop such applications, and to minimise time to market, some companies offer solutions that aim to let developers write code once and deploy the application to all platforms.

This analysis focuses on evaluating the main solutions for writing cross-platform applications: PhoneGap, Titanium Appcelerator, Marmalade and Strawberry.

Each tool was analysed by first developing a simple application with the Android SDK (Java) and then trying to achieve the same results with each of the cross-platform frameworks mentioned.

The application implemented with the Android SDK shows the weather and forecast for any city in the world. When the application starts, a splash screen is shown while previously saved forecast data for a city is loaded, and for at least 4 seconds. If no forecast data is stored, a help screen is shown; otherwise, the forecast screen appears. In both cases there is a toolbar where the user can choose to load the weather forecast for the current location using GPS, or for any location by typing a city name. Forecast data and images are loaded from WorldWeatherOnline, and data is stored serialized to check whether the different frameworks support it. Finally, when the user taps any element of the forecast list, a cube animation is performed with the corresponding weather icon as a texture, using OpenGL.
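The splash-screen timing described above (dismiss only when the saved data has loaded and a minimum time has passed) boils down to two parallel waits. A minimal sketch, not the app's actual code; the function names and return values here are illustrative:

```javascript
// Resolve to the first screen only after BOTH the stored forecast has
// loaded and a minimum splash time has elapsed.
const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function splashThenStart(loadSavedForecast, minSplashMs) {
  const [forecast] = await Promise.all([loadSavedForecast(), delay(minSplashMs)]);
  // No stored data => help screen; otherwise go straight to the forecast.
  return forecast ? "forecast-screen" : "help-screen";
}
```

In the real application the minimum splash time would be 4000 ms.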


PhoneGap

PhoneGap provides an API to access device-specific features not available in HTML / JS, and some of these are unsupported depending on the device's operating system. No IDE is provided to help with development.

There is a cloud compilation service called “PhoneGap Build”, where developers can upload a compressed file with HTML / JavaScript / CSS code and other resources in order to get the binaries for the different operating systems. Some operating systems are not yet available for cloud compilation, the service is limited, and payment is required depending on the number of private apps, users and builds per month.
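For reference, the archive uploaded to PhoneGap Build is typically driven by a config.xml in the W3C widget format at its root. A minimal sketch; the id, name and version below are placeholders, not values from the analysed app:

```xml
<widget xmlns="http://www.w3.org/ns/widgets"
        id="com.example.weather"
        version="1.0.0">
  <name>Weather Demo</name>
  <description>Placeholder app descriptor for a cloud build.</description>
</widget>
```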

A key point of PhoneGap is that it helps develop an application for different OSs without having to learn different languages. But some caveats apply, because mobile applications are conceptually very different from web applications, and development methodologies and techniques have to differ accordingly.

The main problem with PhoneGap remains the lack of standardization across HTML / CSS implementations, which tend to differ between OSs and even between versions of the same OS. Browser fragmentation slows down the development process and sometimes forces dirty workarounds, even forcing different code depending on the target OS.
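One common way to cope with that fragmentation is to branch on detected capabilities rather than on the OS name. A minimal illustrative sketch (the capability object stands in for probing window.* in a real web view; this is not a PhoneGap API):

```javascript
// Choose a persistence strategy from whatever the runtime actually
// supports, falling back gracefully instead of sniffing user agents.
function pickStorage(caps) {
  if (caps.localStorage) return "localStorage";
  if (caps.webSql) return "webSql";
  return "cookie"; // lowest common denominator, available everywhere
}

console.log(pickStorage({ localStorage: true })); // "localStorage"
console.log(pickStorage({}));                     // "cookie"
```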

The cost of using standard HTML rendered in a native web view shows in the fact that some features available on every OS, but implemented in different ways, are not available when developing in HTML. For example: internationalization, native UI elements, native buttons and actions, the notification system, communication between different applications, etc.

The use of a scripting language on top of the native layer leads to lower performance.

Before using PhoneGap it is very important to know what kind of application is going to be developed, which native features will be used, and which OSs / versions are targeted.

There are performance problems when running tasks in the background (background downloads, synchronization, processing, etc.), as browsers run web pages in a single-threaded environment.

The application works as expected only on Android. On iPhone there are some layout errors and a problem managing the virtual keyboard; Bada has scrolling errors and does not support file access; webOS does not support files; on BlackBerry the JavaScript causes errors and there are layout errors too; and Symbian only renders the application as expected from Symbian^3 onwards.

Titanium Appcelerator

Titanium uses a Mozilla library to parse the JavaScript code and instantiate the native UI components dynamically at runtime on Android and iOS. The compilation process involves three stages (the JS code is not translated into native code):

–      Pre-compiler: processes the JavaScript code, optimising it (reducing whitespace, shrinking symbol names, etc.) and creating a dependency hierarchy of all the Titanium APIs used by the app.

–      Front-end compiler: generates the appropriate platform-specific native code and native project, and builds any specific code necessary to compile Titanium for the given platform's compiler.

–      Platform compiler & packager: launches the compiler and packager specific to each OS.

Titanium ships with a development IDE that provides autocompletion, deployment and debugging.

It does not use standard HTML + CSS + JS. The framework is based on JS + JSS (style sheets), which steepens the learning curve until the developer becomes familiar with the Titanium API and its way of implementing layouts.

The API documentation could be better: most properties are defined only by their data type (string, int, etc.) without specifying which values are valid (for example, font is defined as an object, and height is defined as a float although it can also be a string representing a percentage). Some methods and classes indicate their availability on iPhone and Android, but others do not. As a result, a lot of time was spent on the Titanium forum checking valid property values, finding out whether a feature works on Android, searching for workarounds, etc.

The overall impression is that Titanium is more mature on iPhone than on Android: it has more features and plugins, and is less buggy.

The Titanium framework includes platform-specific features: it enables the use of Android Intents and Services, provides folders for iOS-specific (iPhone / iPad) and Android-specific assets (drawables of multiple densities), and detects Android button events. On the other hand, enabling platform-specific features fragments the coding style with if statements that check whether the application is running on iOS or Android.

Despite allowing platform-specific features, some simple features in the common API are not supported on one platform even though the native platform covers them. For example, gradients on Android are not supported through Titanium, but they are with the native SDK.

The programming style is based on an object-oriented architecture.

Of all the time needed to create an application, about 80% is spent implementing UI layouts instead of implementing the logic of the application.

While the native and PhoneGap weather applications weigh around 200 / 300 KB, the Titanium one needs over 5 MB (the framework library includes apache http, mozilla, jax and w3c dom packages).

Titanium allows writing Android and iPhone applications in the same language, but this does not mean that code written once will work on both systems; a lot of work is required to get the same results on both platforms.


Marmalade

Marmalade is the re-branded evolution of the Airplay SDK, a cross-platform framework for building mobile applications in standard C++. It consists of two main components:

–      Marmalade System: an operating-system abstraction API. It features a C-style API that provides access to and control of core device services, events and device properties. The developer is thus entirely abstracted from a given platform’s programming model and relieved of having to consider device-specific issues or even install any device-specific development kits.

–      Marmalade Studio: it is a suite of tools and runtime components focused on high-performance 2D/3D graphics and animation:

It implements a kind of virtual OS or middleware that renders with OpenGL, which makes the application look the same regardless of the OS running on the device. The application can therefore be developed and debugged on the PC (x86) emulator and then deployed to devices with the same results guaranteed.

Applications have similar performance and their graphics look the same on every OS, from old phones to the latest smartphones. In standard applications the developer works on a virtual screen and does not need to worry about different layouts depending on screen size. On the other hand, the look & feel of the UI components is poor and does not match the native UI.

Marmalade comes with a UI Builder, which is currently in beta. When trying to design something fairly straightforward, such as a screen composed of a background, one image and some labels, it takes a lot of time reading docs and dragging elements and layouts around, and it is very difficult to get the expected result.

Marmalade is very different from the previous frameworks because it is based on C++ and requires developers with strong programming skills, while the other frameworks are based on web technologies and are more accessible to a wide range of developers or layout designers familiar with HTML, CSS and JavaScript.

There are still some important points to keep in mind when using Marmalade: devices with limited memory may behave unexpectedly when memory is not managed correctly, and devices with hardware limitations may have performance issues.


Strawberry

Strawberry is a cross-platform framework very similar to Marmalade, using C++ and OpenGL but adding support for instantiating the UI components of the target operating system, so applications have a native look & feel. It is currently in private beta, and the available licenses and their costs are not yet known. Being a beta, its support & documentation are very limited, but its C++ API is less complex than Marmalade’s and more focused on app development.

Development is only supported on Mac OS via Xcode IDE.

Although all programming is done in standard C++, interface layout is defined using HTML & CSS. This way developers can completely separate the logic from the presentation, and use different presentations for different kinds of devices.

The HTML / CSS code is not W3C compliant; it is totally custom and extensible, so there are non-standard elements that facilitate the design of the application interface (button, slider, scroll view, etc.).

Being able to extend the HTML tags facilitates the reuse of custom components.

Strawberry has a standalone player that allows parallelizing the work of the layout designer and the programmer.

Strawberry uses OpenGL for drawing the UI components, which makes the presentation consistent across different platforms and resolutions.

Internationalization is supported using Strings files, as on the iPhone.

Multiresolution images are supported via image packages. Each package contains the images for each supported resolution, and the framework loads at runtime the images with the closest resolution available in the package.
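As an illustration, the closest-resolution lookup described above could work roughly like this (a minimal sketch; the package format and scale values are hypothetical, not Strawberry’s actual API):

```python
# Hypothetical sketch of "load the closest available resolution":
# given the scale factors bundled in an image package, pick the one
# nearest to the device screen's scale factor.

def closest_resolution(available, device_scale):
    """Return the bundled scale factor closest to the device's."""
    return min(available, key=lambda scale: abs(scale - device_scale))

package = [1.0, 1.5, 2.0]                # scales shipped in the package
print(closest_resolution(package, 1.6))  # a 1.6x screen gets the 1.5x asset
```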


The analysis reveals three different types of frameworks:

-        Web based: frameworks using a native web view to render standard HTML / CSS / JS code, enabling access to some native capabilities (file system, GPS, etc.) on some platforms. These frameworks do not allow a native UI, but they can be a good option for rapid prototyping because they require little learning for a developer already familiar with HTML. However, the lack of standardization of HTML browsers across platforms, and across versions of the same platform, makes it difficult to achieve the same UI with the same code.

-        Translators: frameworks using a layer that translates custom HTML into native code, either at runtime or with a pre-compiler. These frameworks can use native graphic components and have better performance than the web-based ones. Starting with them does not require much learning because they are similar to standard HTML / CSS / JS.

-        Virtual SDK: frameworks implementing a virtual system. Like translators, they can have a native look & feel, but they require a learning process, as each is effectively a new operating system.

Cross-platform mobile frameworks are growing fast, but they are not yet mature enough to replace native development.

Before using a framework it is important to set the requirements of the application (from both the development and UX points of view) and the target platforms and versions, in order to ensure that the framework will be useful.

As said before, web-based frameworks can be useful for rapid prototyping but, as the application grows, it will most likely have to be replaced by a native development or another kind of framework.

On the other hand, other frameworks aim to make code more consistent across the different platforms / versions, and will ease code maintenance once the application is in production, because there is a single codebase for all platforms and fewer resources are required. Unfortunately, there is still a long way to go before one codebase can run on different platforms without platform-specific implementations to cover the different native user experiences.

There is also a limitation when new versions of the native systems appear: developers have to wait until the frameworks are updated. Some frameworks allow developers to write their own modules, so new features can be added, but this means extra work for the developer, and work that will be discarded once the framework adds the functionality.

For more details visit SlideShare @ http://www.slideshare.net/telefonicaid/cross-platform-mobile-dev-tools

Source code can be found @ https://github.com/telefonicaid/MobileCrossPlatformTools

Thinking about REDIS


Author: Marcos Reyes

Disclaimer: This post is not intended to be an exhaustive study of the whole REDIS functionality; if you are looking for detailed technical info, I recommend visiting the official documentation repository at http://redis.io/documentation.

Redis is yet another NoSQL database. Of course REDIS is fast; indeed, it is often used as an alternative to Memcache, since both use memory-based storage. But it must be noted that REDIS also provides full data persistence (though it really shines when the data fits in system memory).

Besides, REDIS has something special that sets it apart from the rest of the databases (SQL and NoSQL). More than a big key-value repository, REDIS is a data structure repository. Of course we have hash tables, but we can also take advantage of other kinds of high-level data structures like lists/queues, sets, sorted sets, counters or Pub/Sub channels, and all the ad-hoc abstractions needed to manipulate them.

Hashes have commands for single or multiple inserts, deletes and field retrievals: perfect for storing objects. Lists can push/pop elements, return a given range, or atomically move elements between different lists; moreover, they can act as high-performance queues for task dispatching. Sets can be intersected, subtracted, sorted… and so on.
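To make the "data structure repository" idea concrete, here is a toy in-memory sketch in plain Python (no networking, not the real client; the command names in the comments are the real Redis ones):

```python
# Toy in-memory model of the Redis data-structure commands mentioned
# above (HSET/HGET, LPUSH/RPOP, SINTER). Illustrative only.

db = {}

def hset(key, field, value):          # hashes: store object fields
    db.setdefault(key, {})[field] = value

def lpush(key, value):                # lists: push on the left...
    db.setdefault(key, []).insert(0, value)

def rpop(key):                        # ...pop on the right (a queue)
    return db[key].pop()

def sinter(key1, key2):               # sets: intersection
    return db[key1] & db[key2]

hset("user:1", "name", "Marcos")
lpush("tasks", "t1"); lpush("tasks", "t2")
db["tags:a"] = {"redis", "nosql"}; db["tags:b"] = {"nosql", "sql"}

print(db["user:1"]["name"])        # Marcos
print(rpop("tasks"))               # t1  (FIFO when used as a queue)
print(sinter("tags:a", "tags:b"))  # {'nosql'}
```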

REDIS also provides commands to manipulate strings, keys, the database itself and transactions (mostly based on atomicity; they cannot yet be applied to all transactional scenarios). It is worth taking a look at the commands at the official site http://redis.io/commands, where each one is clearly specified, including its expected order of complexity. It is probably the best way to learn what REDIS can do for you.

The difference may seem small, but it implies changing somehow our mental model of how we use our data storage and how we should allocate data in our software. The best part is that integrating REDIS in our code feels so natural that it is like an extension of it: a shared heap for all our processes. In this way REDIS is positioned as a database for developers, let’s say an “operational database”. It should be used more like we use the memory heap than like we use a classic database.

Anyway, the community is still figuring out how to use REDIS; it is so easy and flexible that it allows many valid interpretations in many scenarios. There is no single opinion on this, there is no methodology, and best practices are still appearing. What is clear is that if you need to “freely query” your information, you shouldn’t use REDIS. We can extend this to any NoSQL storage, where you should keep what you expect to retrieve, from a “developer point of view”. But if you need fast, shared and versatile storage for your operational and/or cached data, REDIS is a really nice option.


REDIS runs on an event loop with non-blocking I/O (everything seems to run on an event loop these days); this approach is yielding really nice results on other platforms (e.g. node.js) and it works well here too. REDIS is fast on read operations, and even faster on writes.

  • The Linux box runs Linux 2.6 on a Xeon X3320 at 2.5 GHz.
  • The test was executed using the loopback interface.

Results: about 110000 SETs per second, about 81000 GETs per second.

Of course, to obtain these metrics the accessed data must fully reside in main memory. If the dataset exceeds the available RAM, several scenarios may occur: if we need random access to the whole dataset, REDIS will hit paged memory too often, which will hurt performance badly. So it is mandatory to avoid this scenario, either by adding more sharded REDIS databases, or by redesigning the data model architecture so that “operational” data (fast and frequent usage) resides in REDIS, while “historical/business” data (slower and more sporadic usage) resides in another “traditional” database.
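The "more sharded databases" option usually means client-side partitioning: hash each key and route it to one of N REDIS instances. A minimal sketch (the hosts are placeholders and the modulo scheme is illustrative; real deployments often prefer consistent hashing):

```python
import zlib

# Hypothetical shard hosts; in practice these would be REDIS instances.
SHARDS = ["redis-a:6379", "redis-b:6379", "redis-c:6379"]

def shard_for(key: str) -> str:
    """Route a key to one of N shards with a stable hash (CRC32 % N)."""
    return SHARDS[zlib.crc32(key.encode()) % len(SHARDS)]

# The same key always lands on the same shard:
print(shard_for("user:42") == shard_for("user:42"))  # True
```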

Remark that if the REDIS data size exceed the Main Memory capacity but we only made a common usage of a subset of this data (that is, REDIS don’t need to paginate continuously) we can allow bigger data sets than memory without any expected drawback.

Distributing REDIS

As in many NoSQL databases, data may be distributed and/or replicated for faster access and fault-tolerance support. REDIS replication schemes are quite well defined in the current REDIS documentation:

Extract from http://redis.io/topics/replication

Redis replication is a very simple to use and configure master-slave replication that allows slave Redis servers to be exact copies of master servers. The following are some very important facts about Redis replication:

  • A master can have multiple slaves.
  • Slaves are able to accept other slaves connections. Aside from connecting a number of slaves to the same master, slaves can also be connected to other slaves in a graph-like structure.
  • Redis replication is non-blocking on the master side, this means that the master will continue to serve queries when one or more slaves perform the first synchronization. Instead, replication is blocking on the slave side: while the slave is performing the first synchronization it can’t reply to queries.
  • Replication can be used both for scalability, in order to have multiple slaves for read-only queries (for example, heavy SORT operations can be offloaded to slaves), or simply for data redundancy.
  • It is possible to use replication to avoid the saving process on the master side: just configure your master redis.conf to avoid saving (just comment all the “save” directives), then connect a slave configured to save from time to time.
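The last point maps to a few redis.conf lines; a sketch, assuming a hypothetical master at 192.168.1.10 (the directive names are the real ones from this REDIS generation, the host/port and thresholds are placeholders):

```
# master's redis.conf: comment out all "save" directives so the master
# never persists to disk itself
# save 900 1
# save 300 10

# slave's redis.conf: replicate from the master and persist there
slaveof 192.168.1.10 6379
save 60 100
```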

REDIS persistence

REDIS is as fast as a memory DB, but it has good persistence support. It is easy to set up several persistence triggers based on the number of modified keys and on timers, e.g. “flush to disk if more than 10 collections have been modified or 1 second has passed since the last disk store”. Moreover, every operation may be kept in an append log. So, in general, it is guaranteed that a consistent state can be fully recovered after a system crash (note: collections with an EXPIRE date may behave in a funny way in this case, watch out).
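The trigger example above corresponds directly to redis.conf snapshot directives (`save <seconds> <changes>`); a sketch:

```
# Sketch of redis.conf persistence triggers matching the example above:
save 1 10        # snapshot if at least 10 keys changed within 1 second
appendonly yes   # additionally keep every write in the append log
```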

Missing features and concerns

There could also be some integrity problems (a system crash in the middle of a MULTI transaction, or recovering the append log when volatile keys (EXPIRE) are involved). So don’t use REDIS as a “Store Value Account”. But for the rest of the problems it should be ok. Atomicity of single operations is guaranteed (and something like popping from one list and pushing onto another is also atomic, for example), and we can use the MULTI directive to execute a transactional block of commands. The WATCH command may be used as an optimistic mutex, enhancing transactions and rollbacks… It seems enough for a lot of problems.

Another related drawback is that right now it is not possible to chain REDIS commands in a single transaction, i.e. to use the output of one command as the input of another, e.g. retrieving a range of hash keys from a LIST and then obtaining the information stored in all of them. To do so you need to make several simple requests to REDIS: first to obtain all the keys from the list, and later to retrieve the data stored under those keys.
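The two-step pattern looks like this in practice (sketched here against a plain-Python dictionary standing in for the server; with a real client it would be an LRANGE followed by one HGETALL per key):

```python
# Two round trips: first fetch the keys stored in a list, then fetch
# the hash stored under each key. The dict is a stand-in for REDIS.

store = {
    "recent": ["user:1", "user:2"],   # LIST of hash keys
    "user:1": {"name": "Ana"},        # hashes stored under those keys
    "user:2": {"name": "Luis"},
}

keys = store["recent"]                # round trip 1: LRANGE recent 0 -1
users = [store[k] for k in keys]      # round trips 2..n: HGETALL <key>

print([u["name"] for u in users])     # ['Ana', 'Luis']
```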

So using REDIS you may find yourself making several command calls for “queries” that you would normally solve with a single expression in other kinds of databases. Of course this could indicate wrong data modelling; sometimes “keep what you need, like you need it” is the best option to avoid this problem. But this can lead to a lot of data replication, which is not evil anymore… but must be kept under control and handled with care.

Notice that stored procedures (based on Lua) solving these issues are implemented in the current unstable versions.


Right now REDIS seems to be the only one of its kind. REDIS combines ease of use, flexibility, performance and a predictable execution model. It is much easier to know “what is happening inside” than in any “traditional SQL database”.

REDIS is not for all scenarios. For sensitive data you will be more comfortable coexisting with a traditional DBMS, so if you need an ACID database, REDIS shouldn’t be your only option. Establishing the frontier between what is and what isn’t sensitive data is not always easy, but trying to separate them is a good exercise and a good performance hint.

REDIS seems a really valuable product for high-performance, highly distributed systems. It can play many roles within a cloud-aware architecture, acting as a scalability enabler (notice that VMware is its exclusive sponsor), a cache, a task dispatcher… In fact, this Swiss-army-knife approach could be a problem: once you start using REDIS you may feel it is suited for almost everything, so it is important to keep in mind that there are no silver bullets.