
Archive for the category 'Information Society'

Virtualizing sensor networks: VITRO practical scenarios


Author: Javier Lucio

The concept of virtualizing sensor networks was explained in an earlier post.

That post introduced and described the virtualization of sensor networks and the EU FP7 project VITRO.

Now, six practical scenarios covering the virtualization concept are described here, both to aid understanding and to open the door to conceptualizing new services, functionalities and business models in the development, deployment and use of sensor networks.

Each of the selected scenarios is intended to demonstrate relevant VITRO functionalities. Namely, we concentrate on service virtualization, resource virtualization, service composition, service provisioning and security, among other functionalities.

Scenario 1: Service virtualization

A virtual service is obtained by extending, merging and reconfiguring the behaviour of existing services and by composing involved resources in a transparent way with respect to the underlying technology.

A number of different sensor islands are connected. These heterogeneous islands are administered by different administrative domains and provide access to existing services and resources, e.g. energy-efficient smart schools and buildings in general. The end-user would be able to monitor observations and measurements of room temperature, humidity and lighting for multiple buildings at disparate locations.

The Wireless Sensor Islands (WSIs) are deployed in order to form a unique Virtual Sensor Network (VSN). Each WSI is equipped with different types of sensors, so the information for a given observation metric may come from any one of the WSIs.

 

Scenario 2: Service virtualization II

This scenario consists of the formation of two Virtual Sensor Networks (VSNs) using the same sensor network infrastructure. Two VSN instantiations are formed within the same Wireless Sensor Island (WSI). A sensor participating in the first VSN, e.g. providing one of its sensing capabilities, can also be part of the second VSN, providing a supporting service such as routing the second VSN's data packets.

This service virtualization scenario can be divided into two cases: 1) two users request data from the same node, and 2) users request data from different nodes, but one or more of the traversed nodes serve as routers supporting both network instances.

This scenario could be applicable to use cases such as controlling the traffic lights and managing the traffic in a city.
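To make the node-sharing idea concrete, here is a minimal sketch (invented for this post, not taken from the VITRO implementation) of a node that plays a different role in each of the two VSN instances it belongs to:

    # Hypothetical sketch: one physical node participating in two VSN
    # instances, sensing for the first and routing for the second.

    class SensorNode:
        def __init__(self, node_id):
            self.node_id = node_id
            self.roles = {}  # vsn_id -> "sensing" or "routing"

        def join_vsn(self, vsn_id, role):
            self.roles[vsn_id] = role

        def handle(self, vsn_id, packet):
            role = self.roles.get(vsn_id)
            if role == "sensing":
                # Serve a reading from one of its sensing capabilities.
                return {"node": self.node_id, "vsn": vsn_id, "temp_c": 21.5}
            if role == "routing":
                # Forward the packet unchanged on behalf of the second VSN.
                return {"forwarded_by": self.node_id, "vsn": vsn_id, **packet}
            raise ValueError(f"node {self.node_id} is not part of VSN {vsn_id}")

    node = SensorNode("n7")
    node.join_vsn("vsn-1", "sensing")
    node.join_vsn("vsn-2", "routing")
    print(node.handle("vsn-1", {}))
    print(node.handle("vsn-2", {"src": "n3", "dst": "gateway", "payload": "..."}))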

 

Scenario 3: Resource virtualization

The third scenario demonstrates the concept of node resource virtualization in order to provide relaying capabilities and form a Virtual Sensor Network (VSN) able to recover from network disconnections.

A Delay Tolerant Networking (DTN) scheme is used when a formerly connected Wireless Sensor Island (WSI) becomes disconnected for a long period of time and is therefore partitioned into a number of smaller WSIs. DTN basically relies on the mobility of some nodes to deliver data among these WSIs and establish end-to-end communication.

This scenario could be applicable to the management of disasters, or to security in public spaces, where recovering network connectivity is an issue. For that, the scenario employs DTN mechanisms that support the delivery of data when end-to-end connectivity within a VSN is intermittent or unavailable.
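The post does not prescribe any particular DTN implementation, but the store-carry-forward behaviour it relies on can be sketched in a few lines (all names here are illustrative):

    # Illustrative DTN sketch: a mobile node buffers messages while the
    # partitions are disconnected and hands them over on physical contact.

    class MobileRelay:
        def __init__(self):
            self.buffer = []

        def visit(self, island):
            # Pick up everything queued for other, currently unreachable islands.
            outgoing = island["outbox"]
            island["outbox"] = []
            self.buffer.extend(outgoing)
            # Drop off messages addressed to this island.
            for m in [m for m in self.buffer if m["dst"] == island["name"]]:
                island["inbox"].append(m)
                self.buffer.remove(m)

    island_a = {"name": "A", "outbox": [{"dst": "B", "data": "temp=20C"}], "inbox": []}
    island_b = {"name": "B", "outbox": [], "inbox": []}

    relay = MobileRelay()
    relay.visit(island_a)     # picks the message up in partition A
    relay.visit(island_b)     # delivers it once it reaches partition B
    print(island_b["inbox"])  # [{'dst': 'B', 'data': 'temp=20C'}]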

 

Scenario 4: Security related virtualization

This scenario is based on trust-aware routing protocols, and could be applicable to services such as monitoring production in sales chains or security in public spaces.

The scenario deals with the case where a Wireless Sensor Island (WSI) contains some malicious nodes. The routing metrics utilized by the routing protocol to establish the routes along which data packets are forwarded will be shown to efficiently detect malicious nodes (e.g. grey or black nodes that drop part or all of their incoming traffic) and re-route the data traffic around them. This scenario also demonstrates the effectiveness of excluding a node that does not support encryption capabilities, providing robustness and attack resistance to the WSI.
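VITRO's exact routing metric is not given in this post; as a sketch of the idea, a trust score could be as simple as the ratio between the packets a neighbour was observed to forward and the packets it was asked to forward:

    # Hypothetical forwarding-ratio trust metric. Grey holes (dropping part
    # of the traffic) and black holes (dropping all of it) score low and
    # are excluded from the candidate next hops.

    TRUST_THRESHOLD = 0.8

    observed = {
        "n1": {"asked": 100, "forwarded": 98},  # well-behaved node
        "n2": {"asked": 100, "forwarded": 40},  # grey hole
        "n3": {"asked": 100, "forwarded": 0},   # black hole
    }

    def trust(stats):
        return stats["forwarded"] / stats["asked"] if stats["asked"] else 1.0

    trusted_next_hops = [n for n, s in observed.items() if trust(s) >= TRUST_THRESHOLD]
    print(trusted_next_hops)  # ['n1'] -- traffic is re-routed around n2 and n3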

Scenario 5: Service composition

This scenario consists of the composition of a new high-level service from a set of available primary services. For example:

  • Composition of a new service called “fire detection” from two primary services: CO2 measurements and temperature/light intensity measurements.
  • Composition of a “public security” service from motion-detection and light intensity observation services.

In both cases, a wireless sensor network (WSN) is formed by a set of nodes (CO2 and light sensors, or motion-detection and light sensors) which are deployed in order to monitor a given area. The sensors run the CoAP protocol, so they are able both to periodically send data to a specific CoAP client that queries the information and to answer specific requests.
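As an illustration of the CoAP interaction, here is a minimal client sketch using the aiocoap Python library (the sensor address and resource path are invented):

    import asyncio
    from aiocoap import Context, GET, Message

    async def read_sensor(uri):
        # One-shot CoAP GET against a sensor resource.
        protocol = await Context.create_client_context()
        request = Message(code=GET, uri=uri)
        response = await protocol.request(request).response
        return response.payload.decode()

    # Hypothetical sensor address and resource path.
    print(asyncio.run(read_sensor("coap://[2001:db8::1]/sensors/co2")))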

The nodes are connected to a gateway which acts as a bridge between the sensors and the service provider (SP) in charge of processing the data received from the WSN, combining the observations carried out by the WSN and, eventually, raising an alarm if a target event is detected. In the “fire detection” scenario, the SP aggregates the light intensity and temperature data it receives; when both values become too high, exceeding a given threshold, an alarm is raised. In the “public security” scenario, the behaviour of the SP is similar: the sensors publish the measurements they collect, and the SP processes the received data, notifying the end-user if an event is detected.
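The composition logic on the SP side can be pictured with a short sketch (the threshold values and field names are invented for illustration):

    # Sketch of the "fire detection" composite service: combine two primary
    # observation services and raise an alarm when both exceed a threshold.

    TEMP_THRESHOLD_C = 60.0
    LIGHT_THRESHOLD_LUX = 10000.0

    def fire_detection(temperature_c, light_lux):
        return temperature_c > TEMP_THRESHOLD_C and light_lux > LIGHT_THRESHOLD_LUX

    reading = {"temperature_c": 72.0, "light_lux": 15000.0}
    if fire_detection(**reading):
        print("ALARM: possible fire detected")  # notify the end-user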

Scenario 6: Service provisioning

This last scenario is devoted to demonstrating the continuation of service provisioning under adverse circumstances. Take as the example a composite service providing average temperature measurements from a set of temperature sensors in a room or another predefined area. The scenario demonstrates the robust and continued provision of this service in the event that one of the sensors participating in the corresponding Virtual Sensor Network (VSN) goes down (for example, due to energy depletion or malfunction). The system shall be able to select another sensor in the vicinity, or a group of sensors, that provides an equivalent operational service and integrate it into the VSN, thus allowing service provisioning to continue, transparently to the end-user.

An end-user requests from a Service Provider an average temperature monitoring service for one or more locations, additionally specifying the number of sensors that will provide the measurements per location. For example, the user can require that the average temperature result be considered reliable only when it is calculated from 5 measurements taken by different sensors at a location.

After the service is initiated and the end-user receives the first results for the average temperature at each location, some sensors involved in the formed VSN may become unavailable (e.g. when they have exhausted their power resources and shut down). In this case, based on a “disappearing node” event, the gateway should be able to detect that the required number of sensors supporting the VSN is no longer met, and query the “Resource and Services Controller” for other nodes within its Wireless Sensor Island (WSI) that can also provide temperature measurements. Finally, the required number of these equivalent temperature-sensing nodes will be tasked to join the VSN, so that the service continues to be provided to the end-user uninterrupted.
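A sketch of this failover logic (the names and interfaces are illustrative, not VITRO's actual ones):

    # Keep the VSN at the required number of temperature sensors, replacing
    # any node that disappears with an equivalent nearby candidate.

    REQUIRED_SENSORS = 5

    def maintain_vsn(members, candidates, required=REQUIRED_SENSORS):
        members = [s for s in members if s["alive"]]  # "disappearing node" event
        while len(members) < required and candidates:
            members.append(candidates.pop(0))         # task an equivalent node
        return members

    def average_temperature(members):
        return sum(s["temp_c"] for s in members) / len(members)

    vsn = [{"id": i, "alive": i != 2, "temp_c": 20.0 + i} for i in range(5)]
    spares = [{"id": 9, "alive": True, "temp_c": 22.0}]

    vsn = maintain_vsn(vsn, spares)
    print(len(vsn), average_temperature(vsn))  # service continues uninterrupted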

Fast, open and smart: Why Firefox OS is a game-changer


By Carlos Domingo, Director, Product Development & Innovation, Telefónica Digital

Originally posted on the Telefónica Digital blog

 

The unveiling today of Mozilla’s Firefox OS is another big step forward – for smartphones, for HTML5 and for bringing the wonders of mobility to the entire world and not just the privileged few. Working with Mozilla, we showed off the technology at Mobile World Congress earlier this year and are proud to have played a part in what we believe will be a very compelling and useful new option for customers.

By focusing on HTML5, Mozilla has created a platform for very fast yet affordable devices based on open source technology.

Firefox OS combines HTML5 with a Linux kernel, allowing every function of the mobile phone to be developed as a web app and run through a special version of the Firefox web browser. Making a call, sending a text or taking a photo would all be done by an HTML5 app.

The result is a better experience and better performance, even at low-end price points. And this is crucial if we're going to get smartphones into the hands of more and more customers in the developing world. Take Brazil, where current smartphone penetration is approximately 14% and only a small percentage of people have the money to buy top-end smartphones from the mega-brands. Yes, there are low-end Android devices for $100, but they tend to use older versions of the OS which rapidly become out of date, and their performance is not that great.

Firefox OS can deliver a far better experience at these price points, and that's why we are so keen to bring it to markets like Brazil.

The early signs for Firefox OS are positive: strong carrier and device support. But also, the openness of the architecture bodes well for the third leg of the stool: developer support. The history of technology suggests that being open and offering great value are common ingredients of success – just look at how Mozilla drove innovation in web browsing through Firefox.

And that’s why we’re bullish on the prospects for Firefox OS.

Follow Telefonica Digital on Twitter @tefdigital for more updates

FINSENY: Future Internet Technologies for Smart Energy (II)


FINSENY Mission & Strategy:

The FINSENY mission and strategy define what the project plans to do and how it will be accomplished:

« Demonstrate, by 2015, how open Future Internet Technologies can enable the European energy system to combine adaptive intelligence with reliability and cost-efficiency to meet, sustainably, the demands of an increasingly complex and dynamic energy landscape. »

FINSENY is specifying use cases, ICT requirements and architectures in the Smart Energy domain for five scenarios which have been identified as strongly benefiting from Future Internet technologies:

a) Distribution Networks,
b) Microgrids,
c) Smart Buildings,
d) Electric Mobility and
e) Electronic Marketplace for Energy.

Distribution System (DS):

Problem Statements

  • The increasing amount of volatile generation from Distributed Energy Resources (DER) in the Distribution System will change today's well-planned, standard-load-profile-based operation into a dynamic, demand-response-based approach driven by actual status information from the medium- down to the low-voltage parts of the grid.
  • Energy flows are increasingly bidirectional, intensifying personal safety and grid overload issues and demanding an increase in the efficiency of energy distribution.
  • New smart energy applications are to be incorporated which balance varying generation and demand levels while optimally utilising grid resources.
  • Ubiquitous ICT solutions need to be defined, designed, financed, built, operated & maintained (see design principles in the above section).

Mission

  • “Design a future ICT solution for Distribution System automation & control to increase energy quality, reliability, robustness and safety and to ease integration of Distributed Energy Resources.”

Focus

  • Automated fault restoration, power analysis & control, grid maintenance.

Customers and Benefits

  • DSOs will get the solutions to optimally handle grid capacity and energy flows.
  • Service providers will get the interfaces to provide innovative energy services.
  • Prosumers can optimise their generation and consumption based on a stable distribution grid.

Key factors to judge the quality of the outputs

  • Reliability, safety, security and cost-efficiency of the solutions.

Key Features and Technologies

  • Decentralised operation, connectivity and control by scalable ICT solutions.

Outputs Critical Factors

  • Interoperability and integration (with legacy systems), scalability, allowing both centralised and de-centralised control, open & secure ICT solutions.

Microgrid:

Problem Statements

  • How to operate local low voltage or even mid-voltage distribution systems with Distributed Energy Resources (DERs) and storage devices to satisfy demand of energy consumers in an autonomous (islanding mode) or semi-autonomous (interconnected to the main grid) way.
  • How to build the Microgrid platform on Future Internet technologies to be more cost-efficient, flexible, secure, scalable, reliable, robust and modular.

Mission

  • “Design a reliable and cost-efficient Microgrid platform which ensures flexibility, scalability and robustness. The design will be modular and applications/services will be loosely coupled. Devices in or at the edge of the grid (e.g. DERs) will be easily integrated and control/communication networks will be managed to ensure the right level of QoS.”

Focus

  • Microgrid Control Centre and interface to the control and communication network to operate the system and integrate prosumers with e.g. DERs or Smart Buildings.
  • Configuration, monitoring & control, data management & processing.

Customers and Benefits

  • The Microgrid operator has a flexible Microgrid platform to deploy in its environment.
  • Prosumers have an aggregation platform to include their DERs and flexible demand.

Key factors to judge the quality of the outputs

  • Internet of Things technologies for device & resource management, connectivity services at the interface to networks, data management and security.

Key Features and Technologies

  • Decentralised operation, connectivity and control by scalable ICT solutions.

Outputs Critical Factors

  • Regulatory hurdles exist, but the approach is already aligned with the governmental goals of an increased share of renewable energy and higher reliability, while providing cost-efficiency.

Smart Buildings:

Problem Statements

Comprehensive building energy management:

  • With the combined goals of building-scale optimisation (local source-load-storage balancing and efficiency) and grid-scale optimisation (demand-response).
  • Under constraints of scalability, separation of concerns and auto-configuration.

Mission

  • “Design of future comprehensive Building Energy Management Systems as flexible edge of the Smart Energy system and as key element for shared Future Internet platforms.”

Focus

  • Make it possible to monitor and control all energy-relevant building subsystems, appliances and other physical entities operating on top of a shared platform, a building “operating system” provided to all building applications.
  • Managing all entities through common interfaces based on a generic model akin to that of a peripheral driver in a computer operating system.

Customers and Benefits

  • Building owners and stakeholders
  • Facilities managers and building services providers
  • Building end-users
  • All shall benefit from a horizontal building energy management system that interoperates fully with other building automation systems operating on top of the same shared building operating system.

Key factors to judge the quality of the outputs

  • Use of a smart building “operating system” providing an interface to the building's physical entities and a common service layer to be shared by all building applications.

Key Features and Technologies

  • Provide information models and interfaces that encompass all energy-relevant legacy building hardware and equipment. Demonstrate corresponding monitoring and control interfaces for key types of such equipment.
  • Provide information models and interfaces that make it possible to interoperate with existing building ICT systems. Demonstrate corresponding monitoring and control interfaces for such systems.
  • Specify application layer that combines local and global energy optimisation.

Electric Vehicles:

Problem Statements

  • As the number of electric vehicles on our roads increases, charging them will become a major load on the electricity grid. Its management poses challenges to the energy system, while also offering a contribution to balancing the volatility of energy generation from renewable sources.
  • The provision of a seamless infrastructure for charging electric vehicles in Europe poses challenges to the transport, energy and payment infrastructure owners, as well as to regulators. At the same time, electric vehicles, if connected wirelessly to the transport infrastructures, offer the potential to support multi-modal transport solutions.

Mission

  • “Design Smart Energy solutions so that electric vehicles will be an integrated part of the energy infrastructure, maximising their benefits to the energy infrastructure.”

Focus

  • Defining the role that electric vehicles can play in the Smart Energy infrastructure.

Customers and Benefits

  • Energy stakeholders are provided with scenarios for integrating electric vehicles into their plans for evolution towards Smart Energy solutions.
  • Energy stakeholders are given an overview of the ICT requirements and functional architecture issues from the perspective of electrical vehicle usage.
  • Users can charge in a user friendly way and can use electric vehicles as part of multi-modal transport solutions.
  • Energy stakeholders could possibly use the control of charging times for the vehicles to assist in energy grid management.

Key factors to judge the quality of the outputs

  • Defined ICT requirements and functional architecture enabling the integration of electric vehicles into the energy infrastructure.

Key Features and Technologies

  • Solutions that scale as the number of vehicles grows; access to services wherever the user is, via cloud computing and defined network interfaces; wireless and fixed converged networks; infrastructure as a service.

Outputs Critical Factors

  • Open interfaces and secure ICT solutions are needed. The high speed of change in the market as the commercial side of electric vehicles develops is already evident, creating timing issues for the introduction of common solutions. Regulatory issues will play a key role in this emerging market.

Electronic Market Place for Smart Energy:

Problem Statements

  • Those who are going to participate more actively in the energy supply, such as DSM customers, prosumers, Microgrids and DERs, need a kind of electronic marketplace (for information and services). These services should be offered via the Future Internet. This electronic marketplace could, in particular, provide information useful for balancing supply and demand and for checking grid restrictions.

Mission

  • “Design ICT systems to extend web based energy information, demand shaping and energy trading services for the emerging energy market players.”

Focus

  • ICT systems to enhance contract negotiation, competitive price awareness and energy trading, also at regional level.

Customers and Benefits

  • Final energy customers should be more aware and have a broader choice of energy supply; energy trading at micro level will be available to new prosumers, along with better management of grid stability and planning for utilities.

Key factors to judge the quality of the outputs

  • Very flexible and secure web based energy services.

Key Features and Technologies

  • Large scale data gathering and management via web, Internet of Energy linked objects and customers-prosumers.

Outputs Critical Factors

  • The marketplace trustworthiness perceived by energy stakeholders will be fundamental, together with user engagement, via the Internet, in the Smart Energy services promoted in FINSENY.

Conclusion:

FINSENY is a Future Internet (FI) project studying innovative new FI technologies in order to apply them to the Smart Energy landscape. The need for more ICT described here is widely agreed upon in the Smart Energy community as necessary to meet the challenges of the envisioned energy system. Future Internet technologies offer several opportunities for Smart Energy solutions, including connectivity, management, service enablement and distributed intelligence, as well as security and privacy.

In the FINSENY project, key players from the ICT and energy sectors teamed up to analyse the most relevant Smart Energy scenarios, identify prominent ICT requirements, develop reference architectures and prepare pan-European trials. As part of the FI-PPP, FINSENY will demonstrate over the three phases of the programme that Smart Energy needs can be fulfilled through an ecosystem of generic and Smart Energy specific ICT enablers running on top of an open Future Internet platform.

FINSENY will shape the European Future Internet ICT platform(s) to support the emerging European Smart Energy era. The growing Smart Grid Stakeholder Group will provide broad visibility of the on-going project work in the energy community, enhancing the acceptability of the project results and facilitating the development of the Smart Energy landscape.

FINSENY: Future Internet Technologies for Smart Energy (I)


Author: Javier Lucio

Planning of the future energy supply means defining optimum trade-offs between reliability, sustainability and costs. The increasing use of renewable energy sources is creating new challenges for energy providers. Peaks in energy generation are happening more frequently and require new solutions to maintain the reliability of the supply. At the same time, users are being empowered to take an active role in the energy arena as prosumers and operators of micro-grids.

Future Internet technologies will play a critical role in the development of Smart Energy infrastructures, enabling new functionality while reducing costs.

Telefónica I+D is participating in the FINSENY project, where key actors from the ICT and energy sectors team up to identify the ICT requirements of Smart Energy systems. This will lead to the definition of new solutions and standards, verified in a large-scale pan-European Smart Energy trial. Project results will contribute to the emergence of a sustainable Smart Energy infrastructure, based on new products and services, to the benefit of all European citizens and the environment.

As part of the FI-PPP programme, FINSENY will analyse energy-specific requirements, develop solutions to address these requirements, and prepare for a Smart Energy trial in phase two of the programme.

FINSENY vision:

« A sustainable Smart Energy system in Europe, combining critical infrastructure reliability and security with adaptive intelligence, enabled by open Future Internet Technologies »

This Vision statement provides the context for the FINSENY project. It is based on the likely and potential evolution of both Smart Energy and the Future Internet, as well as their interaction.

How is the Smart Energy landscape likely to evolve?

There are 3 different paths in the evolution of the energy system in Europe in terms of its information system, provision system and market system. The broad trends are from centralised to decentralised control and generation, in parallel with market liberalization and open energy markets.

 

 

The evolution of the Smart Energy landscape will differ from country to country, and is subject to a level of uncertainty which increases as one looks further ahead in time. With the focus on ICT for Smart Energy, it is beyond the scope of FINSENY either to predict the likelihood of each evolutionary step or to evaluate the different combinations. However, it seems clear that the Smart Energy system of the future will include the following critical features:

  • Reliability – minimal interruptions to supply at all customer levels.
  • Safety – all members of society will be protected from dangerous occurrences.
  • Security – ensure compliance in the use of information and protect the network from unwanted intrusions, whether physical or cyber.
  • Adaptability – be capable of operating with a wide mix of different energy sources and be self-healing through decision-making on a local level.
  • Utilisation – improved utilisation of assets through monitoring and control.
  • Intelligence – the gathering and management of information relating to customers and assets throughout the network and using such information to deliver the features above.

The evolutionary process will begin with local smart solutions to specific problems relating to the mix of energy sources, demand management, electric vehicles (EVs) and/or the operation of Microgrids. Solutions capable of supporting local deployments must be scalable, so that the local solutions can join together to form larger and larger smart networks. The speed of such growth will be governed by many factors relating to the policy within an energy company, regulatory conditions nationally and across Europe, the growth and cost of renewable energy, the proliferation of EVs and, above all, the choices customers make, which the utility cannot influence but must respond to in order to deliver the critical features identified above.

Taking this a step further, the customer, whether large or small, will be a major player in the Smart Energy networks of the future. This is considered in the FINSENY scenarios and use cases.

How is the Future Internet likely to evolve?

The ICT landscape exhibits very short innovation cycles and is continuously evolving. A number of new trends are observable and already shaping upcoming ICT industry solutions. These developments can be summarized by the term “Future Internet”:

  • Evolution of communication networks – LTE (4G) in the wireless domain as well as new wired technologies (e.g. Fibre-to-the-X) offer not only increased bandwidth but also different Classes of Service approaching real-time requirements. Furthermore, with innovations in network virtualisation, new flexibility for network control emerges.
  • Internet of Things – New mechanisms are being developed to easily manage huge numbers of interconnected devices which interact with their environment. Sensor data can be collected, aggregated, processed and analysed to derive contextual awareness and improved control decisions.
  • Internet of Services – Facilitates the establishment of complex business relationships between multiple stakeholders, paving the way for innovative business applications.
  • Cloud Computing – Private or public, it supports a transition of business models towards the “as a service” paradigm.

Additionally, with steadily increasing volumes of data, a combination of these technologies supports the exchange, processing and analysis of massive amounts of data to derive useful information. Finally, the Future Internet includes the seamless integration of this plethora of new technologies to realise secure solutions for systems of increasing complexity.

How can the Future Internet enable Smart Energy?

The ICT challenge of Smart Energy is to exchange information across multiple domains, among devices and between subsystems of diverse complexity. In addition to interoperable and standardized communications between such elements, future Smart Energy systems will rely on the availability of access to, and correct configuration of, systems across ownership and management boundaries, such as between energy management systems, energy markets and electricity distribution with distributed resources. Interactive customers with smart meters, building energy management systems, intelligent appliances and electric vehicles have to be integrated. Future Internet technologies offer several opportunities for Smart Energy solutions, including:

Connectivity – The Future Internet will bring end-to-end connectivity between a large variety of grid elements, including distributed energy resources, building energy management systems and active resources such as electric vehicles. For a general and cost-effective approach, the use of common and public communication infrastructures has to be targeted. While current 2G/3G networks are sufficient as access technologies for first-generation Smart Grid applications such as metering, LTE, IPv6 and other Future Internet communication technologies offer the capabilities needed for demanding and delay-sensitive applications. For certain Smart Energy applications, real-time communication plays an important role in fulfilling the requirements for synchronisation and guaranteed reaction times of control actions. Advanced Future Internet forwarding and control plane solutions have to be considered in order to fulfil these requirements. Network virtualisation techniques can provide the means to run dedicated Smart Energy communication networks, e.g. for mission-critical communication, on top of public infrastructures.

Management – Smart Energy introduces a large number of new managed elements, with dramatically increased data volumes in the network and in data centres, resulting in additional management burden, complexity and cost. There are opportunities to utilise elements of Future Internet architectures and concepts in Smart Energy management: (i) device management and flexible object registries/repositories support mass provisioning and software maintenance; (ii) flexible, secure data management covers aggregation, correlation and mediation; (iii) network management evolves towards scalable multi-tenant (cloud) operations where organisations can manage their own objects as required; (iv) service-enabling platforms will simplify service application development; (v) local management and decentralised data processing solutions will support Microgrids and islanding operation modes; (vi) telecom billing and rating solutions already support huge capacities and great flexibility for the various post- and prepaid business models, and dynamic load- and time-based pricing of energy can easily be added.
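As a toy illustration of point (vi), time-based pricing is essentially a rating loop over metered consumption (the tariff values and peak hours below are invented):

    # Illustrative time-of-use rating sketch; tariffs and hours are made up.

    TARIFF_EUR_PER_KWH = {"peak": 0.25, "off_peak": 0.10}
    PEAK_HOURS = range(8, 22)  # 08:00-21:59 billed as peak

    def rate(hourly_kwh):
        # hourly_kwh: 24 smart-meter readings, one per hour of the day.
        total = 0.0
        for hour, kwh in enumerate(hourly_kwh):
            band = "peak" if hour in PEAK_HOURS else "off_peak"
            total += kwh * TARIFF_EUR_PER_KWH[band]
        return round(total, 2)

    consumption = [0.3] * 8 + [1.2] * 14 + [0.3] * 2
    print(rate(consumption), "EUR")  # 4.5 EUR for this example day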

Service enablement – The Future Internet provides novel technologies for instant collaboration between suppliers, network operators and prosumers. Timely, reliable and highly confidential information on the status of the grid will become accessible to all relevant stakeholders. But beyond the monitoring currently performed with smart meters, the Future Internet enables new web services based on bi-directional communication and interaction between suppliers and consumers: demand response, balancing and ancillary services, dynamic pricing, and the buying and selling of power are just a few of the promising future applications that will be enabled by advanced ICT solutions.

Decentralised and distributed intelligence of the grid – Future Internet technologies will introduce new techniques in hardware, and even more in software, effectively injecting intelligence into the grid. The electricity system that we inherited from the 19th and 20th centuries has been a reliable but centrally coordinated system. With the liberalisation of European markets and the spread of local, distributed and intermittent renewable energy resources, top-down central control of the grid no longer meets modern requirements. Tomorrow's grid needs decentralised means of information, coordination and control to serve the customer. ICT is essential to achieving these innovations, as we have already seen in today's networks – telecommunication networks and the Internet itself being noteworthy examples.

Security & privacy – Electricity grids are a critical public infrastructure, so it is very important to underline the importance of security and trust, expressed as both reliability and privacy. Future Internet technologies will provide new and improved means to support security and privacy. Authentication and integrity protection of the control communication and data exchange play an important role in Smart Grid operation. The highest security standards have to be applied, especially for mission-critical infrastructure. The privacy of European Union citizens' personal lives will be considered in depth, as any privacy loophole will strongly impact the public acceptance of Smart Energy services and solutions. Future Internet privacy solutions have to be provided especially for the Smart Building and Electric Mobility scenarios, and whenever user data is handled.

The Future Internet will enable Smart Energy systems by fulfilling the stringent requirements of the energy system and by providing high quality: reliability, scalability and security.

 

Design principles for ICT to enable Smart Energy

From the FINSENY vision it is clear that future Smart Energy systems will be guided by a set of design principles, which in turn impose a set of design principles on the corresponding ICT (and FI) architecture.

The list of the prominent Smart Energy design principles is:

  • Openness to new service providers and business models.
  • Flexibility to incorporate new customers (generation and consumption) as well as mobile loads.
  • Decentralisation of control to support distributed generation and consumption.
  • Introduction of autonomous energy cells (e.g. Microgrids, Smart Buildings).
  • Security and safety mechanisms on all customer-, system- and service-levels.
  • Automation of critical control processes.
  • High reliability and availability of all systems and services.
  •  Cost efficiency.

These future Smart Energy design principles can be translated into corresponding principles for the supporting ICT infrastructures. These deduced ICT and Future Internet design principles will help the developers of enablers to tackle the right priorities while conceptualising the functional blocks of the solutions envisaged within the FINSENY domains. A selective set of ICT and Future Internet design principles is:

  • Open (and standardised) Interfaces guarantee the compatibility and extensibility of the system.
  • Simplicity limits system complexity and improves maintainability.
  • Flexibility allows for adaptability and loosely coupled systems.
  • Scalability ensures that the system continues to function under changes in e.g. volume or size without affecting its performance.
  • Modularity promotes vertical encapsulation of functions.
  • Maintainability and Upgradability lead to suitable manageable and sustainable designs.
  • Security & Privacy by Design comprises the complete system development process.
  • Support of Heterogeneity is a major design principle.
  • Reasonable Dimensioning to optimise cost-efficiency without compromising overall performance.
  • Robustness ensures that systems survive failures.
  • Locality guides the design of self-healing and robust logical systems.
  • Encapsulation/Isolation of faults supports the concept of Locality.
  • Auto-configuration supports the concept of Plug&Play.
  • Quality of Service (QoS) classes guarantee well-defined performance metrics.
  • The Decentralisation of Control Structures supports scalability, performance and locality.
  • The Decentralisation of Processing demands distributed data storage and processing units, and may introduce hierarchies.
  • End-to-end Connectivity ensures that a network provides a general transport service that is transparent to applications.
  • The Networks of Networks principle supports the decomposition into a collection of autonomous networked sub-systems.

The security team of Telefónica Digital participates in the V STIC Conference of the CCN-CERT


Author: Francisco Jesús Gómez

On December 13-14, the V STIC Conference of the CCN-CERT was held in Madrid.

The CCN-CERT is an incident response team under the authority of the Spanish National Intelligence Center (CNI), created as a response capacity for security incidents and as a governmental CERT designed to safeguard the systems of the whole Government.

Maybe this definition is not clear enough. To help us understand what a CERT is, the CCN-CERT website gives us the definition below:

The term CERT stands for Computer Emergency Response Team and refers to a group of people who take and manage technological measures aimed at mitigating the threat of attacks on the systems of the community to which the service is provided. It is also known as a CSIRT (Computer Security and Incident Response Team), and it offers incident response and security management services.

The first CERT was established in 1988 at Carnegie Mellon University in the United States (owner of the trademark), and since then these kinds of teams have been set up around the world and in different sectors of society (government, university research, companies, etc.).

More information can be found in the CCN-CERT FAQ.

The conference is aimed at pooling the work carried out by the CCN-CERT and all the civil services. According to the event presentation: “in the last 5 years, the STIC CCN-CERT conference has become a must-attend event for the Government for the pooling of knowledge, the analysis of threats and the study of solutions and ways to face cyber-attacks.”

This is not an open-access conference; indeed, on the first day access is restricted to civil service staff.

This fifth edition introduced the National Security Scheme and dealt with issues related to denial of service, SCADA systems, compromise of information, APTs, protocols for critical infrastructures and even the eDNI. The round table “Needs for a coordinated incident management” brought together the following CERTs: Andalucia-CERT, CCN-CERT, CESICAT, CSIRT-CV, INTECO-CERT and IRIS-CERT.

This year Telefónica Digital was doubly represented at the event: David Barroso and two members of the Hacking Team, Carlos Díaz and Francisco Gómez, had the opportunity to participate, dealing with issues related to distributed denial of service (DDoS) attacks and the measures that can be taken to mitigate them, as well as the role of key Internet protocols such as DNS in botnets and cybercrime.

https://www.ccn-cert.cni.es/index.php?option=com_content&view=article&id=2583&Itemid=198&lang=es

Clarifying “black holes” in radiocommunications


Author: Wsewolod Warzanskyj

In recent times there has been some news about “black holes” in radio communications, as well as about the theoretical possibility of increasing radio spectrum capacity by using radio “vortices”. Reading the articles is somewhat difficult, and reading the technical references, which explain the concepts, is even more difficult, since you immediately get into electromagnetic theory and vector equations spring up all over. This note aims to explain the meaning of radio vortices and of the claimed increase in radio transmission capacity, from both a theoretical and a practical point of view.

It is well known that there are monomode and multimode optical fibers. And perhaps it is a less well-known fact that in waveguides, in earthquakes and in underwater sound, waves can propagate in different ways. These different ways are known as modes. You could even say that different modes are different waves which are transmitted together. And why can there be different modes? Because nature is just like that: if the propagation equations are solved, a set of solutions is obtained, and each solution corresponds to a different way of propagating.

With this introduction, what comes next should not be surprising: in free space, waves can also propagate in different modes. This is a very well-known fact in the optical field, but not so well known in radio. The reason is that the creation and extraction of these different modes in electromagnetic waves requires an antenna much larger than the wavelength.

Polarization is an exception: vertical polarization is one mode and horizontal polarization a different one. In fixed microwave links, transmission capacity has been doubled for 50 years by emitting in both polarizations, because each mode is independent and can be separated. This cannot be done in mobile communications, but that would be enough material for another post on this blog.

More than fifteen years ago, these different free-space modes (different from the usual propagation mode, the plane wave) came into fashion in the optical field. It was only a question of time before somebody wanted to experiment with them in the radio field. And eventually an experiment was carried out, the one referred to in the first paragraph. It was a heroic experiment, with a handcrafted dual antenna and of course with no practical application, but a pioneering experiment nonetheless, and therefore very worthy. For interested readers, these modes are called Laguerre-Gauss modes.

The name vortex is associated with these modes because on the propagation axis the electromagnetic field is null. Creative people have compared this null with a “black hole”, which it obviously is not. Creative people with a business sense have also said: “and with this we can greatly increase radio capacity, without any need to buy spectrum”. This is also a mistake. Leaving aside the problems associated with the size of an antenna that supports Laguerre-Gauss modes at radio wavelengths, what is finally obtained is a large antenna with several output connectors, where each output connector yields the signal of a different mode: a sort of generalization of polarization multiplexing. And, of course, this would eventually (that is, with difficulty and cost) be possible in fixed radio links. Not in mobile communications.
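For readers who want the mathematics: a standard textbook form of the Laguerre-Gauss mode of radial index p and azimuthal index ℓ (quoted from general beam optics, not from the experiment's paper; normalisation and propagation phase factors omitted) is

    u_{p\ell}(r,\phi,z) \propto \left( \frac{\sqrt{2}\,r}{w(z)} \right)^{|\ell|} L_p^{|\ell|}\!\left( \frac{2r^2}{w(z)^2} \right) e^{-r^2/w(z)^2}\, e^{i\ell\phi}

where w(z) is the beam width and L_p^{|ℓ|} a generalized Laguerre polynomial. Two properties matter for the discussion above: for ℓ ≠ 0 the factor r^{|ℓ|} makes the field vanish on the axis (the “null” that gets dressed up as a black hole), and the orthogonality of the phase factors, \int_0^{2\pi} e^{i(\ell-\ell')\phi}\, d\phi = 0 for ℓ ≠ ℓ', is what allows different modes to be separated as independent channels, exactly as the two polarizations are.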

Consequently, we can’t advise operators to stop buying spectrum because of the Laguerre-Gauss modes.

Attached fake documents: The value of Antivirus


Author: Antonio Manuel Amaya

Just a while ago, I received the following email:

To display the email properly, I told Thunderbird to download the attached images, after checking, of course, that the downloaded file came from the police website. Although at first sight (well, at a quick first sight from a distance) it seems legitimate, once we put on our glasses the email becomes highly suspicious. Not only is the word “procedure” written in three different ways (one of them even correct), but they didn't even bother hiding that the “attachment” is fake: the link takes you to an external server, and they didn't even bother to conceal the file format (SCR).

On the other hand, why would an average user, for instance my sister, know that SCR is a screensaver extension, and that screensavers are actually executable files whose extension has simply been changed? Particularly since, from Windows 2000 onwards, Microsoft decided that extensions were ugly and shabby and chose to hide them.

Anyway, today it is not about ranting and raving against the common habit of confusing “user-friendliness” with “hiding information”, without taking into account the consequences of not showing that information. And, fortunately for my computer, my peace of mind, and probably my current account, I am not my sister.

Let's not even dwell on the use of a supposedly police email address to deliver malware. I guess that in this case they are playing with the recipients' bad conscience (probably related to their curiosity) rather than with their curiosity alone. When users receive the email they'll wonder what they have done, and they will quickly click on the link, overlooking the text and not stopping to think what “research procedure whereby this regional behavior is dealt” might mean.

So, let's imagine we are somebody who doesn't know what SCR means and who compulsively clicks on the links in his email. But let's also suppose that, paradoxically, we are aware that the Internet is an untrustworthy place, so we have installed an antivirus. Then, trusting our preventive measures, we click on the link and download the file.

What will happen? According to the two antivirus products I have on this PC, the executable file is good and trustworthy.

Furthermore, assuming a uniform distribution of antivirus market share, there is an 88.4% probability that our antivirus will inform us that the downloaded file is clean and that we can execute it safely:

For this to happen, 3 out of 5 antivirus engines must be producing loads of false positives. Only two of the engines (no matter which) identify it as a trojan-downloader. The remainder flag it as “suspicious” simply because it is packed with ASPack, that is, the real executable is encoded within the downloaded executable. I guess that at AsPack Software they are not enjoying the fact that all the software developed by their customers, whether it is good or bad, is marked as suspicious by some antivirus engines.

After all this, I have no choice but to bow to the security industry. I cannot think of any other field in which a preventive product is sold that consumes resources continuously, is annoying and, when you really need it, fails in 90% of the cases.

The best thing is that, after the success of this model on desktop computers for controlling viruses, the industry is now exporting the same model to new devices: a Google search for “mobile antivirus” returns nearly 40 million results. And in the first result alone there are 38 antivirus products to download.


About APTs and men


Anyone who is involved in the security world will probably have heard or read somewhere about APTs (especially in recent years). APT stands for Advanced Persistent Threat, that is, a threat (a motivated person with a clear target), advanced (because of the methods and techniques used) and persistent (regarding not only time but also the attackers' methods and objectives). Although the term actually comes from the military world (apparently the US Air Force coined it in 2006) and the threat can be of any kind, recently it has been associated with attacks carried out through software systems.

Nowadays, due to its overuse, APT has lost its original meaning, since you can usually find it in contexts describing common incidents; what's more, most of the recent attacks that have affected big companies cannot be considered APTs, even though the reports about them, strictly speaking, say the opposite.

Most incidents are massive attacks whose main aim is to infect as many computers as possible, because, from the attackers' point of view:

  • Users might connect to web pages that are “interesting” to attackers (financial entities, online payment, auctions, lotteries, share-trading sites, social networks, etc.).
  • More spam email can be sent per second.
  • A bigger bandwidth is available to carry out distributed denial of service (DDoS) attacks.
  • More money can be earned by cheating the users of infected computers, telling them that their computers are heavily infected so they must buy our (fake) antivirus, or that we have encrypted all their data and they have to pay to decrypt it (ransomware).
  • More money can be earned through pay-per-install or pay-per-click (click fraud) campaigns.
  • More proxies are available to anonymize later operations.
  • Any other malicious idea.

In short, these are indiscriminate attacks which generally spread some kind of malware, and the attackers often resell control of the infected computers to third parties. There will generally be a connection to a command-and-control panel (C&C) in order to manage the computers remotely; that C&C could be in any country in the world, and often sits on bullet-proof hosting.

What does this have to do with APTs? The problem is that when somebody finds an infected computer, running malware that performs any of the activities mentioned above and connects to a remote C&C to send data, in many cases the pressure, the hysteria or the fear of a sullied reputation pushes people to say that it was an advanced attack aimed at spying on a company or government, directly accusing whatever group or country is trending at the time (e.g. Anonymous, LulzSec, China or Iran). Of the rest of the attacks, those that are not massive or indiscriminate are targeted attacks, but they often still differ from an APT strictly speaking. Many attacks by means of SQL Injection, Remote File Inclusion (RFI) or misconfigured services are targeted attacks, but in most cases they are simple (though efficient).

Finally, the remainder can be considered APTs, where access to confidential information is usually the main motivation behind the threat.

Actual APTs are very hard to detect and, above all, very hard to respond to. Often only the expertise of the incident response team will be able to thwart the effects of such attacks. Most of our current security tools won't be very useful for detecting these attacks in time, and we'll have to be meticulous about details (mainly focusing on network security monitoring: learn what your systems and networks usually do, and look into anything new) in order to link the signs that will lead us to detect an attack of this kind.

 

The heart of the matter is that in the security industry the word APT is currently used the way the word zero-day (0-day) used to be: many manufacturers get bigheaded and claim that their products detect not only zero-days but also APTs. Other manufacturers, aware that they are making a big mistake, also include APTs in their marketing strategy, since nowadays it seems that your product doesn't work if it doesn't detect whatever turns up.

In a nutshell, APT, like zero-day, cyber-Pearl Harbor, cyber-9/11, etc., are words which are often used in ways different from what they were coined for. Therefore, we must watch our language to avoid misunderstandings.

Full disclosure vs responsible disclosure: the next chapter


Author: David Barroso

The eternal discussion between full disclosure and responsible disclosure has a relatively new arena: critical infrastructure protection (CIP). It is quite common that, from time to time, the best way of reporting a vulnerability to a manufacturer is discussed. A procedure that satisfies both sides (the one who finds the vulnerability and the manufacturer) has not been institutionalized yet. There are all kinds of approaches, though none is clearly more successful: acknowledging the researcher's help (e.g. Microsoft), paying a certain amount of money (Google), or simply going through a company which acts as a broker and pays for the vulnerabilities (e.g. iDefense VCP or TippingPoint ZDI). But the truth is that these methods don't work well; the best example occurred last year with the vulnerability discovered by Tavis Ormandy in Windows. This case proves the need for some type of procedure that pleases all sides.

Unfortunately, nowadays some manufacturers don't think security is essential to keeping their users out of risk (ZDI's list of unpatched vulnerabilities is quite illustrative). On the other hand, some researchers think that manufacturers have to fulfil their demands immediately, in some cases even resorting to extortion. There have always been attempts to proceduralise vulnerability reporting, ranging from the well-known RFPolicy, through an IETF attempt, the Responsible Vulnerability Disclosure Process, which ended up becoming the basis of the Organization for Internet Safety procedure, to the No More Free Bugs initiative promoted by several researchers. In critical infrastructure protection we have recently witnessed what happened elsewhere some years ago: after trying to do things right and realizing that it often doesn't work, we come across positions like Digital Bond's, whose vulnerability reporting policy is as simple as “we'll do whatever we like”, because they have had enough of seeing how manufacturers, after incidents such as Stuxnet or the vulnerabilities found by Dillon Beresford, don't seem to react, not even when ICS-CERT is involved (the manufacturer can even report you).

At the end of the day, what really matters is how each manufacturer handles and coordinates these incidents (communication with researchers and companies), because, if we have finally realized that none of the global vulnerability reporting policies works, it is the manufacturer's task to fix its own policy, one that pleases both sides. For instance, Mozilla, Barracuda, Google, Facebook and Twitter have already done it. Not all of them pay for each vulnerability found; some simply acknowledge the help received.

In short, prevention is better than cure: every large firm should run a clear, published policy on the vulnerabilities that third parties find in their products, services or simply their websites, and should also recognize the work of the people who collaborate positively in enhancing network security.

Apple and fake certificates


Author: Erik Miguel Fernández

On Tuesday it came out that an SSL impersonation attack had been detected in Iran against users who wished to connect to the Gmail website.

It is a man-in-the-middle attack in which a fake certificate is used. When users try to connect to Google securely, they are actually connecting to the attacker, who impersonates Google by making use of the fake certificate. The Certificate Authority is legitimate (DigiNotar), but DigiNotar has no authority to issue Google certificates.

The heart of the matter is that some systems and browsers check whether the Certificate Authority is legitimate, but not whether, besides being legitimate, it has the authority to issue that specific certificate.

The result is that in those browsers the user notices nothing at all and believes he has a secure connection to Google; he does in fact have a secure connection, but with the attacker (who simultaneously keeps a secure connection to Google, so the traffic is forwarded and the user notices nothing). Thus, the attacker gets access to the traffic (username, password, sent and received mails, recipients, etc.). He could also manipulate the connection to send fake information to the user, hide data, and even impersonate the user to send information on his behalf.

If the browser is well programmed, a security alert is raised to warn the user (it is up to the user whether he ignores the alert and goes ahead with the connection anyway).
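A stricter client can go further than the default CA check by pinning the certificate it expects, which defeats a rogue CA entirely. A minimal sketch with Python's standard library (the pinned fingerprint below is a placeholder, not Google's real one):

    import hashlib
    import ssl

    PINNED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

    def server_cert_fingerprint(host, port=443):
        # Fetch the server certificate and hash its DER encoding.
        pem = ssl.get_server_certificate((host, port))
        der = ssl.PEM_cert_to_DER_cert(pem)
        return hashlib.sha256(der).hexdigest()

    seen = server_cert_fingerprint("mail.google.com")
    if seen != PINNED_SHA256:
        # Any mismatch means the certificate is not the one we pinned,
        # no matter which CA signed it -- a possible man-in-the-middle.
        raise ssl.SSLError(f"certificate fingerprint mismatch: {seen}")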

On Thursday we found out that, apart from Gmail and other Google services, Iranian users' connections to Yahoo, Mozilla and WordPress, among others, had also been attacked through the same system. It is suspected that a government (presumably Iran's) may be behind the attack.

The Certificate Authority (DigiNotar, from the Netherlands) has admitted that a dozen certificates were stolen in an attack, attributed to an Iranian hacker, which took place in July. While the problem is being solved, Google (Chrome) as well as Microsoft (Internet Explorer) and Mozilla have removed DigiNotar from their lists of trusted certificate authorities.

The problem comes with Mac OS X since, as a result of the previous issue, a bug has been discovered whereby Mac OS X cannot distrust certificates issued by an authority revoked by the user. This means that, until the problem is sorted out, users are asked not to rely on DigiNotar-certified websites when using Apple Safari.

On the other hand, Apple has had the same problem with iOS, since it didn't check that the Certificate Authority actually had the authority to issue the certificates sent to the terminal. This affects both iPad and iPhone on iOS versions prior to 4.3.5.

Therefore, watch out if your iPhone (or iPad) runs an iOS version prior to 4.3.5!

By the way, the simplest iPhone jailbreak only exists for iOS versions prior to 4.3.5, so those who want to jailbreak their iPhone must be aware that they will keep this lack of security in their SSL connections…