
Archive for September 2011

Insomnia in the Access or How to Curb Access Network Related Energy Consumption


Author: Eduard Goma

In recent years the high energy consumption of IT devices (computers, servers, networks) and supporting infrastructure (cooling systems, UPSs, etc.) has raised substantial concern for both environmental and cost-control reasons. A large part of the consumed energy goes to waste due to the lack of energy proportionality in the consumption profile of IT devices, which tend to draw close to maximum power independently of their actual workload. Making devices energy proportional is becoming a priority for component and system manufacturers, but this long-term objective is expected to take several years to be fully realized. An alternative solution that can be applied relatively quickly is to let groups of devices (server farms, network segments, etc.) behave collectively as an energy-proportional ensemble, despite being made of non-energy-proportional devices. The key to achieving this is putting some of the devices into sleep or low-power modes when the aggregate workload subsides, thus permitting the group to handle the offered load with fewer devices kept online. In this blog post we focus on Internet access networks.

Access networks include modems, home gateways, and DSL Access Multiplexers (DSLAMs), and are responsible for 70-80% of total network-based energy consumption.

For example, in Europe alone, it has been estimated that broadband equipment will be consuming approximately 50 TWh annually by the year 2015. By comparison, data centers in the US consume about 61 TWh annually.

We propose energy saving opportunities in broadband access networks, both on the customer side and on the ISP side. On the user side, the combination of continuous light traffic and lack of alternative paths condemns gateways to staying powered most of the time despite having Sleep-on-Idle (SoI) capabilities. To address this, we introduce Broadband Hitch-Hiking (BHH), which takes advantage of the overlap of wireless networks to aggregate user traffic onto as few gateways as possible. In current urban settings BHH can power off 65-90% of gateways. Powering off gateways also permits the remaining ones to synchronize at higher speeds, thanks to the reduced interference from having fewer active lines: our tests reveal speedups of up to 25%.
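
As a rough illustration of the kind of decision BHH has to make, the sketch below greedily selects a minimal set of gateways to keep powered so that every home can still reach one of them over Wi-Fi. The data and the greedy set-cover heuristic are assumptions chosen for illustration; they are not the exact algorithm of the paper.

```python
# Rough sketch in the spirit of Broadband Hitch-Hiking (BHH): keep powered
# the fewest gateways such that every home can still reach at least one of
# them over Wi-Fi. The data and the greedy set-cover heuristic are
# illustrative assumptions, not the exact algorithm of the SIGCOMM'11 paper.

def select_gateways(reachable):
    """reachable maps each home to the set of gateways it can associate with
    (its own gateway plus overlapping neighbours). Returns the gateways to
    keep powered on."""
    uncovered = set(reachable)          # homes not yet served by a powered gateway
    powered = set()
    all_gateways = {gw for gws in reachable.values() for gw in gws}
    while uncovered:
        # Greedy step: power the gateway that serves the most uncovered homes.
        best = max(all_gateways,
                   key=lambda gw: sum(1 for h in uncovered if gw in reachable[h]))
        powered.add(best)
        uncovered -= {h for h in uncovered if best in reachable[h]}
    return powered

# Toy urban block with dense Wi-Fi overlap: two gateways can serve four homes.
reachable = {
    "home1": {"gw1", "gw2"},
    "home2": {"gw2"},
    "home3": {"gw2", "gw3"},
    "home4": {"gw3", "gw4"},
}
print(select_gateways(reachable))       # e.g. {'gw2', 'gw3'}: half the gateways sleep
```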

On the ISP side, each ADSL line is terminated at the distribution frame into one of the modems of a DSLAM line card. The inflexibility of current deployments makes it impossible to group the active lines onto a subset of cards and let the remaining ones sleep. We propose introducing simple, inexpensive switches at the distribution frame to enable flexible and energy-proportional management of the ISP equipment. Overall, our results show an 80% energy savings margin in access networks. The combination of BHH and switching gets close to this margin, saving 66% on average.
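
A back-of-the-envelope sketch of the line-card consolidation those switches would enable: regroup the active lines onto as few cards as possible and let the rest sleep. The port count and the night-time load below are made-up figures, not measurements from the paper.

```python
# Back-of-the-envelope sketch of DSLAM line-card consolidation: with switches
# at the distribution frame, active lines can be regrouped onto as few cards
# as possible and the remaining cards put to sleep. Port count and night-time
# load below are made-up figures, not measurements from the paper.

import math

PORTS_PER_CARD = 48                     # assumed ports per DSLAM line card

def cards_needed(active_lines):
    """Minimum number of line cards able to terminate the active lines."""
    return math.ceil(active_lines / PORTS_PER_CARD)

def card_energy_savings(total_cards, active_lines):
    """Fraction of line-card energy saved by sleeping the unneeded cards."""
    return 1 - cards_needed(active_lines) / total_cards

# Example: a DSLAM with 20 cards (960 ports) but only 300 lines active at night.
print(cards_needed(300))                        # 7 cards suffice
print(f"{card_energy_savings(20, 300):.0%}")    # 65% of line-card energy saved
```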

Researchers: Eduard Goma, Alberto Lopez Toledo, Nikos Laoutaris, Pablo Rodriguez, Pablo Yague, Marco Canini (EPFL), Dejan Kostic (EPFL),  Rade Stanojevic (IMDEA)

Papers:

  • Goma E, Canini M, Lopez Toledo A, Laoutaris N, Kostic D, Stanojevic R, Rodriguez P, Yague P. Insomnia in the Access or How to Curb Access Network Related Energy Consumption. In: Proceedings of ACM SIGCOMM ’11. Toronto, Canada: ACM New York, NY, USA; 2011

Patents:

  • Patent application number 61/522,968: "Insomnia in the Access or How to Curb Access Network Related Energy Consumption"

 

Do White spaces look white?


Author: Luis Cucala

 

Following up on previous entries about the so-called white spaces, the American regulator, the FCC, has published a trial database showing the actual use of the TV UHF and VHF bands, so that operators wanting to use white-space spectrum can see which channels are available.

The database can be found at this link, and the trial will run for 45 days starting from September 19th.

Surprise! Under the FCC's present rules, the white-space spectrum available in the US for outdoor use at reasonable power levels is scarce. The MIT report published a map in which it is hard to find anything blank (i.e. available); instead there are many unavailable black spots, which happen to fall on densely populated areas.

If we look for availability in New York, the results are as expected (none).

 


If we look at a desolate place, such as the Grand Canyon, availability is quite good (although there are not many people there, apart from tourists).

But if we move to nearby Flagstaff, which is more populated, though not by much (about 68,000 inhabitants), white-space channel availability is rather scarce.

The bottom line is that unless the FCC eases its spectrum usage rules for white spaces (something the TV broadcasters oppose), outdoor Super-WiFi will only be "super" in sparsely populated areas, and its main use will be restricted to indoor Wi-Fi access points at low power, 40 mW, far below the 1,000 mW that normal Wi-Fi can reach. On the other hand, indoor propagation in the UHF/VHF bands is better than at 2.4/5 GHz.

UK goes ahead with “white spaces”, carefully


On September 1st the British communications regulator Ofcom announced plans to go ahead with the commercial use of white spaces, the unused spectrum between TV channels. The decision was made after analyzing all the replies received to a request for information on the issue.

The following considerations are drawn from a close reading of the document published by Ofcom:

- White-space management, that is, establishing which channels are unused at a specific place and during a given period of time, will rely on the use of a geo-referenced spectrum database. In this regard, Ofcom follows the procedure chosen by the United States regulator (FCC).

- Almost everything else still requires studies, field tests and consultation among the parties potentially involved.

Ofcom has also pointed out that it will follow EU regulations on white spaces, but since the EU has not yet pronounced on the issue, Ofcom will proceed with the UK's own rules, which would be adapted to the European ones once the latter become available.

The document published by Ofcom is available here.

Full disclosure vs. responsible disclosure: the next chapter


Author: David Barroso

The eternal discussion between full disclosure and responsible disclosure has reached a relatively new arena: critical infrastructure protection (CIP). The best way of reporting a vulnerability to a manufacturer is debated again and again, and a procedure that satisfies both sides (the one who finds the vulnerability and the manufacturer) has not been institutionalized yet. There are all kinds of approaches, none clearly more successful than the others: acknowledging the researcher's help (e.g. Microsoft), paying a certain amount of money (Google), or simply going through a company that acts as a broker and pays for vulnerabilities (e.g. iDefense VCP or TippingPoint ZDI). But the truth is that these methods don't always work; the best example was the vulnerability discovered last year by Tavis Ormandy in Windows. That case shows the need for some kind of procedure that pleases all sides.

Unfortunately, some manufacturers still don't consider security essential to protecting their users (ZDI's list of unpatched vulnerabilities is quite illustrative). On the other hand, some researchers believe that manufacturers must meet their demands immediately, in some cases going as far as extortion. There have been several attempts to proceduralize vulnerability reporting: from the well-known RFPolicy, to an IETF draft, the Responsible Vulnerability Disclosure Process, which ended up as the basis of the Organization for Internet Safety procedure, to the No More Free Bugs initiative promoted by several researchers. In critical infrastructure protection we are now witnessing what happened elsewhere some years ago: after trying to do things right and seeing that it often doesn't work, positions like Digital Bond's appear, whose vulnerability reporting policy is as simple as "we'll do what we like", because they are tired of seeing how manufacturers, after incidents such as Stuxnet or the vulnerabilities found by Dillon Beresford, don't seem to react, not even when ICS-CERT is involved (the manufacturer may even report you).

At the end of the day, what really matters is how each manufacturer handles and coordinates these incidents (communication with researchers and companies): if none of the global vulnerability reporting policies works, it is up to each manufacturer to define its own policy, one that pleases both sides. For instance, Mozilla, Barracuda, Google, Facebook and Twitter have already done so. Not all of them pay for the vulnerabilities found; some simply acknowledge the reporter's help.

In short, prevention is better than cure: every large firm should have a clear, published policy covering vulnerabilities that third parties find in its products, services or simply its websites, and should recognize the work of the people who collaborate constructively in improving network security.

Planning, optimization and mobile network-evolution support systems


Author: Eduardo Yusta Padilla, Raquel García Pérez, Beatriz González Rodríguez

 

Mobile operators' radio networks need a continuous planning and optimization process that lets them adapt to today's changing and growing traffic. The deployment of new high-bitrate services, together with the wider geographical reach of mobile services, leads operators to require systems and techniques that, in a simple, fast and precise way, support the various dimensioning, planning, optimization and network evolution processes.

How are the shared resources of the different access network elements evolving? Which parameters of the deployed infrastructure must be modified to get the most out of it? Which are the most critical and strategic areas that should be given priority in network evolution? When and how should that evolution be carried out?

These are some of the questions that the departments involved in mobile network operations must answer in order to ensure that high-quality services are delivered to users while keeping deployment efficiency as high as possible, that is, making the most of investments and minimizing the unnecessary costs that result from a poor operation strategy and network evolution.

Answering these questions is not an easy task; it requires analyzing a great deal of information. The operator's various information systems store a huge amount of data: purely technical information such as drive-test measurements, calculated coverage, traffic statistics, resource usage and the performance of the different network elements, their configuration parameters and deployed-capacity inventories, as well as higher-level information, including business intelligence data, strategic plans and economic and financial considerations.

The main problem to tackle is not the availability of data but extracting useful meaning from it. Combining all this information so as to obtain clear, specific results, working out which actions are required in the access network and how they should be carried out, is difficult and hardly practical to do "by hand". That is why tools and systems that automate the gathering and analysis process as far as possible have become so important for the mobile network operator: they provide crucial information to those responsible for decision-making in every network management process.

Telefónica I+D works actively on developing innovative systems and solutions that give Telefónica these advantages and help it keep its leadership position in mobile service provision.

A case in point is RAND, a system developed to detect the growth and evolution needs of Telefónica Spain's 3G+ mobile network and to semi-automatically generate network upgrade plans. RAND is built on a geographic information system, with advanced geo-processing capabilities, advanced handling of geo-referenced data and all the visualization advantages (map servers, cartographic databases, etc.) that such systems offer. It is also connected to the various indicator, inventory and coverage databases, so that the information needed for the analysis can be obtained quickly and easily.

Functionally, the system has two main application areas:

  • On the one hand, it helps ensure coverage for the different mobile services in the towns and geographical areas of interest: it identifies areas without service and proposes the number, type and location of the new sites required for the network to grow (taking into account preferred locations and candidate buildings).
  • On the other hand, it helps ensure sufficient capacity in the mobile network elements so that users get the required quality of experience: it analyzes the traffic and the capacity of cells and nodes to identify high-load situations in a flexible way and to suggest element upgrades that resolve them (a minimal sketch of this kind of threshold analysis follows below).
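
As a toy illustration of the second application area, the sketch below flags cells whose resource utilisation persistently exceeds a threshold. The counters, thresholds and sample data are invented for the example; they are not RAND's actual criteria.

```python
# Minimal illustration of flexible, threshold-based detection of persistently
# high-load cells from traffic/resource counters. The counters, thresholds and
# sample data are invented for the example; they are not RAND's actual rules.

def high_load_cells(stats, util_threshold=0.85, busy_samples=3):
    """stats maps each cell to a list of hourly resource-utilisation samples
    (0..1). A cell is flagged when utilisation exceeds the threshold in at
    least `busy_samples` samples, i.e. persistently rather than as a spike."""
    return [cell
            for cell, samples in stats.items()
            if sum(1 for u in samples if u >= util_threshold) >= busy_samples]

stats = {
    "cell_A": [0.60, 0.70, 0.66, 0.72],     # comfortable load
    "cell_B": [0.88, 0.91, 0.87, 0.93],     # persistently loaded
    "cell_C": [0.95, 0.40, 0.40, 0.42],     # single spike, not persistent
}
print(high_load_cells(stats))               # ['cell_B'] -> candidate for an upgrade
```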

 

Another example of the importance of these systems is the making of specific predictions about how traffic and network capacity usage may evolve. This is currently an area of utmost importance, given the growth of mobile broadband services and the dynamism it imposes on network configuration and evolution.

 

There are procedures and analysis techniques (such as RAND's) that, according to flexible criteria, identify which network elements persistently carry higher traffic or resource usage. They have the disadvantage, however, of being purely reactive: capacity problems are detected only once high load levels are already being experienced. It is therefore very important for an operator to have systems that complement those analysis techniques and tackle the problem more proactively, anticipating future capacity shortfalls as precisely as possible.

 

We are currently working on solutions based on artificial intelligence and data mining to address these prediction problems. The main idea is to analyze efficiently the past and present information on network behavior and structure, extracting useful knowledge about the evolution of the various traffic statistics and resource usage, and to combine it with business intelligence variables in order to obtain a realistic estimate of future access network conditions.
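
To illustrate the proactive idea in its simplest possible form, the sketch below fits a linear trend to one cell's monthly traffic and estimates how many months remain before an assumed capacity limit is reached. The real solutions combine far richer AI and data-mining models with business intelligence inputs; the figures here are invented.

```python
# Toy illustration of the proactive approach: fit a linear trend to one cell's
# monthly traffic and estimate how many months remain before an assumed
# capacity limit is reached. Real solutions use far richer AI / data-mining
# models plus business-intelligence inputs; data and limit here are invented.

import numpy as np

def months_until_full(monthly_traffic, capacity):
    """Least-squares linear trend over the history; returns how many months
    from now the trend crosses `capacity`, or None if traffic is not growing."""
    x = np.arange(len(monthly_traffic))
    slope, intercept = np.polyfit(x, monthly_traffic, 1)
    if slope <= 0:
        return None                         # flat or decreasing traffic
    month_at_capacity = (capacity - intercept) / slope
    return max(0.0, month_at_capacity - (len(monthly_traffic) - 1))

history = [120, 135, 149, 168, 181, 197]    # e.g. GB carried per month by one cell
print(round(months_until_full(history, capacity=300), 1))   # roughly 6-7 months of margin
```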

 

Getting right where, how and, above all, when to evolve the network means excellent investment management and a quality improvement that mobile service users notice, with a direct economic impact for operators. This is what makes planning, optimization and network evolution support systems a key element for mobile communications operators.

Apple and fake certificates


Author:   Erik Miguel Fernández

On Tuesday it emerged that an SSL spoofing attack had been detected in Iran against users trying to connect to the Gmail website.

It is a man-in-the-middle attack based on a fraudulent certificate. When users try to connect securely to Google, they are actually connecting to the attacker, who passes as Google by presenting the fraudulent certificate. The certificate authority that signed it is legitimate (DigiNotar), but DigiNotar has no authority to issue certificates for Google.

The heart of the matter is that some systems and browsers only check whether the certificate authority is legitimate; they do not check whether, besides being legitimate, it actually has the authority to issue that specific certificate.
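
As a hedged sketch of the missing check, the snippet below verifies not only that a certificate chains to a trusted CA (which the standard TLS handshake already does) but also that the issuer matches a per-domain "pin". The pinned issuer names are illustrative assumptions, not an authoritative list of who may issue certificates for Google.

```python
# Sketch of the missing check: besides verifying that a certificate chains to
# *a* trusted CA (which the TLS stack already does), also verify that the
# issuer is one we expect for this particular domain ("CA pinning"). The
# pinned issuer names are illustrative assumptions, not an authoritative list
# of who may issue certificates for Google.

import socket
import ssl

# Hypothetical per-domain pins: issuing organisations we accept for each host.
PINNED_ISSUERS = {"mail.google.com": {"Google Internet Authority", "GeoTrust"}}

def issuer_is_pinned(host, port=443):
    ctx = ssl.create_default_context()      # standard chain and hostname checks
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    issuer = dict(rdn[0] for rdn in cert["issuer"])
    org = issuer.get("organizationName", "")
    # Accept only if the issuing organisation matches one of the pins.
    return any(pin in org for pin in PINNED_ISSUERS.get(host, set()))

# A DigiNotar-signed certificate would pass the generic chain check of a 2011
# browser, but would fail this per-domain pin.
print(issuer_is_pinned("mail.google.com"))  # True only if the live issuer matches the pin
```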

The result is that in those browsers the user notices nothing and believes they have a secure connection to Google; they do have a secure connection, but with the attacker (who in turn keeps a secure connection to Google and forwards the traffic, so the user notices nothing). The attacker thus gains access to the traffic (username, password, mail sent and received, recipients, etc.). He could also manipulate the connection to send fake information to the user, hide data, or even impersonate the user and send information on his or her behalf.

If the browser is well programmed, a security alert warns the user (it is then up to the user whether to ignore the alert and go ahead with the connection anyway).

On Thursday we learned that, apart from Gmail and other Google services, Iranian users' connections to Yahoo, Mozilla and WordPress, among others, had also been attacked by the same method. It is suspected that a government (presumably the Iranian one) may be behind the attack.

The certificate authority (DigiNotar, from the Netherlands) has admitted that a dozen certificates were stolen in an attack by an Iranian hacker that took place in July. Until the problem is solved, Google (Chrome), Microsoft (Internet Explorer) and Mozilla have removed DigiNotar from their lists of trusted certificate authorities.

The problem comes with Mac OS X: as a result of this incident, a bug has been discovered whereby Mac OS X cannot properly distrust certificates issued by an authority the user has revoked. This means that, until the problem is fixed, users are advised not to trust DigiNotar-certified websites when using Apple Safari.

Apple has also had a similar problem with iOS, which did not check whether the certificate authority actually had the authority to issue the certificates presented to the device. This affects both iPad and iPhone on iOS versions prior to 4.3.5.

So watch out if your iPhone (or iPad) is running an iOS version older than 4.3.5!

By the way, the simplest iPhone jailbreaks only exist for versions prior to 4.3.5, so those who want to keep a jailbroken iPhone should be aware that they will also keep this weakness in their SSL connections…