Privacy Impact Assessment

Privacy impact assessments (PIAs) are tools which can help organizations identify the most effective way to comply with their data protection obligations and meet individuals’ expectations of privacy. An effective PIA will allow organizations to identify and fix problems at an early stage, reducing the associated costs and damage to reputation which might otherwise occur. PIAs are an integral part of taking a privacy by design approach.

Key points:

A PIA is a process which assists organizations in identifying and minimizing the privacy risks of new projects or policies.
Conducting a PIA involves working with people within the organization, with partner organizations and with the people affected to identify and reduce privacy risks.
The PIA will help to ensure that potential problems are identified at an early stage, when addressing them will often be simpler and less costly.
Conducting a PIA should benefit organizations by producing better policies and systems and improving the relationship between organizations and individuals.
A privacy impact assessment states what personally identifiable information (PII) is collected and explains how that information is maintained, how it will be protected and how it will be shared.

A PIA should identify:

Whether the information being collected complies with privacy-related legal and regulatory requirements.
The risks and effects of collecting, maintaining and disseminating PII.
Protections and processes for handling information to alleviate any potential privacy risks.
Options and methods for individuals to provide consent for the collection of their PII.
PIAs are not something organizations did much (or any) of 15 years ago, although key compliance issues were still considered. Since then, awareness of data protection, and its significance, has increased. More sophisticated technology has enabled more sophisticated data processing, on a greater scale and in more intrusive ways. Not addressing the risks may cause damage or distress to individuals, low take-up of a project by customers, damage to relationships and reputation, and time and costs in fixing errors (as well as penalties for non-compliance). A project may partly or wholly fail. These are some of the drivers for carrying out PIAs, and for making them a new legal requirement under EU data protection law.

Existing PIA frameworks

In the UK, the Information Commissioner’s Office has promoted PIAs for a number of years, although the Data Protection Act 1998 does not require PIAs to be carried out. The ICO published a PIA Handbook in 2007, which was replaced in 2014 by a more up-to-date PIA Code of Practice. Some sectors have additional PIA requirements or guidance. For example, government departments were required to adopt PIAs following a data handling review by the Cabinet Office in 2008. PIAs and PIA methodologies are also promoted in many other countries around the world.

A lot of organizations have therefore already integrated PIAs into project and risk management procedures, following existing recommendations and guidance. Other organizations may not yet be so familiar with PIAs, as they are not yet compulsory for most sectors.

Either way, EU organizations will need to adopt new PIA procedures, or review and adapt existing procedures, in order to meet the new requirements.

New legal requirement under the GDPR

The compromise text of the EU General Data Protection Regulation (GDPR) was published on 15 December 2015. At the time of writing, it is expected to receive final approval soon and then, following a two-year transition period, apply from 2018. Article 33(1) contains the new obligation for conducting impact assessments:

‘Where a type of processing in particular using new technologies, and taking into account the nature, scope, context and purposes of the processing, is likely to result in a high risk for the rights and freedoms of individuals, the controller shall, prior to the processing, carry out an assessment of the impact of the envisaged processing operations on the protection of personal data…’

As the GDPR is a data protection law, the requirement is for a data protection impact assessment (DPIA). This applies to the processing of personal data recorded in electronic or paper-based format. There may also be privacy issues associated with non-personal information, for example communications data or information about corporate entities; and relevant legal requirements include communications laws, direct marketing rules and confidentiality requirements. Wider privacy issues can also arise from, for example, surveillance, bodily testing or searching, which may also trigger human rights and other privacy laws. Often these matters go hand-in-hand with data protection issues, as personal data is recorded as a result of the relevant activities, but separate privacy concerns can also arise. Therefore, whilst this article focuses on data protection impact assessments under the GDPR, PIAs may also address wider privacy risks.

When a DPIA will need to be carried out

Article 33 requires a DPIA to be carried out where processing is ‘likely to result in a high risk’. Article 33(2) contains a list of cases where DPIAs shall, in particular, be carried out:

‘(a) a systematic and extensive evaluation of personal aspects relating to natural persons which is based on automated processing, including profiling, and on which decisions are based that produce legal effects concerning the individual or similarly significantly affect the individual;
(b) processing on a large scale of special categories of data referred to in Article 9(1), or of data relating to criminal convictions and offences referred to in Article 9a;
(c) a systematic monitoring of a publicly accessible area on a large scale.’
The first of these would capture many data analysis activities, for example where an evaluation of a person’s characteristics or behaviors impacts the services they may receive or how they are treated. The definition of ‘profiling’ lists performance at work, economic situation, health, personal preferences, interests, reliability, behavior, location and movements as matters which may be analyzed or predicted.

Large-scale use of sensitive types of data is captured by (b). As well as the existing categories of sensitive personal data under the DPA, this now captures genetic and biometric data.

Thirdly, large-scale public monitoring would require a DPIA, which may include use of CCTV, drones or body-worn devices.

In addition, under Articles 33(2a) and 33(2b), the supervisory authority (the ICO in the UK) shall establish a list of the kind of processing operations where a DPIA is required and may establish a list of processing operations where no DPIA is required.

The lists are subject to (where appropriate) co-operation with other EU supervisory authorities and the EU Commission, and must take into account opinions of the (new) European Data Protection Board.

Article 33 requires the DPIA to be carried out ‘prior to the processing’; in other words, prior to starting the relevant activities. A post-implementation review would be too late (although one may still be of benefit if a DPIA was not undertaken previously).

Organizations will therefore need to identify whether projects or activities which arise fall within a category described above or may otherwise result in a high risk. Even within organizations which do not regularly carry out high-risk data processing, changes to existing activities can turn previously low risks into high ones. For example, adopting new technology to assist with an established business procedure can affect how personal data is used.

Identifying the need for a DPIA is commonly achieved by an initial assessment during project planning (as is also recommended within the ICO’s PIA Code of Practice). At that stage, business teams can identify intended uses of personal data and assess potential data protection risks. The outcome determines whether or not to proceed further with a DPIA.
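To illustrate how such an initial screening might be recorded, here is a minimal Python sketch. The question names and the decision rule are assumptions for the example, loosely based on the Article 33(2) triggers; this is not an official screening tool.

from dataclasses import dataclass

@dataclass
class InitialScreening:
    # Hypothetical answers gathered from the project team during planning
    systematic_profiling_with_legal_effects: bool   # cf. Article 33(2)(a)
    large_scale_special_categories: bool            # cf. Article 33(2)(b)
    large_scale_public_monitoring: bool             # cf. Article 33(2)(c)
    other_high_risk_indicators: bool                # catch-all for 'likely high risk'

def dpia_required(screening: InitialScreening) -> bool:
    # Proceed to a full DPIA if any trigger applies
    return any([
        screening.systematic_profiling_with_legal_effects,
        screening.large_scale_special_categories,
        screening.large_scale_public_monitoring,
        screening.other_high_risk_indicators,
    ])

# Example: a project introducing large-scale CCTV analytics in a public space
screening = InitialScreening(False, False, True, False)
print(dpia_required(screening))  # True -> carry out a DPIA before processing starts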

Of course, even if an initial assessment does not determine a high risk or trigger specific DPIA requirements under the GDPR, organizations may wish to continue with an assessment to address lower data protection risks and ensure compliance.

Exception to the DPIA requirement

Article 33(5) contains a potential exception for regulated activities carried out pursuant to a legal obligation or in the public interest. The controller may not be required to carry out a DPIA if one has already been carried out as part of setting the legal basis for those activities. Recital 71 refers to the activities of doctors and lawyers in using health and client data; it is unclear whether this touches on the same point, as it seems to indicate that such processing shall not be considered to be on a ‘large scale’, rather than creating a specific exception.

Procedure for carrying out a DPIA

Article 33(1a) provides that the controller shall seek the advice of the data protection officer, where designated (in accordance with Article 35), when carrying out a DPIA.

Article 33(3) provides that the DPIA shall contain at least:

‘(a) a systematic description of the envisaged processing operations and the purposes of the processing, including where applicable the legitimate interest pursued by the controller;
(b) an assessment of the necessity and proportionality of the processing operations in relation to the purposes;
(c) an assessment of the risks to the rights and freedoms of data subjects referred to in paragraph 1;
(d) the measures envisaged to address the risks, including safeguards, security measures and mechanisms to ensure the protection of personal data and to demonstrate compliance with this Regulation taking into account the rights and legitimate interests of data subjects and other persons concerned.’
These steps are comparable to those within the ICO’s PIA Code of Practice, which is useful in considering what they might mean in practice.

Firstly, an organization must describe the proposed flows of information involved in the activity or project, ensuring it is clear how and why personal data is being used at each stage. Diagrams as well as written descriptions can be useful to convey this.
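As a purely illustrative sketch (the field names and example values are assumptions, not a prescribed format), each flow could also be recorded as a structured entry alongside the diagram:

# Illustrative only: one way to record data flows for the description step
data_flows = [
    {
        "step": "Collection",
        "data": ["name", "email address"],
        "source": "online sign-up form",
        "recipient": "CRM system",
        "purpose": "account creation",
    },
    {
        "step": "Sharing",
        "data": ["email address"],
        "source": "CRM system",
        "recipient": "email service provider (processor)",
        "purpose": "service notifications",
    },
]

# Print a one-line summary of each flow for the DPIA report
for flow in data_flows:
    print(f'{flow["step"]}: {", ".join(flow["data"])} -> {flow["recipient"]} ({flow["purpose"]})')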

Secondly, an organization must assess whether the proposed use of data is necessary and proportionate to its legitimate purposes; for example, are there alternative ways to achieve the same project objectives?

Next it is clear that a DPIA involves a risk assessment. This involves considering the potential impacts of proposed activities on the relevant individuals and the organization, and the likelihood of such impacts arising. Impacts may include, for example, loss or misuse of data, intrusion into private lives, lack of transparency and non-compliance. Solutions must then be found to avoid or mitigate risks and demonstrate compliance. These may include introducing additional elements into the project (such as anonymisation, pseudonymisation or security measures), or changing aspects of the project (such as collecting less data or doing fewer processing operations).
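A simple likelihood-times-impact scoring scheme is one common way to rank such risks. The sketch below is illustrative only; the scales, example risks and threshold are assumptions, not values prescribed by the GDPR or the ICO.

# Assumed three-point scales for likelihood and impact
LIKELIHOOD = {"remote": 1, "possible": 2, "probable": 3}
IMPACT = {"minimal": 1, "significant": 2, "severe": 3}

def risk_score(likelihood: str, impact: str) -> int:
    return LIKELIHOOD[likelihood] * IMPACT[impact]

# Example risks drawn from the impacts mentioned above
risks = {
    "loss or misuse of data": ("possible", "severe"),
    "lack of transparency": ("probable", "significant"),
}

for name, (likelihood, impact) in risks.items():
    score = risk_score(likelihood, impact)
    action = "mitigate before go-live" if score >= 6 else "monitor"  # assumed threshold
    print(f"{name}: score {score} -> {action}")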

Organizations may use risk assessment methodologies already in place for other legal or organizational risks, or may create tailored risk assessments for the purpose of DPIA procedures.

Article 33(3a) provides that compliance with approved codes of conduct shall be taken into account in assessing data protection impacts. Codes of conduct relating to different sectors or types of activity may be approved under Article 38.

Consultation with data subjects

Article 33(4) requires controllers, ‘where appropriate’ to ‘seek the views of data subjects or their representatives on the intended processing, without prejudice to the protection of commercial or public interests or the security of the processing operations’.

This means consulting with those whose privacy is affected by the proposed activities, as it is these privacy risks that the DPIA is seeking to address. However, it may not always be appropriate to do this, for example when protecting overriding interests to keep aspects of the proposed project confidential. Public sector organizations, in particular, may already have formal consultation processes, and the ICO’s PIA Code of Practice also gives guidance on consultation, but this may be a new consideration for some organizations.

Data processors

Article 26(2) sets out requirements for the terms of contracts between data controllers and data processors (which are more detailed than the current requirements under the DPA). These include that the processor shall assist the controller in ensuring compliance with requirements for DPIAs.

The processor’s role may be particularly important, for example, where it is providing technology central to the relevant project, as it will be in the best position to identify and address privacy and security risks relating to its own technology.

Consultation with supervisory authorities

Article 34 contains a procedure for consultation with the supervisory authority (the ICO in the UK) as a result of (or potentially as part of) a DPIA. Recital 74 indicates the intention for consultation where the controller considers that high risks cannot be mitigated by reasonable means. However, Article 34 states that consultation is required where the processing would result in a high risk in the absence of mitigating measures. As DPIAs are required only for high-risk activities, this could mean consultation is always needed following required DPIAs. Further clarity on the intended interpretation would therefore be useful, as it is likely to have a big impact on timetables and resources for controllers and the ICO.

As part of the consultation, the supervisory authority must give advice to the controller where it is of the opinion that the intended activities would not comply with the GDPR. If appropriate mitigating measures have been established, therefore, perhaps no further action is required. Advice must generally be given within eight weeks although this may be extended in complex circumstances. The authority may also use its other powers (eg to investigate further or order compliance).

The ICO already provides support to organizations which wish to consult on data protection matters, but the GDPR will require a more formal process and resources for DPIA consultation. For controllers, consultation could assist in finding solutions, though it could also delay or restrict projects.

Post-implementation reviews

Article 33(8) provides:

‘Where necessary, the controller shall carry out a review to assess if the processing of personal data is performed in compliance with the data protection impact assessment at least when there is a change of the risk represented by the processing operations.’

Regular post-implementation reviews or audits can be used to assess whether the risks have changed, and ensure the solutions identified during the DPIA have been and continue to be adopted appropriately.

Data protection by design and by default

Article 23 contains general requirements for data protection by design and by default. These mean that measures designed to address the data protection principles should be implemented into processing activities, and that the default position should be to limit the amount of data used and the processing activities to those which are necessary for the relevant purposes. Carrying out DPIAs, even where particularly high risks have not been identified, may be a good way to demonstrate these matters are being addressed.

EU Directive for the police and criminal justice sector

The GDPR has been prepared alongside the new Data Protection Directive for the police and criminal justice sector, which will separately need to be implemented into UK law. Articles 25a and 26 of the Directive contain requirements similar to those in the GDPR in relation to DPIAs and consultation with the supervisory authority.

What to do now

DPIAs will not become a legal requirement under the GDPR for a couple of years yet. However, there are benefits in starting (or continuing) now to build DPIA (or PIA) processes into existing project and risk management procedures. As well as the existing advantages of DPIAs, this will enable them to be part of business as usual when the new law arrives. In addition, DPIAs conducted now will ensure that high-risk data processing activities in existence when the GDPR takes effect will have had the prior assessment envisaged by the new requirements.

It is, of course, still early days in working out how the detail of the provisions discussed above will be interpreted in practice, and we can expect further guidance at UK and EU level (including the required lists of activities which will require a DPIA). Existing PIA guidance, such as within the ICO’s Code of Practice, should help organizations to get on track, and procedures can be refined further as we get more clarity on the specific GDPR requirements.


Privacy Impact Assessment was originally published on The Puchi Herald Magazine

Are we using a double standard in IT security?



In recent years cyber security has emerged as a major concern in every sector of our lives, from government to business and even at the private and personal level. But I wonder whether we apply a sort of double standard when we judge facts related to cyber security.

Let’s consider some examples:

We have all read the concerns arising from the rumoured new rules that China will impose on companies selling IT equipment in sensitive sectors such as finance. Western experts have raised all sorts of questions, pointing out that this will damage Western IT companies, and claim it is a protectionist move. So let us think a little about this. The new Chinese rules are not clear right now; rumours say they will require vendors to release source code to the Chinese government and to build back-doors into their equipment.
The claimed reason is to protect key assets in China, because the government cannot trust its vendors. The Western answer is that this is pure speculation and a move to raise protectionist barriers against foreign IT competitors.
What is lacking in those analyses is that, if the rules turn out to be as the rumours claim, they will have a negative impact on Chinese companies too.

In order to sell their equipment abroad, Chinese IT companies will have to literally duplicate their product lines: one for China and one for the rest of the world. Maintaining separate code bases will be mandatory to sell equipment outside the country, and these companies will face a competitive landscape even more hostile than today’s, dramatically raising their costs.

At the same time it is interesting to note how, in some Western countries (take the USA as an example), merely being a Chinese company is enough to be banned from federal tenders because its products “could” contain back-doors used by the Chinese government; companies like Huawei and ZTE are facing this sort of fate in the USA. No proof or facts have to be presented; suspicion is enough. The Rogers committee voiced fears that the two companies were enabling Chinese state surveillance, although it acknowledged that it had obtained no real evidence that the firms had implanted their routers and other systems with surveillance devices. Nonetheless, it cited the failure of those companies to cooperate and urged US firms to avoid purchasing their products: “Private-sector entities in the United States are strongly encouraged to consider the long-term security risks associated with doing business with either ZTE or Huawei for equipment or services. US network providers and systems developers are strongly encouraged to seek other vendors for their projects. Based on available classified and unclassified information, Huawei and ZTE cannot be trusted to be free of foreign state influence and thus pose a security threat to the United States and to our systems.”
I wonder why nobody raises the protectionist flag in this case; perhaps because the suspicions are credible?
So, while mere suspicion of working for a government is enough to ban a company, when faced with solid facts such as:

  • the NSA’s espionage activities (see the Edward Snowden revelations and Greenwald’s articles),
  • back-doors implanted by companies at state request (think of the RSA BSAFE default crypto algorithm DUAL_EC_DRBG affair, or the old FBI Magic Lantern trojan that Norton and other antivirus products did not detect),
  • back-doors implanted by the NSA on major IT vendors’ equipment by modifying hardware and software, intercepting the equipment before it reaches customers (the ANT programs), without the vendors’ agreement or knowledge; see also:

https://nex.sx/blog/2015-01-27-everything-we-know-of-nsa-and-five-eyes-malware.html

http://blog.thinkst.com/p/if-nsa-has-been-hacking-everything-how.html

 

we consider it normal and continue to trust US equipment.

Still wondering why the Chinese government does not trust Western equipment for key areas?

Another interesting example of a double standard in cyber security is the well-known recent Sony Pictures hack. The media expressed no doubt about the North Korean identity of the attackers, but few solid facts (actually none) have been presented to support it. On the other side, cyber security experts have tried to raise doubts about this quick attribution. Sony has a long history of failed cyber security protections and successful hacks; I have written about this since the first PSN network breach, but at the time nobody pointed so readily to a suspect. So why have the media identified the bad guys this time while cyber security experts still have concerns? Taia Global was probably the first company to raise public concerns about this hasty attribution, followed by other serious sources, companies and researchers. If you read the news now, doubt about the North Korea attribution is widely shared, yet in the public’s mind the guilt is clear.

We could continue with other examples. It is common to find statistics showing that the major source of cyber attacks is China, while forgetting to mention the rate of attacks that China itself faces, or to offer even a minimal explanation of why there could be so many sources there to be exploited. If you visit China you will find that mobile internet is so common that it is no surprise how easy it would be to install botnets there. Just walk down the street: you will see an incredible number of people playing with their smartphones (4G connections are the norm) and then using the computer at home. And where there are home users and bandwidth, there you have botnets.

We should probably drop the double standard and start to consider cyber security as a complex worldwide problem that needs neutral metrics to be evaluated correctly; otherwise we will base our decisions on prejudice and not facts.


Are we using a double standard in IT security? was originally published on The Puchi Herald Magazine

Security in a Virtual World


Virtualization of the data center is provoking fundamental questions about the proper place for network security services. Will they simply disappear into the One True Cloud, dutifully following applications as they vMotion about the computing ether? Will they remain as a set of modular appliances physically separate from the server computing blob? Should everything be virtualized because it can be?

By Throop Wilder
Thu, June 04, 2009 — Network World
Based on experience with large enterprises, we recommend that the network security infrastructure remain physically separate from the virtualized server/app blob. This separation allows you to maintain strong “trust boundaries” and high performance/low latency, without the loss of flexibility and adaptability of virtualized application infrastructures.
Large data centers have primarily safeguarded their servers with a well-protected perimeter and minimal internal protections. As zonal protection schemes were introduced to mitigate the unfettered spread of worms and intrusions, the natural boundaries became the divisions between the classic three tiers of Web infrastructures: Web, application and data layers.
More recently enterprises have further segmented these trust zones by service, business unit and other political criteria, yet this infrastructure does not lend itself to change. As three-tier architectures become vastly more flexible due to virtualization projects, the security infrastructure must develop its own flexibility so it is never the bottleneck. Furthermore, it must also maintain the real-time guarantees of high throughput, high connections-per-second rates and low latency.
The good news is that this is precisely what forward-thinking architects and operations teams are designing and building right now. Best of all, some of these teams are discovering that, for once, security and performance optimization appear to benefit from the same strategy. Here’s how.
There are two core principles in new security architecture designs. The first principle is to virtualize within the three layers, not across them, which forces inter-zone traffic to pass through physically separate security equipment.
The second principle is to use equipment that consolidates multiple security services that can be invoked in any combination depending on the type of boundary crossing, while maintaining performance, latency and connection rates. You can refer to this separate layer of security resources as the “second cloud.”
The concept of virtualizing within layers (such as Web, application and database layers) vs. across layers can be depicted as follows. For example, Web servers and application servers are considered to pose risks of different levels.
In Figure 1, VMs of different risk levels are on the same servers and boundary transitions between zones happen entirely inside one or more servers. In Figure 2, all Web VMs run on one physical set of servers while the application VMs run on a separate set. Boundary transitions in this model happen outside of each group of servers.
From a purely operational point of view, the mixed-layer scenario in Figure 1 lends itself to major human error. In this case, a network team hands a set of virtual LANs (VLAN) corresponding to both zones to the server team, which must correctly connect the right VLAN to the right VM. One error, and a VM ends up on the wrong VLAN and a trust boundary has been breached. With the architecture in Figure 2, the only VLAN that needs handing off to the Web layer is the one appropriate for that layer.
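The handoff error described above is easy to catch with an automated check that the VLAN attached to each VM matches the VLAN planned for that VM's zone. The zone names, VLAN IDs and assignments in this sketch are hypothetical:

# Assumed zone-to-VLAN plan handed from the network team to the server team
ZONE_VLAN = {"web": 10, "app": 20, "db": 30}

# What the server team actually configured: VM name -> (zone, attached VLAN)
vm_assignments = {
    "web-01": ("web", 10),
    "web-02": ("web", 20),   # error: a Web VM attached to the application VLAN
    "app-01": ("app", 20),
}

for vm, (zone, vlan) in vm_assignments.items():
    expected = ZONE_VLAN[zone]
    if vlan != expected:
        print(f"{vm}: zone '{zone}' expects VLAN {expected}, found VLAN {vlan} -- trust boundary breached")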
From an architectural point of view, inter-VM traffic in Figure 1 also requires the addition of security VMs to manage zone boundary transitions. Every server that runs a mix of services will require the addition of the same set of security VMs, leading to VM sprawl. The separate layers depicted in Figure 2 allow for consolidated security equipment to be placed between the layers only once. In addition, this architecture preserves separation of duties between application teams and network security teams, allowing each to perform their respective duties with fewer dependencies on the other.
Finally, a fundamental advantage of the model in Figure 2 is that there is no potential “Heisenberg effect” in which the addition of security VMs impinges on the processing capacity of the servers. The result is vastly improved security performance.
Scalable designs
The architectures depicted in Figures 1 and 2 are simplifications that don’t represent the real complexity of multiple service boundaries. Figure 3 more closely depicts the cross-boundary problem in a real world environment in which the type of information being accessed dictates different security policies.
In Figure 3, three application services may each represent different risk classes and therefore require a different combination of security services depending on which boundaries are traversed. For example, one could imagine a VM in Service 3 (Contracts) requiring access to a VM in Service 1, which stores personally identifiable information (PII) – and is therefore governed by PCI requirements.
The security for this cross-boundary transition would require multiple services (such as firewall, intrusion prevention and application firewall) chained together in a particular order. As another example, a Web server accessing the knowledgebase (KB) in Service 2 might only need lightweight firewall access control. The ability to perform per-transition servicing is critical for making security-cloud flexibility match that of the application cloud. More specifically, Figure 4 illustrates this concept.
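One way to picture per-transition servicing is as a policy table keyed by the (source zone, destination zone) pair, returning the ordered chain of services to apply. The zone names, chains and default below are hypothetical, loosely echoing the Figure 3 example:

# Hypothetical policy table: (source zone, destination zone) -> ordered service chain
SERVICE_CHAINS = {
    ("contracts", "pii_store"): ["firewall", "intrusion_prevention", "application_firewall"],
    ("web", "knowledgebase"):   ["firewall"],  # lightweight access control only
}

DEFAULT_CHAIN = ["firewall", "intrusion_prevention"]  # assumed fallback for unlisted crossings

def chain_for(src: str, dst: str) -> list:
    # Return the ordered security services for this boundary crossing
    return SERVICE_CHAINS.get((src, dst), DEFAULT_CHAIN)

print(chain_for("contracts", "pii_store"))
# ['firewall', 'intrusion_prevention', 'application_firewall']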
This architecture relies on the emergence of a new generation of high performance security equipment (depicted as the green oval) which is able to consolidate and deliver multiple security services from a common platform that enables service selection decisions. Sound familiar? It is the same value proposition delivered by application-services virtualization, but applied to security services.
Most importantly, this equipment can deliver the correct sequence or “chain” of services with the performance and latency guarantees so critical to the overall end-to-end user experience, while preserving a much simpler architecture and retaining the trust boundaries required for a secure infrastructure.
As engineers begin to experiment with various network security architectures, they will find that the tier/zone-based implementation is a desirable place to start. Not only does it yield excellent performance and flexibility, but it works within the confines of slower-changing organizational/political boundaries. It also lets server infrastructures morph along multiple axes without compromising the highest standards of security.
Wilder is vice president of corporate strategy at Crossbeam Systems.
http://www.cio.com/article/494231/Security_in_a_Virtual_World?page=3&taxonomyId=1448


Security in a Virtual World was originally published on The Puchi Herald Magazine
