The Puchi Herald Reblog

A reblogging blog

GDPR and the technology market

Question: will the new privacy policies and laws impact the technology market?

This is an interesting question to ask ourselves: whether we are consumers or vendors in the technology market, the new privacy regulations can heavily affect our behaviour and the market itself, especially given the spread of new technologies (from cloud to IoT, from Industry 4.0 to big data, to name the most marketed).

So let us try to understand the implications of this new focus on privacy and data protection.

First of all we should try to understand what we are talking about.

Privacy, GDPR and the rest.

Privacy: the state of being alone, or the right to keep one’s personal matters and relationships secret.

In today’s environment the presence of data-related technology is pervasive: from business to personal life, technology plays a big part in our lives. Data-related technology means we use technologies able to manipulate information: information is collected, changed, communicated and shared in the form of data, bits and bytes that describe our job, our business, our personal life.

While in the past privacy was mainly a physical issue, and legislation therefore focused on those aspects, the growing presence of data collection and sharing makes people realize there is a new abstraction layer to privacy: it is no longer about being alone in a confined physical space, but about an undefined, borderless digital space.

Email, blogs, social networks, chat, e-commerce, electronic payments, smartphones: all this and more has shifted the very perception of privacy from a simple concept to something much harder to define.

Lawmakers and consumers started to deal with those issues in recent years, while the enterprise and technical world remained almost frozen, waiting for indications. The first signal that this would be a wake-up call for enterprises was the end of the Safe Harbour agreement: privacy was no longer a secondary issue, even for the economy.

The latest development is the European Union’s General Data Protection Regulation (GDPR), which comes into effect in May 2018 and has far-reaching implications that extend well beyond the EU.

Businesses that fail to meet the new mandates aimed at protecting personal data face severe consequences: they can be fined up to €20 million, or 4 percent of global annual turnover, whichever is higher, a cost that makes this regulation impossible to ignore.

And Europe is not alone: other areas of the world are also moving toward a more cautious approach to data privacy. While it is not yet clear what the new US administration’s approach to the subject will be, there is no doubt that data privacy will be a major issue in the coming years; how this will impact business, though, is not yet clear.

What is certain is that GDPR will force companies to deal with a tremendous amount of data to be protected. Any data used to make inferences linked, however tenuously, to a living person is personal data under GDPR. Cookie IDs, IP addresses, any device identifier? All personal data. Even metadata with no obvious identifier is caught under the GDPR’s definition of personal data. Truth be told, such assertions are not entirely new. The difference under GDPR is that they will be enforced, and non-compliance fined.

Today swathes of business practices unlocking data monetization rely upon data not being considered personal, so they apply weak consent, onward transfer and data reuse concepts. These models are going to change, either by choice or by obligation.

Data Privacy, Data Protection and Cyber Security

One aspect not yet fully perceived and understood is the correlation between data privacy, data protection and cyber security. The requirements that oblige companies to respect data privacy law are intrinsically bound to an explicit request for data protection and, therefore, cyber security.

GDPR clearly states that data should be fairly processed and protected: the implications are not only procedural, inside the enterprise, but also technical, in terms of data manipulation, retention, storage and security.

Recent security outbreaks such as the ransomware waves are an example of how basic cyber security threats can directly impact this area, as are the common and well-known cyber attacks aimed at data exfiltration.

This is a growing phenomenon and is affecting not only the classic online services (think of the attacks on dating sites to collect usernames and passwords) but also, extensively, the healthcare industry.

While in the past those outbreaks might have been a relatively minor issue, the new GDPR fine structure can hit any company hard, regardless of its sector, and departments that never considered those issues a business imperative, such as marketing or human resources, will face a difficult transition in terms of awareness, policies to implement and technology approach.

It is easy to forecast that this situation will shape the technology market in different areas over the next years.

Impact on the technology market

When we talk about the technology market we face different aspects: “technology” as a term can cover a wide range of things. We can talk about hardware vendors or software vendors, about service vendors (cloud, CRM or whatever you prefer), enterprise IT or carrier hardware providers, security vendors, and end-user hardware providers (such as smartphone makers).

The recent trend is to aggregate functions and offerings, so those areas overlap within the same company, although they are not often integrated.

Since the whole industry will have to face the new privacy requirements, we should expect an increase in requests for data privacy expertise hitting the market, and a growing demand for IT solutions that help companies manage the requirements. This could, for example, give a small impulse to historically neglected areas such as DLP and data classification solutions.

Some advance and effort will probably also be put into more traditional areas such as backup.

A heavier impact will be seen in the growing online market, with the need to protect not only users’ privacy but also the economic transactions; content providers, social and gaming platforms will be heavily impacted too.

In a second phase we will probably see renewed interest in baseline security solutions, as the stakeholders will, sooner or later, realize that there is no compliance without data protection and no data protection without cyber security.

The request for expertise and consulting services will mostly be redirected outward: to technology vendors (HW/SW vendors such as Cisco, HP, Huawei, SAP and Microsoft; service vendors such as the cloud providers Azure, AWS and Google, but also app stores and online CRM providers), to consulting companies and to technology integrators.

On the other hand, technology vendors will face a strange situation where they are asked at the same time to provide solutions compliant with the new rules, to be the drivers of the new requirements and implementations (public-private partnership basically means this), and to implement solutions to protect themselves in several areas:

Product and Services development

Here vendors will have to start developing products and services with data protection as a major concern. The impact on cloud and services, where data protection is easy to identify, is clear, but the hardware side will face issues too. Although it may seem trivial, remember the problems related to GPS tracking on Apple and, to some extent, Android devices some years ago. The privacy implications of products can be wider than expected, since we have to protect not only the data per se but also the metadata (this is the wider reach of GDPR and the new privacy regulations).

We usually tend not to consider system logs, for example, a privacy problem, but they are one if they contain data that can point to a physical person and be used to track that person’s behaviour somehow.

Firewall and router logs, for example, could be used to determine what someone is doing online, and therefore can expose information that falls within GDPR’s reach. Apparently a minor feature, but the truth is that metadata too is within the scope of GDPR.
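As a purely illustrative sketch (an assumption of one possible mitigation, not a statement of what GDPR mandates), identifiers like source and destination addresses can be pseudonymised before logs are stored, so security analytics can still correlate traffic while the raw identifiers are not kept in clear. The field names and the keyed-hash approach below are hypothetical:

import hmac
import hashlib

# Secret key kept outside the logging pipeline (e.g. in a key store); the
# value below is a placeholder. Rotating it breaks linkability between old
# and new pseudonyms.
PEPPER = b"replace-with-a-secret-from-your-key-store"

def pseudonymise_ip(ip: str) -> str:
    """Return a stable, non-reversible pseudonym for an IP address.

    The same address always maps to the same token, so traffic can still be
    correlated for security purposes, but the token cannot be turned back
    into the original address without the key.
    """
    return hmac.new(PEPPER, ip.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

def scrub_log_line(line: str) -> str:
    """Replace the src=/dst= fields of a simplistic firewall log line."""
    fields = []
    for field in line.split():
        if field.startswith(("src=", "dst=")):
            key, _, value = field.partition("=")
            field = f"{key}={pseudonymise_ip(value)}"
        fields.append(field)
    return " ".join(fields)

print(scrub_log_line("2018-05-25T10:00:00Z src=192.0.2.10 dst=198.51.100.7 action=allow"))

The design choice here is the usual trade-off: a keyed hash keeps logs useful for correlation while removing the direct identifier, but it is still pseudonymisation, not anonymisation, and the key itself must be protected.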

Privacy by Design and Privacy Enhancing Technology will be mandatory components of any product or service development.

Marketing and Sales

Marketing (and sales) has always been considered agnostic towards technology, but the ultimate scope of marketing is to get in touch with the market, which means customers and, ultimately, people. Marketing activities will feel a huge impact from GDPR requirements both in operations, since it is up to marketing to manage a large amount of data coming from outside the company, and in communication.

Technology vendors, somehow, will be expected to lead and drive the request, both as consultants and by example. A breach or misinterpretation of GDPR guidance will severely impact the business from a brand point of view and undermine vendor credibility.

Internal protection

As for any other company, there will be a direct impact on the business operations of any vendor in the technology field. But in this case the problem does not stop at standard cyber security procedures: since technology vendors enter, somehow, almost directly into the customer’s IT or data-processing infrastructure, the request will be to implement an end-to-end protection system that includes GDPR compliance and cyber security. This will require technology vendors to operate on:

  1. supply chain
  2. production and vulnerability disclosure
  3. product and service delivery

All three areas are still trying to develop standards and good practices, although something is moving.

So what are the changes expected under the new regulation?

There are around a dozen headline changes which technology companies should be aware of.

Some of the key areas include:

  • Privacy by design and Privacy enhancing technology – privacy by design calls for the inclusion of data protection from the outset of system design. Companies must also only hold and process data which is absolutely necessary.

Privacy enhancing technology (PET) and Privacy by Design (PbD) are obligatory and mandated requirements under the GDPR. There remains no generally accepted definition of PET or PbD, but PbD is considered an evidencing step for software development processes to take account of privacy requirements. So the incorporation of what can broadly be defined as PET in such solutions represents PbD.

Two particular PET techniques that control downside and enable upside risk are differential privacy & homomorphic encryption.

  • Differential privacy counters re-identification risk and can be applied to anonymous data mining of frequent patterns. The approach obscures data specific to an individual by algorithmically injecting noise. More formally: for a given computational task T and a given value of ϵ there will be many differentially private algorithms for achieving T in an ϵ-differentially private manner. This enables computable optima of privacy and data utility to be defined, by modifying either the data (the inputs to query algorithms), or the outputs of the queries, or both (a minimal noise-injection sketch appears after this list).
  • Searchable/homomorphic encryption allows encrypted data to be analyzed through information releasing algorithms. Considered implausible only recently, advances in axiomatizing computable definitions of both privacy and utility have enabled companies such as IBM & Fujitsu to commercially pioneer the approach.
  • Data processors – those who process data on behalf of data controllers, including cloud-providers, data centres and processors. Liability will extend to these and businesses that collect and use personal data.
  • Data portability: Empowers customers to port their profiles and segmentation inferences from one service provider to another. This is a reflection by lawmakers that data is relevant to competition law, whilst not conceding an imbalance between a company’s ability to benefit from data at the expense of us all as citizens.
  • Data protection officers – internal record keeping and a data protection officer (DPO) will be introduced as a requirement for large scale monitoring of data. Their position involves expert knowledge of data protection laws and practices, and they will be required to directly report to the highest level of management.
  • Consent – explicit permission to hold any personal data in electronic systems will become mandatory. It will no longer be possible to rely on implied consent with an opt-out option. Customers consent to privacy policies that change; being able to prove which contract was agreed to, in court or to a regulator, means registration time-stamping and tamper-resistant logs become de rigueur. As we move into an opt-in world of explicit consent and ubiquitous personal data, data transmissions beyond a website visit must be explicitly permissioned and controlled. In this world, default browser values de-link machine identifiers from search queries; in other words, online advertising to EU citizens is in line for fundamental change. And given the particular regulatory emphasis on profiling, explicit consent will require loyalty programs to differentiate between general and personalized marketing consents. Those consent flags must cascade through registration, reporting and analysis, targeting and profiling, contact-center operations and all other processes that handle such data.
  • Breach notifications – a breach, where there is a risk that the rights and freedoms of individuals could be compromised, must be reported within 72 hours of being identified. The relationship between breach notification and vulnerability disclosure is underestimated: while for an end user the two aspects seem unrelated, the impact on vendors could be higher for at least a couple of reasons:
    • The breach notification could expose the vendor as the main source of the breach itself due to lack of vulnerability management and disclosure.
    • The victim could seek liability against the vendors whose vulnerabilities caused the breach, redirecting part of the costs to them.
  • Right to access – data subjects will now have the right to obtain confirmation from you of what personal data is held concerning them, how it is being processed, where and for what purpose.
  • Right to be forgotten – data subjects will now have the right to be forgotten, which entitles them to have you ensure that information is deleted from every piece of IT equipment, portable device, server back-up and cloud facility. A framework to comply with this obligation would include the following steps:
    • Spot identifiers which tie together datasets, e.g: machine identifiers link together our social media experiences;
    • Prescribe how re-identifiable data flows in and outside the organization;
    • Document a scalable process to overwrite identifiers in all datasets where re-identification can be established, upon the validated request of a user, and
    • Third party contracts and SLAs should be adjusted to ensure compliance with validated requests.
  • Data Bookkeeping: Field level data, linked to an identifier, flows across geographies and legal entities, processed by machines and people. Organizations will account for these flows with evergreen reporting. It stands to reason that these flows will be threat-modeled for integrity and confidentiality so controls can be readily evidenced upon request.
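To make the noise-injection idea mentioned in the differential privacy point above concrete, here is a minimal sketch of the classic Laplace mechanism applied to a counting query; the toy dataset and the values of ϵ are illustrative assumptions, not a production design:

import math
import random

def laplace_noise(scale: float) -> float:
    """Draw Laplace(0, scale) noise via inverse-transform sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon: float) -> float:
    """Return an epsilon-differentially private count of matching records.

    A counting query has sensitivity 1 (adding or removing one person changes
    the true count by at most 1), so Laplace noise with scale 1/epsilon
    satisfies epsilon-differential privacy.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical toy dataset: (user_id, visited_health_site) pairs.
records = [(i, i % 7 == 0) for i in range(1000)]
for eps in (0.1, 1.0):
    noisy = dp_count(records, lambda r: r[1], eps)
    print(f"epsilon={eps}: noisy count = {noisy:.1f}")

The smaller ϵ is, the more noise is injected and the stronger the privacy guarantee, at the cost of data utility; choosing ϵ is exactly the privacy/utility optimum the text above refers to.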

 

GDPR impact

Privacy regulations such as GDPR, and the growing awareness and concern about data privacy and security, are tied to the expanding presence in everyday life and business of smart mobile devices able to process data, the growing online market, consolidated trends such as cloud services, and newcomers such as IoT.

The technology market faces this transition on the front line, and will see the impact of the new regulations and of customer reactions in several ways. This is both an opportunity and a problem: implementing the new mandatory requirements will affect all areas, from design and production to sales and delivery. But it will also open new business in consulting and in the technologies that support GDPR and privacy compliance, a market where data analysis, artificial intelligence and other high-end technologies could provide a competitive, price-insensitive advantage over the consolidated technology market.

The key success factor is to embrace this change and drive it: acquiring the needed competences internally, implementing the right corrections and driving the needed improvements in the products and services provided.

Future trends will see a prevalence of technologies related to data processing, and of services related to data rather than products. The new data paradigm is already visible today, for example in the big data market (take data lake implementations as an example). For the technology market this will mean a focus on data science, which will pose a new and somewhat unpredictable relationship with privacy regulations.

GDPR Risks and “Data Science”

The term data science describes a process that runs from data discovery, to providing access to data through technologies such as Apache Hadoop (open source software for large data sets) in the case of big data, to distilling the data through architectures such as Spark, in-memory and parallel processing. That data science creates value is understood. What is not understood are the risks it exposes investors to under the GDPR, of which there are principally three:

Risk 1: The Unknown Elephant in the Room – Unicity: a general misunderstanding in monetization strategies is that stripping away identifiers of a data model renders the data set anonymous. Such a belief is flawed. So-called anonymous data sets can often, without implausible effort, be re-identified. Unicity is a measure of how easy it is to re-identify data. It quantifies additional data needed to re-identify a user. The higher a data set’s unicity, the easier it is to re-identify. Transactional and geo-temporal data yield not only high monetization potential, they carry statistically unique patterns which give rise to high unicity.
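As a purely illustrative sketch of what unicity measures (following the common definition as the fraction of users uniquely pinned down by p points drawn from their own trace; the toy traces below are an assumption), one might estimate it like this:

import random

def estimate_unicity(traces: dict, p: int, trials: int = 200) -> float:
    """Estimate unicity: the fraction of users uniquely re-identified by
    p points sampled from their own geo-temporal trace.

    `traces` maps user_id -> set of (location, hour) tuples.
    """
    users = list(traces)
    unique_hits = 0
    for _ in range(trials):
        user = random.choice(users)
        points = random.sample(sorted(traces[user]), min(p, len(traces[user])))
        matches = [u for u, trace in traces.items() if all(pt in trace for pt in points)]
        unique_hits += matches == [user]
    return unique_hits / trials

# Hypothetical toy data: 500 users, 20 (cell, hour) observations each.
random.seed(1)
traces = {
    uid: {(random.randrange(50), random.randrange(24)) for _ in range(20)}
    for uid in range(500)
}
for p in (2, 4):
    u = estimate_unicity(traces, p)
    print(f"points known: {p}  estimated unicity: {u:.2f}")

The higher the estimate, the fewer outside observations an attacker needs to single a person out of the "anonymous" set, which is exactly why stripping direct identifiers is not enough.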

Risk 2: Relevance & Quality: Income, preferences and family circumstances routinely change, and processing preference data on children is difficult to justify ethically. While this creates a problem for predictive analytics, that data and the inferences it engenders can be considered inaccurate at a given point in time, which creates a GDPR cause of action. Data quality needs to stay aligned to business objectives.

Risk 3: Expecting the Unexpected: When data science creates unexpected inferences about us, it tends to invalidate the consent that allowed data to be captured in the first place, which, again, is a big deal. Data collected today, particularly from mobile devices, is subject to a constant stream of future inferences that neither the customer nor the collector can reasonably comprehend. Consider a car-sharing app that can model propensity for one-night-stands from usage patterns. While that data may not result in propositions today, the market will consider upside risk/option value to have been created (the market still does not seem to believe in GDPR impact), but this incremental data coming into existence creates downside risk (such data is difficult to find a legal-basis for, given the vagaries of a given consented disclosure).

More generally, the problem of negative correlations is brought to the fore by algorithmic flaws, biased data and ill-considered marketing or risk practices, the enduring example being U.S. retailer Target’s predictive campaigns targeting pregnant teenagers, spotted by their parents. These are examples of a new form of systemic control failure, leading to potentially actionable GDPR claims.

 


GDPR and the technology market was originally published on The Puchi Herald Magazine

Watching the new presidents’ acts and talks (and the possible future outlook) And I am scared

does the floor color make a difference here? really?

I usually do not write about political stuff here, except on rare occasions, but hey, this is my blog after all, so I can express my feelings and thoughts.

I was watching today some videos of USA president-elect Donald Trump and his approach to the news (he would tweet: fake news, sad!) and, honestly, I am scared to death.

I do not like Mr Trump. USA citizens elected him, so I have to cope with that, but this does not mean I have to like him. I find most of his tweets questionable, his cult of personality disturbing, his approach to the media alarming.

This does not mean, of course, that the media are always right, but it is unthinkable to me that in an open democracy a president can consider communication a one-way affair and label anyone who criticizes him “fake news”, a “bad person”, “untrustworthy” or whatever Mr Trump considers worth putting in a tweet.

Let’s say the first days of his activity made me more worried than ever.

Take the silly polemic over the number of people watching his ceremony live: more than Obama’s? Fewer than Obama’s? The point is that he could have managed the whole affair differently, for my taste; making false statements was not the best presentation to the world. The whole Trump administration seems to suffer from a severe detachment from the news, funny for a man who owes so much to the media.

Will Mr Trump make America great again? I am not so sure and, honestly, I have not understood what America being great again means, nor what price the world will have to pay for his vision. For sure, at the moment I see a clear detachment from actual data (compare real economic and crime figures in the USA with Mr Trump’s claims) and an unwillingness to respond to any doubt. He is self-referential; he is the unquestionable metric for truth, ethics and results.

I have seen this in the recent past, from President Duterte in the Philippines, Zuma in South Africa and Turkey’s President Erdoğan, and in a less recent past from Benito Mussolini or Hitler.

What do they have in common? Extreme nationalism, a cult of personality, hatred for the free press, self-referentiality.

I am not saying here that Mr. Trump will be like Mussolini; I am saying that there is a common pattern, and when I listen to the absurd justifications presented for the false statements about the crowd at Mr Trump’s ceremony, I am frankly scared to death.

But Trump, Erdogan, Duterte are a symptom of a bigger problem

We are on the verge of a fourth industrial revolution, but people all around the world seem inclined to close themselves within their borders in an attempt to protect themselves from the inevitable change. Alas, the change will eventually come anyway, and this is scary. Protectionism and nationalism are the first answers to change. But in the new world we are shaping, what will the consequences be?

If USA citizens want to close off their country (build the wall, remember), that is their right, although not necessarily in their interest. Sure, theirs is a big market, but it is not self-sufficient. Without selling their goods abroad, how badly will the USA economy be affected? Why should a Mexican then buy a US car instead of a European, Japanese, Chinese or Indian one? Or why should we take a US airline unless we are forced to? (I actually fly Emirates when I can.)

But also, why should we buy Apple or use Google’s Android, and the whole list of new technologies that will shape the new economy? Because this is the point: the new industrial revolution will put its roots in data sharing. We will move from products to services, and to justify the investment needed we will have to scale at an international level.

Hate calls hate, racism calls racism, violence calls violence, disrespect calls disrespect. I know you do not see it in your leader; in the end you have to support him, because he is what you created with your own hands (your vote) to cover your fears. But you should try to see, in the reactions of others, where this is going.

Like it or not, this new economy will force us to change our approach to jobs: new jobs will come while others will die. Alas, the trend is moving away from manual jobs toward more skilled ones, more focused on the new technologies. Not only engineering, but a whole new set of knowledge workers who will reshape the current middle class.

But we are in the middle of this change; we cannot see the light yet, we just see the scary shadows of the tunnel. The good news is that all the industrial revolutions increased the number of workers, but at the same time they were shaped by crises and, in the worst scenario, wars. We are experiencing the economic crisis right now (it is not over, I am afraid), and we are, as people did in the past, addressing the new with old recipes.

In a hyperconnected world, attempts to leverage censorship are questionable. Will China, North Korea, Saudi Arabia and Iran become the new reference points for the one-time flag bearer of freedom of speech?

This is not just a USA issue: the rise of populism in Europe and in the rest of the world is a sign that this feeling is running through the populations of the biggest democracies (where you do not have democracy, well, you do not have the right to question the government and its rule).

The whole Brexit rhetoric has been based on this kind of assumption (regain control of our destiny, of our nation, of our economy, so we will be bigger, better, stronger again), which is not so different from the Front National or Lega Nord statements, or Grillo’s claim that a “strong man” is needed.

What a twisted world it has become: ironically, the champion of capitalism at the moment is China, with its free trade and free commerce slogans, while we owe to Russia the safety of someone who disclosed US attempts to hack millions of US and worldwide citizens.

Willingly or not, the change will come, no matter what. The point is how much we will have to suffer because of this resistance. And remember: each time you do not drive the change, the change drives you.

Hope for the best but prepare for the worst. At the moment I am scared because I see the end of an old era trying to strike its last shots, and they will hurt.


Watching the new presidents’ acts and talks (and the possible future outlook) And I am scared was originally published on The Puchi Herald Magazine

What is Openstack

This is a picture of the Nebula cloud computing container located at NASA Ames Research Center. (Photo credit: Wikipedia)

OpenStack is an open source platform for creating and managing large groups of virtual private servers in a cloud computing environment. The platform supports interoperability between cloud services and allows businesses to build and deploy private cloud services in their own data centers.

The National Aeronautics and Space Administration (NASA) worked with Rackspace, a managed hosting and cloud computing service provider, to develop OpenStack. Rackspace donated the code that powers its storage and content delivery service and production servers. NASA contributed the technology that powers its high performance computing, networking and data storage cloud service.

OpenStack has a modular architecture that currently has eleven components:

  • Nova – provides virtual machines (VMs) upon demand.
  • Swift – provides a scalable storage system that supports object storage.
  • Cinder – provides persistent block storage to guest VMs.
  • Glance – provides a catalog and repository for virtual disk images.
  • Keystone – provides authentication and authorization for all the OpenStack services.
  • Horizon – provides a modular web-based user interface (UI) for OpenStack services.
  • Neutron – provides network connectivity-as-a-service between interface devices managed by OpenStack services.
  • Ceilometer – provides a single point of contact for billing systems.
  • Heat – provides orchestration services for multiple composite cloud applications.
  • Trove – provides database-as-a-service provisioning for relational and non-relational database engines.
  • Sahara – provides data processing services for OpenStack-managed resources.

OpenStack, which is freely available under the Apache 2.0 license, is often referred to in the media as “the Linux of the Cloud” and is compared to Eucalyptus and the Apache CloudStack project, two other open source cloud initiatives.
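As a purely illustrative sketch of how these components are consumed programmatically, here is a minimal example using the openstacksdk Python client; the clouds.yaml entry name "mycloud" is an assumption:

# Minimal sketch using the openstacksdk client; assumes a clouds.yaml entry
# named "mycloud" holding valid Keystone credentials.
import openstack

conn = openstack.connect(cloud="mycloud")  # Keystone authenticates this connection

# Nova: list virtual machines in the current project
for server in conn.compute.servers():
    print("server:", server.name, server.status)

# Glance: list available images
for image in conn.image.images():
    print("image:", image.name)

# Neutron: list networks visible to the project
for network in conn.network.networks():
    print("network:", network.name)

The same connection object exposes the other services listed above (block storage, identity, orchestration and so on) through analogous proxies.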

OpenStack officially became an independent non-profit organization in September 2012. The OpenStack community, which is overseen by a board of directors, comprises many direct and indirect competitors, including IBM, Intel and VMware.


What is Openstack was originally published on The Puchi Herald Magazine

The IoT Files – Business Models

OK, we talked about security, privacy and infrastructure in the previous posts. Here we are.

Do we really think that IoT will work in the current business model environment?

Well, I have to tell you: probably it will not.

The key question for IoT is whether it will be able to generate revenue. The problem is that revenue must cover the infrastructure costs, and since we have already seen that those costs will be big, as will the costs of security and of the privacy implications, selling devices will not be enough.

And the device itself, per se, may not justify the connection costs the user will have to bear.

So to make IoT an attractive success, and not a marketing bubble, we should rethink our business models at various levels.

Old Telco model is out

The first to pay for the revolution will be the telco providers. Telephone companies are already struggling to survive digitalization: the expensive infrastructure created to deal with voice communications is rapidly becoming obsolete, and new digital providers are eroding the classic telco area. Think of the companies that go over the top with data (WeChat, Skype, WhatsApp, Line, just to name a few) and also offer voice services.

The telco model is so outdated that even big content providers such as Google and Facebook are trying to overcome its limitations by starting to consider offering connectivity themselves.

But the telco model was built over years, and the generations of managers who grew up in it strongly resist rethinking their role.

The New Data Paradigm

The truth is that the new business models should start from the new data paradigm. It is data that matters; the rest is just a companion.

We keep calling them phones, but smartphones are used 99% of the time to transmit data, digital data, and every day less and less to make voice calls. The reason is that voice communication cannot provide the same level of experience that data can.

So data will be more and more important, in terms of quantity and quality. This is already a reason for concern, but we should start to learn how to deal with it.

A big mistake would be to consider data a gray mass of bits, all the same; in that (old) model we can simply charge you for the bandwidth you consume. But in IoT data is not all the same, and we will not be able to justify charging the same (and offering the same service level) for medical data and for chat.

Likewise, in critical segments such as automotive or SCADA control, we cannot reason simply in terms of the amount of data.

So data will require a new approach, because all data is important, but some data is more important than other data.

From Product to services

Besides, data is nothing if not associated with a service. In the IoT environment, services that manipulate data will replace products.

If data is the object of our interest, then the product is just a medium to obtain the service; the cultural shift is from a box-moving environment to a service one.

But this requires a different approach to selling, measuring and marketing all of this. Isn’t that a big change of business model?

The Big Marketing Imperative

Marketing will become way more important, because it will be mandatory to understand the mood of the customers to offer and modify services accordingly.

But at the same time marketing will be the entity most interested in collecting and analyzing data, so marketing will become even more important than the finance guys; something I would really like to see, a marketing manager shouting at a CFO in a board meeting.

Roaming, connectivity and other hidden costs

Meanwhile, in the transition to IoT we will have to face how hidden costs could impact the new world.

Think, for example, of roaming costs: I travel a lot, and when abroad I can be asked to pay up to 18 euros for 1 MB of data. This will not be possible in IoT, and it is basically unthinkable even now.

I do what everyone else does in this case: I do not use data roaming and I try to find Wi-Fi hotspots able to provide the data connectivity I need.

Or I buy a new SIM card wherever I am.

But if I use dozens of different devices this becomes impractical. A cost is not just the money you pay, but also the value you lose for some reason; basically, every time I cannot use my devices the way I want, it is a cost, a hidden cost, that sooner or later will have to be taken into consideration.

All those hidden costs have to be taken into account in a new IoT business model.

B2B, B2C and more?

This could lead us to say goodbye to the old B2B and B2C characterization. IoT will require a different approach, where the interaction between consumers and businesses will be more complicated; we will probably have to go beyond B2B and B2C toward some X4Y and something more.

 

Who Pays for All This?

Everything changes, including business models, but every change creates reactions and costs. As with infrastructure, we should ask ourselves who will pay for this.

The biggest problem is that at the moment we lack knowledge of what we will face, and using the standard metrics can drive us to wrong conclusions.

But this is the subject of the last post of this introductory analysis of IoT: the cultural impact of IoT.


The IoT Files – Business Models was originally published on The Puchi Herald Magazine

Security in a Virtual World


Virtualization of the data center is provoking fundamental questions about the proper place for network security services. Will they simply disappear into the One True Cloud, dutifully following applications as they vMotion about the computing ether? Will they remain as a set of modular appliances physically separate from the server computing blob? Should everything be virtualized because it can be?

By Throop Wilder
Thu, June 04, 2009 — Network World
Based on experience with large enterprises, we recommend that the network security infrastructure remain physically separate from the virtualized server/app blob. This separation allows you to maintain strong “trust boundaries” and high performance/low latency, without the loss of flexibility and adaptability of virtualized application infrastructures.
Large data centers have primarily safeguarded their servers with a well-protected perimeter and minimal internal protections. As zonal protection schemes were introduced to mitigate the unfettered spread of worms and intrusions, the natural boundaries became the divisions between the classic three tiers of Web infrastructures: Web, application and data layers.
More recently enterprises have further segmented these trust zones by service, business unit and other political criteria, yet this infrastructure does not lend itself to change. As three-tier architectures become vastly more flexible due to virtualization projects, the security infrastructure must develop its own flexibility so it is never the bottleneck. Furthermore, it must also maintain the real-time guarantees of high throughput, high connection per second rates and low latency.
The good news is that this is precisely what forward-thinking architects and operations teams are designing and building right now. Best of all, some of these teams are discovering that, for once, security and performance optimization appear to benefit from the same strategy. Here’s how.
There are two core principles in new security architecture designs. The first principle is to virtualize within the three layers, not across them, which forces inter-zone traffic to pass through physically separate security equipment.
The second principle is to use equipment that consolidates multiple security services that can be invoked in any combination depending on the type of boundary crossing, while maintaining performance, latency and connection rates. You can refer to this separate layer of security resources as the “second cloud.”
The concept of virtualizing within layers (such as Web, application and database layers) vs. across layers can be depicted as follows. For example, Web servers and application servers are considered to pose risks of different levels.
In Figure 1, VMs of different risk levels are on the same servers and boundary transitions between zones happen entirely inside one or more servers. In Figure 2, all Web VMs run on one physical set of servers while the application VMs run on a separate set. Boundary transitions in this model happen outside of each group of servers.
From a purely operational point of view, the mixed-layer scenario in Figure 1 lends itself to major human error. In this case, a network team hands a set of virtual LANs (VLAN) corresponding to both zones to the server team, which must correctly connect the right VLAN to the right VM. One error, and a VM ends up on the wrong VLAN and a trust boundary has been breached. With the architecture in Figure 2, the only VLAN that needs handing off to the Web layer is the one appropriate for that layer.
From an architectural point of view, inter-VM traffic in Figure 1 also requires the addition of security VMs to manage zone boundary transitions. Every server that runs a mix of services will require the addition of the same set of security VMs, leading to VM sprawl. The separate layers depicted in Figure 2 allow for consolidated security equipment to be placed between the layers only once. In addition, this architecture preserves separation of duties between application teams and network security teams, allowing each to perform their respective duties with fewer dependencies on the other.
Finally, a fundamental advantage of the model in Figure 2 is that there is no potential “Heisenberg effect” in which the addition of security VMs impinges on the processing capacity of the servers. The result is vastly improved security performance.
Scalable designs
The architectures depicted in Figures 1 and 2 are simplifications that don’t represent the real complexity of multiple service boundaries. Figure 3 more closely depicts the cross-boundary problem in a real world environment in which the type of information being accessed dictates different security policies.
In Figure 3, three application services may each represent different risk classes and therefore require a different combination of security services depending on which boundaries are traversed. For example, one could imagine a VM in Service 3 (Contracts) requiring access to a VM in Service 1, which stores personally identifiable information (PII) – and is therefore governed by PCI requirements.
The security for this cross-boundary transition would require multiple services (such as firewall, intrusion prevention and application firewall) chained together in a particular order. As another example, a Web server accessing the knowledgebase (KB) in Service 2 might only need lightweight firewall access control. The ability to perform per-transition servicing is critical for making security-cloud flexibility match that of the application cloud. More specifically, Figure 4 illustrates this concept.
This architecture relies on the emergence of a new generation of high performance security equipment (depicted as the green oval) which is able to consolidate and deliver multiple security services from a common platform that enables service selection decisions. Sound familiar? It is the same value proposition delivered by application-services virtualization, but applied to security services.
Most importantly, this equipment can deliver the correct sequence or “chain” of services with the performance and latency guarantees so critical to the overall end-to-end user experience, while preserving a much simpler architecture and retaining the trust boundaries required for a secure infrastructure.
As engineers begin to experiment with various network security architectures, they will find that the tier/zone-based implementation is a desirable place to start. Not only does it yield excellent performance and flexibility, but it works within the confines of slower-changing organizational/political boundaries. It also lets server infrastructures morph along multiple axes without compromising the highest standards of security.
Wilder is vice president of corporate strategy at Crossbeam Systems.
http://www.cio.com/article/494231/Security_in_a_Virtual_World?page=3&taxonomyId=1448


Security in a Virtual World was originally published on The Puchi Herald Magazine
