The Puchi Herald Reblog

A reblogging blog

A GDPR guide for those who don't want to hear about it: you have to start, but what do you have to do?

Have you realized yet that within a year you will have to be compliant with the new privacy rules laid down by the GDPR?

OK, OK, I get it

You have to think about moving from your:

“who cares about privacy, it's not a business thing anyway”

to

“oops, if I don't do things properly this time I risk a fine of up to 4% of my turnover. Damned @#][<> GDPR”

 

and you're starting to panic.

To tell the truth I don't think you are; in fact, I suspect you keep repeating the first sentence like a mantra. But let's pretend you have realized you are about to land in a river of trouble if you don't do something. The point is: what?

Let's see if I can help. Sure, I'd love to explain what the GDPR is and what data privacy and data protection mean, but since I know you're not interested in the why, only in the what, I'll try to be as basic as possible.

Step number one: you need a DPO

What the heck is a DPO?

The DPO is the guy who is supposed to help you handle the demands of this Mr. GDPR whom nobody has introduced to you yet, but who already seems eager to fine you and take your money.

DPO stands for Data Protection Officer. But come on, wasn't it enough to have to hire an IT manager (when you have one)?

Now, I know you'd like to call your IT manager and tell him,

“you deal with it, pick one of your people and dump this chore on him, FOR FREE”

but, unfortunately, I'm afraid it doesn't work that way.

Mr. GDPR, a wicked European insensitive to your needs, has decreed that the DPO must be a role enjoying a certain independence; the prevailing view even seems to be that it is incompatible with the role of IT manager (a German company has already been sanctioned for this, but you know, Germans are picky).

I'll tell you more: a DPO must be guaranteed the autonomy to tell you how to implement compliance with Mr. GDPR's demands, but you retain responsibility for the business decisions. Which is to say:

  • if he tells you to do “A” and you do “B” instead, you are responsible
  • if he tells you to do “B” because you told him that's what you want, you are responsible
  • in any case, the responsible one is you.

Great. I'm already sure you don't like this; if you found Mr. GDPR unpleasant, I imagine you're now starting to detest this Mr. DPO too, whoever he is.

Let me be honest with you: in Italy we are still debating what a DPO is. Some say a lawyer, some say an IT guy; I say a bit of both… but if he is a specialized lawyer he will cost you more… you know perfectly well that IT specialists can be had for a crust of bread from the brother-in-law of the brother of the friend of the deli owner's brother-in-law.

The DPO's problem is that he has to explain to you (I don't envy him) what you MUST do to secure the data you manage that may be subject to the GDPR. This requires:

  1. understanding privacy laws
  2. understanding how your data management systems are implemented
  3. understanding how your business works
  4. understanding how to protect the data given your systems, your business and the privacy laws

In short, the DPO should be a manager of proven experience, which by itself is almost unbearable, and stick his nose into your affairs.

Now you have several choices:

  1. you can use an external consultant
  2. you can hire or train an internal person
  3. you can not care (as you are doing right now) and cheerfully risk the fine.

Obviously the three options have pros and cons: if you use an external consultant you have to pay him, but you can replace him if he doesn't do what you want; if you use an internal person, you risk that he can no longer do his previous job; if you don't care, well, hope they don't catch you.

And after you've got a DPO?

Now let's suppose that in the end you put one hand on your heart and the other on your wallet and chose option 1 or 2 (I rule out 3).

What to do?

The first step is to put together all the thinking heads of your company, the IT people and the DPO, and do two things:

  1. find out where the data subject to the GDPR are and how you manage them
  2. carry out a nasty thing called a PIA (Privacy Impact Assessment), which basically means assessing the risks you run with those data

These first two steps are crucial because, let's be honest, you haven't the faintest idea of:

  • what the data are,
  • where they are,
  • how you use them,
  • what you need them for,
  • how you collect them,
  • how you manage them.

The scary thing is that Mr. GDPR will certainly force companies to take charge of an enormous amount of data to protect.

Let me be clear: any data that can be used to refer to a living person are personal data under the GDPR:

  • IDs,
  • cookies,
  • IP addresses,
  • email addresses,
  • any personal device identifier,
  • metadata without an identifier that can nevertheless be linked to a person,
  • ….

You don't know what I'm talking about? Well, YOU'D BETTER KNOW!!!!

OK, OK, I know you don't understand any of this stuff; that's exactly why they tell you to get a DPO who, somehow, must be able to talk to you and your managers and explain things to you, talk to the IT manager and explain things to him, talk to whoever handles security…

Understanding where these data are and what they are is therefore not elementary, but once you have done it you can at least move on to the second step: the PIA.
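Once mapped, that understanding could be captured in something as simple as this sketch of a data inventory record (the fields and names are my own invention for illustration, loosely inspired by the idea of a record of processing, not a GDPR-mandated template):

```python
from dataclasses import dataclass

# Minimal sketch of a data inventory entry; fields are illustrative only.
@dataclass
class ProcessingRecord:
    name: str            # what the processing is called
    purpose: str         # why the data are processed
    categories: list     # kinds of data involved
    location: str        # where the data live
    retention_days: int  # how long they are kept

    def involves_personal_data(self) -> bool:
        # A (partial, illustrative) list of identifiers the GDPR covers.
        personal = {"email", "ip_address", "cookie_id", "device_id"}
        return any(c in personal for c in self.categories)

inventory = [
    ProcessingRecord("newsletter", "marketing", ["email"], "CRM cloud", 730),
    ProcessingRecord("web logs", "security", ["ip_address"], "on-prem server", 90),
]

# List everything the PIA will have to look at.
in_scope = [r.name for r in inventory if r.involves_personal_data()]
print(in_scope)  # → ['newsletter', 'web logs']
```

Even a toy register like this forces you to answer the questions above: what the data are, where they are, and why you keep them.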

No, the PIA is not the Scrooge McDuck Intelligence Agency

Don't expect Donald Duck to come to your rescue. The PIA is a tool that helps you understand the risks you run in managing the data you are managing, including the data you didn't even know you were managing.

The PIA is there to tell you what is at risk and how to protect it. Unfortunately, the PIA requires that your DPO, your IT manager and your security officer are able to make these assessments, which implicitly means that the brother-in-law of the neighbour of the brother of the deli owner downstairs, whom you treat as your budget “guru” for all your IT needs, probably won't be enough.

In short, if in the end the PIA tells you that you're in bad shape, don't be surprised; rather, be surprised if it tells you the opposite.
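Just to give an idea of what those people will actually do, a PIA ultimately boils down to scoring each processing activity for likelihood and impact. A toy sketch (the 1–5 scales and the “high risk” threshold are invented for illustration, not taken from the GDPR):

```python
# Toy privacy-impact scoring: risk = likelihood x impact, both on a 1-5 scale.
def risk_score(likelihood: int, impact: int) -> int:
    assert 1 <= likelihood <= 5 and 1 <= impact <= 5
    return likelihood * impact

# Hypothetical activities with (likelihood, impact) guesses.
activities = {
    "newsletter": (2, 2),      # low likelihood of abuse, low impact
    "health records": (3, 5),  # sensitive data: maximum impact
}

for name, (lik, imp) in activities.items():
    score = risk_score(lik, imp)
    level = "HIGH" if score >= 12 else "low/medium"
    print(f"{name}: {score} ({level})")
```

The real work, of course, is in choosing honest numbers, which is exactly why the deli owner's brother-in-law won't do.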

…. and finally you can start working

Once you have a DPO and a PIA you can finally start reasoning about what you need. Expect quite a lot of work in terms of:

  1. how you manage your data
  2. policies and procedures to implement
  3. technologies and, pardon the bad word, IT stuff that is missing or needs to be managed for real: backups, databases, security…

The bad news is that you will have to work on it a lot.

The good news is that you might discover that managing things well can, in the end, even make you work better, although probably in a different way than before.

If you want, we can talk about it. Let me know….

 

 


Guida al GDPR per chi non ne vuole sapere: devi iniziare, ma cosa devi fare? was originally published on The Puchi Herald Magazine

GDPR and the technology market


Question: will the new privacy policies and laws impact the technology market?

This is an interesting question to ask ourselves: whether we are consumers of technology or technology vendors, privacy regulations covering the new technologies (from cloud to IoT, from Industry 4.0 to big data, to name those best known from a marketing point of view) can heavily affect our behaviour and the market.

So let us try to understand what the implications of this new focus on privacy and data protection could be.

First of all we should try to understand what we are talking about.

Privacy, GDPR and the rest.

Privacy: the state of being alone, or the right to keep one's personal matters and relationships secret.

In today's environments the presence of data-related technology is pervasive: from business to personal life, technology plays a big part in our lives. Data-related technology means we use technologies able to manipulate information: information is collected, changed, communicated and shared in the form of data, bits and bytes that describe our job, our business, our personal life.

Although in the past privacy was mainly a physical issue, and legislation therefore focused on those aspects, the increasing presence of data collection and sharing makes people realize there is a new abstraction layer of privacy, no longer related to being alone or in a confined physical space, but to an undefined, borderless digital virtual space.

Email, blogs, social networks, chat, e-commerce, electronic payments, smartphones: all this and more shifted the very perception of privacy from a simple concept to something much harder to define.

Regulators and consumers started to deal with these issues in recent years, while the enterprise and technical world remained almost frozen, waiting for indications. The first sign that this would be a wake-up call for enterprises was the end of the Safe Harbour agreement: privacy was no longer a secondary issue, even for the economy.

The latest development is the European Union's General Data Protection Regulation (GDPR), which comes into effect in May 2018 and has far-reaching implications that extend well beyond the EU.

Businesses that fail to meet the new mandates aimed at protecting personal data face severe consequences. They can be fined up to €20 million or 4 percent of global revenues, whichever is higher: a cost that makes this regulation impossible to ignore.

And other areas of the world, not only Europe, are moving toward a more cautious approach to data privacy. While it is not yet clear what the new USA administration's approach to the subject will be, there is no doubt that data privacy will become a major issue in the coming years; how this will impact business, though, is not yet clear.

What is certain is that the GDPR will force companies to deal with a tremendous amount of data to be protected. Any data used to make inferences linked, however tenuously, to a living person is personal data under the GDPR. Cookie IDs, IP addresses, any device identifier? All personal data. Even metadata with no obvious identifier is caught by the GDPR's definition of personal data. Truth be told, such assertions are not entirely new. The difference under the GDPR is that they will be enforced, and non-compliance fined.

Today swathes of business practices unlocking data monetization rely on data not being considered personal, so they apply weak consent, onward transfer and data reuse concepts. These models are going to change, either by choice or by obligation.

Data Privacy , Data Protection and Cyber Security

One aspect not yet completely perceived and understood is the correlation between data privacy, data protection and cyber security. The requirements that force companies to respect data privacy law are intrinsically bound to the explicit request for data protection and, therefore, cyber security.

The GDPR clearly states that data should be fairly processed and protected: the implications are not only procedural, inside the enterprise, but also technical, in terms of data manipulation, retention, storage and security.

Recent security outbreaks such as those related to ransomware are an example of how basic cyber security threats can impact this area directly, as are common and well-known cyber attacks aimed at data exfiltration.

This is a growing phenomenon, affecting not only classic online services (think of the classic dating-site attacks to collect usernames and passwords) but also, extensively, the healthcare industry.

While in the past such outbreaks may have been a relatively minor issue, the new GDPR structure of fines can hit any company heavily, regardless of its sector; and some departments that never considered these issues a business imperative, such as marketing or Human Resources, will face a difficult transition in terms of awareness, policies to implement and technology approach.

It is easy to forecast that this situation will shape in the next years the technology market in different areas.

Impact on the technology market

When we talk about the technology market we face different aspects; “technology” as a term can cover a wide range of things. We can talk about hardware vendors or software vendors. We can talk about service vendors (cloud, CRM or whatever you like), IT enterprise or carrier hardware providers, security vendors, and end-user hardware providers (such as smartphone makers).

The recent trend is to aggregate functions and offerings, making those areas overlap within the same company, although often without real integration.

Since the whole industry will have to face the new privacy requirements, we can expect an increase in requests for data privacy expertise hitting the market, and a growing demand for IT solutions that help companies manage the requirements. This could, as an example, give a small impulse to historically neglected areas such as DLP and data categorization solutions.

Some advance and effort will probably also be put into more traditional areas such as backup.

A heavier impact will be seen in the growing online market, with the need not only to protect users' privacy but also to secure economic transactions; content providers and social or gaming platforms will be heavily impacted too.

In a second phase we will probably see renewed interest in baseline security solutions, as stakeholders will sooner or later realize that there is no compliance without data protection, and no data protection without cyber security.

The request for expertise and consulting services will mostly be redirected to technology vendors (here considering HW/SW vendors such as Cisco, HP, Huawei, SAP and Microsoft; service vendors such as the cloud providers Azure, AWS and Google, but also app stores and online CRM providers), consulting companies and technology integrators.

On the other hand, technology vendors will face a strange situation where they will be asked to provide solutions compliant with the new rules, to be the driver of the new requirements and implementations (public-private partnership basically means this), and to implement solutions to protect themselves in different areas:

Product and Services development

Here vendors will have to start developing products and services with data protection as a major concern. The impact on cloud and services, where data protection is easily identified, is clear, but the hardware product side will face issues too. Although it can seem trivial, remember the problem related to GPS tracking that hit Apple and, to some extent, Android some years ago. The privacy implications of products can be wider than expected, since we have to protect not only the data per se but also the metadata (this is the wider reach of the GDPR and the new privacy regulations).

We usually tend not to consider system logs, as an example, a privacy problem, but in effect they are, if they contain data that can point to a physical person and be used to somehow track that person's behaviour.

Firewall and router logs, as an example, could be used to determine what someone is doing online, and can therefore expose information that falls within the GDPR's realm. Apparently a minor point, but the truth is that metadata too are covered by the GDPR.
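One common mitigation for logs, sketched below under assumptions (the log format and the keyed-hash choice are mine, not a GDPR prescription), is to pseudonymize identifiers such as IP addresses before storage:

```python
import hmac
import hashlib
import re

# Hypothetical secret key ("pepper"), kept separate from the logs themselves.
SECRET_KEY = b"rotate-me-regularly"

def pseudonymize_ip(ip: str) -> str:
    # Keyed hash: the same IP always maps to the same token, so log lines stay
    # correlatable for security analysis, but the IP cannot be recovered
    # without the key.
    return hmac.new(SECRET_KEY, ip.encode(), hashlib.sha256).hexdigest()[:12]

def scrub_log_line(line: str) -> str:
    # Replace every IPv4 address in the line with its pseudonym.
    ipv4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
    return ipv4.sub(lambda m: pseudonymize_ip(m.group()), line)

line = '203.0.113.42 - - "GET /index.html" 200'
print(scrub_log_line(line))  # IP replaced by a stable 12-hex-char token
```

Whether pseudonymized data fall outside the GDPR's scope depends on whether re-identification is still feasible, so this reduces risk rather than eliminating it.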

Privacy by Design and Privacy Enhancing Technology will be mandatory components of any product or service development.

Marketing and Sales

Marketing (and/or sales) has always been considered agnostic towards technology, but the ultimate purpose of marketing is to get in touch with the market, which means customers and, ultimately, people. Marketing activities will see a huge impact from the GDPR requirements both in terms of operations, since it is up to marketing to manage a large amount of data coming from outside the company, and of communication.

Technology vendors, somehow, will be expected to lead and drive the change, both through consulting and by example. A breach or misinterpretation of GDPR guidance will severely impact the business from a brand point of view and undermine vendor credibility.

Internal protection

Like any other company, any vendor in the technology field will see a direct impact on its business operations. But in this case the problem will not be limited to standard cyber security procedures: since technology vendors enter, somehow, almost directly into customers' IT or data processing infrastructures, they will be asked to implement an end-to-end protection system that includes GDPR compliance and cyber security. This will require technology vendors to work on:

  1. supply chain
  2. production and vulnerability disclosure
  3. product and service delivery

All three areas are still trying to develop standards and good practices, although something is moving.

So what are the changes expected under the new regulation?

There are around a dozen headline changes which technology companies should be aware of.

Some of the key areas include:

  • Privacy by design and privacy enhancing technology – privacy by design calls for data protection to be included from the onset of system design. Companies must also hold and process only the data which are absolutely necessary.

Privacy enhancing technology (PET) and Privacy by Design (PbD) are obligatory and mandated requirements under the GDPR. There remains no generally accepted definition of PET or PbD, but PbD is considered an evidencing step for software development processes to take account of privacy requirements. So the incorporation of what can broadly be defined as PET in such solutions represents PbD.

Two particular PET techniques that control downside risk and enable upside are differential privacy and homomorphic encryption.

  • Differential privacy counters re-identification risk and can be applied to anonymous data mining of frequent patterns. The approach obscures data specific to an individual by algorithmically injecting noise. More formally: for a given computational task T and a given value of ε there will be many differentially private algorithms achieving T in an ε-differentially private manner. This enables computable optima of both privacy and data utility to be defined, by modifying either the data (the inputs to query algorithms) or the outputs (of the queries), or both.
  • Searchable/homomorphic encryption allows encrypted data to be analyzed through information releasing algorithms. Considered implausible only recently, advances in axiomatizing computable definitions of both privacy and utility have enabled companies such as IBM & Fujitsu to commercially pioneer the approach.
  • Data processors – those who process data on behalf of data controllers, including cloud-providers, data centres and processors. Liability will extend to these and businesses that collect and use personal data.
  • Data portability: empowers customers to port their profiles and segmentation inferences from one service provider to another. This is a recognition by lawmakers that data is relevant to competition law, while not conceding an imbalance between a company's ability to benefit from data and the expense to us all as citizens.
  • Data protection officers – internal record keeping and a data protection officer (DPO) will be introduced as a requirement for large scale monitoring of data. Their position involves expert knowledge of data protection laws and practices, and they will be required to directly report to the highest level of management.
  • Consent – explicit permission to hold any personal data in electronic systems will become mandatory. It will no longer be possible to rely on implied consent, with individuals having the option to opt out. Customers consent to privacy policies that change; being able to prove which contract was agreed to, in court or to a regulator, requires registration time-stamping, and tamper-resistant logs become de rigueur. As we move into an opt-in world of explicit consent and ubiquitous personal data, data transmissions beyond a website visit must be explicitly permissioned and controlled. In this world, default browser values de-link machine identifiers from search queries; in other words, online advertising to EU citizens is in line for fundamental change. And given the particular regulatory emphasis on profiling, explicit consent will require loyalty programs to differentiate between general and personalized marketing consents. Those consent flags must cascade through registration, reporting and analysis, targeting and profiling, contact-center operations and all other processes that handle such data.
  • Breach notifications – a breach, where there is a risk that the rights and freedoms of individuals could be compromised, must be reported within 72 hours of being identified. The relationship between breach notification and vulnerability disclosure is underestimated. While to an end user the two aspects seem unrelated, the impact on vendors could be higher for at least a couple of reasons:
    • The breach notification could expose the vendor as the main source of the breach itself, due to a lack of vulnerability management and disclosure.
    • The victim could claim liability against the vendors whose “vulnerabilities” caused the breach, redirecting part of the costs to them.
  • Right to access – data subjects will now have the right to obtain confirmation from you of what personal data are held concerning them, how they are being processed, where and for what purpose.
  • Right to be forgotten – data subjects will now have the right to be forgotten, which entitles the data subject to have you ensure that the information is deleted from every piece of IT equipment, every portable device, and from server back-ups and cloud facilities. A framework to comply with this obligation would include the following steps:
    • Spot identifiers which tie together datasets, e.g: machine identifiers link together our social media experiences;
    • Prescribe how re-identifiable data flows in and outside the organization;
    • Document a scalable process to overwrite identifiers in all datasets where re-identification can be established, upon the validated request of a user, and
    • Third party contracts and SLAs should be adjusted to ensure compliance with validated requests.
  • Data bookkeeping: field-level data, linked to an identifier, flow across geographies and legal entities, processed by machines and people. Organizations will account for these flows with evergreen reporting. It stands to reason that these flows will be threat-modeled for integrity and confidentiality, so that controls can be readily evidenced upon request.
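The ε-differential-privacy notion mentioned above can be illustrated with the textbook Laplace mechanism, where noise scaled to sensitivity/ε is added to a query result (a didactic sketch, not a production-grade implementation):

```python
import random
import math

def laplace_noise(scale: float) -> float:
    # Inverse-CDF sampling of a zero-mean Laplace distribution.
    u = random.random() - 0.5  # uniform in [-0.5, 0.5)
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

def dp_count(values, predicate, epsilon: float) -> float:
    # A counting query has sensitivity 1: adding or removing one person
    # changes the count by at most 1, so the noise scale is 1/epsilon.
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(1.0 / epsilon)

# Made-up example: release "how many people are 40 or older" privately.
ages = [34, 29, 41, 52, 23, 67, 45]
noisy = dp_count(ages, lambda a: a >= 40, epsilon=0.5)
print(round(noisy, 2))  # close to the true count of 4, plus noise
```

Smaller ε means stronger privacy but noisier answers; tuning that trade-off is exactly the "computable optima" of privacy and utility the bullet above refers to.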

 

GDPR impact

Privacy regulations such as the GDPR, and the growing awareness and concerns about data privacy and security, are related to the expanding presence in everyday life and business of smart mobile devices able to process data, the growing online market, consolidated trends such as cloud services, and newcomers such as IoT.

The technology market faces this transition on the front line, and will see the impact of the new regulations and of customer reactions in several ways. This is both a chance and a problem: implementation of the new mandatory requirements will impact all areas, from design and production to sales and delivery. But it will also mean new areas of business on the consulting side, and in the technologies supporting GDPR and privacy compliance, where data analysis, artificial intelligence and other high-end technologies could provide a competitive, price-insensitive advantage over the consolidated technology market.

The key success factor is to embrace this change and drive it: acquiring the needed competences internally, implementing the right corrections, and driving the needed improvements to the products and services provided.

Future trends will see a prevalence of technologies related to data processing, and of services related to data rather than products. The new data paradigm is already visible today, for example in the big data market (take data lake implementations as an example). For the technology market this will mean a focus on data science, which will pose a new and somewhat unpredictable relationship with privacy regulations.

GDPR Risks and “Data Science”

The term data science describes a process running from data discovery, to providing access to data through technologies such as Apache Hadoop (open source software for large data sets) in the case of big data, to distilling the data through architectures such as Spark, in-memory and parallel processing. That data science creates value is understood. What is not understood are the risks it exposes investors to under the GDPR, of which there are principally three:

Risk 1: The Unknown Elephant in the Room – Unicity: a general misunderstanding in monetization strategies is that stripping away identifiers of a data model renders the data set anonymous. Such a belief is flawed. So-called anonymous data sets can often, without implausible effort, be re-identified. Unicity is a measure of how easy it is to re-identify data. It quantifies additional data needed to re-identify a user. The higher a data set’s unicity, the easier it is to re-identify. Transactional and geo-temporal data yield not only high monetization potential, they carry statistically unique patterns which give rise to high unicity.
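Unicity can be illustrated with a tiny experiment: take a few points from a user's trace and check how many users they pin down uniquely (a toy version of the approach used in mobility studies; the data set is invented for illustration):

```python
# Toy unicity check: can p (location, hour) points re-identify a user
# in a "de-identified" data set? Traces below are made up.
traces = {
    "alice": {("cafe", 8), ("office", 9), ("gym", 18)},
    "bob":   {("cafe", 8), ("office", 9), ("bar", 22)},
    "carol": {("park", 7), ("office", 9), ("gym", 18)},
}

def unicity(traces: dict, points: frozenset) -> bool:
    # True if exactly one user's trace contains all the given points,
    # i.e. those points alone re-identify that user.
    matches = [user for user, trace in traces.items() if points <= trace]
    return len(matches) == 1

# Two geo-temporal points are already enough to single out alice...
print(unicity(traces, frozenset({("cafe", 8), ("gym", 18)})))  # True
# ...while one widely shared point is not.
print(unicity(traces, frozenset({("office", 9)})))             # False
```

Stripping the names from `traces` changes nothing here: the geo-temporal pattern itself is the identifier, which is exactly why high-unicity data sets are not anonymous.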

Risk 2: Relevance & Quality: Income, preferences and family circumstances routinely change, and preference data on children is difficult to ethically justify processing. While this creates a problem for predictive analytics, that data and the inferences it engenders can be considered inaccurate at a given point in time, which creates a GDPR cause-of-action. Data quality needs to stay aligned to business objectives.

Risk 3: Expecting the Unexpected: When data science creates unexpected inferences about us, it tends to invalidate the consent that allowed data to be captured in the first place, which, again, is a big deal. Data collected today, particularly from mobile devices, is subject to a constant stream of future inferences that neither the customer nor the collector can reasonably comprehend. Consider a car-sharing app that can model propensity for one-night-stands from usage patterns. While that data may not result in propositions today, the market will consider upside risk/option value to have been created (the market still does not seem to believe in GDPR impact), but this incremental data coming into existence creates downside risk (such data is difficult to find a legal-basis for, given the vagaries of a given consented disclosure).

More generally, the problem of negative correlations is brought to the fore by algorithmic flaws, biased data and ill-considered marketing or risk practices, the enduring example being U.S. retailer Target's predictive campaigns aimed at pregnant teenagers, spotted by their parents. These are examples of a new form of systemic control failure, leading to potentially actionable GDPR claims.

 


GDPR and the technology market was originally published on The Puchi Herald Magazine

The IoT Files – Privacy


In the previous post “The IoT Files – intro and security” I started to talk about Security issues related to the IoT world.

Security implies a wide range of elements, and one of them is privacy. But since the nature of this topic is particularly sensitive, I will deal with it separately.

Privacy in the Internet of Things takes on a different flavour from the one we are used to. We should think again about what IoT means: a lot of objects able to communicate and process data, equipped with sensors that make them aware of their surroundings.

Those sensors will be able to track where we are (geolocation), what we buy and eat (smart fridge), how often we shower or are at home (smart meters for gas and electricity), our taste in media and shows (smart TV) and so on.

When we go out, our smart cars will communicate over our smart roads our position, destination and driving skills; our smart medical devices will keep track of when we sleep or exercise…

Forget having an affair, or even just a little moment to yourself: everything will be monitored by something, somehow.

This opens a completely new scenario in terms of privacy: the amount of data available will be far bigger than what we have today.

But who will be able to protect our privacy? How will we be able to monitor who accesses our data?

Personal data

For sure we will need a clear definition of personal and sensitive data. But in an environment where every move or choice is recorded, or can be deduced by analyzing the output of different sensors and systems, the extension of “personal” will grow from direct data to metadata, to deduced data.

This is a more complex version of an already complex dilemma: how to handle all this?

As of now there is not even agreement on the definition of personal and sensitive data, nor on how to handle them.

Some countries have strict controls, others lax ones, and it is not just a matter of developed versus developing countries. Take as an example the dispute between Europe on one side and the USA (plus the UK) on the other over mass surveillance rights.

The EU approach to data privacy is far more restrictive than the lax USA one. But even within the EU we can see differences from country to country, and the recent statement by Austria against the new GDPR agreement is a clear sign that we are moving in a really complicated area.

Medical records

And not all personal data are the same; some have a really “personal” connection. Not only sexual orientation or political and religious beliefs: think, as an example, of your medical record.

If we use IoT medical devices, they will be able to help us stay alive, but at the same time they will collect, process and send a lot of really sensitive and private data about our physical condition. Those data, if not managed correctly, could expose us to unpleasant situations. But how do we control the flow of those data?

Geolocalization

The same concerns apply to geolocation. Sure, it can be useful to find the place I have to go to, or to be found if I want or need to be, but at the same time tracking our movements can expose us to risks. Maybe during my vacation I am going to an interview and I don't want my boss to know, or I tell my mom I can't visit her because I am at work while I am actually watching something on TV I don't want to miss (a lousy excuse, I know). No matter the reason, I would like to be sure I can control who is accessing that information.

Consumer Preferences

On the other hand, those data can have great value for third parties: by correlating the results coming from different sensors it is possible to track consumer behavior to a level we cannot imagine right now.

By mixing geolocation data with actual purchases, we can understand how much time we need to choose a product, how we choose, and why.

So it is not only governments and law enforcement agencies that want to know all about us; there is a bigger entity: marketing.

Personal Communication

If we sum up all the data available in the IoT, even communication, personal communication, assumes a whole new significance. Using so-called metadata, it is nowadays possible to understand a lot about people’s behavior (ask GCHQ).

But add localization and heart rate, and maybe we will be able to tell whether you are lying or nervous, and who knows what more. Again, this is not science fiction, but just the evolution promised by the IoT.
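A minimal sketch of how much plain metadata reveals, using invented call records (no content at all, just who called whom and when):

```python
from collections import Counter

# Hypothetical call metadata: (caller, callee, ISO timestamp). No content.
calls = [
    ("alice", "bob",    "2016-05-01T08:00"),
    ("alice", "bob",    "2016-05-01T22:30"),
    ("alice", "clinic", "2016-05-02T09:00"),
    ("alice", "bob",    "2016-05-03T23:10"),
]

# Even this thin record reveals a lot: who the closest contact is,
# late-night calling patterns, and calls to sensitive numbers (a clinic,
# a union, a newsroom) that hint at health, politics or sources.
contacts = Counter(callee for _, callee, _ in calls)
late_night = [c for c in calls if c[2][11:13] >= "22"]

print(contacts.most_common(1))  # strongest relationship, content unseen
print(len(late_night))          # behavioral signal from timestamps alone
```

Nothing here required reading a single message, which is why metadata retention is a privacy issue in its own right.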

Privacy could simply disintegrate, because all of those sensors and the incredible amount of data they produce will make it possible to know, see, listen to or deduce everything we do.

Privacy of Things

Probably we will have to introduce the Privacy of Things alongside the Internet of Things, and create rules that allow us to stop the correlation of data that can expose critical information, and not only protect direct personal or sensitive information as we do today.

What more?

A scenario that is open to new, unexpected evolutions, not so different from the one I presented for the security space.

What we should consider is not only the plain data, but the information that can be extrapolated by analyzing other, apparently unrelated, material.

In the age of Safe harbor 2.0 (aka Privacy Shield)

I wrote in the past about Safe Harbor and the problems related to data privacy in our age; now we have a Safe Harbor 2.0 (Privacy Shield), and we don’t know how long it will stand.

Even with the relatively small amount of personal data we handle today (compared to the IoT), we already face problems; how will we manage what is to come?

There are sensible questions that have to be addressed in order to, at least, start to analyze the impact of the IoT on privacy.

A few points are the following:

  • Where are my data stored?
  • How do my data travel?
  • Who is storing my data?
  • How will I control who is managing my data?
  • Who can access those data?
  • How are my data used?
  • What if I want to change something?

Since there is no common understanding even of the basic definitions, this will be hard. The questions do not have simple answers, and will require a sound technological approach.

Consider the problem of how data travel. In a world where data can cross different countries and be stored “in the cloud” (that is, somewhere we do not actually control), any control will be difficult.

Storing the data is just one of the aspects, because data can, as an example, be legally sniffed if they pass through certain countries that allow this. Take the USA: all data that physically pass through the USA are subject to US federal law, which means the US government can inspect those data even if they are stored somewhere else. The simple transit puts privacy at risk, no matter what any “privacy shield” states.

And so maybe some kinds of encryption will not be allowed.

A solution, maybe, would be implementing geotracking of every single packet, in order to determine the path the packet is allowed to take, but this is at the moment far from our real implementation capabilities.
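In principle, the packet-geotracking idea would reduce to a policy check like the sketch below. The country codes and the allowed list are pure assumptions, and reliably knowing the per-packet country path is exactly the part that is beyond today's capabilities:

```python
# Toy policy check: given the countries a packet's route would cross,
# accept only paths that stay inside an allowed set of jurisdictions.
ALLOWED = {"IT", "FR", "DE", "NL"}  # assumed policy, purely illustrative

def path_allowed(route_countries, allowed=ALLOWED):
    """True only if every hop stays inside the allowed jurisdictions."""
    return all(c in allowed for c in route_countries)

print(path_allowed(["IT", "FR", "DE"]))  # True: stays inside the allowed set
print(path_allowed(["IT", "US", "DE"]))  # False: transits the USA
```

The policy itself is trivial; the hard, unsolved part is the input, since routing on the Internet is dynamic and not under the sender's control.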

Legal, technological, cultural frames are still missing ….

We are moving on slippery ground, where legal, technological and cultural frames are still missing.

In the absence of indications, some implementations could end up not being privacy aware and could create problems in the future, as the Safe Harbor affair showed us.

Alas, politics and governments are still not on this boat; too technical, probably (yes, that is sarcasm).

But it is the cultural gap that is the major obstacle to understanding these issues: a knowledge gap related to lack of experience, lack of real technical knowledge and lack of interest. Alas, security and privacy suffer from the same problems: they are multidimensional and require a holistic approach (with technical, legal, economic and cultural foundations), not the compartmentalized ones we still apply to these subjects.

The next post will be on the infrastructures required by the IoT.


The IoT Files – Privacy was originally published on The Puchi Herald Magazine

Privacy Impact Assessment


Privacy impact assessments (PIAs) are tools which can help organizations identify the most effective way to comply with their data protection obligations and meet individuals’ expectations of privacy. An effective PIA will allow organizations to identify and fix problems at an early stage, reducing the associated costs and damage to reputation which might otherwise occur. PIAs are an integral part of taking a privacy by design approach.

Key points:

  • A PIA is a process which assists organizations in identifying and minimizing the privacy risks of new projects or policies.
  • Conducting a PIA involves working with people within the organization, with partner organizations and with the people affected to identify and reduce privacy risks.
  • The PIA will help to ensure that potential problems are identified at an early stage, when addressing them will often be simpler and less costly.
  • Conducting a PIA should benefit organizations by producing better policies and systems and improving the relationship between organizations and individuals.
  • A privacy impact assessment states what personally identifiable information (PII) is collected and explains how that information is maintained, how it will be protected and how it will be shared.

A PIA should identify:

  • Whether the information being collected complies with privacy-related legal and regulatory compliance requirements.
  • The risks and effects of collecting, maintaining and disseminating PII.
  • Protections and processes for handling information to alleviate any potential privacy risks.
  • Options and methods for individuals to provide consent for the collection of their PII.
PIAs are not something organizations did a lot (or any) of 15 years ago, although key compliance issues were still considered. Moving from then to now, awareness and significance of data protection has increased. More sophisticated technology has enabled more sophisticated data processing, on a greater scale, and in more intrusive ways. Not addressing the risks may cause damage or distress to individuals, low take-up of a project by customers, damage to relationships and reputation, and time and costs in fixing errors (as well as penalties for non-compliance). A project may partly or wholly fail. These are some of the drivers for carrying out PIAs, and for them to become a new legal requirement under EU data protection law.

Existing PIA frameworks

In the UK, the Information Commissioner’s Office has promoted PIAs for a number of years, although the Data Protection Act 1998 does not require PIAs to be carried out. The ICO published a PIA Handbook in 2007, which was replaced in 2014 by a more up-to-date PIA Code of Practice. Some sectors have additional PIA requirements or guidance. For example, government departments were required to adopt PIAs following a data handling review by the Cabinet Office in 2008. PIAs and PIA methodologies are also promoted in many other countries around the world.

A lot of organizations have therefore already integrated PIAs into project and risk management procedures, following existing recommendations and guidance. Other organizations may not yet be so familiar with PIAs, as they are not yet compulsory for most sectors.

Either way, EU organizations will need to adopt new PIA procedures, or review and adapt existing procedures, in order to meet the new requirements.

New legal requirement under the GDPR

The compromise text of the EU General Data Protection Regulation (GDPR) was published on 15 December 2015. At the time of writing, it is expected to receive final approval soon and then come into force in early 2018. Article 33(1) contains the new obligation for conducting impact assessments:

‘Where a type of processing in particular using new technologies, and taking into account the nature, scope, context and purposes of the processing, is likely to result in a high risk for the rights and freedoms of individuals, the controller shall, prior to the processing, carry out an assessment of the impact of the envisaged processing operations on the protection of personal data…’

As the GDPR is a data protection law, the requirement is for a data protection impact assessment (DPIA). This applies to the processing of personal data recorded in electronic or paper-based format. There may also be privacy issues associated with non-personal information, for example communications data or information about corporate entities; and relevant legal requirements include communications laws, direct marketing rules and confidentiality requirements. Wider privacy issues can also arise from, for example, surveillance, bodily testing or searching, which may also trigger human rights and other privacy laws. Often these matters go hand-in-hand with data protection issues, as personal data is recorded as a result of the relevant activities, but separate privacy concerns can also arise. Therefore, whilst this article focuses on data protection impact assessments under the GDPR, PIAs may also address wider privacy risks.

When a DPIA will need to be carried out

Article 33 requires a DPIA to be carried out where processing is ‘likely to result in a high risk’. Article 33(2) contains a list of cases where DPIAs shall, in particular, be carried out:

‘(a) a systematic and extensive evaluation of personal aspects relating to natural persons which is based on automated processing, including profiling, and on which decisions are based that produce legal effects concerning the individual or similarly significantly affect the individual;
(b) processing on a large scale of special categories of data referred to in Article 9(1), or of data relating to criminal convictions and offences referred to in Article 9a;
(c) a systematic monitoring of a publicly accessible area on a large scale.’
The first of these would capture many data analysis activities, for example where an evaluation of a person’s characteristics or behaviors impacts the services they may receive or how they are treated. The definition of ‘profiling’ lists performance at work, economic situation, health, personal preferences, interests, reliability, behavior, location and movements as matters which may be analyzed or predicted.

Large-scale use of sensitive types of data is captured by (b). As well as the existing categories of sensitive personal data under the DPA, this now captures genetic and biometric data.

Thirdly, large-scale public monitoring would require a DPIA, which may include use of CCTV, drones or body-worn devices.

In addition, under Articles 33(2a) and 33(2b), the supervisory authority (the ICO in the UK) shall establish a list of the kind of processing operations where a DPIA is required and may establish a list of processing operations where no DPIA is required.

The lists are subject to (where appropriate) co-operation with other EU supervisory authorities and the EU Commission, and must take into account opinions of the (new) European Data Protection Board.

Article 33 requires the DPIA to be carried out ‘prior to the processing’; in other words, prior to starting the relevant activities. A post-implementation review would be too late (although may still be of benefit if a DPIA was not undertaken previously).

Organizations will therefore need to identify whether projects or activities which arise fall within a category described above or may otherwise result in a high risk. Even within organizations which do not regularly carry out high-risk data processing, changes to existing activities can turn previously low risks into high ones. For example, adopting new technology to assist with an established business procedure can affect how personal data is used.

Identifying the need for a DPIA is commonly achieved by an initial assessment during project planning (as is also recommended within the ICO’s PIA Code of Practice). At that stage, business teams can identify intended uses of personal data and assess potential data protection risks. The outcome determines whether or not to proceed further with a DPIA.

Of course, even if an initial assessment does not determine a high risk or trigger specific DPIA requirements under the GDPR, organizations may wish to continue with an assessment to address lower data protection risks and ensure compliance.
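As a thought experiment, the Article 33(2) triggers described above could be encoded as a crude initial-screening helper. The field names are invented for illustration, and a script like this is no substitute for a legal assessment:

```python
# Crude DPIA initial-screening sketch based on the GDPR Article 33(2)
# triggers quoted above. All project field names are assumptions.
def dpia_required(project):
    """Return True if any Article 33 trigger applies to the project."""
    triggers = [
        project.get("systematic_profiling_with_legal_effects", False),  # 33(2)(a)
        project.get("large_scale_special_categories", False),           # 33(2)(b)
        project.get("large_scale_public_monitoring", False),            # 33(2)(c)
        project.get("otherwise_high_risk", False),                      # Art. 33(1)
    ]
    return any(triggers)

print(dpia_required({"large_scale_public_monitoring": True}))  # True: e.g. city-wide CCTV
print(dpia_required({}))                                       # False: no trigger hit
```

Embedding even a checklist this simple into project planning forces teams to answer the trigger questions before processing starts, which is the whole point of the 'prior to the processing' requirement.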

Exception to the DPIA requirement

Article 33(5) contains a potential exception for regulated activities carried out pursuant to a legal obligation or public interest. The controller may not be required to carry out a DPIA if one has already been carried out as part of setting the legal basis for those activities. Recital 71 refers to the activities of doctors and attorneys in using health and client data. It is unclear whether this touches on the same point; it seems to indicate that such processing activities shall not be considered as being on a ‘large scale’, rather than being a specific exception.

Procedure for carrying out a DPIA

Article 33(1a) provides that the controller shall seek the advice of the data protection officer, where designated (in accordance with Article 35), when carrying out a DPIA.

Article 33(3) provides that the DPIA shall contain at least:

‘(a) a systematic description of the envisaged processing operations and the purposes of the processing, including where applicable the legitimate interest pursued by the controller;
(b) an assessment of the necessity and proportionality of the processing operations in relation to the purposes;
(c) an assessment of the risks to the rights and freedoms of data subjects referred to in paragraph 1;
(d) the measures envisaged to address the risks, including safeguards, security measures and mechanisms to ensure the protection of personal data and to demonstrate compliance with this Regulation taking into account the rights and legitimate interests of data subjects and other persons concerned.’
These steps are comparable to those within the ICO’s PIA Code of Practice, which is useful in considering what they might mean in practice.

Firstly, an organization must describe the proposed flows of information involved in the activity or project, ensuring it is clear how and why personal data is being used at each stage. Diagrams as well as written descriptions can be useful to convey this.

Secondly, an organization must assess whether the proposed use is of data necessary and proportionate to its legitimate purposes; for example, are there alternative ways to achieve the same project objectives?

Next it is clear that a DPIA involves a risk assessment. This involves considering the potential impacts of proposed activities on the relevant individuals and the organization, and the likelihood of such impacts arising. Impacts may include, for example, loss or misuse of data, intrusion into private lives, lack of transparency and non-compliance. Solutions must then be found to avoid or mitigate risks and demonstrate compliance. These may include introducing additional elements into the project (such as anonymisation, pseudonymisation or security measures), or changing aspects of the project (such as collecting less data or doing fewer processing operations).

Organizations may use risk assessment methodologies already in place for other legal or organizational risks, or may create tailored risk assessments for the purpose of DPIA procedures.
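A bare-bones sketch of such a risk assessment, using an assumed 1-3 likelihood and impact scale and an arbitrary threshold (none of which the GDPR prescribes):

```python
# Minimal risk-register sketch: score = likelihood * impact on a 1-3 scale.
# Risks scoring at or above the threshold demand mitigation measures
# (e.g. pseudonymisation, collecting less data). Scales are assumptions.
risks = [
    ("loss or misuse of data",      3, 3),  # (name, likelihood, impact)
    ("intrusion into private life", 2, 3),
    ("lack of transparency",        2, 2),
]

THRESHOLD = 6  # arbitrary cut-off for "high risk"

def needs_mitigation(risks, threshold=THRESHOLD):
    """Return (name, score) for every risk at or above the threshold."""
    return [(name, l * i) for name, l, i in risks if l * i >= threshold]

for name, score in needs_mitigation(risks):
    print(f"{name}: score {score} -> mitigate")
```

Recording the score before and after each proposed mitigation is also a simple way to demonstrate, as Article 33(3)(d) requires, that the measures actually address the risks.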

Article 33(3a) provides that compliance with approved codes of conduct shall be taken into account in assessing data protection impacts. Codes of conduct relating to different sectors or types of activity may be approved under Article 38.

Consultation with data subjects

Article 33(4) requires controllers, ‘where appropriate’ to ‘seek the views of data subjects or their representatives on the intended processing, without prejudice to the protection of commercial or public interests or the security of the processing operations’.

This means consulting with those whose privacy is affected by the proposed activities, as it is these privacy risks that the DPIA is seeking to address. However, it may not always be appropriate to do this, for example when protecting overriding interests to keep aspects of the proposed project confidential. Public sector organizations, in particular, may already have formal consultation processes, and the ICO’s PIA Code of Practice also gives guidance on consultation, but this may be a new consideration for some organizations.

Data processors

Article 26(2) sets out requirements for the terms of contracts between data controllers and data processors (which are more detailed than the current requirements under the DPA). These include that the processor shall assist the controller in ensuring compliance with requirements for DPIAs.

The processor’s role may be particularly important, for example, where it is providing technology central to the relevant project, as it will be in the best position to identify and address privacy and security risks relating to its own technology.

Consultation with supervisory authorities

Article 34 contains a procedure for consultation with the supervisory authority (the ICO in the UK) as a result of (or potentially as part of) a DPIA. Recital 74 indicates the intention for consultation where the controller considers that high risks cannot be mitigated by reasonable means. However, Article 34 states that consultation is required where the processing would result in a high risk in the absence of mitigating measures. As DPIAs are required only for high-risk activities, this could mean consultation is always needed following required DPIAs. Further clarity on the intended interpretation would therefore be useful, as it is likely to have a big impact on timetables and resources for controllers and the ICO.

As part of the consultation, the supervisory authority must give advice to the controller where it is of the opinion that the intended activities would not comply with the GDPR. If appropriate mitigating measures have been established, therefore, perhaps no further action is required. Advice must generally be given within eight weeks although this may be extended in complex circumstances. The authority may also use its other powers (eg to investigate further or order compliance).

The ICO already provides support to organizations which wish to consult on data protection matters, but the GDPR will require a more formal process and resources for DPIA consultation. For controllers, consultation could assist in finding solutions, though it could also delay or restrict projects.

Post-implementation reviews

Article 33(8) provides:

‘Where necessary, the controller shall carry out a review to assess if the processing of personal data is performed in compliance with the data protection impact assessment at least when there is a change of the risk represented by the processing operations.’

Regular post-implementation reviews or audits can be used to assess whether the risks have changed, and ensure the solutions identified during the DPIA have been and continue to be adopted appropriately.

Data protection by design and by default

Article 23 contains general requirements for data protection by design and by default. These mean that measures designed to address the data protection principles should be implemented into processing activities, and that the default position should be to limit the amount of data used and the processing activities to those which are necessary for the relevant purposes. Carrying out DPIAs, even where particularly high risks have not been identified, may be a good way to demonstrate these matters are being addressed.

EU Directive for the police and criminal justice sector

The GDPR has been prepared alongside the new Data Protection Directive for the police and criminal justice sector, which will separately need to be implemented into UK law. Articles 25a and 26 of the Directive contain requirements similar to those in the GDPR in relation to DPIAs and consultation with the supervisory authority.

What to do now

DPIAs will not become a legal requirement under the GDPR for a couple of years yet. However, there are benefits in starting (or continuing) now to build DPIA (or PIA) processes into existing project and risk management procedures. As well as the existing advantages of DPIAs, this will enable them to be part of business as usual when the new law arrives. In addition, DPIAs conducted now will ensure that high-risk data processing activities in existence when the GDPR takes effect will have had the prior assessment envisaged by the new requirements.

It is, of course, still early days in working out how the detail of the provisions discussed above will be interpreted in practice, and we can expect further guidance at UK and EU level (including the required lists of activities which will require a DPIA). Existing PIA guidance, such as within the ICO’s Code of Practice, should help organizations to get on track, and procedures can be refined further as we get more clarity on the specific GDPR requirements.


Privacy Impact Assessment was originally published on The Puchi Herald Magazine

Time for enterprises to think about security, seriously



The EU directive on attacks against information systems leaves us no more excuses not to take security seriously.

Under the new rules, illegal access, system interference or interception constitute criminal offences across the EU. But while the legislator is working to create tools to address cybercrime as a whole-system problem affecting the EU economy, what are enterprises doing on their side?

The problem is that if enterprises do not align their cyber security defences with the correct approach, any legislation will be useless, because the target will always be too easy.

It makes absolutely no sense to build a security system while internally you use Internet Explorer 8 and Windows 7 as the default setup. It makes absolutely no sense to rely on firewalls and IPS/IDS inside without implementing a correct SIEM infrastructure.

It makes absolutely no sense to try to protect intellectual property if we do not add a correct DLP system, which also means having categorization and processes in place.

It makes absolutely no sense to beg for security if our Windows environment is poorly designed.

It is time to change our approach to security from an annoying task to a foundation of our systems. We do not question the need for a CFO and for risk analysis in finance; why is it so hard to do the same for information and cyber security (let me add privacy as well)?

The CSO role, and the DPO one, should be at the heart of every board, like the CFO, HR and the other company roles.

Alas, CSOs and DPOs need a high level of independence, since their roles are meant to be a source of control and guidance for the entire company (no more, no less than a CFO). And neither role is “IT geek stuff”, since both require specific knowledge that goes beyond IT implementation.

Alas, if architectural roles are still a minority in the IT world, we can imagine how hard it can be to find these other figures, who need the ability to see security inside the business and to deal with a wide range of interfaces, not necessarily technical ones.

This is a wide problem that covers all sectors of industry. There is no longer any area that is safe from IT implications. The Jeep car hack is just another example of how serious the question is.

A correct cyber and information security approach should take into account:

  1. how we protect ourselves from external threats
  2. how we implement internally a security-aware process to deal with the valuable information we handle
  3. how we implement a security-aware production process
  4. how we contribute to the progress of cyber and information safety in our environment and ecosystem.

No matter who we are or what we do, those four points can’t be avoided anymore.

And they can’t be managed as a geek itch to be scratched.

  1. How we protect ourselves from external threats

Point one is historically the first to be implemented, but also one of the worst nightmares.

Security is usually seen as a series of patches to be applied to a system after the design, and usually this is done by inserting a “firewall” or a “next generation firewall” or some other marketing-driven technology, without considering that any insertion is useless if not placed within a serious context and design.

And the design starts with the simplest questions:

  • What do I want to do with my IT?
  • What is the value of IT for my business?
  • What are the implications of IT in our processes?

Budget and design should follow accordingly.

but design can’t avoid simply facts as:

Things need to be patched and upgraded to maintain a minimum baseline of efficiency and security

process should be design accordingly to the technology, the people and the business

if you don’t do this you keep having people surprised by the End of Support of the old Windows versions and using Windows Explorer 8 browsers just for “compatibility issues”.

If you do this  to proof you do not understand anything about IT, you did a good job otherwise, well we have a problem.

2. How we implement internally a security-aware process to deal with the valuable information we process

We can implement whatever we want, but if we do not have a clear picture of what we are going to protect and why, all the design is useless.

I wrote in the past about how hard it is to understand what the value in our data is, and where it sits. Still, many people do not consider that most of our company’s intellectual property lives in our email servers or PST files, or that names, addresses and emails have value for the criminal cyber world even if we do not value them…

Internal processes are usually badly designed because they do not take into account what needs to be protected, nor what a process requires:

  • resources
  • people
  • training
  • controls
  • metrics

And of course the most important requirement of all: a KISS implementation (Keep It Simple, Stupid).

Having more than 1,000 processes in place is not a good thing; it is a nightmare.

3. How we implement a security-aware production process

No matter whether we write code, build hardware or do paperwork: how secure is our work? How can we be sure the component we are using does what we want and has not been tampered with? If we write code, how can we be sure we write good, secure code? If we make cars, how can we be sure that our entertainment system does not allow someone to take control of the car’s brakes?

It is all the same: we need to implement security in our production process. This means being able to set up controls and metrics (again) that span the whole production line, and that also involve whoever provides us with services or parts.

Is our financial broker a secure interface? Can we trust those derivatives? Can I trust this code? … It is all about security.

If we deliver anything to anyone (HW, SW, a service of any kind), we have a production system that needs to be secured. Sometimes the law helps us by providing references; sometimes it is our job to create those references.

But if we can’t provide a trustworthy production system, why should the customer trust us?

It is not only IT; it is security. IT is just a part of the equation.

4. How we contribute to the progress of cyber and information safety in our environment and ecosystem

And we can’t be secure in an insecure world; we are all players in an interconnected world. We can’t think of security in the financial system without the collaboration of all players (banks, governments, regulatory bodies), and the same should hold for IT. But we are years behind, so it is time we take our part of the responsibility and start collaborating to make the environment safer.

Kicking out the bad things is a long, never-ending process that requires a lot of effort from everyone; every player should take charge of part of the responsibility. If we are not secure, we lower the overall security: if a car can be hacked, it is a danger to all the other cars on the street, and likewise, if enterprises do not take this seriously, they are a danger to everyone else.

Collaborating, exchanging ideas, listening and learning: there are a lot of different ways to do so.

Activities like the ENISA EU Cyber Security Month, which will be held in October, are a great moment to think about security and related issues.

Just look at the weekly themes:

  • Week 1: Cyber Security Training for Employees
  • Week 2: Creating a Culture of Cyber Security at Work
  • Week 3: Code Week for All
  • Week 4: Understanding Cloud Solutions for All
  • Week 5: Digital Single Market for All

This is what I am talking about. I strongly suggest that you all participate, as citizens, companies and public entities. There is much to learn and much to do; it’s time.

cheers


Time for enterprises to think about security, seriously was originally published on The Puchi Herald Magazine
