GDPR and the technology market

Question: will the new privacy policies and laws impact the technology market?

This is an interesting question to ask ourselves: whether we are consumers of technology or technology vendors, the privacy regulations now being applied to new technologies (from cloud to IoT, from Industry 4.0 to big data, to name the most acknowledged from a marketing point of view) can heavily affect our behaviour and the market.

So let us try to understand what the implications of this new focus on privacy and data protection could be.

First of all we should try to understand what we are talking about.

Privacy, GDPR and the rest.

Privacy: the state of being alone, or the right to keep one’s personal matters and relationships secret.

In today’s environments the presence of data-related technology is pervasive: from business to personal life, technology plays a big part in our lives. Data-related technology means we use technologies that are able to manipulate information: information is collected, changed, communicated and shared, all in the form of data, bits and bytes that describe our jobs, our businesses, our personal lives.

Although in the past privacy was mainly a physical issue, and legislation therefore focused on those aspects, the increasing presence of data collection and sharing has made people realize that there is a new abstraction layer to privacy, one no longer related to being alone or in a confined physical space, but to an undefined, borderless digital virtual space.

Email, blogs, social networks, chat, e-commerce, electronic payments, smartphones: all this and more has shifted the very perception of privacy from a simple concept to something much harder to define.

Regulators and consumers started to deal with these issues in recent years, while the enterprise and technical world remained almost frozen, waiting for indications. The first indication that this would be a wake-up call for enterprises was the end of the Safe Harbour agreement: privacy was no longer a secondary issue, even for the economy.

The latest development is the European Union’s General Data Protection Regulation (GDPR), which comes into effect in May 2018 and has far-reaching implications that extend well beyond the EU.

Businesses that fail to meet the new mandates aimed at protecting personal data face severe consequences: they can be fined up to €20 million or 4 percent of global annual turnover, whichever is higher, a cost that makes this regulation impossible to ignore.

And it is not only Europe: other areas of the world are moving toward a more cautious approach to data privacy. While it is not yet clear how the new USA administration will approach the subject, there is no doubt that data privacy will be a major issue in the coming years; how this will impact business, though, is not yet clear.

What is certain is that GDPR will force companies to deal with a tremendous amount of data to be protected. Any data used to make inferences linked, tenuously or otherwise, to a living person is personal data under GDPR. Cookie IDs, IP addresses, any device identifier? All personal data. Even metadata with no obvious identifier is caught by the GDPR’s definition of personal data. Truth be told, such assertions are not entirely new; the difference under GDPR is that they will be enforced and non-compliance fined.

Today swathes of business practices unlocking data monetization rely upon data not being considered personal, so they apply weak consent, onward-transfer and data-reuse concepts. These models are going to change, either by choice or by obligation.

Data Privacy, Data Protection and Cyber Security

One aspect that is not yet completely perceived and understood is the correlation between data privacy, data protection and cyber security. The requirements that oblige companies to respect data privacy legislation are intrinsically bound to an explicit demand for data protection and, therefore, cyber security.

GDPR clearly states that data should be fairly processed and protected: the implications are not only procedural, in terms of what enterprises must adopt internally, but also technical, in terms of data manipulation, retention, storage and security.

Recent security outbreaks such as the ransomware waves are an example of how basic cyber security threats can directly impact this area, as are the common and well-known cyber attacks aimed at data exfiltration.

This is a growing phenomenon affecting not only classic online services (think of the attacks on dating sites to collect usernames and passwords) but also, extensively, the healthcare industry.

While in the past those outbreaks might have been a relatively minor issue, the new GDPR structure of fines could hit any company hard, regardless of its sector, and departments that have never considered these issues a business imperative, such as marketing or Human Resources, will face a difficult transition in terms of awareness, policies to implement and technology approach.

It is easy to forecast that this situation will shape the technology market in different areas over the coming years.

Impact on the technology market

When we talk about the technology market we face different aspects; “technology” as a term can cover a wide range of things. We can talk about hardware vendors or software vendors, service vendors (cloud, CRM or whatever you prefer), enterprise IT or carrier hardware providers, security vendors, or end-user hardware providers (such as smartphone makers).

Recently the trend has been to aggregate functions and offerings, making those areas overlap within the same company, although they are often not integrated.

Since the whole industry will have to face the new privacy requirements, we should expect an increase in requests for data privacy expertise hitting the market, and a growing demand for IT solutions that help companies manage the requirements. This could, as an example, give a small impulse to historically neglected areas such as DLP and data categorization solutions.
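
To make the idea of data categorization concrete, here is a minimal, hypothetical Python sketch of the kind of pattern matching that DLP and data classification tools perform at a much larger scale; the patterns and category names below are purely illustrative assumptions.

    import re

    # Hypothetical, deliberately partial patterns: real DLP products use far
    # richer detectors (checksums, dictionaries, machine learning, context).
    PII_PATTERNS = {
        "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "ipv4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
        "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    }

    def classify(text: str) -> dict:
        """Report which personal-data categories appear in a piece of text."""
        return {name: bool(p.search(text)) for name, p in PII_PATTERNS.items()}

    print(classify("Ticket from mario.rossi@example.com, client IP 192.168.1.10"))
    # {'email': True, 'ipv4': True, 'iban': False}

Real products add context, validation and workflow on top, but the categorization step is conceptually this simple, which is why the area could be revived quickly once demand materializes.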

Some advance and effort will probably also be put into more traditional areas such as backup.

A heavier impact will be seen in the growing online market, with the need to protect not only users’ privacy but also economic transactions; content providers and social or gaming platforms will be heavily impacted too.

In a second phase we will probably see a renewed interest in baseline security solutions, as stakeholders will, sooner or later, realize that there is no compliance without data protection and no data protection without cyber security.

The request for expertise and consulting services will mostly be redirected outside, to technology vendors (here considering HW/SW vendors such as Cisco, HP, Huawei, SAP and Microsoft; service vendors such as cloud providers – Azure, AWS, Google – but also app stores and online CRM providers), consulting companies and technology integrators.

On the other hand, technology vendors will face a strange situation where they will be asked to provide solutions compliant with the new rules, to be the drivers of the new requirements and implementations (public-private partnership basically means this), and to implement solutions to protect themselves in different areas, such as:

Product and Services development

Here vendors will have to start developing products and services with data protection as a major consideration. The impact on cloud and services, where data protection is easy to identify, is clear, but the hardware product side will face issues too. Although it may seem trivial, remember the problems related to GPS tracking that hit Apple and, to some extent, Android a few years ago. The privacy implications of products can be wider than expected, since we have to protect not only the data per se but also the metadata (this is the wider reach of GDPR and the new privacy regulations).

Usually we tend not to consider system logs, as an example, a problem in terms of privacy, but in effect they are if they contain data that can point to a physical person and be used to track that person’s behaviour in some way.

Firewall and router logs, as an example, could be used to determine what someone is doing online, and can therefore expose information that falls within the GDPR’s realm. These may look like minor features, but the truth is that metadata is also covered by GDPR.
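
As an illustration of privacy by design applied to logging, here is a minimal, hypothetical Python sketch that pseudonymizes IP addresses with a keyed hash before a log line is retained; the key handling and log format are assumptions, not a reference implementation.

    import hashlib
    import hmac
    import re

    # Hypothetical secret; in practice it would live in a key store and be rotated.
    PSEUDONYMIZATION_KEY = b"rotate-me-regularly"

    def pseudonymize_ip(ip: str) -> str:
        """Replace an IP with a keyed hash: analytics can still correlate events
        from the same address without the raw identifier being stored."""
        digest = hmac.new(PSEUDONYMIZATION_KEY, ip.encode(), hashlib.sha256).hexdigest()
        return "ip-" + digest[:16]

    def scrub_log_line(line: str) -> str:
        """Pseudonymize anything that looks like an IPv4 address in a log line."""
        return re.sub(r"\b(?:\d{1,3}\.){3}\d{1,3}\b",
                      lambda m: pseudonymize_ip(m.group(0)), line)

    print(scrub_log_line("2017-03-01 10:22:01 ALLOW 10.0.0.12 -> 93.184.216.34:443"))

The same keyed approach can be applied to device identifiers or usernames; the point is that the raw identifier never reaches long-term storage.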

Privacy by Design and Privacy Enhancing Technology will be mandatory components of any product or service development.

Marketing and Sales

Marketing (and/or sales) has always been considered agnostic towards technology, but the ultimate scope of marketing is to get in touch with the market, which means customers and, ultimately, people. Marketing activities will be hugely impacted by GDPR requirements both in terms of operations, since it is up to marketing to manage a large amount of data coming from outside the company, and in terms of communication.

Technology vendors will, somehow, be expected to lead and drive the requirements both through consulting and by example. A breach or a misinterpretation of GDPR guidance will severely damage the business from a brand point of view and undermine vendor credibility.

Internal protection

Like any other company, any vendor operating in the technology field will see a direct impact on its business operations. But in this case the problem will not be limited to standard cyber security procedures: since technology vendors somehow enter almost directly into customers’ IT or data processing infrastructures, the request will be for an end-to-end protection system which includes GDPR compliance and applied cyber security. This will require technology vendors to operate on:

  1. supply chain
  2. production and vulnerability disclosure
  3. product and service delivery

All three areas are still trying to develop standards and good practices, although something is moving.

So what are the changes expected under the new regulation?

There are around a dozen headline changes which technology companies should be aware of.

Some of the key areas include:

  • Privacy by design and Privacy enhancing technology – privacy by design calls for the inclusion of data protection from the outset of system design. Companies must also hold and process only the data which is absolutely necessary.

Privacy enhancing technology (PET) and Privacy by Design (PbD) are obligatory, mandated requirements under the GDPR. There is still no generally accepted definition of PET or PbD, but PbD is considered an evidencing step for software development processes to take account of privacy requirements; the incorporation of what can broadly be defined as PET in such solutions therefore represents PbD.

Two particular PET techniques that control downside risk and enable upside risk are differential privacy and homomorphic encryption.

  • Differential privacy counters re-identification risk and can be applied to anonymous data mining of frequent patterns. The approach obscures data specific to an individual by algorithmically injecting noise. More formally: for a given computational task T and a given value of ϵ there will be many differentially private algorithms for achieving T in an ϵ-differentially private manner. This enables computable optima of privacy, and of data utility, to be defined by modifying either the data (the inputs to query algorithms), the outputs (of the queries), or both. A minimal sketch of the simplest such mechanism follows this list.
  • Searchable/homomorphic encryption allows encrypted data to be analyzed through information-releasing algorithms. Considered implausible until recently, the approach is now being commercially pioneered by companies such as IBM and Fujitsu, thanks to advances in axiomatizing computable definitions of both privacy and utility.
  • Data processors – those who process data on behalf of data controllers, including cloud providers, data centres and processors. Liability will extend to these as well as to the businesses that collect and use personal data.
  • Data portability – empowers customers to port their profiles and segmentation inferences from one service provider to another. This is a recognition by lawmakers that data is relevant to competition law, whilst not conceding an imbalance between a company’s ability to benefit from data and the expense this imposes on us all as citizens.
  • Data protection officers – internal record keeping and a data protection officer (DPO) will be introduced as a requirement for large-scale monitoring of data. The position requires expert knowledge of data protection laws and practices, and DPOs will be required to report directly to the highest level of management.
  • Consent – explicit permission to hold any personal data in electronic systems will become mandatory; it will no longer be possible to rely on implied consent, and individuals will have the option to opt out. Customers consent to privacy policies that change over time, and being able to prove which contract was agreed to, in court or to a regulator, requires registration time stamping; tamper-resistant logs become de rigueur. As we move into an opt-in world of explicit consent and ubiquitous personal data, data transmissions beyond a website visit must be explicitly permissioned and controlled. In this world, default browser values de-link machine identifiers from search queries; in other words, online advertising to EU citizens is in line for fundamental change. And given the particular regulatory emphasis on profiling, explicit consent will require loyalty programs to differentiate between general and personalized marketing consents. Those consent flags must cascade through registration, reporting and analysis, targeting and profiling, contact center operations and all other processes that handle such data.
  • Breach notifications – a breach, where there is a risk that the rights and freedoms of individuals could be compromised, must be reported within 72 hours of being identified. The relationship between breach notification and vulnerability disclosure is underestimated: while to an end user these two aspects may seem unrelated, the impact on vendors could be higher for at least a couple of reasons:
    • The breach notification could expose the vendor as the main source of the breach itself, due to a lack of vulnerability management and disclosure.
    • The victim could seek to hold liable the vendors whose vulnerabilities caused the breach, redirecting part of the costs to them.
  • Right to access – data subjects will now have the right to obtain confirmation from you of what personal data is held concerning them, how it is being processed, where and for what purpose.
  • Right to be forgotten – data subjects will now have the right to be forgotten, which entitles them to have you ensure that their information is deleted from every piece of IT equipment, portable device, server back-up and cloud facility. A framework to comply with this obligation would include the following steps:
    • Spot the identifiers which tie datasets together, e.g. machine identifiers that link together our social media experiences;
    • Prescribe how re-identifiable data flows inside and outside the organization;
    • Document a scalable process to overwrite identifiers in all datasets where re-identification can be established, upon the validated request of a user; and
    • Adjust third-party contracts and SLAs to ensure compliance with validated requests.
  • Data bookkeeping – field-level data, linked to an identifier, flows across geographies and legal entities, processed by machines and people. Organizations will have to account for these flows with evergreen reporting, and it stands to reason that these flows will be threat-modeled for integrity and confidentiality so that controls can be readily evidenced upon request.
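
As promised in the differential privacy point above, here is a minimal, hypothetical sketch of the Laplace mechanism, the textbook way to answer a counting query in an ϵ-differentially private manner; the numbers and parameter choices are illustrative only.

    import numpy as np

    def private_count(true_count: int, epsilon: float, rng=None) -> float:
        """Release a count under epsilon-differential privacy by adding Laplace
        noise scaled to the query's sensitivity (1 for a simple count)."""
        rng = np.random.default_rng() if rng is None else rng
        sensitivity = 1.0
        return true_count + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

    # Smaller epsilon means stronger privacy and a noisier answer.
    print(private_count(1234, epsilon=1.0))
    print(private_count(1234, epsilon=0.1))

Homomorphic and searchable encryption, by contrast, do not reduce to a few lines of standard-library code, which is precisely why they are still being commercially pioneered.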

 

GDPR impact

Privacy regulations such as GDPR, and the growing awareness of and concern about data privacy and security, are tied to the expanding presence in everyday life and business of smart mobile devices able to process data, the growing online market, consolidated trends such as cloud services, and newcomers such as IoT.

The technology market faces this transition on the front line, and will see the impact of the new regulations and of customer reactions in several ways. This is both a chance and a problem: the implementation of new mandatory requirements will impact all areas, from design and production to sales and delivery. But it will also mean new areas of business in consulting and in the technologies that support GDPR and privacy compliance, a market where data analysis, artificial intelligence and other high-end technologies could provide a competitive, price-insensitive advantage over the consolidated technology market.

The key success factor is to embrace this change and drive it: acquiring the needed competences internally, implementing the right corrections, and driving the needed improvements to the products and services provided.

Future trends will see a prevalence of technologies related to data processing, and of services related to data rather than products. The new data paradigm is already visible today, for example in the Big Data market (take data lake implementations as an example). For the technology market this will mean a focus on data science, which will pose a new and somewhat unpredictable relationship with privacy regulations.

GDPR Risks and “Data Science”

The term data science describes a process that runs from data discovery, to providing access to data through technologies such as Apache Hadoop (open source software for large data sets) in the case of Big Data, to distilling the data through architectures such as Spark, in-memory and parallel processing. That data science creates value is understood. What is not understood are the risks it exposes investors to under the GDPR, of which there are principally three:

Risk 1: The Unknown Elephant in the Room – Unicity. A general misunderstanding in monetization strategies is that stripping the identifiers out of a data model renders the data set anonymous. Such a belief is flawed: so-called anonymous data sets can often, without implausible effort, be re-identified. Unicity is a measure of how easy it is to re-identify data; it quantifies the additional data needed to re-identify a user. The higher a data set’s unicity, the easier it is to re-identify. Transactional and geo-temporal data yield not only high monetization potential; they carry statistically unique patterns which give rise to high unicity.
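
To make unicity less abstract, here is a small, hypothetical estimator: given a set of user traces, it measures how often a handful of randomly chosen points from one user's trace matches that user alone. The data layout and parameters are assumptions for illustration, not a standard tool.

    import random

    def estimate_unicity(traces: dict, points: int = 4, trials: int = 1000, seed: int = 0) -> float:
        """traces maps a user id to a set of (place, hour) tuples.
        Returns the fraction of trials in which `points` random observations
        taken from a user's own trace identify that user uniquely."""
        rng = random.Random(seed)
        candidates = [u for u, pts in traces.items() if len(pts) >= points]
        unique_hits = 0
        for _ in range(trials):
            user = rng.choice(candidates)
            sample = rng.sample(sorted(traces[user]), points)
            matches = [u for u, pts in traces.items() if all(p in pts for p in sample)]
            unique_hits += (matches == [user])
        return unique_hits / trials

Published studies on mobility data report that a few spatio-temporal points are often enough to single out most individuals, which is exactly the point of Risk 1.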

Risk 2: Relevance and Quality. Income, preferences and family circumstances routinely change, and processing preference data on children is difficult to justify ethically. Beyond the problem this creates for predictive analytics, such data, and the inferences it engenders, can be considered inaccurate at a given point in time, which creates a GDPR cause of action. Data quality needs to stay aligned to business objectives.

Risk 3: Expecting the Unexpected. When data science creates unexpected inferences about us, it tends to invalidate the consent that allowed the data to be captured in the first place, which, again, is a big deal. Data collected today, particularly from mobile devices, is subject to a constant stream of future inferences that neither the customer nor the collector can reasonably comprehend. Consider a car-sharing app that can model propensity for one-night stands from usage patterns. While that data may not result in propositions today, the market will consider upside risk, or option value, to have been created (the market still does not seem to believe in GDPR’s impact), but this incremental data coming into existence also creates downside risk, since it is difficult to find a legal basis for such data given the vagaries of a given consented disclosure.

More generally, the problem of negative correlations is brought to the fore by algorithmic flaws, biased data and ill-considered marketing or risk practices, the enduring example being U.S. retailer Target’s predictive campaigns aimed at pregnant teenagers, spotted by their parents. These are examples of a new form of systemic control failure, leading to potentially actionable GDPR claims.

 



Why IT companies are so concerned by latest (and future) USA administration moves.

The latest USA administration moves are raising a lot of concern in the IT community, and a lot of concern worldwide.

There are, of course, different sentiments related to political beliefs, ethics and moral considerations that should be taken into account. I will not enter the political, ethical and moral arena here to present my personal point of view on the specific subject, but I would like to make some considerations on the IT sector’s reactions to what is happening.

It is an easy prediction that the future economic outlook will be impacted by the USA administration’s approach and actions, and this can cause understandable reactions among the various stakeholders.

It is interesting to note the different approach of companies that need a global market to survive, such as the technological ones, compared to those that rely on local and a few other markets.

This difference is, nowadays, most evident in the IT (SW, HW, services) sector, a highly technological and advanced area that has two important needs:

1) highly qualified and skilled personnel

2) a global market to act on

Setting aside the ethical and moral considerations (which are, don’t get me wrong, imperative for anyone), from a business point of view there is no doubt that some markets (such as the technological one) need globalization more than others to prosper and survive.

The IT market, though, occupies a critical position here, since it is the engine of the 4th industrial revolution and is facing, as of now, growing resistance from the players of the older economic model; the comments and reactions I have seen on various platforms are mostly an expression of this growing sentiment.

The IT market, historically led by USA companies, has been able to grow thanks mainly to innovation, openness and intercultural exchange.

People working in this sector belong to different ethnic groups, countries and religions, and this diversity brings high value thanks to their experience and approach. In order to create something new (which is what the whole information technology industry is about), a different approach to things is needed. It is no accident that the IT industry in the USA has historically found in an open approach (in terms of market and human resources) a tremendous advantage, one which brought the USA to lead the IT market.

IT CEOs are understandably concerned that the environment that made them prosper can now change dramatically. The USA administration’s announced economic protectionism, and other rumored or already implemented actions (last but not least the improperly so-called “Muslim” ban), could, as a matter of fact, harm those companies’ ability to grow and prosper.

In this light, the concerns of important CEOs about the present and future actions of the USA government, and their need to address those concerns openly in public, are totally understandable.

If, as rumor has it, one of the next moves targets H-1B (working) visas, this will heavily affect those companies, forcing them to rethink their approach to the technology market, perhaps pushing them, as an example, to move R&D facilities to friendlier shores.

The truth behind this is that the need for qualified people in the IT sector is still growing at such a rate that no single nation, not even the USA, can provide the resources needed to back up this development; therefore the ability to draw qualified and skilled people from virtually anywhere is imperative for this sector.

Like it or not, some political issues do affect the economics of certain sectors, so it is absolutely understandable that the technology market reacts to an approach that can undermine its chance to grow, expand and ultimately bring value to a country in terms of economic wealth and image.

It is also worth noticing that the IT sector is changing: technologies are shifting from products to services that need a worldwide market to be remunerative. From cloud to IoT, passing through security and big data, all the recent technology trends call for the most open and widest possible market.

But there is another factor to take into account: the consolidated IT technologies that require a limited amount of innovation are now also offered by emerging competitors in countries outside the USA, such as China and others.

Even if, in most cases, they are not ready to provide disruptive technological advances, those companies are able to deliver, in the consolidated technology market, stable product implementations and constant improvement in a price-competitive fashion. Quality issues in consolidated technology fields are a minor concern, since products tend to be aligned.

If we add the geopolitical issues that lead, as an example, some countries to start looking for alternatives to USA products (China, Russia, Pakistan and India are examples, and understandably the Middle East area in the future), the picture becomes clearer.

This is not politics, but economics.

One further economic consideration: the inevitable shift to a so-called “data economy” (the real meaning of the 4th industrial revolution) is something that should be driven. Closing the economy around the old models, although it makes you feel in your “comfort zone”, will just delay the inevitable, creating higher costs to adapt later.

But there are also ethical and moral considerations to be taken into account, and most of those CEOs have, for once, demonstrated that business and ethics can match, probably due not only to their business but also to their heritage.

Kudos to Satya Nadella, Brad Smith, Sundar Pichai, Tim Cook, Mark Zuckerberg and the others who treat business and ethics as something that matters and speak out.

Antonio



Watching the new president’s acts and talks (and the possible future outlook), and I am scared

does the floor color make a difference here? really?

I usually do not write about political stuff here, except on rare occasions, but hey, this is my blog after all, so I can express my feelings and thoughts.

Today I was watching some videos related to USA president-elect Donald Trump and his approach to the news (he would tweet: “Fake news, sad!”) and, honestly, I am scared to death.

I do not like Mr Trump. USA citizens elected him, so I have to cope with that, but this does not mean I have to like him. I find most of his tweets questionable, his cult of personality disturbing, his approach to the media alarming.

This does not mean, of course, that the media are always right, but it is unthinkable to me that in an open democracy a president can consider communication a one-way affair, and treat anyone who criticizes him as “fake news”, a “bad person”, “untrustworthy” or whatever Mr Trump considers worth putting in a tweet.

Let’s just say that the first days of his activity made me more worried than ever.

Take the silly polemic about the number of people watching his inauguration ceremony live. More than Obama’s? Fewer than Obama’s? The point is that, for my taste, he could have managed the whole affair differently… making false claims was not the best way to present himself to the world… but the whole Trump administration seems to be suffering from a severe detachment from the news, funny for a man who owes so much to the media.

Will Mr Trump make America great again? I am not so sure and, honestly, I have not understood what America being great again means, nor what price the world will have to pay for his vision. For sure, at the moment I see a clear detachment from actual data (compare economic and crime data in the USA with Mr Trump’s assumptions) and an unwillingness to respond to any doubt. He is self-referential; he is the unquestionable metric for truth, ethics and results.

I have seen this in the recent past, from president Duterte of the Philippines, or Zuma in South Africa, or Turkey’s president Erdoğan, and in a less recent past from Benito Mussolini or Hitler.

What do they have in common? Extreme nationalism, cult of personality, hatred of the free press, being self-referential.

I am not saying that Mr Trump will be like Mussolini; I am saying that there is a common pattern, and when I listen to absurd justifications like the ones presented to defend the false statements about the crowd at Mr Trump’s ceremony, I am frankly scared to death.

But Trump, Erdoğan and Duterte are a symptom of a bigger problem.

We are on the verge of a 4th industrial revolution, but people in countries all around the world seem inclined to close themselves inside their borders in an attempt to protect themselves from the inevitable change. Alas, change will eventually come anyway, and this is scary. Protectionism and nationalism are the first answer to change. But in the new world that we are shaping, what will the consequences be?

If USA citizens try to close their country (build the wall, remember), that is their right, although not necessarily in their interest. Sure, theirs is a big market, but it is not self-sufficient. Without selling their stuff outside, how much will the USA economy be affected? Why should a Mexican then buy a USA car instead of a European, Japanese, Chinese or Indian one? Or why should we take a USA air carrier unless we are forced to? (I actually travel Emirates when I can.)

But also, why should we buy Apple or use Google’s Android? And the whole new list of technologies that will shape the new economy? Because this is the point: the new industrial revolution will put its roots into data sharing… we will move from products to services, and to justify the investment needed we will have to scale at an international level.

Hate calls hate, racism calls racism, violence calls violence, disrespect calls disrespect. I know you don’t see it in your leader; in the end you have to support him because he is what you created with your own hands (your vote) to cover your fears, but you should try to see, in the reactions of others, where this is going…

Like it or not, this new economy will force us to change our approach to jobs: new jobs will come while others will die. Alas, the trend is moving away from manual jobs towards more skilled ones, more focused on the new technologies. Not only engineering, but a whole new set of knowledge workers that will reshape the current middle class.

But we are in the middle of this change; we can’t see the light yet, we just see the scary shadows of the tunnel. The good news is that all the industrial revolutions increased the number of workers, but at the same time they were shaped by crises and, in the worst scenarios, wars. We are experiencing the economic crisis right now (it is not over, I am afraid) and we are, as people did in the past, addressing the new with old recipes.

In a hyperconnected world like ours, attempts to leverage censorship are questionable. Will China, North Korea, Saudi Arabia and Iran be the new references for the country that was once the flag of freedom of speech?

This is not just a USA issue; the rise of populism in Europe and in the rest of the world is a sign that this feeling is running through the populations of all the biggest democracies (where you do not have democracy, well, you do not have the right to question the government and its rules).

The whole Brexit rhetoric has been based on this kind of assumption (regain control of our destiny, of our nation, of our economy, so we will be again bigger, better, stronger…), which is not so different from the Front National or Lega Nord statements, or from Grillo’s claim of the need for a “strong man”.

What a twisted world it has become: ironically, the champion of capitalism at the moment is China, with its free trade and free commerce slogans, while we owe to Russia the safety of someone who disclosed the USA’s attempts to hack millions of USA and worldwide citizens.

Willingly or not, the change will come, no matter what. The point is how much we will have to suffer because of this resistance. And remember: each time you do not drive the change, the change drives you.

Hope for the best but prepare for the worst… at the moment I am scared because I see the twilight of an old era trying to strike its last shots, and they will hurt…



Industry 4.0: a cultural revolution before a technological one

We have by now grown used to dealing with expressions made up of a name and a dotted number whose second digit is a zero: 2.0, 3.0, 4.0 and so on. Put in ascending order, the figures are supposed to suggest an evolution, a passage towards a more advanced (or updated) version of a given situation or object.

Among the first to establish themselves, and the best known not only among insiders, is certainly “web 2.0”. It is a fascinating phenomenon from an idealistic point of view, one that shaped culture and started many discussions on the future of our societies, but from a technological point of view it is substantially empty, devoid of content. The extraordinary novelty that web 2.0 brought was the change of approach to the use of the network: the passage from a system in which only a limited number of content providers produced and supplied content, to a model that instead envisaged and favoured the birth of an ever larger community of users, each of them able not only to produce but also to share (or put on the network) that content.

In a certain sense, Industry 4.0 is no different from the web 2.0 mentioned above: rather than a technological revolution (digital is certainly not a novelty of the last few years), we should speak of a new attitude, a renewed approach to the way of doing industry, of producing. It is an attitude with strong ties to questions of role and procedure, one that involves the technical staff much less than key figures in the company such as the CFO or the CEO: the people who, in the corporate ecosystem, outline strategies and take decisions, choosing one direction rather than another.

Working for a company that provides the backbone of Industry 4.0, that is, the IT and the tools needed to connect, I am firmly convinced of how important it is for a company to have a project. Any software implementation without a serious, structured idea behind it is absolutely useless, if not harmful.

That is why Industry 4.0 is first of all the need, or the ability, to define within the company, whatever it is and whatever the economic impact, a path towards a new way of managing resources. And here we mean the management and integration of all resources, from energy to production to IT and so on.

Industry 4.0 is a beautiful idea thanks to which all the objects and all the people that are part of an enterprise stop being isolated and become interconnected. And not merely as a physical or communication connection, but as a true matter of process. In this sense, interconnection means that all the objects, “united” with one another, must be able to work together to deliver a result.

Obviously, in order to operate jointly and to guarantee a result, devices and tools (hardware and software) that work well are needed: from connectors for links, to sensors for data monitoring, to big data analysis and data quality systems, up to cyber security systems. These elements, although important, are not decisive for reaching a full result. What comes before the good functioning of the tools is the ability to integrate technology into processes and these, in turn, into the company culture. In other words, it means that the enterprise knows how to make the best use (that is, in a way that is functional and strategic to its own activity) of what the new technologies will be able to generate.

One example above all: the mass of data that interconnected objects produce remains unused or underused because of poor analysis capabilities.

Industry 4.0 is revolutionary in being an element of rupture with respect to the consolidated industrial model. And this holds both for large groups, where every intervention has larger repercussions (just think of energy efficiency measures), and for SMEs.

In Italy, in particular, it is important that small and medium enterprises equip themselves with the cultural tools to understand where to intervene in order to become or remain competitive in a world landscape of strong change. This means knowing how to choose both the solution best suited to one’s needs and the system that best fits one’s strategic growth plans. And there is no lack of offerings: proprietary platforms, cloud services, support from consultants, outsourcing. Every choice has advantages and disadvantages: the important thing is that even in a small business there is someone with a broader, medium-to-long-term vision.

So what will this transition to Industry 4.0 be like? Probably slow, in small steps, both for the cultural reasons mentioned above and for more strictly economic reasons, considering the not insignificant costs of adapting production to the new standards.

Without doubt it will be inevitable, and the sooner we start thinking in a new way, the sooner we, as a country, will recover competitiveness at a global level.



Learning from mistakes is not an easy task to accomplish

It is common to read that we should learn from our mistakes. It is absolutely true. The problem, when we talk about mistakes, is learning how to recognize one, and which lesson we can take from it.

Although it seems a simple task to accomplish, it is one of the hardest to do.

Recognize a mistake

Mistake analysis is a serious thing.

First of all, we should be able to recognize a mistake.

Then we should analyze what kind of mistake we are dealing with.

Once we understand what the mistake is, we should try to find out which lesson we want to take from it in order to avoid repeating it.

But it is not always so easy to recognize a mistake. Sometimes a thing goes wrong and we do not understand why; we focus mainly on finding someone to blame, or on the guilt.

Causes or symptoms?

Sometimes it can seem simple to understand what a mistake is; the truth is that most of the time we are not actually able to tell the difference between the cause and the symptoms of a mistake. Looking in the wrong direction can make us think a symptom is the cause, and this way we will never learn a lesson, but we will add new problems for the future.

To understand the point, think about what a doctor does to diagnose an illness. Sometimes what seems to be a cause is just another symptom: you may see fever as the cause of your debilitation, but what is the fever caused by? By an infection, maybe? Or by something else? Looking for symptoms, even apparently unrelated ones, is usually the way to discover the root cause of your illness. Addressing symptoms without a proper analysis can lead you to the wrong diagnosis.

Do not look for someone to blame

The first rule when something goes wrong is not to look for a guilty party, someone to blame. It may sound incredible, but finding a guilty party will not help you address the problem.

Sometimes people make mistakes because they are forced to by external factors; sometimes they are just honest mistakes; sometimes it is negligence, and sometimes a will to damage.

No matter what the reason is, our task is to understand:

  • What happened?
  • What went wrong?
  • What can we do to minimize the impact?
  • What can we do to address and fix the problem?

Those are the important things. If you just look for a guilty party, you are wasting resolution time. Blame can at most be an output of the error analysis, not your first target.

Finding someone to blame may help your ego but will hardly address the real problem.

Understanding where the error has been made

Errors are not all the same

Most importantly, errors can be divided into specific categories related to their level.

  • You can have errors at the strategy level
  • You can have errors at the tactics level
  • You can have errors at the policy level
  • You can have errors at the execution level:
    • Policies
    • Process
    • Procedures

Understanding where the error lies is mandatory in order to address it and learn the lesson.

The problem is that all errors have to do with execution, because that is where we measure the impact, so it is a common mistake to stop the analysis at the execution level (where you can find your scapegoat to blame).

To find out what an error actually is, and therefore what its level is, it is useful to take an approach similar to reverse engineering.

You should track back from the evidence of the error to its real source.

For every piece of evidence you should take note of:

  1. Is this actually an output related to the error?
  2. What was the cause?
  3. Is the cause related to something else?

The relationship can be to other outputs or to the procedures and processes that are related to the specific output.

Once you have collected all the evidence, the problem is of course to see which procedure or procedures were involved.

Sometimes it can happen that procedures applied correctly still produce an error as their output; this is the simplest case. Here you have to step back in your analysis to the process that is connected to the procedure.

Alas, sometimes an error in the procedure can hide a problem in the process, so if a procedure gave a mistaken output it is another common mistake to stop the analysis at that level.

The correct approach should be something like:

• Why is this output wrong?

• What kind of cause produced the error?

• Is this error an exception, or can it be replicated in this or different scenarios?

If we are not able to make a model of the error, we can hardly fix the specific procedure.

Things are more complicated when we have to deal with several procedures, but the approach is basically the same.

When we have created a model that describes the error, we can expand our analysis further and see whether the problem is strictly related to the way we feed the procedures and/or to the whole process lacking control points.

Processes and procedures are, of course, different things. But in theory a sound process should be composed of different procedures that describe its physical outputs in detail.

The process itself should contain the enforcement points and the control points, all described by specific procedures.

Sometimes an error is simply the child of a badly designed process. If there are lots of procedures, over-complicated or simply overlapping, the probability of an error rises.

So, as an example, a big Excel form to be filled in manually is a bad idea from an error risk management point of view, because it makes mistakes easier to make.

If someone inputs wrong data into an Excel spreadsheet, the blame should fall mainly on the Excel form: if someone can make mistakes, it means the form is not a good data input tool.

Even if the wrong data input was intentional, the real point is that the spreadsheet is to blame, because the procedure itself did not implement good control points.
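
A control point can be as simple as validating every record at entry time instead of trusting a free-form spreadsheet. The sketch below is hypothetical (the field names and rules are invented) and only illustrates the idea:

    def validate_row(row: dict) -> list:
        """Return the list of problems found in one input record.
        A procedure with control points rejects bad data when it enters,
        instead of blaming whoever typed it months later."""
        errors = []
        if not str(row.get("order_id", "")).isdigit():
            errors.append("order_id must be numeric")
        try:
            if float(row.get("amount", "")) <= 0:
                errors.append("amount must be positive")
        except (TypeError, ValueError):
            errors.append("amount must be a number")
        return errors

    print(validate_row({"order_id": "12A", "amount": "-3"}))   # two problems reported
    print(validate_row({"order_id": "142", "amount": "19.9"})) # []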

Tools are not a secondary aspect in an error analysis, and neither are user interfaces and control points.

Again, looking for someone to blame will not help you learn anything.

By iterating this approach we should be able to regress to the point where the real root cause of the error lies, and at that point we should be able to understand what really happened.

Strategy, tactics, policy, process and procedures

Going backward looking for the root cause is a tedious and difficult job, because the root cause can be related to several different processes that are apparently not connected to one another.

A human error, as an example, can be related to excess stress, lack of training, badly designed tools and procedures, a toxic company environment, a bad boss, or a combination of all those things and more.

Sometimes the links that cause the error belong to different levels, so some are at the procedure level, some can be at the process level, and others can even be at the tactics or strategy level.

And it should be clear that in a company there are a lot of strategies in place: how to deal with personnel, how to deal with the market, how to deal with production and so on.

When strategy fails.

We can only hope not to have errors at the strategy level; that is a really bad thing. It means we are doing it all wrong. In this case we should put in place a correction that requires questioning the whole structure we have built. The problem with errors at the strategy level is that they mean you don’t know where you want to go.

The good news is that at the tactics, process and procedure levels not everything will be wasted. But all the interactions should be revised carefully.

Tactics can fail too

This is more common, but nevertheless quite painful. Tactics without strategy is just a waste of time and resources; strategy without tactics is even worse, just mere words.

Tactics should define how we want to get there, but they can be wrong.

Wrong tactics are the usual problem in war, the thing that makes people lose battles. And losing a battle can lead you to lose a war.

Luckily tactics can be revised from time to time and adjusted to the objective. But if we are not able to recognize an error at the tactics level, we can simply make all our effort vanish.

Processes and procedures won’t help to cover a bad tactics issue, because they can’t address the root cause; at best they mitigate the effect.

So if you want to win the market with the wrong channel approach (a classic tactical mistake), you can cover it for a while with price or sales policies, but sooner or later you will pay the price in terms of higher expenditure and lower income.

Policy, processes and procedures.

When we are really lucky, the error lies at the lower level, among policies, processes and procedures.

In this case, assuming we have designed a sound system, we can address the problem and the related externalities it can cause.

It is mandatory, at this level, to understand how the possible solution or mitigation can impact other processes.

Correcting one problem, we create a thousand new ones

A common issue when analyzing an error to learn a lesson is that the solution can be worse than the primary cause.

This usually happens for two main reasons:

  • Not all the implications have been correctly analyzed
  • The problem is actually at a higher level.

This is common, as an example, when we try to fix a bug in software and the fix creates other unwanted problems.

It is not that the fix does not solve the issue; it is that the issue and the fix do not live in an isolated realm, but are interconnected with the rest of the structure.

Correcting a bug is a good way to understand the question we are trying to sort out.

A bug can be just a simple piece of badly written code, or it can be due to an architectural error in the whole software.

But the underlying lesson we should address is: if there has been a problem in code writing or architecture design, why did we make this mistake? How can we correct it?

The real solution, therefore, would not just be to write a fix; that would merely address the contingency. The real lesson should be how to put developers in a position to write better code.

Error handling is not learning from error

Another common mistake is thinking that running around fixing problems and managing errors is a good way to learn a lesson.

Alas, sometimes what you have to do to fix a problem is not what you should do to avoid the problem in the first place. Fixing a problem is just contingency and requires a different approach from learning the lesson.

If you run an emergency response team, as an example, these things are clear. There is a huge difference between what you should do in an ERT and how things should run normally. And this goes through every aspect, from hierarchy to people management to the actions to be taken.

Thinking of running normal operations as in an emergency is just foolish, because ERTs are basically made to provide containment, not standard operation.

Conclusion?

Learning from mistakes is one of the most important things we can do to improve. The whole point is to understand what the mistake actually is, and how to avoid it in the future.



The IoT Files: The need for cryptography


One of the main topics that any IoT discussion should touch on is cryptography. There is an undisputed consensus that cryptography is a mandatory requirement to preserve security and privacy in the IoT world, but we are far from a general consensus on how to operate.

The need for cryptography in IoT comes from two main aspects:

The first need is clear: encryption is a mandatory requirement when we want to implement any form of authentication and non-repudiation. Encryption is widely used even when we don’t know we are using it; PKI and signing certificates are just some examples.
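
As a toy illustration of authentication and non-repudiation (not a recommendation for any particular IoT stack), the hypothetical sketch below signs a reading with an Ed25519 key using the widely used Python cryptography package; the names and message format are assumptions.

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # In a real device the private key would live in secure storage or a secure element.
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    message = b"sensor-42:temperature=21.3"
    signature = private_key.sign(message)

    # The receiver verifies against the device's public key;
    # verify() raises InvalidSignature if the message was altered.
    public_key.verify(signature, message)
    print("signature verified")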

Whenever we want to transmit something, encryption comes in handy to make sure that what we transmit is neither seen by third parties nor tampered with.

Whenever we store something, encryption comes in handy when we need to protect access to that data, even at a local level.

As for data privacy, it is an even stronger call for encryption, and for a wide use of it. As a system, IoT allows a multitude of devices to exchange data that can become sensitive and private. Without a clear understanding of this point there can be misinterpretation. In IoT the amount of data and metadata will be far bigger than the already impressive amount of data we release into the wild today. So a more cautious approach to data privacy will be needed and embedded into the very essence of IoT, and encryption will therefore be a mandatory requirement.
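
For the confidentiality and integrity of data in transit or at rest, authenticated encryption is the baseline. Here is a minimal, hypothetical sketch with AES-GCM, again using the Python cryptography package; key handling is deliberately simplified and would be very different on a real device.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)   # in practice provisioned per device
    aesgcm = AESGCM(key)

    nonce = os.urandom(12)                      # must never repeat for the same key
    plaintext = b'{"device": "sensor-42", "reading": 21.3}'
    associated_data = b"sensor-42"              # authenticated but sent in clear

    ciphertext = aesgcm.encrypt(nonce, plaintext, associated_data)
    assert aesgcm.decrypt(nonce, ciphertext, associated_data) == plaintext

Tampering with either the ciphertext or the associated data makes decrypt() raise an exception, which covers the "not seen and not tampered with" requirement above.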

But encryption is not an easy area, and I am not talking about implementation (which can be achieved easily) but about the need for and use of this technology.

A little check on the actual status

Cryptography is not only a technical or business argument (cost vs performance vs security) but, mainly, a political issue.

The history of cryptography has been marked by constant attempts to block, or control, the use of good, secure cryptographic tools in the civil environment. It is no mystery that nowadays there is a lot of discussion about cryptography and backdoors (although the term “backdoor” is misleading and misused most of the time).

The USA, as an example, has a long history of fighting against civil cryptographic tools, both in the past (maybe someone remembers the PGP affair) and in recent events (think of the Apple case as a clear example).

Every time we lower the level of security for some reason, we have to expect that sooner or later someone else will leverage it and use it for purposes not intended by the regulator. Recent history is full of such examples; some of the actions performed against cryptographic tools are in the news every day. We tend to call them vulnerabilities (SSL/TLS vulnerabilities like FREAK…), but let us be clear about what they actually are: the consequences of export-grade restrictions on cryptography.

There are a lot of laws and regulations related to the use, import and export of cryptography; here are some examples:

This section gives a very brief description of the cryptographic policies in twelve countries. We emphasize that the laws and regulations are continuously changing, and the information given here is not necessarily complete or accurate. For example, export regulations in several countries are likely to change in the near future in accordance with the new U.S. policy. Moreover, some countries might have different policies for tangible and intangible products; intangible products are products that can be downloaded from the Internet. Please consult with export agencies or legal firms with multi-national experience in order to comply with all applicable regulations.

Australia

The Australian government has been criticized for its lack of coordination in establishing a policy concerning export, import, and domestic use of cryptographic products. Recent clarifications state that there are no restrictions on import and domestic use, but that export is controlled by the Department of Defense in accordance with the Wassenaar Arrangement.

Brazil

While there are no restrictions of any kind today, there are proposals for a new law requiring users to register their products. Brazil is not part of the Wassenaar Arrangement.

Canada

There are no restrictions on import and domestic use of encryption products in Canada today. The Canadian export policy is in accordance with the policies of countries such as the United States, the United Kingdom, and Australia, in the sense that Canada’s Communications Security Establishment (CSE) cooperates with the corresponding authorities in those countries.

China

China is one of the countries with the strongest restrictions on cryptography; a license is required for export, import, or domestic use of any cryptography product. There are several restrictions on export regulations, and China is not participating in the Wassenaar Arrangement.

The European Union

The European Union strongly supports the legal use of cryptography and is at the forefront of counteracting restrictions on cryptography as well as key escrow and recovery schemes. While this policy is heavily encouraged by Germany, there are a variety of more restrictive policies among the other member states.

France

France used to have strong restrictions on import and domestic use of encryption products, but the most substantial restrictions were abolished in early 1999. Export regulations are pursuant to the Wassenaar Arrangement and controlled by Service Central de la Sécurité des Systèmes d’Information (SCSSI).

Germany

There are no restrictions on the import or use of any encryption software or hardware. Furthermore, the restrictions on export regulations were removed in June 1999.

Italy

While unhindered use of cryptography is supported by the Italian authorities, there have been proposals for cryptography controls. There are no import restrictions, but export is controlled in accordance with the Wassenaar Arrangement by the Ministry of Foreign Trade.

United Kingdom

The policy of the United Kingdom is similar to that of Italy, but with even more outspoken proposals for new domestic cryptography controls. Export is controlled by the Department of Trade and Industry.

Israel

Domestic use, export, and import of cryptographic products are tightly controlled in Israel. There have been proposals for slight relaxations of the regulations, but only for cryptographic products used for authentication purposes.

Japan

There are no restrictions on the import or use of encryption products. Export is controlled in accordance with the Wassenaar Arrangement by the Security Export Control Division of the Ministry of International Trade and Industry.

Russia

The Russian policy is similar to the policies of China and Israel with licenses required for import and domestic use of encryption products. Unlike those countries, however, Russia is a participant of the Wassenaar Arrangement. Export of cryptographic products from Russia generally requires a license.

South Africa

There are no restrictions on the domestic use of cryptography, but import of cryptographic products requires a valid permit from the Armaments Control Division. Export is controlled by the Department of Defense Armaments Development and Protection. South Africa does not participate in the Wassenaar Arrangement.

 

In the table below, 75 countries have been divided into five categories according to their cryptographic policies as of 1999. Category 1 includes countries with a policy allowing for unrestricted use of cryptography, while category 5 consists of countries where cryptography is tightly controlled. The table and most other facts in this answer are collected from [EPIC99], which includes extensive lists of references. Countries with their names in italics are participants in the Wassenaar Arrangement.

 

1 Canada, Chile, Croatia, Cyprus, Dominica, Estonia, Germany, Iceland, Indonesia, Ireland, Kuwait, Kyrgyzstan, Latvia, Lebanon, Lithuania, Mexico, Morocco, Papua New Guinea, Philippines, Slovenia, Sri Lanka, Switzerland, Tanzania, Tonga, Uganda, United Arab Emirates.
2 Argentina, Armenia, Australia, Austria, Belgium, Brazil, Bulgaria, Czech Republic, Denmark, Finland, France, Greece, Hungary, Italy, Japan, Kenya, South Korea, Luxembourg, Netherlands, New Zealand, Norway, Poland, Portugal, Romania, South Africa, Sweden, Taiwan, Turkey, Ukraine, Uruguay.
3 Hong Kong, Malaysia, Slovakia, Spain, United Kingdom, United States.
4 India, Israel, Saudi Arabia.
5 Belarus, China, Kazakhstan, Mongolia, Pakistan, Russia, Singapore, Tunisia, Venezuela, Vietnam.

NOTE: WHAT IS THE WASSENAAR ARRANGEMENT?

The Wassenaar Arrangement (WA) was founded in 1996 by a group of 33 countries including the United States, Russia, Japan, Australia, and the members of the European Union. Its purpose is to control exports of conventional weapons and sensitive dual-use technology, which includes cryptographic products; “dual-use” means that a product can be used for both commercial and military purposes. The Wassenaar Arrangement controls do not apply to so-called intangible products, which include downloads from the Internet.

WA is the successor of the former Coordinating Committee on Multilateral Export Controls (COCOM), which placed restrictions on exports to communist countries. It should be emphasized that WA is not a treaty or a law; the WA control lists are merely guidelines and recommendations, and each participating state may adjust its export policy through new regulations. Indeed, there are substantial differences between the export regulation policies of the participating countries.

As of the latest revision in December 1999, WA controls encryption and key management products where the security is based on one or several of the following:

A symmetric algorithm with a key size exceeding 56 bits.

Factorization of an integer of size exceeding 512 bits.

Computation of discrete logarithms in a multiplicative group of a field of size in excess of 512 bits.

Computation of discrete logarithms in a group that is not part of a field, where the size of the group exceeds 112 bits.

Other products, including products based on single-DES, are decontrolled. For more information on the Wassenaar Arrangement, see http://www.wassenaar.org/.
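Just to make these thresholds concrete, here is a minimal Python sketch (purely illustrative, not legal advice) that encodes the 1999 WA criteria listed above; the function and parameter names are invented for this example.

    # Illustrative only: encode the 1999 WA control thresholds as a simple check.
    # The parameters describe the strongest cryptography a hypothetical product offers.
    def is_wassenaar_controlled(sym_key_bits=0,
                                factoring_modulus_bits=0,
                                dlog_field_bits=0,
                                dlog_group_bits=0):
        """Return True if any parameter exceeds the thresholds listed above."""
        return (sym_key_bits > 56 or             # symmetric keys over 56 bits
                factoring_modulus_bits > 512 or  # e.g. RSA moduli over 512 bits
                dlog_field_bits > 512 or         # discrete logs in a field over 512 bits
                dlog_group_bits > 112)           # discrete logs in other groups over 112 bits

    print(is_wassenaar_controlled(sym_key_bits=56))              # False: single-DES is decontrolled
    print(is_wassenaar_controlled(sym_key_bits=128))             # True: e.g. AES-128
    print(is_wassenaar_controlled(factoring_modulus_bits=2048))  # True: e.g. RSA-2048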

Why IoT needs cryptography and where?

IoT, as a general concept, refers to a multitude of objects that can access the Internet.

The need to access the Internet is related to several aspects: the need to exchange data, receive commands, export outputs, and so on.

Of course there are different needs, and different grades of privacy and security are required according to the nature of the object we are talking about: it is not the same thing to talk about a car infotainment system, an autonomous driving system or a GPS, just as it is different when we talk about a refrigerator or a SCADA controller in a nuclear plant.

But, no matter what the device is and its role, some assumptions are common to all IoT objects:

  • They have to deal with sensors
  • They have to deal with data
  • They have security and privacy implications
  • They have to store data
  • They have to transmit data
  • They have to receive data

The first point is important in the encryption discussion because sensors can retrieve information that, to an expert eye, can reveal a lot of things outside the realm of the IoT object itself.

Data are of course the main reason to implement encryption.

Security and privacy implications are the obvious case study for encryption.

The last three points are where encryption should, at least, be implemented.
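To make those last three points concrete, here is a minimal sketch, assuming a Python-capable device and the widely used cryptography package, of how a sensor reading could be protected with symmetric authenticated encryption before being stored or transmitted; the reading format and the key handling are deliberately simplified.

    # A minimal sketch (not a production design): protect a sensor reading both
    # at rest and in transit with symmetric authenticated encryption.
    # Requires the third-party "cryptography" package; the reading format is invented.
    import json
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()     # in a real device this must be provisioned securely
    cipher = Fernet(key)

    reading = json.dumps({"sensor": "temp-01", "celsius": 21.4}).encode()
    token = cipher.encrypt(reading) # safe to store on flash or send over the network

    # Only a holder of the same key can recover (and implicitly authenticate) the data.
    print(cipher.decrypt(token).decode())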

One of the common mistakes in IoT security considerations is to focus on a specific aspect or device and not see the big picture.

Looking at a specific device is good for implementation, but not good for understanding security and data privacy issues. What can seem trivial in an object assumes a different role in a context, and IoT is all about context.

So the idea is that even if some data can seem harmless, they can assume a different value if merged with other data.

Cryptography’s role, in this context, is to prevent those data from being used for unauthorized and unwanted activities. But cryptography is also one of the basic tools needed to provide data integrity and non-repudiation.

Cryptography, of course, is not the panacea for every problem, but it is one of the tools to be used when we transmit and store data in order to preserve and protect information.
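As a small illustration of the integrity and non-repudiation side, the following sketch uses an Ed25519 digital signature (again via the cryptography package; the message content is invented) so a receiver can verify both the origin and the integrity of a message.

    # Sketch of integrity / non-repudiation: a digital signature lets a receiver
    # verify that a message comes from the holder of a private key and was not altered.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    device_key = Ed25519PrivateKey.generate()
    public_key = device_key.public_key()      # distributed to whoever must verify

    message = b'{"sensor": "temp-01", "celsius": 21.4}'
    signature = device_key.sign(message)

    try:
        public_key.verify(signature, message) # raises if message or signature was tampered with
        print("signature valid")
    except InvalidSignature:
        print("signature invalid")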

Data transmission

When we have to transmit or receive data, whether commands, processed outputs or raw data, we should be confident that our data:

  • Come from a trusted and authorized source
  • Have not been manipulated in transit (data injection, data forgery…)
  • Are protected from unauthorized access (data sniffing…)
  • Are consistent with the requests

Encryption plays its role mainly in the second and third points, although it is also used for authentication and authorization.

Encrypting a transmission allows the data to pass from point A to point B without a third party being able to read it, preventing data exfiltration. And since the key provides a basic level of authentication, data encryption can also provide some defense against the injection of unwanted data.
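A minimal sketch of this idea, assuming both endpoints already share a key and using AES-GCM from the cryptography package (the field names are illustrative): the authentication tag makes injected or modified packets fail to decrypt.

    # Authenticated encryption for transport: confidentiality plus an integrity tag.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)  # shared between the two endpoints
    aead = AESGCM(key)

    nonce = os.urandom(12)                     # must never repeat for the same key
    payload = b"valve=closed"
    header = b"device-42|seq=7"                # authenticated but not encrypted

    ciphertext = aead.encrypt(nonce, payload, header)

    # Any bit flipped in nonce, ciphertext or header makes decryption raise an exception.
    plaintext = aead.decrypt(nonce, ciphertext, header)
    print(plaintext)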

The downside of encryption is related to two aspects: solidity of the encryption and key exchange.

Those aspects are not trivial: a 40-bit symmetric encryption key can easily be brute-forced by modern computer systems (see, as an example, the “Bar mitzvah attack” on the SSL/TLS protocols), therefore 40-bit encryption (see the FREAK lesson) is a clear security hole.

On the other hand, even a longer encryption key is useless if the key is discovered.
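Some back-of-the-envelope arithmetic makes the point; the attacker speed below is an assumption chosen only for illustration.

    # How long an exhaustive key search takes at one billion guesses per second
    # (assumed attacker speed, purely illustrative).
    GUESSES_PER_SECOND = 1_000_000_000

    for bits in (40, 56, 128):
        seconds = 2 ** bits / GUESSES_PER_SECOND
        print(f"{bits}-bit key: ~{seconds / 3600:.1f} hours "
              f"({seconds / (3600 * 24 * 365):.2e} years)")

    # 40-bit:  roughly 18 minutes on a single machine, effectively no protection
    # 56-bit:  roughly a couple of years on one machine, far less on a cluster
    # 128-bit: astronomically long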

Processor time and resources

The longer the key, the more the encryption will cost in terms of time and resources. Encryption chipsets are usually the answer to this aspect, while they can do little about key exchange.

The arguments against a wide use of long encryption keys (256 bits) are, in reality, more related to political or cost constraints than to technical ones. And even cost is only partially a problem: scaling up production would make those chips inexpensive.

Of course software encryption is a more economical (but maybe less secure) way to address the question for IoT.

The whole point is to understand how much we can invest in a given IoT device in terms of resources.

Another point to take care of is the overhead that encryption adds to network traffic. An encryption protocol usually adds some overhead to the transmission, due to bigger packets (although the use of compression can reduce it) and to the key exchange process, which can require several round trips.
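As a quick, indicative illustration of that overhead, the snippet below compares plaintext sizes with the corresponding Fernet token sizes; a Fernet token carries a version byte, a timestamp, a random IV and an HMAC, and is base64 encoded, so it is noticeably larger than the plaintext it protects.

    # Measure the size overhead of a simple authenticated-encryption format.
    from cryptography.fernet import Fernet

    cipher = Fernet(Fernet.generate_key())
    for size in (16, 64, 256, 1024):
        token = cipher.encrypt(b"x" * size)
        print(f"plaintext {size:5d} bytes -> ciphertext {len(token):5d} bytes")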

The key exchange issue

The other issue is the key exchange. To use encryption (symmetric or asymmetric) you need to exchange key material with your communication partner.

The key can be

  • Static
  • Dynamic

A static key is easy to implement and can be embedded (hard-coded) in the solution. The problem with static keys is that they can be good for storage but not for data transmission: once the key has been discovered, all the security is gone.

Dynamic keys are a more secure solution; a lot of protocols rely on dynamic keys for data exchange (take SSL/TLS as an example), yet the implementation needs to be careful in order to avoid the same kind of problems discovered in those protocols.

One problem is how to create the key: a weak protocol can create predictable keys that can easily be guessed, and this is one of the typical requirements of export-grade encryption.

Also, relying on a PKI infrastructure is not, per se, a secure solution: PKI keys can be stolen and/or forged.
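As a sketch of what a dynamic key can look like, assuming the cryptography package and leaving authentication of the public keys aside (which in practice still requires a PKI or pre-provisioned trust), an ephemeral X25519 exchange followed by a key derivation function gives every session a fresh symmetric key.

    # Ephemeral Diffie-Hellman (X25519) plus a KDF: each session gets a fresh key,
    # so a key discovered later does not expose past traffic. Peer names are invented.
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF
    from cryptography.hazmat.primitives import hashes

    device_priv = X25519PrivateKey.generate()   # fresh per session
    gateway_priv = X25519PrivateKey.generate()

    # Each side combines its private key with the other's public key...
    device_secret = device_priv.exchange(gateway_priv.public_key())
    gateway_secret = gateway_priv.exchange(device_priv.public_key())
    assert device_secret == gateway_secret      # ...and obtains the same shared secret

    # Never use the raw secret directly: derive the actual session key with a KDF.
    session_key = HKDF(algorithm=hashes.SHA256(), length=32,
                       salt=None, info=b"iot-session").derive(device_secret)
    print(len(session_key), "byte session key")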

Data storage

Data should be protected not only when we transmit them but also when we store them.

It seems trivial, but data storage is not as simple as it seems in IoT. We can have different kinds of data: permanent, semi-permanent and volatile.

Let us assume that volatile data are those used in the moment and then destroyed; we should focus on the permanent or semi-permanent ones.

Again this is a generalization, and specific implementations can differ, but generally speaking permanently stored data needs, in the first instance, a storage area.

This area can be local or remote (the cloud), according to the data needs.

Apparently the more secure solution would be storing data locally on the device. This is a simplistic approach, since the security of the data stored in a device is strictly related to how secure access to the device is, which is not a given.

If the device is not able to set up a proper authentication and authorization mechanism for internal resources (and this is a far more extensive need than just locking the door against outside visitors), data stored locally need to be protected from external intrusion.

Encryption is, of course, one of the technologies that should be implemented. As for data transfer, the same arguments about key length discussed before apply here. Another important aspect is the ability of the system to securely wipe data physically removed from the storage area, in order to prevent sophisticated data exfiltration techniques.
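A minimal sketch of encryption at rest, under the assumption that the device holds some secret from which a storage key can be derived (the passphrase and record below are invented for illustration):

    # Encryption at rest: derive a storage key from a device secret, then store
    # records only as AES-GCM ciphertexts.
    import os
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    salt = os.urandom(16)                     # stored alongside the data
    storage_key = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                             salt=salt, iterations=600_000).derive(b"device-secret")

    aead = AESGCM(storage_key)
    nonce = os.urandom(12)
    record = aead.encrypt(nonce, b'{"log": "door opened"}', None)

    # What lands on flash is salt + nonce + record; the plaintext never touches disk.
    print(aead.decrypt(nonce, record, None))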

Again, the problem here is how to deal with the key used to encrypt and decrypt data. This is the scenario we saw in the Apple vs. FBI case over the San Bernardino iPhone, to refer to recent episodes.

What IoT needs

From a security standpoint there is no doubt that a strong encryption approach is necessary for IoT; there is no real justification, from a technical or an economic point of view, against this implementation.

The problem comes from the political approach to encryption. Encryption lives in a dual-identity status, as both a civil technology and a military one. Recent geopolitical issues (terrorism and cyber terrorism) have fueled the discussion against encryption, potentially harming future implementations with “backdoor”-style designs (insecurity by design).

Without a common agreement on encryption we can face two different scenarios:

One scenario sees a short-key-length implementation, with practically no security advance besides marketing statements.

Another scenario sees an IoT divided into regions where encryption is or is not allowed, making it impossible for you to travel to specific countries because of the technology implemented in your cardiac stimulator (I assume you could leave your phone and watch at home and use an allowed device).

Of course, neither is what IoT is claimed to be.


The IoT Files: The need for cryptography was originally published on The Puchi Herald Magazine

Historical memory, what is this about?

Historical memory, what is this about?

I wrote about memories yesterday.

Personal memories and historical memories are the building blocks of our life. We live by our memories since, in the end, it is memories that create our thinking, our background, our experience, our knowledge.

Personal memories are something easy to understand: they are what we lived through direct experience. But those memories are just a portion of the memories we have and have to deal with.

Another great portion of our memories is built by the society we are living in, shaped through communication (media, arts, word of mouth, storytelling), school and other tools.

Some of those memories are related to the cultural heritage, some are related to the moment we are living, some are just simply lies.

Historical memory should be the memory of things that happened before we were born; since we cannot have direct experience of what happened before we were there, we need something or someone to tell us. I am not talking about past lives or memory regression to previous ages, I am just talking about history.

It is interesting to notice how historical memories tend to blur the closer they are to us: we have a less clear vision of what happened 30 years ago than of what happened 100 years ago.

The main reason is that recent history is burdened by its political influence on current life, and so it is managed and transformed to serve one need or another. Ancient history is less easily related to our current experience, and so it is easier to find a contextual and well-founded analysis.

But going back in time is still not easy: the further we go back, the less we can know, because history needs signs to be recreated by historians. This is a problem because we tend to read those signs according to our experience, driven by our need to make them as close as possible to our current status and set of beliefs.

It is common to see this in the history of science and in history in general. We tend to use the past to justify our current actions more than to learn the lesson, so we, ridiculously, tend to pass moral judgement on past historical events, and not on current ones.

Historical memories are not something static, nor absolute. They are the reinterpretation of the past we make according to our experience, our culture, our teachings, our religious, social and political beliefs.

You question this? Although it can sound crazy, there are still people who believe in creationism; they probably consider paleontologists a sort of evil scientists. And I cannot imagine what they think about the ones who study the first moments of our universe, way before Earth was created.

Historical memory is something that could help us avoid the errors of the past, but it is usually shaped to allow us to make those mistakes again and again. This is why at school we never study when we were the bad guys, but only our wonderful and heroic deeds.

Putting our experience into a historical perspective is not politically (and socially) useful: can you imagine what would happen if we really tracked all politicians’ promises and checked them against reality?

Luckily, to avoid this reality check, we constantly avoid listening to the other side; when it is not convenient, the other is just a bad storyteller. It is like when you hear comments like: he works at a university, he is an intellectual, he does not know about real life… It would seem that being knowledgeable is, for some, a bad thing, and actually it is, because it could put our belief system at stake.

The problem with historical memory is that part of it is formed when we do not have enough critical tools to analyze it (let us say until we are teenagers), and then we shape it to follow our constructed set of beliefs. So our shaped historical memories drive us to shape our current memories, in an endless cycle.

I wrote about this in the past; I called it rational acts of faith.

Basically we choose the sources we want to believe, and assume that is the truth. Since that is the truth, the rest is accordingly a lie.

It can be a religious text (the Bible, the Quran, the Shruti…), or some political, social or economic background literature (Das Kapital, The Wealth of Nations, Mein Kampf…), but we accept it as a truthful source and we discard the rest.

Of course we could easily say that there is not only one side, but hey, either you are with me or you are against me, no other options.

[Image: “This is true” / “This is truth” square and circle illustration; please consider it before talking or typing]

This is common everywhere: in Italy we say that Columbus was Italian, and that the telephone was invented by Meucci, not by Bell. In Spain they claim Columbus was Spanish, while in the USA it is commonly accepted that Bell invented the telephone, regardless of the historical facts.

If we cannot find a common agreement on such silly questions, can we imagine how we read recent and past history?

Moreover, to shape our memories we tend to take excerpts out of context: so the neocons usually refer to the “invisible hand” that should shape the market, forgetting the cultural context in which those assumptions were made; at the same time we fail to understand what the vision of the world was, and what the consequences of the first steps of industrialization and urbanization were, when Karl Marx wrote “Das Kapital”.

Out of context, anything can be used for whatever purpose we want or need. And out of context it is easy to forget the downside of every story: so the epic conquest of the Americas does not mention that the local populations suffered genocide in both North and Latin America. And of course there is no mention in EU schoolbooks of what Europeans did in the colonies.

I wonder how many UK citizens know the role of the UK in the Opium Wars in China.

How many realize that during the Second World War there was a civil war in Italy against the Fascists.

Or what Italians did to the local people in the colonies.

Or how many Japanese know what happened in Manchukuo.

How many Chinese know about the dark years and the millions of deaths during the first decades of the Cultural Revolution (the price of forced industrialization).

Shaping our society’s memory to make us look like the good guys has always been a need for any society; in ancient history it was done with epic literature (and some good tricks with historical texts, actually), now we use TV and movies, but nothing really changes. Censorship is also always present, in some cases explicit, in some cases more subtle, but no country is safe: not Italy, not the USA, not China. OK, in China it is clear, almost blatant.

So we delete, or try to delete, a great part of the historical memories we do not like; this is why, in the end, we are doomed to make the same errors again and again.

And it is interesting to notice that even though we have access to much more information nowadays, we are more closed to critical analysis. Or maybe it is just that the ease of communication gives voice to the worst elements.

 


Historical memory, what is this about? was originally published on The Puchi Herald Magazine