NFV network function virtualization security considerations


I have been asked to write down a few things related to NFV and security. NFV is a relatively new thing in the IT world: it first made the news in 2012 and has since followed the development path common to virtualization technologies.

Virtualization has made dramatic improvements in recent years. It all started with simple virtualization platforms, VMware being the first that comes to mind but not the only one. The idea was to abstract the software platform from the underlying hardware.

Developing the idea, the abstraction grew to cover multiple hardware platforms and then moved to multisite, WAN and geographically distributed deployments. Nowadays we call this sort of implementation cloud, but the whole cloud story started from the old virtualization idea.

While this platform change was taking place, the world of services was experimenting with different client-server options (web services and so on).

With the new platforms in place, it was clear that the network would follow the same trend, moving to software and virtual shores.

From the network point of view, the first step was SDN (Software Defined Networking).

Software defined networks (SDN) allow dynamic changes of network configuration that can alter network function characteristics and behaviors. For example, SDN can render real-time topological changes of a network path. An SDN-enabled network provides a platform on which to implement a dynamic chain of virtualized network services that make up an end-to-end network service.

SDN basically allows network services to be centrally administered, managed and configured, creating policies that can be tailored to different needs and that can adapt to a changing environment.
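As a purely illustrative sketch of that idea of centralized, software-driven control (the controller class and rule fields below are hypothetical, not any real SDN controller API):

```python
# Illustrative sketch of centralized SDN-style control (hypothetical API, not a real controller).
from dataclasses import dataclass, field

@dataclass
class FlowRule:
    match: dict        # e.g. {"dst_ip": "10.0.0.5", "tcp_port": 443}
    action: str        # e.g. "forward:port2", "drop"
    priority: int = 100

@dataclass
class Controller:
    switches: dict = field(default_factory=dict)  # switch_id -> list of FlowRule

    def push_rule(self, switch_id: str, rule: FlowRule) -> None:
        """Centrally install a rule on a switch; policy lives in one place."""
        self.switches.setdefault(switch_id, []).append(rule)

    def reroute(self, switch_id: str, dst_ip: str, new_port: str) -> None:
        """React to a topology or demand change by rewriting the relevant rules."""
        for rule in self.switches.get(switch_id, []):
            if rule.match.get("dst_ip") == dst_ip:
                rule.action = f"forward:{new_port}"

ctrl = Controller()
ctrl.push_rule("sw1", FlowRule(match={"dst_ip": "10.0.0.5", "tcp_port": 443},
                               action="forward:port2"))
ctrl.reroute("sw1", "10.0.0.5", "port7")  # dynamic, software-driven path change
```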

But this level of abstraction was not enough to provide the flexibility needed by modern datacenter, cloud and virtualized environments.

In an SDN environment the network gear largely remains real, solid boxes inside an environment that is far more virtualized.

The first attempt to hybridize the physical network with the virtual one was the introduction of the first virtual network elements, such as switches and firewalls. Those components were sometimes part of the hypervisor of the virtualization platform, and sometimes virtual appliances able to run inside a virtual environment.

Those solutions were (and are, since they still exist) good at targeting specific needs, but they did not provide the flexibility, resilience and scalability required by modern virtualization systems. Products like VMware’s vShield, Cisco’s ASA 1000v and F5 Networks‘ vCMP brought improvements in management and licensing more suited to service provider needs. Each used a different architecture to accomplish those goals, making a blending of approaches difficult, and the lack of a comprehensive approach made it hard to expand those services extensively.

The natural next step of the virtualization process was to define something that addresses, in a more comprehensive way, the need to move part of the network functions inside the virtual environment.

Communications service providers and network operators came together through ETSI to try to address the management issues around virtual appliances that handle network functions.

NFV represents a decoupling of the software implementation of network functions from the underlying hardware by leveraging virtualization techniques. NFV offers a variety of network functions and elements, including routing, content delivery networks, network address translation, virtual private networks (VPNs), load balancing, intrusion detection and prevention systems (IDPS), and firewalls. Multiple network functions can be consolidated into the same hardware or server. NFV allows network operators and users to provision and execute on-demand network functions on commodity hardware or CSP platforms.

NFV does not depend on SDN (and vice-versa) and can be implemented without it. However, SDN can improve performance and enable a rich feature set known as Dynamic Virtual Network Function Service Chaining (or VNF Service Chaining). This capability simplifies and accelerates deployment of NFV-based network functions.
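To make the service-chaining idea more concrete, here is a minimal sketch of a chain description and a lookup of the next function in the chain; the data model is invented for illustration and is not the ETSI or any vendor format:

```python
# Hypothetical, simplified description of a VNF service chain (not an ETSI/TOSCA format).
from typing import Optional

service_chain = {
    "name": "web-frontend-chain",
    "match": {"vlan": 120, "dst_port": 443},          # which traffic enters the chain
    "vnfs": [                                          # ordered functions to traverse
        {"type": "firewall",      "instance": "vfw-01"},
        {"type": "ids",           "instance": "vids-03"},
        {"type": "load-balancer", "instance": "vlb-02"},
    ],
}

def next_hop(chain: dict, current: Optional[str]) -> Optional[str]:
    """Return the next VNF instance in the chain after 'current' (None = start of chain)."""
    instances = [v["instance"] for v in chain["vnfs"]]
    if current is None:
        return instances[0]
    idx = instances.index(current)
    return instances[idx + 1] if idx + 1 < len(instances) else None

print(next_hop(service_chain, None))        # vfw-01
print(next_hop(service_chain, "vids-03"))   # vlb-02
```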

Based on the framework introduced by the European Telecommunications Standards Institute (ETSI), NFV is built on three main domains:

  • VNF,
  • NFV infrastructure, and
  • NFV management and orchestration (MANO).

A VNF can be considered a container of network services provisioned by software, very similar to a VM operational model. The infrastructure part of NFV includes all physical resources (e.g., CPU, memory, and I/O) required for storage, computing and networking in order to execute the VNFs. The management of all virtualization-specific tasks in the NFV framework is performed by the NFV management and orchestration domain. For instance, this domain orchestrates and manages the lifecycle of resources and VNFs, and also controls the automatic remote installation of VNFs.
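As an illustration of how the three domains fit together, the sketch below uses hypothetical field names (loosely inspired by VNF descriptors, not a real ETSI/TOSCA schema) to show a VNF description plus a toy MANO-style step that reserves NFV infrastructure resources for it:

```python
# Hypothetical sketch: a VNF descriptor and a toy MANO-style instantiation step.
vnf_descriptor = {
    "name": "virtual-firewall",
    "image": "vfw-image-1.2",
    "resources": {"vcpu": 4, "ram_gb": 8, "disk_gb": 40},   # NFVI resources it needs
    "lifecycle": {"scale_max": 5, "auto_heal": True},
}

nfvi_pool = {"vcpu": 64, "ram_gb": 256, "disk_gb": 2000}     # the NFV infrastructure

def instantiate(descriptor: dict, pool: dict) -> dict:
    """MANO-like step: check NFVI capacity, reserve resources, return a running VNF record."""
    need = descriptor["resources"]
    if any(pool[k] < need[k] for k in need):                 # capacity check
        raise RuntimeError("not enough NFVI capacity")
    for k, v in need.items():
        pool[k] -= v                                         # reserve from the pool
    return {"vnf": descriptor["name"], "state": "RUNNING", "allocated": dict(need)}

print(instantiate(vnf_descriptor, nfvi_pool))
```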

The resulting environment is now a little more complicated than it was a few years ago.

Where in the past we used to have:

  • physical servers running operating systems such as Linux, Unix or Windows, bound to a specific hardware platform, with almost monolithic services running on top of them,
  • physical storage units running on different technologies and transports (Ethernet, iSCSI, fiber optic and so on),
  • networks connected through physical devices, with some specific units providing external access (VPN servers),
  • all protected by some sort of security unit providing some sort of control (firewall, IPS/IDS, 802.1x, AAA and so on),
  • and all managed quite independently through different interfaces or programs,

now we have moved to a world where we have:

  • a virtualized environment where services (think, for example, of Docker implementations) or entire operating systems run in virtual machines (VMs) that handle the abstraction from the hardware and can allocate resources dynamically, in terms of performance and even geographic location,
  • a network environment whose services are partly virtualized (as in VNF implementations) and partly physical, and interact dynamically with the virtual environment,
  • a network configured dynamically through control software (SDN), which can easily modify the network topology itself in order to respond to the changing requests coming from the environment (users, services, processes).

Nowadays, the impressive effects of network functions virtualization (NFV) are evident in a wide range of applications, from IP node implementations (e.g., future Internet architectures) to mobile core networks. NFV allows network functions (e.g., packet forwarding and dropping) to be performed in virtual machines (VMs) in a cloud infrastructure rather than in dedicated devices. NFV, as an agile and automated approach to networking, is attractive to network operators because it makes it easy to develop new services and provides self-management and network programmability via software-defined networking (SDN). Furthermore, co-existence with current networks and services improves customer experience and reduces complexity, capital expenditure (CAPEX) and operational expenditure (OPEX).

In theory, virtualization broadly describes the separation of resources or requests for a service from the underlying physical delivery of that service. In this view, NFV involves the implementation of network functions in software that can run on a range of hardware, which can be moved without the need for installation of new equipment. Therefore, all low-level physical network details are hidden and the users are provided with the dynamic configuration of network tasks.

Everything seems better and easier, but all these transformations do not come without a price in terms of security.

Every step into virtualization brings security concerns related to the control plane (think of hypervisor security and orchestrator security), the communication plane, the virtual environment itself (which often inherits the same problems as the physical platform), and the transition interface between the physical and virtual worlds.

Despite its many advantages, NFV therefore introduces new security challenges. Since all software-based virtual functions in NFV can be configured or controlled by an external entity (e.g., a third-party provider or user), the whole network could potentially be compromised or destroyed. For example, in order to reduce hosts’ heavy workloads, a hypervisor in NFV can dynamically balance the load assigned to multiple VMs through a flexible and programmable networking layer known as a virtual switch; however, if the hypervisor is compromised, all network functions can be disabled completely (a good old DDoS) or some services can be given priority over others.

Also, NFV’s attack surface is considerably larger than that of traditional network systems. Besides the network resources (e.g., routers, switches) present in traditional networks, virtualization environments, live migration and the multi-tenant common infrastructure can also be attacked in NFV. For example, an attacker can snare a dedicated virtualized network function (VNF) and then spread its bots across a victim’s whole network using the migration and multicast abilities of NFV. To make matters worse, access to a common infrastructure for a multi-tenant network based on NFV inherently allows other security risks due to the resources shared between VMs. For example, in a data center network (DCN), side-channel attacks (e.g., cache-based side channels) and/or operational interference can be introduced unless the resources shared between VMs are securely controlled with proper security policies. In practice, it is not easy to provide complete isolation of VNFs in DCNs.

The challenges related to securing a VNF are complex because they touch all the elements that compose the environment: physical, virtual and control.

According to the CSA, securing this environment is challenging for at least the following reasons:

  1. Hypervisor dependencies: Today, only a few hypervisor vendors dominate the marketplace, with many vendors hoping to become market players. Like their operating system vendor counterparts, these vendors must address security vulnerabilities in their code. Diligent patching is critical. These vendors must also understand the underlying architecture, e.g., how packets flow within the network fabric, various types of encryption and so forth.
  2. Elastic network boundaries: In NFV, the network fabric accommodates multiple functions. Placement of physical controls is limited by location and cable length. These boundaries are blurred or non-existent in NFV architecture, which complicates security matters due to the unclear boundaries. VLANs are not traditionally considered secure, so physical segregation may still be required for some purposes.
  3. Dynamic workloads: NFV’s appeal is in its agility and dynamic capabilities. Traditional security models are static and unable to evolve as network topology changes in response to demand. Inserting security services into NFV often involves relying on an overlay model that does not easily coexist across vendor boundaries.
  4. Service insertion: NFV promises elastic, transparent networks since the fabric intelligently routes packets that meet configurable criteria. Traditional security controls are deployed logically and physically inline. With NFV, there is often no simple insertion point for security services that are not already layered into the hypervisor.
  5. Stateful versus stateless inspection: Today’s networks require redundancy at a system level and along a network path. This path redundancy causes asymmetric flows that pose challenges for stateful devices that need to see every packet in order to provide access controls. Security operations during the last decade have been based on the premise that stateful inspection is more advanced and superior to stateless access controls. NFV may add complexity where security controls cannot deal with the asymmetries created by multiple, redundant network paths and devices.
  6. Scalability of available resources: As earlier noted, NFV’s appeal lies in its ability to do more with less data center rack space, power, and cooling.

Dedicating cores to workloads and network resources enables resource consolidation. Deeper inspection technologies—next-generation firewalls and Transport Layer Security (TLS) decryption, for example—are resource intensive and do not always scale without offload capability. Security controls must be pervasive to be effective, and they often require significant compute resources.

Together, SDN and NFV create additional complexity and challenges for security controls. It is not uncommon to couple an SDN model with some method of centralized control to deploy network services in the virtual layer. This approach leverages both SDN and NFV as part of the current trend toward data center consolidation.

The NFV Security Framework tries to address those problems.

If we want to dig a little deeper into the security part, we can analyze:

  • Network function-specific security issues

and

  • Generic virtualization-related security issues

Network function-specific threats refer to attacks on network functions and/or resources (e.g., spoofing, sniffing and denial of service).

The foundation of NFV is set on network virtualization. In this NFV environment, a single physical infrastructure is logically shared by multiple VNFs. For these VNFs, providing a shared, hosted network infrastructure introduces new security vulnerabilities. The general platform of network virtualization consists of three entities: the providers of the network infrastructure, the VNF providers, and the users. Since the system involves different operators, their cooperation cannot be perfect, and each entity may behave in a non-cooperative or greedy way to gain benefits.

The virtualization threats of NFV can originate from each entity and may target the whole system or parts of it.

In this view, we need to consider threats such as side-channel or flooding attacks as common attacks, and hypervisor, malware injection or VM migration related attacks as virtualization- and cloud-specific attacks.

Basically, VNF adds a new layer of security concerns to virtualized/cloud platforms for at least three reasons:

  • It inherits all the classic network security issues and expands them to cloud level.

This means that once a VNF is compromised there is a good chance it can spread the attack or problem to the whole environment, affecting not only the resources directly assigned to it but anything connected to the virtual environment. Think, for example, of the level of damage that can be done by a DDoS that rapidly depletes all the cloud network resources by modifying, say, the QoS parameters rather than using the traditional flooding techniques (which remain available anyway).

  • It depends on several layers of abstraction and control.

The orchestrator and the hypervisor are, as a matter of fact, great attack points, since compromising either one can hand an attacker control of the whole virtual environment and of every function running on it.

  • It requires a more carefully planned implementation than the classic physical one,

with tighter control on who manages the management interfaces since, in common with SDN, VNF is more exposed to unauthorized access and configuration-related issues.

VNF still requires study and analysis from a security perspective; the good part is that this is a new technology under development, therefore there is a lot of room for improvement.


NFV network function virtualization security considerations was originally published on The Puchi Herald Magazine

Firewall: Traditional, UTM and NGFW. Understanding the difference


One of the problems nowadays, when we talk about firewalls, is to understand what a firewall actually is and what the acronyms used to define the different types of firewalls mean.
The common definition today recognizes three main types of firewall:

• Firewalls
• UTM
• NGFW

But what are the differences (if any) between those things?
Let’s start with the very basics: what a firewall is.

Image: a firewall placed between a LAN and a WAN (Photo credit: Wikipedia)
Firewall:

A firewall is software used to maintain the security of a private network. Firewalls block unauthorized access to or from private networks and are often employed to prevent unauthorized Web users or illicit software from gaining access to private networks connected to the Internet. A firewall may be implemented using hardware, software, or a combination of both.
A firewall is recognized as the first line of defense in securing sensitive information. For better safety, the data can be encrypted.
Firewalls generally use two or more of the following methods:

• Packet Filtering: Firewalls filter packets that attempt to enter or leave a network and either accept or reject them depending on a predefined set of filter rules (a minimal sketch of this idea follows the list).
• Application Gateway: The application gateway technique employs security methods applied to certain applications, such as Telnet and File Transfer Protocol servers.
• Circuit-Level Gateway: A circuit-level gateway applies these methods when a connection such as Transmission Control Protocol is established and packets start to move.
• Proxy Servers: Proxy servers can mask real network addresses and intercept every message that enters or leaves a network.
• Stateful Inspection or Dynamic Packet Filtering: This method compares not just the header information but also a packet’s most important inbound and outbound data parts. These are then compared to a trusted information database for characteristic matches, which determines whether the information is authorized to cross the firewall into the network.
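To make the packet-filtering idea concrete, here is a minimal, illustrative Python sketch of a stateless rule matcher; the rule format and field names are invented for the example and are not any vendor's syntax:

```python
# Minimal illustrative stateless packet filter (hypothetical rule format).
RULES = [
    # (action, protocol, src_prefix, dst_port)
    ("allow", "tcp", "10.0.0.",   443),   # internal clients to HTTPS
    ("allow", "udp", "10.0.0.",    53),   # internal clients to DNS
    ("deny",  "any", "",         None),   # default deny
]

def filter_packet(protocol: str, src_ip: str, dst_port: int) -> str:
    """Return 'allow' or 'deny' from the first rule that matches the packet headers."""
    for action, proto, src_prefix, port in RULES:
        proto_ok = proto in ("any", protocol)
        src_ok = src_ip.startswith(src_prefix)
        port_ok = port is None or port == dst_port
        if proto_ok and src_ok and port_ok:
            return action
    return "deny"

print(filter_packet("tcp", "10.0.0.12", 443))   # allow
print(filter_packet("tcp", "192.168.1.5", 22))  # deny (default rule)
```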

The limit of the firewall itself is that it works only at the protocol level (IP/TCP/UDP), without knowledge of the higher-level risks that can cross the network.

From viruses to content filtering, there are hundreds of different technologies that can complement the firewall’s work in order to protect our resources.

To address this more complex security environment, the firewall evolved into something new that covers different aspects beyond simple protocol inspection. These devices use different technologies to address different aspects of security in one single box: the so-called UTM (Unified Threat Management).

Unified Threat Management (UTM)

Unified threat management (UTM) refers to a specific kind of IT product that combines several key elements of network security to offer a comprehensive security package to buyers.

A unified threat management solution involves combining the utility of a firewall with other guards against unauthorized network traffic along with various filters and network maintenance tools, such as anti-virus programs.

The emergence of unified threat management is a relatively new phenomenon, because the various aspects that make up these products used to be sold separately. However, by selecting a UTM solution, businesses and organizations can deal with just one vendor, which may be more efficient. Unified threat management solutions may also promote easier installation and updates for security systems, although others contend that a single point of access and security can be a liability in some cases.

UTMs are gaining momentum but still lack understanding of context and users, and are therefore not the best fit for the new environments. To bridge that gap, security researchers moved up the stack, from protocols to applications, where user behavior and context are key.

This led from the UTM to the so-called Next Generation Firewall, or NGFW.

Next-generation firewall (NGFW)

A next-generation firewall (NGFW) is a hardware- or software-based network security system that is able to detect and block sophisticated attacks by enforcing security policies at the application level, as well as at the port and protocol level.
Next-generation firewalls integrate three key assets: enterprise firewall capabilities, an intrusion prevention system (IPS) and application control. Like the introduction of stateful inspection in first-generation firewalls, NGFWs bring additional context to the firewall’s decision-making process by providing it with the ability to understand the details of the Web application traffic passing through it and to take action to block traffic that might exploit vulnerabilities.

Next-generation firewalls combine the capabilities of traditional firewalls — including packet filtering, network address translation (NAT), URL blocking and virtual private networks (VPNs) — with Quality of Service (QoS) functionality and features not traditionally found in firewall products.

These include intrusion prevention, SSL and SSH inspection, deep-packet inspection and reputation-based malware detection as well as application awareness. The application-specific capabilities are meant to thwart the growing number of application attacks taking place on layers 4-7 of the OSI network stack.

The simple definition of application control is the ability to detect an application based on the application’s content rather than the traditional layer 4 protocol. Since many application providers are moving to a Web-based delivery model, the ability to detect an application based on its content is important, while working only at the protocol level is almost worthless.
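As a toy illustration of content-based application detection (grossly simplified; real NGFW engines rely on large signature sets and protocol decoders), the sketch below classifies a flow by looking at the first payload bytes instead of the destination port. The signatures are invented for the example:

```python
# Toy content-based application identification (illustrative signatures only).
def identify_app(payload: bytes) -> str:
    """Guess the application from the first payload bytes, ignoring the port number."""
    if payload.startswith(b"\x16\x03"):                      # TLS handshake record
        return "tls"
    if payload.split(b" ")[0] in (b"GET", b"POST", b"PUT", b"HEAD"):
        return "http"
    if payload.startswith(b"SSH-2.0"):
        return "ssh"
    return "unknown"

# Port 443 traffic is not necessarily TLS, and HTTP can run on any port:
print(identify_app(b"GET /index.html HTTP/1.1\r\n"))  # http
print(identify_app(b"\x16\x03\x01\x02\x00"))          # tls
print(identify_app(b"SSH-2.0-OpenSSH_9.6"))           # ssh
```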

Yet in the market it is still not easy to understand what a UTM is and what an NGFW is.

UTM vs NGFW

Next-Generation Firewalls were defined by Gartner as firewalls with Application Control, User-Awareness and Intrusion Detection. So basically an NGFW is a firewall that moves from creating rules based on IP/port to creating rules based on user, application and other parameters.
The difference is, basically, the shift from the old TCP/IP protocol model to a new User/Application/Context one.
On the other hand, UTMs are a mix of technologies that address different security aspects, from antivirus to content filtering, from web security to email security, all on top of a firewall. Some of those technologies can be configured to recognize users, but they seldom deal with applications.
The problem in the market is that nowadays the traditional firewall does not exist anymore, even in the personal/home/SOHO space. Most of them are UTM based.
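Purely as an illustration of that shift in the rule model (field names invented for the example, no vendor syntax implied), compare a port-based rule with a user/application/context-based one:

```python
# Illustrative comparison of rule models (hypothetical fields, no vendor syntax implied).
traditional_rule = {
    "src_ip": "10.0.0.0/24",
    "dst_ip": "any",
    "dst_port": 443,          # "port 443" is the only notion of what this traffic is
    "action": "allow",
}

ngfw_rule = {
    "user_group": "finance",          # user-awareness (e.g., from a directory service)
    "application": "office365",       # identified from content, not from the port
    "context": {"time": "business-hours", "device_posture": "managed"},
    "action": "allow",
    "inspection": ["ips", "tls-decrypt"],   # extra controls applied to the allowed flow
}

def describe(rule: dict) -> str:
    """Show what each model actually keys its decision on."""
    keys = ", ".join(k for k in rule if k != "action")
    return f"{rule['action']} based on: {keys}"

print(describe(traditional_rule))  # allow based on: src_ip, dst_ip, dst_port
print(describe(ngfw_rule))         # allow based on: user_group, application, context, inspection
```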

NGUTM

Most firewall vendors have moved from old firewalls to either UTM or NGFW offerings; in most cases NGFWs also offer UTM functions, while most UTMs have added NGFW application control functions, creating de facto a new generation of products and changing the landscape with the introduction of the Next Generation UTM.

UTM vendors and NGFW vendors keep fighting about which is the best solution in a modern environment, but this is more a marketing fight than a technically sound discussion.

The real thing is that UTM and NGFW are becoming more and more the same thing.

NOTE: it’s all about rules.

Why have security devices become so comprehensive and tried to unify so many services? Management is the last piece of the puzzle. In two separate studies, one by Gartner and one by Verizon Data’s Risk Analysis team, it was shown that an overwhelmingly large percentage of security breaches were caused by simple configuration errors. Gartner says “More than 95% of firewall breaches are caused by firewall misconfigurations, not firewall flaws.” Verizon’s estimate is even higher, at 96%. Both agree that the vast majority of our customers’ security problems are caused by implementing security products that are too difficult to use. The answer? Put it all in one place and make it easy to manage. The best security in the world is USELESS unless you can manage it effectively.


Firewall: Traditional, UTM and NGFW. Understanding the difference was originally published on The Puchi Herald Magazine

Pretty Good Privacy (PGP)


Pretty Good Privacy or PGP is a popular program used to encrypt and decrypt email over the Internet, as well as to authenticate messages with digital signatures and encrypt stored files.
Previously available as freeware and now only available as a low-cost commercial version, PGP was once the most widely used privacy-ensuring program by individuals and is also used by many corporations. It was developed by Philip R. Zimmermann in 1991 and has become a de facto standard for email security.

How PGP works

Pretty Good Privacy uses a variation of the public key system. In this system, each user has an encryption key that is publicly known and a private key that is known only to that user. You encrypt a message you send to someone else using their public key. When they receive it, they decrypt it using their private key. Since encrypting an entire message can be time-consuming, PGP uses a faster encryption algorithm to encrypt the message and then uses the public key to encrypt the shorter key that was used to encrypt the entire message. Both the encrypted message and the short key are sent to the receiver who first uses the receiver’s private key to decrypt the short key and then uses that key to decrypt the message.
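The following is a minimal sketch of that hybrid pattern (a fresh session key encrypted with the recipient's public key), written with the Python cryptography package and using RSA-OAEP and AES-GCM as stand-ins for the algorithms PGP historically used; it illustrates the flow and is not PGP or OpenPGP itself:

```python
# Sketch of the hybrid (session-key) pattern PGP relies on -- not actual PGP/OpenPGP.
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Recipient's key pair (the public key is shared, the private key stays secret).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Sender: encrypt the message with a fresh symmetric session key...
session_key = AESGCM.generate_key(bit_length=128)
nonce = os.urandom(12)
ciphertext = AESGCM(session_key).encrypt(nonce, b"meet at noon", None)

# ...then encrypt only the short session key with the recipient's public key.
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(session_key, oaep)

# Recipient: unwrap the session key with the private key, then decrypt the message.
recovered_key = private_key.decrypt(wrapped_key, oaep)
print(AESGCM(recovered_key).decrypt(nonce, ciphertext, None))  # b'meet at noon'
```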

PGP comes in two public key versions — Rivest-Shamir-Adleman (RSA) and Diffie-Hellman. The RSA version, for which PGP must pay a license fee to RSA, uses the IDEA algorithm to generate a short key for the entire message and RSA to encrypt the short key. The Diffie-Hellman version uses the CAST algorithm for the short key to encrypt the message and the Diffie-Hellman algorithm to encrypt the short key.
When sending digital signatures, PGP uses an efficient algorithm that generates a hash (a mathematical summary) from the user’s name and other signature information. This hash code is then encrypted with the sender’s private key. The receiver uses the sender’s public key to decrypt the hash code. If it matches the hash code sent as the digital signature for the message, the receiver is sure that the message has arrived securely from the stated sender. PGP’s RSA version uses the MD5 algorithm to generate the hash code. PGP’s Diffie-Hellman version uses the SHA-1 algorithm to generate the hash code.
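Here is a comparable sketch of the sign/verify flow described above (hash the data, sign with the private key, verify with the public key), again using the Python cryptography package with RSA and SHA-256 for illustration rather than PGP's historical MD5/SHA-1 choices:

```python
# Sketch of the digital-signature flow described above (illustrative, not PGP itself).
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

sender_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
message = b"the content being signed"

# Sender: hash the message and sign the digest with the private key.
signature = sender_key.sign(message, padding.PKCS1v15(), hashes.SHA256())

# Receiver: verify with the sender's public key; a mismatch raises InvalidSignature.
try:
    sender_key.public_key().verify(signature, message, padding.PKCS1v15(), hashes.SHA256())
    print("signature valid: the message really comes from the key holder")
except InvalidSignature:
    print("signature check failed: message altered or wrong sender")
```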

Getting PGP

To use Pretty Good Privacy, download or purchase it and install it on your computer system. It typically contains a user interface that works with your customary email program. You may also need to register the public key that your PGP program gives you with a PGP public-key server so that people you exchange messages with will be able to find your public key.

PGP freeware is available for older versions of Windows, Mac, DOS, Unix and other operating systems. In 2010, Symantec Corp. acquired PGP Corp., which held the rights to the PGP code, and soon stopped offering a freeware version of the technology. The vendor currently offers PGP technology in a variety of its encryption products, such as Symantec Encryption Desktop, Symantec Desktop Email Encryption and Symantec Encryption Desktop Storage. Symantec also makes the Symantec Encryption Desktop source code available for peer review.
Though Symantec ended PGP freeware, there are other non-proprietary versions of the technology that are available. OpenPGP is an open source version of PGP that’s supported by the Internet Engineering Task Force (IETF). OpenPGP is used by several software vendors, such as Coviant Software, which offers a free tool for OpenPGP encryption, and HushMail, which offers a Web-based encrypted email service powered by OpenPGP. In addition, the Free Software Foundation developed GNU Privacy Guard (GPG), an OpenPGP-compliant encryption software.

Where can you use PGP?

Pretty Good Privacy can be used to authenticate digital certificates and encrypt/decrypt texts, emails, files, directories and whole disk partitions. Symantec, for example, offers PGP-based products such as Symantec File Share Encryption for encrypting files shared across a network and Symantec Endpoint Encryption for full disk encryption on desktops, mobile devices and removable storage. When PGP technology is used for files and drives instead of messages, the Symantec products allow users to decrypt and re-encrypt data via a single sign-on.
Originally, the U.S. government restricted the exportation of PGP technology and even launched a criminal investigation against Zimmermann for putting the technology in the public domain (the investigation was later dropped). Network Associates Inc. (NAI) acquired Zimmermann’s company, PGP Inc., in 1997 and was able to legally publish the source code (NAI later sold the PGP assets and IP to ex-PGP developers that joined together to form PGP Corp. in 2002, which was acquired by Symantec in 2010).
Today, PGP encrypted email can be exchanged with users outside the U.S. if you have the correct versions of PGP at both ends.
There are several versions of PGP in use. Add-ons can be purchased that allow backwards compatibility for newer RSA versions with older versions. However, the Diffie-Hellman and RSA versions of PGP do not work with each other since they use different algorithms. There are also a number of technology companies that have released tools or services supporting PGP. Google this year introduced an OpenPGP email encryption plug-in for Chrome, while Yahoo also began offering PGP encryption for its email service.

What is an asymmetric algorithm?

Asymmetric algorithms (public key algorithms) use different keys for encryption and decryption, and the decryption key cannot (practically) be derived from the encryption key. Asymmetric algorithms are important because they can be used for transmitting encryption keys or other data securely even when the parties have no opportunity to agree on a secret key in private.
Types of asymmetric algorithms (public key algorithms):
• RSA
• Diffie-Hellman
• Digital Signature Algorithm (DSA)
• ElGamal
• ECDSA
• XTR

Asymmetric algorithms examples:

RSA Asymmetric algorithm
Rivest-Shamir-Adleman is the most commonly used asymmetric algorithm (public key algorithm). It can be used both for encryption and for digital signatures. The security of RSA is generally considered equivalent to factoring, although this has not been proved.
RSA computation occurs with integers modulo n = p * q, for two large secret primes p, q. To encrypt a message m, it is exponentiated with a small public exponent e, giving the ciphertext c = m^e (mod n). For decryption, the recipient computes the multiplicative inverse d = e^(-1) (mod (p-1)*(q-1)) (we require that e is selected suitably for it to exist) and obtains c^d = m^(e*d) = m (mod n). The private key consists of n, p, q, e, d (where p and q can be omitted); the public key contains only n and e. The problem for the attacker is that computing d from e is assumed to be no easier than factoring n.
The key size should be greater than 1024 bits for a reasonable level of security; keys of, say, 2048 bits should provide security for decades. (The similarly named RC5 and RC6 are symmetric block ciphers by Rivest, not variants of RSA; RC6 was a finalist in the AES competition.)
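A worked toy example with deliberately tiny primes (far too small for real use) shows the arithmetic end to end; the classic textbook values p=61, q=53, e=17 are used purely for illustration:

```python
# Textbook RSA with toy numbers -- for illustrating the math only, never for real use.
p, q = 61, 53
n = p * q                  # 3233, the public modulus
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent, coprime with phi
d = pow(e, -1, phi)        # 2753, private exponent: modular inverse (Python 3.8+)

m = 65                     # the "message", an integer smaller than n
c = pow(m, e, n)           # encryption: c = m^e mod n  -> 2790
assert pow(c, d, n) == m   # decryption: c^d mod n recovers m
print(n, d, c)             # 3233 2753 2790
```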

Diffie-Hellman
Diffie-Hellman is the first asymmetric encryption algorithm, invented in 1976, using discrete logarithms in a finite field. It allows two users to exchange a secret key over an insecure medium without any prior secrets.

Diffie-Hellman (DH) is a widely used key exchange algorithm. In many cryptographical protocols, two parties wish to begin communicating. However, let’s assume they do not initially possess any common secret and thus cannot use secret key cryptosystems. The key exchange by Diffie-Hellman protocol remedies this situation by allowing the construction of a common secret key over an insecure communication channel. It is based on a problem related to discrete logarithms, namely the Diffie-Hellman problem. This problem is considered hard, and it is in some instances as hard as the discrete logarithm problem.
The Diffie-Hellman protocol is generally considered to be secure when an appropriate mathematical group is used. In particular, the generator element used in the exponentiations should have a large period (i.e. order). Usually, Diffie-Hellman is not implemented on hardware.
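The exchange itself is short; here is a toy numeric sketch with tiny, insecure parameters chosen only to show the steps (real deployments use large standardized groups):

```python
# Toy Diffie-Hellman exchange -- tiny, insecure parameters for illustration only.
import secrets

p, g = 23, 5                      # public: a small prime and a generator (toy values)

a = secrets.randbelow(p - 2) + 1  # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1  # Bob's secret exponent

A = pow(g, a, p)                  # Alice sends A = g^a mod p over the insecure channel
B = pow(g, b, p)                  # Bob sends B = g^b mod p

shared_alice = pow(B, a, p)       # Alice computes (g^b)^a mod p
shared_bob = pow(A, b, p)         # Bob computes (g^a)^b mod p
assert shared_alice == shared_bob # both hold the same secret; an eavesdropper sees only p, g, A, B
print(shared_alice)
```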

Digital Signature Algorithm
Digital Signature Algorithm (DSA) is a United States Federal Government standard or FIPS for digital signatures. It was proposed by the National Institute of Standards and Technology (NIST) in August 1991 for use in their Digital Signature Standard (DSS), specified in FIPS 186 [1], adopted in 1993. A minor revision was issued in 1996 as FIPS 186-1 [2], and the standard was expanded further in 2000 as FIPS 186-2 [3]. The DSA signature scheme is similar to the ElGamal signature algorithm. It is fairly efficient, though not as efficient as RSA for signature verification. The standard defines DSS to use the SHA-1 hash function exclusively to compute message digests.
The main problem with DSA is the fixed subgroup size (the order of the generator element), which limits the security to around only 80 bits. Hardware attacks can be menacing to some implementations of DSS. However, it is widely used and accepted as a good algorithm.

ElGamal
ElGamal is a public key cipher – an asymmetric key encryption algorithm for public-key cryptography based on the Diffie-Hellman key agreement. ElGamal is the predecessor of DSA.

ECDSA
Elliptic Curve DSA (ECDSA) is a variant of the Digital Signature Algorithm (DSA) which operates on elliptic curve groups. As with Elliptic Curve Cryptography in general, the bit size of the public key believed to be needed for ECDSA is about twice the size of the security level, in bits.

XTR
XTR is an algorithm for asymmetric encryption (public-key encryption). XTR is a novel method that makes use of traces to represent and calculate powers of elements of a subgroup of a finite field. It is based on the primitive underlying the very first public key cryptosystem, the Diffie-Hellman key agreement protocol.
From a security point of view, XTR security relies on the difficulty of solving discrete logarithm related problems in the multiplicative group of a finite field. Some advantages of XTR are its fast key generation (much faster than RSA), small key sizes (much smaller than RSA, comparable with ECC for current security settings), and speed (overall comparable with ECC for current security settings).
Symmetric and asymmetric algorithms
Symmetric algorithms encrypt and decrypt with the same key. The main advantages of symmetric algorithms are their security and high speed. Asymmetric algorithms encrypt and decrypt with different keys: data is encrypted with a public key and decrypted with a private key. Asymmetric algorithms (also known as public-key algorithms) need at least a 3,000-bit key to achieve the same level of security as a 128-bit symmetric algorithm. Asymmetric algorithms are incredibly slow and it is impractical to use them to encrypt large amounts of data. Generally, symmetric algorithms are much faster to execute on a computer than asymmetric ones. In practice they are often used together, so that a public-key algorithm is used to encrypt a randomly generated encryption key, and the random key is used to encrypt the actual message using a symmetric algorithm. This is sometimes called hybrid encryption.


Pretty Good Privacy (PGP) was originally published on The Puchi Herald Magazine

Our memories are all we have


Umberto Eco in Naples (photo: Sergio Siano)

I am in China for work, with few connections to the real world outside, so Italian news usually reaches me late, when I am able to connect to the internet from the hotel, Great Firewall permitting.

Being isolated from the Italian reality puts things into a different perspective: you get less news, with more time to digest it and think about it.

It happened a few days ago that I learned of the death of Umberto Eco, one of the greatest Italian thinkers of our age. He was a great writer, a great thinker, a truly free spirit.

The first thing I felt when I learned about his death was great sorrow: in a moment when my country desperately needs to turn back to its origins, loving culture and what culture means, losing such a man was a great loss.

I used to write to my daughter every day, and force her to do the same. No matter what, just the silliest thing, but I want her to learn to be committed and to use an old way of communication such as writing (although by email).

After the news, instead of writing the usual nonsense we love to share with one another, I asked my daughter, 12, to read the letter Eco wrote to his nephew, just to make her understand the importance of living, learning, knowing and remembering.

http://espresso.repubblica.it/visioni/2014/01/03/news/umberto-eco-caro-nipote-studia-a-memoria-1.147715

I am trying, not sure with how much success, to raise her with a critical eye on reality, trying to give her the tools to understand where things come from and not just see the moment but live it, knowing and understanding why things are what they are.

The letter said everything I am trying to teach my daughter, but with far better words and meanings. But I am not Umberto, and I have not a fraction of his incredible knowledge.

But I remember that when I was younger I did not understand the need to learn things and memorize them; I got the reason growing older, when I understood that my experience (therefore my memories) is the metric by which I analyze the world. And probably now I say I should have memorized more.

The sad news about Eco moved something in me, so I started reading back some interviews with him, and it made me think: what is the meaning of our lives? Memories.

In the end it is memories that shape our life, and growing old we will add memories that will be the reason we lived for.

How we shape those memories is our job: we can build them good or bad, silly or deep. But it is all up to us. And to make memories we have to live them, somehow.

Reading, travelling, doing things: those memories are the building blocks of what we are and will be. To make memories we need to understand what we see and what we do.

I have the vision of my daughter when she was just born, a wonderful ugly conehead. So small, and such a great responsibility with the lightest weight.

I remember when we discovered my wife was pregnant; I was in the kitchen when she told me the result of the test, and I was shaking.

And I remember the teenage friends and our nightly talk about politics or music.

I remember the good and bad part of the job, and the people I worked with.

I remember my mistakes (this is why I write on management so much).

I remember what I would have liked to know at work (this is why I write on technology so much).

I remember that I fell in love with Japan looking at anime and manga, and then going deeper into that country’s history and culture, alas not the language, shame on me, so I have had to enjoy Banana Yoshimoto only in Italian, surely losing so much (with all due respect for the translator).

I remember my first trip to the USA, where everything was not, in the end, so big, and not all the food was McDonald’s.

I remember how eye-opening it was to rediscover my Latin heritage (thanks Rika), and to start understanding the good (and bad) of the Spanish-speaking world.

I remember how incredibly rewarding it was to read and understand Joyce, Tolkien, Agatha Christie, Conan Doyle in the original language, and to see what only the original language can give you. I can see Holmes’ home, as well as Miss Marple’s smile as she looks out of her window. They are part of my life.

I remember how amazing it was to open my life to the Spanish language, writers, music and culture (I could not have understood and appreciated Orozco or Frida Kahlo without knowing that culture; seeing, watching, talking, smelling, listening, breathing that culture).

I usually say to my daughter that if you know more you will find more things to enjoy. Reading is a wonderful way to find new wonderful things. Studying history and, also, its implications gives you the ability to look at the world with different eyes, understanding different cultures, languages, foods and so on.

I disagree with those who claim that ignorance is the best way to happiness; ignorance is the easiest way, and the easy path is never (or seldom) the best path. And I disagree with those who, out of fear, close themselves in a shell, wasting their lives in useless fears, and ultimately I disagree that to preserve your own identity you have to close yourself off from the different, the stranger and the new.

So different from the world we are shaping for our children. A world of people with no memory of the past is doomed to live the same errors again and again, isn’t it? Isn’t that what we see every day? Do we still care (has humanity ever cared) about historical memory?

C’è poi la memoria storica, quella che non riguarda i fatti della tua vita o le cose che hai letto, ma quello che è accaduto prima che tu nascessi.

Then there is the historical memory, one that isn’t about the facts of your life or things you’ve read, but what happened before you were born.

Life is a learning path, and memories are the foundations of this learning. Without memory of the past we cannot build good memories for the future, unless we like to live in a lie (but so many did, didn’t they?).

There is more truth in a novel than in any political speech, there is more truth in a joke than in any serious comment. Probably this is the reason why novelists, writers and comedians usually have the sharpest vision of our world: they work with memory for a living.

The moment we stop making memories, for us and for the others, we just stop living.

 

La memoria è un muscolo come quelli delle gambe, se non lo eserciti si avvizzisce e tu diventi (dal punto di vista mentale) diversamente abile e cioè (parliamoci chiaro) un idiota. E inoltre, siccome per tutti c’è il rischio che quando si diventa vecchi ci venga l’Alzheimer, uno dei modi di evitare questo spiacevole incidente è di esercitare sempre la memoria.

Memory is a muscle like those of the legs: if you do not exercise it, it withers and you become (mentally speaking) differently abled, that is (let’s be frank) an idiot. And besides, since we all run the risk of getting Alzheimer’s when we grow old, one of the ways to avoid this unfortunate incident is to keep exercising the memory.

How many times have I seen people who stopped using their “brain” muscle, closed to learning and understanding (I hope you can appreciate the difference between knowing and understanding, although the first is a mandatory step towards the second).

I hope to see more human beings like Umberto Eco, who was so proud and joyful in playing with memories, and I hope my daughter will learn something from that letter.

Maybe when she is my age…

She will try to write the same post, just better.


Our memories are all we have was originally published on The Puchi Herald Magazine