
CRYPTOGRAPHY: A MEANS OF SECURITY BY OKORODUDU ESEOGHENE

FOS/07/08/129099 A SEMINAR REPORT SUBMITTED TO THE DEPARTMENT OF MATHEMATICS AND COMPUTER SCIENCE, FACULTY OF SCIENCE, DELTA STATE UNIVERSITY, ABRAKA, IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE AWARD OF THE DEGREE OF BACHELOR OF SCIENCE [B.SC] COMPUTER SCIENCE. AUGUST, 2011.

CERTIFICATION This is to certify that this seminar work was carried out by Okorodudu Eseoghene under the supervision of Mr. N.O. Ogini, lecturer, Department of Mathematics and Computer Science, Delta State University, Abraka, during the 2010/2011 academic session.

Mr. N.O. Ogini Seminar/Project

Dr. A.O. Atonuje Head of Department

Date

Date

DEDICATION This seminar is dedicated to God Almighty for His infinite mercy.

ACKNOWLEDGEMENT

ABSTRACT

This seminar report provides a broad review of cryptography as a means of security. Cryptography is too wide-ranging a subject to cover exhaustively, but it is concerned with protecting information in digital form and with providing security services. A general overview of cryptography and its various types is given, and several algorithms are discussed. A detailed review of network security and of cryptography in digital signatures is then presented. The purpose of a digital signature is to provide a means for an entity to bind its identity to a piece of information. Common attacks on digital signatures are reviewed. The first method considered is the RSA signature scheme, which remains today one of the most practical and versatile techniques available; Fiat-Shamir signature schemes and DSA with its related signature schemes are two other methods reviewed. Finally, the many applications of digital signatures in information security, including authentication, data integrity, and non-repudiation, are reviewed.

INTRODUCTION
What is Cryptography? Cryptography is the science of using mathematics to encrypt and decrypt data. Cryptography enables you to store sensitive information or transmit it across insecure networks (like the Internet) so that it cannot be read by anyone except the intended recipient. While cryptography is the science of securing data, cryptanalysis is the science of analyzing and breaking secure communication. Classical cryptanalysis involves an interesting combination of analytical reasoning, application of mathematical tools, pattern finding, patience, determination, and luck. Cryptanalysts are also called attackers. Cryptology embraces both cryptography and cryptanalysis.

A related discipline is steganography, which is the science of hiding messages rather than making them unreadable. Steganography is not cryptography; it is a form of coding. It relies on the secrecy of the mechanism used to hide the message. If, for example, you encode a secret message by putting each letter as the first letter of the first word of every sentence, it is secret until someone knows to look for it, and then it provides no security at all.
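As a rough illustration of that first-letter scheme (the cover text below is invented purely for this example), the hidden word can be recovered in a few lines of Python:

```python
# Toy illustration of the "first letter of every sentence" encoding described above.
cover_text = ("Having fun today. Everyone left early. "
              "Lunch was great. Perhaps we can meet later.")

# Split into sentences and collect the first letter of each one.
sentences = [s.strip() for s in cover_text.split(".") if s.strip()]
hidden = "".join(s[0] for s in sentences)

print(hidden)  # -> HELP
```

Anyone who merely reads the cover text sees nothing unusual; anyone who knows the mechanism extracts the message instantly, which is exactly why this is coding rather than cryptography.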

Kinds of Cryptography:
There are two kinds of cryptography in this world: cryptography that will stop your kid sister from reading your files, and cryptography that will stop major governments from reading your files (weak and strong cryptography, respectively). Cryptography can be strong or weak, as explained above. Cryptographic strength is measured in the time and resources it would require to recover the plaintext. The result of strong cryptography is cipher text that is very difficult to decipher without possession of the appropriate decoding tool. How difficult? Given all of today's computing power and available time (even a billion computers doing a billion checks a second), it is not possible to decipher the result of strong cryptography before the end of the universe.

One would think, then, that strong cryptography would hold up rather well against even an extremely determined cryptanalyst. Who's really to say? No one has proven that the strongest encryption obtainable today will hold up under tomorrow's computing power. However, the strong cryptography employed by PGP is the best available today. Vigilance and conservatism will protect you better, however, than claims of impenetrability.

Principle of Cryptography:
A cryptographic algorithm, or cipher, is a mathematical function used in the encryption and decryption process. A cryptographic algorithm works in combination with a key (a word, number, or phrase) to encrypt the plaintext. The same plaintext encrypts to different cipher text with different keys. The security of encrypted data is entirely dependent on two things: the strength of the cryptographic algorithm and the secrecy of the key. A cryptographic algorithm, plus all possible keys and all the protocols that make it work, comprise a cryptosystem.

Encryption and Decryption
Data that can be read and understood without any special measures is called plaintext or clear text. The method of disguising plaintext in such a way as to hide its substance is called encryption. Encrypting plaintext results in unreadable gibberish called cipher text. You use encryption to make sure that information is hidden from anyone for whom it is not intended, even those who can see the encrypted data. The process of reverting cipher text to its original plaintext is called decryption. The following figure shows this process.

[Figure: Encryption turns plain text into cipher text; decryption turns the cipher text back into plain text.]

TYPES OF CRYPTOGRAPHY

Conventional cryptography
In conventional cryptography, also called secret-key or symmetric-key encryption, one key is used both for encryption and decryption. The Data Encryption Standard (DES) is an example of a conventional cryptosystem that has been widely deployed by the U.S. Government and the banking industry. It is being replaced by the Advanced Encryption Standard (AES). The following figure is an illustration of the conventional encryption process.

[Figure: Conventional encryption: the same key encrypts plain text into cipher text and decrypts it back into plain text.]

Example of Conventional Cryptography: Caesar's cipher


An extremely simple example of conventional cryptography is a substitution cipher. A substitution cipher substitutes one piece of information for another. This is most frequently done by offsetting letters of the alphabet. Two examples are Captain Midnight's Secret Decoder Ring, which you may have owned when you were a kid, and Julius Caesar's cipher. In both cases, the algorithm is to offset the alphabet and the key is the number of characters to offset it. For example, if we encode the word SECRET using Caesar's key value of 3, we offset the alphabet so that the third letter down (D) begins the alphabet. So starting with ABCDEFGHIJKLMNOPQRSTUVWXYZ and sliding everything up by 3, you get DEFGHIJKLMNOPQRSTUVWXYZABC, where D=A, E=B, F=C, and so on. Using this scheme, the plaintext SECRET encrypts as VHFUHW. To allow someone else to read the cipher text, you tell them that the key is 3. Obviously, this is exceedingly weak cryptography by today's standards, but it worked for Caesar, and it illustrates how conventional cryptography works.
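A minimal Python sketch of the Caesar shift just described, reproducing the SECRET-to-VHFUHW example:

```python
import string

ALPHABET = string.ascii_uppercase

def caesar(text, key, decrypt=False):
    """Shift each letter of `text` forward by `key` places (backward when decrypting)."""
    shift = -key if decrypt else key
    shifted = ALPHABET[shift:] + ALPHABET[:shift]
    table = str.maketrans(ALPHABET, shifted)
    return text.upper().translate(table)

cipher = caesar("SECRET", 3)
print(cipher)                    # VHFUHW
print(caesar(cipher, 3, True))   # SECRET
```

The key (3) is the whole secret: anyone who knows it, or who simply tries all 25 possible shifts, can read the message.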

Key Management and Conventional Encryption


Conventional encryption has benefits. It is very fast. It is especially useful for encrypting data that is not going anywhere. However, conventional encryption alone as a means for transmitting secure data can be quite expensive simply due to the difficulty of secure key distribution.

Recall a character from your favorite spy movie: the person with a locked briefcase handcuffed to his or her wrist. What is in the briefcase, anyway? It's probably not the secret plan itself. It's the key that will decrypt the secret data. For a sender and recipient to communicate securely using conventional encryption, they must agree upon a key and keep it secret between themselves. If they are in different physical locations, they must trust a courier, the Bat Phone, or some other secure communications medium to prevent the disclosure of the secret key during transmission. Anyone who overhears or intercepts the key in transit can later read, modify, and forge all information encrypted or authenticated with that key. From DES to Captain Midnight's Secret Decoder Ring, the persistent problem with conventional encryption is key distribution: how do you get the key to the recipient without someone intercepting it?

Public-key cryptography
The problems of key distribution are solved by public-key cryptography, the concept of which was introduced by Whitfield Diffie and Martin Hellman in 1975. (There is now evidence that the British Secret Service invented it a few years before Diffie and Hellman, but kept it a military secret, and did nothing with it.) Public-key cryptography uses a pair of keys: a public key, which encrypts data, and a corresponding private key, for decryption. Because it uses two keys, it is sometimes called asymmetric cryptography. You publish your public key to the world while keeping your private key secret. Anyone with a copy of your public key can then encrypt information that only you can read, even people you have never met. It is computationally infeasible to deduce the private key from the public key.

[Figure: A public key and its corresponding private key form a key pair.]

Anyone who has a public key can encrypt information but cannot decrypt it. Only the person who has the corresponding private key can decrypt the information.

[Figure: Public-key encryption: the public key encrypts plain text into cipher text; the private key decrypts it back into plain text.]

The primary benefit of public-key cryptography is that it allows people who have no preexisting security arrangement to exchange messages securely. The need for sender and receiver to share secret keys via some secure channel is eliminated; all communications involve only public keys, and no private key is ever transmitted or shared. Some examples of public-key cryptosystems are Elgamal (named for its inventor, Taher Elgamal), RSA (named for its inventors, Ron Rivest, Adi Shamir, and Leonard Adleman), Diffie-Hellman (named, you guessed it, for its inventors), and DSA, the Digital Signature Algorithm (invented by David Kravitz). Because conventional cryptography was once the only available means for relaying secret information, the expense of secure channels and key distribution relegated its use only to those who could afford it, such as governments and large banks (or small children with secret decoder rings). Public-key encryption is the technological revolution that provides strong cryptography to the adult masses. Remember the courier with the locked briefcase handcuffed to his wrist? Public-key encryption puts him out of business (probably to his relief).

Practical Application of Public Keys

How PGP (Pretty Good Privacy) Works


PGP combines some of the best features of both conventional and public-key cryptography. PGP is a hybrid cryptosystem. When a user encrypts plaintext with PGP, PGP first compresses the plaintext. Data compression saves modem transmission time and disk space and, more importantly, strengthens cryptographic security. Most cryptanalysis techniques exploit patterns found in the plaintext to crack the cipher. Compression reduces these patterns in the plaintext, thereby greatly enhancing resistance to cryptanalysis. (Files that are too short to compress or which do not compress well are not compressed.) PGP then creates a session key, which is a one-time-only secret key. This key is a random number generated from the random movements of your mouse and the keystrokes you type. The session key works with a very secure, fast conventional encryption algorithm to encrypt the plaintext; the result is cipher text. Once the data is encrypted, the session key is then encrypted to the recipient's public key. This public-key-encrypted session key is transmitted along with the cipher text to the recipient.

[Figure: PGP encryption: the plain text is encrypted with the session key, the session key is encrypted with the recipient's public key, and the cipher text plus the encrypted session key are sent together.]

Decryption works in the reverse. The recipients copy of PGP uses his or her private key to recover the session key, which PGP then uses to decrypt the conventionally encrypted cipher text.

[Figure: PGP decryption: the recipient's private key decrypts the session key, and the session key then decrypts the cipher text back to the original plain text.]

The combination of the two encryption methods combines the convenience of public-key encryption with the speed of conventional encryption. Conventional encryption is about 10,000 times faster than public-key encryption. Public-key encryption in turn provides a solution to key

distribution and data transmission issues. Used together, performance and key distribution are improved without any sacrifice in security.
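The hybrid workflow described above can be sketched with the third-party Python cryptography package (an illustration of the idea only, not PGP's actual code; Fernet stands in here for the fast conventional cipher and RSA-OAEP for the public-key step):

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Recipient's key pair (generated once; the public half is published).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Sender: a one-time session key encrypts the message; the public key wraps the session key.
session_key = Fernet.generate_key()
ciphertext = Fernet(session_key).encrypt(b"The courier is out of business.")
wrapped_key = public_key.encrypt(session_key, oaep)

# Recipient: the private key recovers the session key, which then decrypts the cipher text.
recovered_key = private_key.decrypt(wrapped_key, oaep)
print(Fernet(recovered_key).decrypt(ciphertext))
```

Only the small session key ever goes through the slow public-key operation; the bulk of the data is handled by the fast conventional cipher, which is exactly the trade-off described above.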

Keys
A key is a value that works with a cryptographic algorithm to produce a specific cipher text. Keys are basically really, really, really big numbers. Key size is measured in bits; the number representing a 2048-bit key is darn huge. In public-key cryptography, the bigger the key, the more secure the cipher text. However, public key size and conventional cryptography's secret key size are totally unrelated. A conventional 80-bit key has the equivalent strength of a 1024-bit public key. A conventional 128-bit key is equivalent to a 3000-bit public key. Again, the bigger the key, the more secure, but the algorithms used for each type of cryptography are very different, and so the comparison is like that of apples to oranges. While the public and private keys are mathematically related, it is very difficult to derive the private key given only the public key; however, deriving the private key is always possible given enough time and computing power. This makes it very important to pick keys of the right size: large enough to be secure, but small enough to be applied fairly quickly.

Additionally, you need to consider who might be trying to read your files, how determined they are, how much time they have, and what their resources might be. Larger keys will be cryptographically secure for a longer period of time. If what you want to encrypt needs to be hidden for many years, you might want to use a very large key. Of course, who knows how long it will take to determine your key using tomorrow's faster, more efficient computers? There was a time when a 56-bit symmetric key was considered extremely safe. Current thinking is that 128-bit keys will be safe indefinitely, at least until someone invents a usable quantum computer. We also believe that 256-bit keys will be safe indefinitely, even if someone invents a quantum computer. This is why AES includes options for 128- and 256-bit keys. But history tells us that it is quite possible someone will find this statement amusingly quaint in a few decades. Keys are stored in encrypted form. PGP stores the keys in two files on your hard disk: one for public keys and one for private keys. These files are called key rings. As you use PGP, you will typically add the public keys of your recipients to your public key ring. Your private keys are stored on your private key ring. If you lose your private key ring, you will be unable to decrypt any information encrypted to keys on that ring.

Consequently, it is a good idea to keep good backups.

Digital signatures
A major benefit of public-key cryptography is that it provides a method for employing digital signatures. Digital signatures let the recipient of information verify the authenticity of the information's origin, and also verify that the information was not altered while in transit. Thus, public-key digital signatures provide authentication and data integrity. These features are every bit as fundamental to cryptography as privacy, if not more. A digital signature serves the same purpose as a seal on a document, or a handwritten signature. However, because of the way it is created, it is superior to a seal or signature in an important way. A digital signature not only attests to the identity of the signer, but it also shows that the contents of the signed information have not been modified. A physical seal or handwritten signature cannot do that. However, like a physical seal that can be created by anyone with possession of the signet, a digital signature can be created by anyone with the private key of that signing key pair. Some people tend to use signatures more than they use encryption. For example, you may not care if anyone knows that you just deposited N10,000 in your account, but you do want to be darn sure it was the bank teller you were dealing with.
The basic manner in which digital signatures are created is shown in the following figure. The signature algorithm uses your private key to create the signature and the public key to verify it. If the information can be decrypted with your public key, then it must have originated with you.

[Figure: Signing with the private key turns the original text into signed text; verifying with the public key yields the verified text.]

Hash functions
The system described above has some problems. It is slow, and it produces an enormous volume of data (at least double the size of the original information). An improvement on the above scheme is the addition of a one-way hash function in the process. A one-way hash function takes variable-length input (in this case, a message of any length, even thousands or millions of bits) and produces a fixed-length output, say 160 bits. The hash function ensures that, if the information is changed in any way (even by just one bit), an entirely different output value is produced. PGP uses a cryptographically strong hash function on the plaintext the user is signing. This generates a fixed-length data item known as a message digest. (Again, any change to the information results in a totally different digest.)
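The "any change gives a totally different digest" property is easy to observe with Python's standard hashlib module (SHA-1 is used here only because it matches the 160-bit output size mentioned above; it is no longer recommended for new designs):

```python
import hashlib

msg1 = b"Pay Alice 100 naira"
msg2 = b"Pay Alice 900 naira"   # a one-character change

print(hashlib.sha1(msg1).hexdigest())
print(hashlib.sha1(msg2).hexdigest())
# The two 160-bit digests bear no obvious relationship to each other,
# even though the inputs differ by a single character.
```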

Then PGP uses the digest and the private key to create the signature. PGP transmits the signature and the plaintext together. Upon receipt of the message, the recipient uses PGP to recompute the digest, thus verifying the signature. PGP can encrypt the plaintext or not; signing plaintext is useful if some of the recipients are not interested in or capable of verifying the signature. As long as a secure hash function is used, there is no way to take someone's signature from one document and attach it to another, or to alter a signed message in any way. The slightest change to a signed document will cause the digital signature verification process to fail.

[Figure: The plain text is run through a hash function to produce a message digest; the digest is signed with the private key; the plain text and the signature are then transmitted together.]
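A sketch of this hash-then-sign flow using the third-party Python cryptography package (illustrative only, not PGP's implementation; the RSA-PSS and SHA-256 choices are assumptions of this example):

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.exceptions import InvalidSignature

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"I agree to the terms."
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

# Signing: the library hashes the message and signs the digest with the private key.
signature = private_key.sign(message, pss, hashes.SHA256())

# Verification succeeds with the original message...
public_key.verify(signature, message, pss, hashes.SHA256())
print("signature verified")

# ...and fails if even one character of the message is altered.
try:
    public_key.verify(signature, b"I agree to the terms!", pss, hashes.SHA256())
except InvalidSignature:
    print("tampered message rejected")
```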

Public Key Ciphers
One of the most exciting developments in cryptography has been the advent of public key cryptosystems. Public key cryptosystems are based on asymmetric ciphers. A symmetric cipher, such as all the block ciphers in common use, uses a single key for both encryption and decryption. If you know the key, you can decrypt the message - it's as simple as that. Asymmetric ciphers, on the other hand, use at least two different keys - one for encryption, and one for decryption. If you possess only the encryption key, it is impossible to use it to decrypt the data. Likewise, if you possess only the decryption key, it is impossible to encrypt data that will decrypt with that key. These unique properties make some fascinating things possible. Typically, we publish one of the two keys, called the "public key", and thus the common name for these systems. The other key we keep secret; this is termed the "private key". Now, if the encryption key is published, then anyone can use it to encrypt messages which can only be read by the private key holder. Even though the encryption key is public knowledge, it can't be used to decrypt the messages it encodes. Likewise, if we publish the decryption key, then anyone can decode its messages, but only the private key holder can encrypt them. This can be used to implement a digital signature scheme: by running a hash function over a message, then encrypting the resulting digest with the private (encrypting) key, a signed message

has been created that can be verified by anyone with the public (decrypting) key, but can't be tampered with without invalidating the signature. Note that this is similar to, but different from, HMAC (RFC 2104), which implements a signature using conventional, symmetric ciphers and a shared secret key. All asymmetric ciphers are based on bizarre mathematical properties; the elaborate bit shuffling of block ciphers won't cut it here. One of the consequences of this is that asymmetric ciphers are slow, and are therefore rarely used to encrypt large blocks of data. Instead, the usual approach is to encrypt data with a conventional cipher using a randomly generated key; that key is first communicated using the public key cipher, effectively increasing the amount of data the public key can encrypt. Another advantage of this approach is that it exposes only a single, fairly small, public key operation to an attacker; see the Key Management page for more discussion on this topic. Since all asymmetric ciphers depend on mathematical properties, they require data to be represented in the form of numbers. Typically, a particular set of key values will be chosen to allow numbers from 1 up to a certain power of two, and the larger the numbers, the more secure the cipher. Commonly used limits, for both RSA and ElGamal, are 2^512 (considered breakable), 2^768, 2^1024 (recommended), and 2^2048 (for extremely valuable keys). PKCS #1 (RFC 2437) specifies, among other things, how to convert a block of data (up to a size limit) into a single large integer that is then used in the cipher algorithms.
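The basic data-to-integer idea that PKCS #1 standardizes can be mimicked, very roughly, with Python's built-in integer conversions (this sketch ignores the padding rules the standard actually requires):

```python
block = b"secret"

# Interpret the byte string as one large big-endian integer, the form the
# cipher's arithmetic works on, then convert it back to bytes.
n = int.from_bytes(block, "big")
print(n)

recovered = n.to_bytes((n.bit_length() + 7) // 8, "big")
print(recovered)   # b'secret'
```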

RSA
RSA was the first and most widely used public key cryptosystem. Developed in 1977 by three M.I.T. professors, it is based on the mathematical properties of modulo arithmetic. Modulo arithmetic is much like normal arithmetic, but only uses integers no larger than a limiting number, the modulus. Any result larger than the modulus has the modulus subtracted from it repeatedly until it is less than the modulus. Thus, instead of the numbers forming a line, as is conventional, modulo numbers can be thought of as forming a ring, where the largest number loops back to 0.

For example, 8 + 8 mod 15 = 1, since 16 is larger than the modulus (15) and 16 - 15 = 1. Likewise, 4 × 7 mod 15 = 13, since 28 - 15 = 13. Exponentiation is similarly defined; 3^3 mod 15 = 12, since 3^3 = 27 and 27 - 15 = 12. If the result were larger, we'd just subtract out the appropriate number of m's (as the modulus is usually written) to get back into the range 0 to m-1. Modulo fields have some bizarre properties, though. For example, if the modulus is prime, division is defined, even though all the numbers are integers. Thus 3 / 2 mod 11 = 7 (!), since 7 × 2 mod 11 = 3. RSA depends on a property of modulo fields related to exponentiation. It can be shown that every modulo field has a special power, called the Euler totient function and written φ(m), so that any number raised to this power equals 1, i.e. x^φ(m) mod m = 1. Furthermore, φ(m) is difficult to compute without knowing the prime factorization of m. If m is the product of two primes, typically called p and q, then φ(m) = (p-1)(q-1). This is how RSA works. Pick two large prime numbers p and q. Multiply them together to form an even larger modulus m. Now compute φ(m) = (p-1)(q-1). Since x^φ(m) mod m = 1, it stands to reason that x^(φ(m)+1) mod m = x. We now have a special power, typically very large, that will produce an identity transformation on any number. Factor this special power into two factors. Since (x^y)^z = x^(yz) for any numbers, raising a number to one of these two factors produces gibberish, and raising the gibberish to the second factor produces the original number (since yz = φ(m)+1). Publish one of these powers (y); this is the public exponent and in conjunction with m forms the public key. The other power (z) is kept secret; this secret exponent allows the original number to be recovered.
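The modulo calculations above can be checked directly with Python's built-in pow(), which since Python 3.8 can also compute modular inverses:

```python
print((8 + 8) % 15)        # 1
print((4 * 7) % 15)        # 13
print(pow(3, 3, 15))       # 12

# "Division" in a prime modulus: 3 / 2 mod 11 = 3 * inverse(2) mod 11 = 7
inv2 = pow(2, -1, 11)      # modular inverse of 2 mod 11 (Python 3.8+)
print((3 * inv2) % 11)     # 7
print((7 * 2) % 11)        # 3, confirming the result quoted above
```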

Here's an example, using very small numbers. Let's choose 33 (3 × 11) to be our modulus, so φ(m) = 2 × 10 = 20 and φ(m)+1 = 21. Thus, we know that x^21 mod 33 = x, for any x. Now, factor 21 = 3 × 7. So, to raise a number to the twenty-first power, we can raise it first to the third power, then raise that result to the seventh power. We'll use 3 as our public exponent, and keep 7 as our private exponent. Now, our published public key consists of the modulus (33) and the public exponent (3). We can encrypt any number between 1 and 32 (one less than the modulus). Let's encrypt 15. Since 15^3 = 3375 and 33 × 102 = 3366, 15^3 mod 33 = 3375 - 3366 = 9. So 15 encrypts as 9. Now let's decrypt it, using the private exponent: 9^7 = 4782969 and 33 × 144938 = 4782954, so 9^7 mod 33 = 4782969 - 4782954 = 15. Even with small numbers, we end up dealing with values large enough to require a calculator, so you can see that when based on large numbers, RSA becomes very difficult to crack!
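The whole worked example can be reproduced in a few lines of Python (toy numbers only; real RSA uses primes hundreds of digits long together with proper padding):

```python
# Toy RSA reproducing the example above: p=3, q=11, so m=33 and phi=20.
p, q = 3, 11
m = p * q                      # public modulus: 33
phi = (p - 1) * (q - 1)        # 20
e = 3                          # public exponent
d = 7                          # private exponent, since e*d = 21 = phi + 1

message = 15
cipher = pow(message, e, m)    # 15^3 mod 33
print(cipher)                  # 9
plain = pow(cipher, d, m)      # 9^7 mod 33
print(plain)                   # 15
```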

ElGamal
ElGamal is probably the second most widely used public key cipher. It gained popularity because of the patented nature of RSA (the patent has now expired). Like RSA, ElGamal operates using modulo arithmetic, but depends on the difficulty of the discrete logarithm problem. As you may remember, taking a logarithm is the inverse of exponentiation. So, if x = y^z, then z = log_y x; i.e., z (the power) is the logarithm of x (the result of the exponentiation). Using normal arithmetic, logarithms are often long decimal numbers. The discrete logarithm problem computes logarithms in a modulo field - all of the numbers involved are integers. Furthermore, it is computationally very difficult to compute discrete logarithms, unlike conventional logarithms, which can be computed fairly easily. So, if x = y^z mod m, then it is nearly impossible to invert this calculation and compute z, given only x, y, and m. Just as RSA will be cracked if anyone figures out how to quickly factor large numbers into their prime factors, ElGamal will be cracked if anyone can devise a scheme to easily compute discrete logarithms. Here's how ElGamal works. Pick a modulus m (a very large prime number), and two random numbers b (the base) and s (the secret key) between 1 and m-1. Now compute the public key y = b^s mod m, and publish m, b, and y, keeping s secret. Presumably, the difficulty of computing discrete logarithms prevents someone from figuring out s from the published information. Now, to send a

message M (a number between 1 and m-1), the sender picks a random number k between 1 and m-1, and computes y1 = b^k mod m and y2 = y^k × M mod m, and sends both y1 and y2; this is the encrypted message. To decrypt the message requires knowledge of s, which allows the following computation: y1^(-s) × y2 mod m = b^(-ks) × b^(ks) × M mod m = M. Essentially, b^(ks) is a message key, with one of its factors (k and s) known to each side in the exchange, used to encode the message in y2. y1 encodes k in such a way that the private key can be used to compute the message key and recover the original message. ElGamal can also be used for authentication. Here's an example of ElGamal in operation, again using very small numbers. We'll use 31 as our modulus, 6 as our base, and 5 as our secret exponent. Our public key is y = b^s mod m = 6^5 mod 31 = 7776 mod 31 = 26. So, we publish 31 (m), 6 (b), and 26 (y). Again, we'll encrypt 15, and we'll pick 2 as k, our secret message key: y1 = b^k mod m = 6^2 mod 31 = 36 mod 31 = 5, and y2 = y^k × M mod m = (26^2 × 15) mod 31 = (676 × 15) mod 31 = 10140 mod 31 = 3.

So, our message encrypts as (5, 3). Note that a single number encrypts into two numbers; this doubling of size is one of the disadvantages of ElGamal. Now let's decrypt it: y1^(-s) × y2 mod m = (5^(-5) × 3) mod 31 = (25^5 × 3) mod 31 = (9765625 × 3) mod 31 = 29296875 mod 31 = 15. In the decryption, we used the fact that in the mod 31 field, 5 and 25 are inverses (since 5 × 25 mod 31 = 1) to change the negative exponent into a positive one. Remember, so long as the modulus is prime (a requirement for ElGamal), division is defined, so numbers can be inverted and negative exponents are legal.
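The same toy ElGamal numbers can be checked in Python (again purely illustrative; a real deployment uses a large prime modulus and a fresh random k for every message):

```python
# Toy ElGamal reproducing the example above.
m = 31            # prime modulus
b = 6             # base
s = 5             # secret exponent
y = pow(b, s, m)  # public key: 6^5 mod 31 = 26

# Encryption of M = 15 with message key k = 2.
M, k = 15, 2
y1 = pow(b, k, m)             # 6^2 mod 31 = 5
y2 = (pow(y, k, m) * M) % m   # 26^2 * 15 mod 31 = 3
print((y1, y2))               # (5, 3)

# Decryption: multiply y2 by the inverse of y1^s.
recovered = (pow(y1, -s, m) * y2) % m   # negative exponent needs Python 3.8+
print(recovered)                        # 15
```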

Other public key systems include knapsack ciphers (largely broken) and elliptic curve cryptosystems (not widely used).

The Key Length for Absolute Security


Now, the term absolute security is a relative one. The length of the key required to protect your data depends on who is actually trying to crack that data and how much time and resources he will be willing to spend to crack that data. In short, it depends on how desperate the person who wants to crack the data is.

Due to this, there has been a raging debate on how long a security key should be to provide absolutely foolproof security to the users of the key. There have been many numbers that have been suggested by various security analysts worldwide trying to predict how many digits a key must have to be absolutely uncrackable. But, there are some points to be considered before we go any further.

The first point is that computing power keeps going up exponentially every few years. Because of that, any code that may seem uncrackable today may well be cracked about 20 years from now. So, before selecting a key, a person has to consider for how many years he wants the data to be secure. He also needs to think about whether the data would be of any significance after the expiration date of the key: even if the key is cracked after its estimated crack time, would the data still have any relevant significance? All these questions can be effectively illustrated by citing one single example: the infamous RSA-129 code.

In 1977, three scientists, Ron Rivest, Adi Shamir and Leonard Adleman, came up with a 129-digit number which they claimed would take millions of years to crack into its two prime number factors, no matter how much computing power was utilised on it. This infamous 129-digit number came to be known as RSA-129. The number was:

114, 381, 625, 757, 888, 867, 669, 235, 779, 976, 146, 612, 010, 218, 296, 721, 242, 362, 562, 561, 842, 935, 706, 935, 245, 733, 897, 830, 597, 123, 563, 958, 705, 058, 989, 075, 147, 599, 290, 026, 879, 543, 541

They were sure that any message enciphered using this number as the public key could take millions of years to crack. But, they were wrong!

In 1993, a group of about 600 volunteers from all over the world came together with the sole intent of cracking this number. They began a systematic assault on it and, by April 1994, about eight months later, had cracked it into its two prime number factors. These factors were a 64-digit prime number

3, 490, 529, 510, 847, 650, 949, 147, 849, 619, 903, 898, 133, 417, 764, 638, 493, 387, 843, 990, 820, 577

and a 65-digit prime number.

32, 769, 132, 993, 266, 709, 549, 961, 988, 190, 834, 461, 413, 177, 642, 967, 992, 942, 539, 798, 288, 533
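The factorization is easy to verify: multiplying the two primes above reproduces the 129-digit challenge number (the digit grouping is removed here so Python can read the values):

```python
# The two primes published by the team that factored RSA-129 in 1994.
p = 3490529510847650949147849619903898133417764638493387843990820577
q = 32769132993266709549961988190834461413177642967992942539798288533

n = p * q
print(len(str(n)))   # 129 -- the product is indeed a 129-digit number
print(n)             # reproduces the RSA-129 value quoted above
```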

This gave the computing industry some important lessons. The first one was that one can never be sure of security in this era of ever-increasing computing power. Thus, security keys are subject to the inclination of the person who is trying to crack them and the resources that he has up his sleeve. So, security keys are subject to continuous upgrades and modifications.

And the second lesson was that a 130-digit number is definitely not sufficient for protecting very secret data. A government with the computing power and the inclination could easily crack all your data.

Since increasing the number of digits increases the effort required to find the factors of that number exponentially, adding just a few digits can considerably increase the time required to crack a number. So, today experts believe that a 250- or 300-digit number with two prime factors could actually take millions of years to crack with all the foreseeable computing power that could be developed.

But to develop a key today and declare it uncrackable would be foolhardy, to say the least. Just as RSA-129 was cracked 17 years after it was proposed, our code might be cracked as well. And the future might laugh at our obscenely small code just as today we wonder in awe and amazement at the cracking of RSA-129.

Security Pitfalls in Cryptography

Attacks Against Cryptographic Designs
A cryptographic system can only be as strong as the encryption algorithms, digital signature algorithms, one-way hash functions, and message

authentication codes it relies on. Break any of them, and you've broken the system. And just as it's possible to build a weak structure using strong materials, it's possible to build a weak cryptographic system using strong algorithms and protocols. We often find systems that "void the warranty" of their cryptography by not using it properly: failing to check the size of values, reusing random parameters that should never be reused, and so on. Encryption algorithms don't necessarily provide data integrity. Key exchange protocols don't necessarily ensure that both parties receive the same key. In a recent research project, we found that some--not all--systems using related cryptographic keys could be broken, even though each individual key was secure. Security is a lot more than plugging in an algorithm and expecting the system to work. Even good engineers, well-known companies, and lots of effort are no guarantee of robust implementation; our work on the U.S. digital cellular encryption algorithm illustrated that. Random-number generators are another place where cryptographic systems often break. Good random-number generators are hard to design, because their security often depends on the particulars of the hardware and software. Many products we examine use bad ones. The cryptography may be strong,

but if the random-number generator produces weak keys, the system is much easier to break. Other products use secure random-number generators, but they don't use enough randomness to make the cryptography secure. Recently Counterpane published new classes of attacks against random-number generators, based on our work with commercial designs. One of the most surprising things we've found is that specific random-number generators may be secure for one purpose but insecure for another; generalizing security analyses is dangerous. In another research result, we looked at interactions between individually secure cryptographic protocols. Given a secure protocol, we show how to build another secure protocol that will break the first if both are used with the same keys on the same device.

Attacks Against Implementations
Many systems fail because of mistakes in implementation. Some systems don't ensure that plaintext is destroyed after it's encrypted. Other systems use temporary files to protect against data loss during a system crash, or virtual memory to increase the available memory; these features can accidentally leave plaintext lying around on the hard drive. In extreme cases, the operating system can leave the keys on the hard drive. One product we've seen used a special window for password input. The password remained in the window's memory even after it was closed. It didn't matter how good that product's cryptography was; it was broken by the user interface.

Other systems fall to more subtle problems. Sometimes the same data is encrypted with two different keys, one strong and one weak. Other systems use master keys and then one-time session keys. We've broken systems using partial information about the different keys. We've also seen systems that use inadequate protection mechanisms for the master keys, mistakenly relying on the security of the session keys. It's vital to secure all possible ways to learn a key, not just the most obvious ones. Electronic commerce systems often make implementation trade-offs to enhance usability. We've found subtle vulnerabilities here, when designers don't think through the security implications of their trade-offs. Doing account reconciliation only once per day might be easier, but what kind of damage can an attacker do in a few hours? Can audit mechanisms be flooded to hide the identity of an attacker? Some systems record compromised keys on "hotlists"; attacks against these hotlists can be very fruitful. Other systems can be broken through replay attacks: reusing old messages, or parts of old messages, to fool various parties. Systems that allow old keys to be recovered in an emergency provide another area to attack. Good cryptographic systems are designed so that the keys exist for as short a period of time as possible; key recovery often negates any security benefit by forcing keys to exist long after they are useful. Furthermore, key recovery databases become sources of vulnerability in themselves, and have to be designed and implemented securely. In one

product we evaluated, flaws in the key recovery database allowed criminals to commit fraud and then frame legitimate users.

Attacks Against Passwords
Many systems break because they rely on user-generated passwords. Left to themselves, people don't choose strong passwords. If they're forced to use strong passwords, they can't remember them. If the password becomes a key, it's usually much easier--and faster--to guess the password than it is to brute-force the key; we've seen elaborate security systems fail in this way. Some user interfaces make the problem even worse: limiting the passwords to eight characters, converting everything to lower case, etc. Even passphrases can be weak: searching through 40-character phrases is often much easier than searching through 64-bit random keys. We've also seen key-recovery systems that circumvent strong session keys by using weak passwords for key recovery.

Attacks Against Hardware
Some systems, particularly commerce systems, rely on tamper-resistant hardware for security: smart cards, electronic wallets, dongles, etc. These systems may assume public terminals never fall into the wrong hands, or that those "wrong hands" lack the expertise and equipment to attack the hardware. While hardware security is an important component in many secure systems, we distrust systems whose security rests solely on assumptions about tamper resistance. We've rarely seen tamper resistance techniques that

work, and tools for defeating tamper resistance are getting better all the time. When we design systems that use tamper resistance, we always build in complementary security mechanisms just in case the tamper resistance fails. The "timing attack" made a big press splash in 1995: RSA private keys could be recovered by measuring the relative times cryptographic operations took. The attack has been successfully implemented against smart cards and other security tokens, and against electronic commerce servers across the Internet. Counterpane and others have generalized these methods to include attacks on a system by measuring power consumption, radiation emissions, and other "side channels," and have implemented them against a variety of public-key and symmetric algorithms in "secure" tokens. We've yet to find a token that we can't pull the secret keys out of by looking at side channels. Related research has looked at fault analysis: deliberately introducing faults into cryptographic processors in order to determine the secret keys. The effects of this attack can be devastating.

Attacks Against Trust Models
Many of our more interesting attacks are against the underlying trust model of the system: who or what in the system is trusted, in what way, and to what extent. Simple systems, like hard-drive encryption programs or telephone privacy products, have simple trust models. Complex systems, like electronic-commerce systems or multi-user e-mail security programs, have complex (and subtle) trust models. An e-mail program might use uncrackable cryptography

for the messages, but unless the keys are certified by a trusted source (and unless that certification can be verified), the system is still vulnerable. Some commerce systems can be broken by a merchant and a customer colluding, or by two different customers colluding. Other systems make implicit

assumptions about security infrastructures, but don't bother to check that those assumptions are actually true. If the trust model isn't documented, then an engineer can unknowingly change it in product development, and compromise security. Many software systems make poor trust assumptions about the computers they run on; they assume the desktop is secure. These programs can often be broken by Trojan horse software that sniffs passwords, reads plaintext, or otherwise circumvents security measures. Systems working across computer networks have to worry about security flaws resulting from the network protocols. Computers that are attached to the Internet can also be vulnerable. Again, the cryptography may be irrelevant if it can be circumvented through network insecurity. And no software is secure against reverse-engineering. Often, a system will be designed with one trust model in mind, and implemented with another. Decisions made in the design process might be completely ignored when it comes time to sell it to customers. A system that is secure when the operators are trusted and the computers are completely under the control of the company using the system may not be secure when the operators are temps hired at just over minimum wage and the computers

are untrusted. Good trust models work even if some of the trust assumptions turn out to be wrong.

Attacks on the Users
Even when a system is secure if used properly, its users can subvert its security by accident--especially if the system isn't designed very well. The classic example of this is the user who gives his password to his co-workers so they can fix some problem when he's out of the office. Users may not report missing smart cards for a few days, in case they are just misplaced. They may not carefully check the name on a digital certificate. They may reuse their secure passwords on other, insecure systems. They may not change their software's default weak security settings. Good system design can't fix all these social problems, but it can help avoid many of them.

Attacks Against Failure Recovery
Strong systems are designed to keep small security breaks from becoming big ones. Recovering the key to one file should not allow the attacker to read every file on the hard drive. A hacker who reverse-engineers a smart card should only learn the secrets in that smart card, not information that will help him break other smart cards in the system. In a multi-user system, knowing one person's secrets shouldn't compromise everyone else's. Many systems have a "default to insecure mode." If the security feature doesn't work, most people just turn it off and finish their business.

If the on-line credit card verification system is down, merchants will default to the less-secure paper system. Similarly, it is sometimes possible to mount a "version rollback attack" against a system after it has been revised to fix a security problem: the need for backwards compatibility allows an attacker to force the protocol into an older, insecure, version. Other systems have no ability to recover from disaster. If the security breaks, there's no way to fix it. For electronic commerce systems, which could have millions of users, this can be particularly damaging. Such systems should plan to respond to attacks, and to upgrade security without having to shut the system down. The phrase "and then the company is screwed" is never something you want to put in your business plan. Good system design considers what will happen when an attack occurs, and works out ways to contain the damage and recover from the attack.

Attacks Against the Cryptography
Sometimes, products even get the cryptography wrong. Some rely on proprietary encryption algorithms. Invariably, these are very weak.

Counterpane has had considerable success breaking published encryption algorithms; our track record against proprietary ones is even better. Keeping the algorithm secret isn't much of an impediment to analysis, anyway--it only takes a couple of days to reverse-engineer the cryptographic algorithm from executable code. One system we analyzed, the S/MIME 2 electronic-mail standard, took a relatively strong design and implemented it with a weak

cryptographic algorithm. The system for DVD encryption took a weak algorithm and made it weaker. We've seen many other cryptographic mistakes: implementations that repeat "unique" random values, digital signature algorithms that don't properly verify parameters, hash functions altered to defeat the very properties they're being used for. We've seen cryptographic protocols used in ways that were not intended by the protocols' designers, and protocols "optimized" in seemingly trivial ways that completely break their security.

Attack Prevention vs. Attack Detection
Most cryptographic systems rely on prevention as their sole means of defense: the cryptography keeps people from cheating, lying, abusing, or whatever. Defense should never be that narrow. A strong system also tries to detect abuse and to contain the effects of any attack. One of our fundamental design principles is that sooner or later, every system will be successfully attacked, probably in a completely unexpected way and with unexpected consequences. It is important to be able to detect such an attack, and then to contain the attack to ensure it does minimal damage. More importantly, once the attack is detected, the system needs to recover: generate and promulgate a new key pair, update the protocol and invalidate the old one, remove an untrusted node from the system, etc. Unfortunately, many systems don't collect enough data to provide an audit trail, or fail to protect the data against modification. Counterpane has done considerable

work in securing audit logs in electronic commerce systems, mostly in response to system designs that could fail completely in the event of a successful attack. These systems have to do more than detect an attack: they must also be able to produce evidence that can convince a judge and jury of guilt.

A Simple Application to Demonstrate Cryptography, a Means of Security

The first interface shows the use of a cryptographic algorithm to encrypt a string and display the result of the encryption.

The second interface shows the decryption of the string.
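Since the original screenshots are not reproduced here, the sketch below is only a stand-in for the demonstration application, not the original program: a minimal command-line script that encrypts a string and then decrypts it, using the third-party Python cryptography package.

```python
from cryptography.fernet import Fernet

def main():
    key = Fernet.generate_key()   # secret key; encryptor and decryptor must share it
    cipher = Fernet(key)

    plaintext = input("Enter a string to encrypt: ").encode()

    ciphertext = cipher.encrypt(plaintext)
    print("Encrypted:", ciphertext.decode())

    recovered = cipher.decrypt(ciphertext).decode()
    print("Decrypted:", recovered)

if __name__ == "__main__":
    main()
```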

CONCLUSION
In conclusion, cryptography, as a means of security, is the art of protecting information by transforming it (encrypting it) into an unreadable format, called cipher text. Only those who possess a secret key can decipher (or decrypt) the message into plain text. Encrypted messages can sometimes be broken by cryptanalysis, also called code breaking, although modern cryptography techniques are virtually unbreakable. As the Internet and other forms of electronic communication become more prevalent, electronic security is becoming increasingly important. Cryptography is used to protect e-mail messages,

credit card information, and corporate data. One of the most popular cryptography systems used on the Internet is Pretty Good Privacy, because it is effective and free. Cryptography systems can be broadly classified into symmetric-key systems, which use a single key that both the sender and recipient have, and public-key systems, which use two keys: a public key known to everyone and a private key that only the recipient of messages uses.

REFERENCES
American Mathematical Monthly. Editors. "Bibliography of Cryptography." 50 (May 1943): 345-346. [Petersen]
Bamford, J. (1983). The Puzzle Palace: Inside the National Security Agency, America's Most Secret Intelligence Organization. New York: Penguin Books.
Bamford, J. (2001). Body of Secrets: Anatomy of the Ultra-Secret National Security Agency from the Cold War Through the Dawn of a New Century. New York: Doubleday.
Barr, T.H. (2002). Invitation to Cryptology. Upper Saddle River, NJ: Prentice Hall.
Bauer, F.L. (2002). Decrypted Secrets: Methods and Maxims of Cryptology, 2nd ed. New York: Springer Verlag.
Bundy, William P. "From the Depths to the Heights." Cryptologia 6, no. 1 (Jan. 1982): 65-74.
David Kahn, Cryptologia 29.1 (Jan. 2005), calls Shulman "the premier bibliographer of cryptology." Shulman died 30 October 2004.
Denning, D.E. (1982). Cryptography and Data Security. Reading, MA: Addison-Wesley.
Diffie, W., & Landau, S. (1998). Privacy on the Line. Boston: MIT Press.
Electronic Frontier Foundation. (1998). Cracking DES: Secrets of Encryption Research, Wiretap Politics & Chip Design. Sebastopol, CA: O'Reilly & Associates.
Federal Information Processing Standards (FIPS) 140-2. (2001, May 25). Security Requirements for Cryptographic Modules. Gaithersburg, MD: National Institute of Standards and Technology (NIST). Retrieved from http://csrc.nist.gov/publications/fips/fips140-2/fips1402.pdf
Ferguson, N., & Schneier, B. (2003). Practical Cryptography. New York: John Wiley & Sons.
Ferguson, N., Schneier, B., & Kohno, T. (2010). Cryptography Engineering: Design Principles and Practical Applications. New York: John Wiley & Sons.
Flannery, S. with Flannery, D. (2001). In Code: A Mathematical Journey. New York: Workman Publishing Company.
Ford, W., & Baum, M.S. (2001). Secure Electronic Commerce: Building the Infrastructure for Digital Signatures and Encryption, 2nd ed. Englewood Cliffs, NJ: Prentice Hall.
Garfinkel, S. (1995). PGP: Pretty Good Privacy. Sebastopol, CA: O'Reilly & Associates.
Grant, G.L. (1997). Understanding Digital Signatures: Establishing Trust over the Internet and Other Networks. New York: Computing McGraw-Hill.
Grabbe, J.O. (1997, October 10). Cryptography and Number Theory for Digital Cash. Retrieved from http://wwwswiss.ai.mit.edu/6.805/articles/money/cryptnum.htm
Kahn, D. (1983). Kahn on Codes: Secrets of the New Cryptology. New York: Macmillan.
Kahn, D. (1996). The Codebreakers: The Story of Secret Writing, revised ed. New York: Scribner.
Kaufman, C., Perlman, R., & Speciner, M. (1995). Network Security: Private Communication in a Public World. Englewood Cliffs, NJ: Prentice Hall.

Kessler, G.C. (1999, October). Basics of Cryptography and Applications for Windows NT. Windows NT Magazine.
Kessler, G.C. (2000, February). Roaming PKI. Information Security Magazine.
Kessler, G.C., & Pritsky, N.T. (2000, October). Internet Payment Systems: Status and Update on SSL/TLS, SET, and IOTP. Information Security Magazine.
Koblitz, N. (1994). A Course in Number Theory and Cryptography, 2nd ed. New York: Springer-Verlag.
Levy, S. (1999, April). The Open Secret. WIRED Magazine, 7(4). Retrieved from http://www.wired.com/wired/archive/7.04/crypto.html
Levy, S. (2001). Crypto: When the Code Rebels Beat the Government Saving Privacy in the Digital Age. New York: Viking Press.
Mao, W. (2004). Modern Cryptography: Theory & Practice. Upper Saddle River, NJ: Prentice Hall Professional Technical Reference.
Marks, L. (1998). Between Silk and Cyanide: A Codemaker's War, 1941-1945. New York: The Free Press (Simon & Schuster).
Pekelney, Richard. "Excellent, Exceptional, Enormous Crypto Source." Cryptologia 29, no. 3 (Jul. 2005): 255-256.
Petersen: "Review of several important books on Ultra by an American who served at Bletchley Park."
Schneier, B. (1996). Applied Cryptography, 2nd ed. New York: John Wiley & Sons.
Schneier, B. (2000). Secrets & Lies: Digital Security in a Networked World. New York: John Wiley & Sons.
Singh, S. (1999). The Code Book: The Evolution of Secrecy from Mary Queen of Scots to Quantum Cryptography. New York: Doubleday.
Smith, L.D. (1943). Cryptography: The Science of Secret Writing. New York: Dover Publications.
Spillman, R.J. (2005). Classical and Contemporary Cryptology. Upper Saddle River, NJ: Pearson Prentice-Hall.
Stallings, W. (2006). Cryptography and Network Security: Principles and Practice, 4th ed. Englewood Cliffs, NJ: Prentice Hall.
Trappe, W., & Washington, L.C. (2006). Introduction to Cryptography with Coding Theory, 2nd ed. Upper Saddle River, NJ: Pearson Prentice Hall.
Young, A., & Yung, M. (2004). Malicious Cryptography: Exposing Cryptovirology. New York: John Wiley & Sons.
