
CHAPTER 1 SYNOPSIS

This paper presents an implementation of the lossless image compression algorithm LOCO-R, which is based on the LOCO-I (LOw COmplexity LOssless COmpression for Images) algorithm developed by Weinberger, Seroussi and Sapiro; the modifications noticeably reduce the implementation complexity. Experiments show that for natural images the compression achieved by LOCO-R exceeds that of Rice Compression, typically by around 15 percent. Lossless image compression is well established as a means of reducing the volume of image data without compromising the data quality. LOCO-I is the algorithm at the core of JPEG-LS, the ISO/ITU standard for lossless and near-lossless compression of continuous-tone images. Missions often desire hardware to perform such compression; the aim is to reduce the demand on processors and to increase the speed at which images can be compressed. Currently, the available space-qualified hardware designed for lossless compression is based on the Rice Compression algorithm. LOCO-R improves on LOCO-I by reducing the complexity of implementation.

CHAPTER 2 PROJECT DESCRIPTION


2.1 PROBLEM DEFINITION

Lossy compression is a data encoding method which discards (loses) some of the data in order to achieve its goal, with the result that decompressing the data yields content that is different from the original. This can be overcome by encoding the image contents to nearby pixel values without deleting anything, which improves size and quality without loss of information. Lossless image compression is well established as a means of reducing the volume of image data without compromising the data quality. The LOCO-R algorithm works like LOCO-I and takes a rectangular image with 8-bit pixel values. The compressed image produced is a sequence of bits from which the original image can be reconstructed.

2.2 PROJECT DETAILS

The LOCO-R algorithm described in this section is based on the LOCO-I algorithm [5]. It takes as input a rectangular image with 8-bit pixel values (the pixel values are within the range 0 to 255). The compressed image produced is a sequence of bits from which the original image can be reconstructed. Let wd be the image width and ht be the image height. Pixels are identified by coordinates (x, y) with x in the range [0, wd-1] and y in the range [0, ht-1]. The LOCO-R algorithm is based on predictive compression. During compression, the pixels of the image are processed in raster scan order. Specifically, y is incremented through the range [0, ht-1], and for each y value, x is incremented through the range [0, wd-1]. (Thus y is the slowly varying dimension.) The first two pixels, with coordinates (0,0) and (1,0), are simply put into the output bit stream uncoded. For all other pixels of the image, the processing that occurs can be conceptually divided into four steps:

The four steps are:
1. Classify the pixel into one of several contexts according to the values of (usually 5) previously encoded pixels.
2. Estimate the pixel value from (usually 3) previously encoded pixels, and add a correction (called the bias), which depends on the context.
3. Map the difference between the estimate and the actual pixel value to a non-negative integer, and encode this integer using variable length codes.
4. Update the statistics for the context based on the new pixel value.
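The per-pixel loop above can be sketched as follows. This is a hypothetical skeleton, not the project's source: the class name and the zigzag residual mapping shown for step 3 (0, -1, 1, -2, 2, ... mapped to 0, 1, 2, 3, 4, ...) are illustrative assumptions, with steps 1, 2 and 4 left as comments.

```java
public class LocoLoopSketch {

    // Step 3 (assumed mapping): fold a signed prediction residual into a
    // non-negative integer: 0,-1,1,-2,2,... -> 0,1,2,3,4,...
    static int mapResidual(int r) {
        return r >= 0 ? 2 * r : -2 * r - 1;
    }

    public static void main(String[] args) {
        int wd = 4, ht = 2;                       // toy image dimensions
        int[][] img = { {10, 12, 13, 13},
                        {11, 12, 14, 15} };
        for (int y = 0; y < ht; y++) {            // raster scan: y outer,
            for (int x = 0; x < wd; x++) {        // x inner
                if (y == 0 && x < 2) continue;    // (0,0),(1,0) emitted uncoded
                int pixel = img[y][x];
                // 1. classify pixel into a context from ~5 previous pixels
                // 2. estimate pixel from ~3 previous pixels, add bias
                // 3. map residual (mapResidual) and Golomb-encode it
                // 4. update the context statistics
            }
        }
        System.out.println(mapResidual(-3)); // prints 5
    }
}
```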

Context

The context of a pixel p is based on the values of 5 previous pixels a through e, as shown in the figure. Assume for now that p is sufficiently far from the image edges that a through e all exist. The context is determined by the values Q7(a-c), Q7(d-a), Q7(c-b) and Q3(b-e), where Q7 and Q3 are quantization functions taking on 7 and 3 possible values, respectively.

Estimation

The second step in the main encoding loop is to estimate the value of the pixel to be encoded. A more accurate estimate will produce a smaller, and thus more compressible, residual. A preliminary estimate is computed with a fixed (non-adaptive) estimator, and the adaptively computed bias is added to form the final estimate.

Encoding the Residual

After the encoder determines the estimate of the current pixel value, the difference between the actual pixel value and the estimate is losslessly encoded. Golomb codes are simple entropy codes that are well suited to encoding quantities with the distributions that occur with natural images; the code parameter used for encoding is determined from previous values occurring within the same context.
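To make the residual encoding concrete, here is an illustrative Golomb-Rice encoder for an already-mapped (non-negative) residual. In LOCO-R the code parameter k would be derived from the per-context statistics; in this sketch k is simply passed in, and the class name is hypothetical.

```java
public class RiceCodeSketch {

    // Golomb-Rice code with parameter k: unary-coded quotient (q zeros and
    // a terminating 1), followed by the k-bit binary remainder.
    static String encode(int value, int k) {
        int q = value >>> k;                 // quotient
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < q; i++) sb.append('0');
        sb.append('1');                      // unary terminator
        for (int i = k - 1; i >= 0; i--)     // remainder, most significant first
            sb.append((value >>> i) & 1);
        return sb.toString();
    }

    public static void main(String[] args) {
        // value 5, k = 2: quotient 1 -> "01", remainder 01 -> "0101"
        System.out.println(encode(5, 2)); // prints 0101
    }
}
```

Small residuals yield short codewords, which is why an accurate estimator directly improves the compression rate.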

Updating Context Data

After the encoding operation takes place, the data associated with the context are updated. This process is continued for every pixel by incrementing the pixel count for each pixel estimated; moderate initial values are set to improve the performance.

CHAPTER 3 COMPUTATIONAL ENVIRONMENT


3.1 SOFTWARE SPECIFICATION

Operating System : Windows 2000/xp2/xp3
Language         : JDK 1.5
Front End        : Java Swing

3.2 HARDWARE SPECIFICATION

Processor : Pentium-III and above processors
RAM       : 128 MB
Hard Disk : 10 GB

3.3 SOFTWARE FEATURES

Why Java?

Java architecture provides a portable, robust, high-performing environment for development. Java provides portability by compiling to byte codes for the Java Virtual Machine, which are then interpreted on each platform by the run-time environment. Java is a dynamic system, able to load code when needed from a machine in the same room or across the planet. During run-time the Java interpreter tricks the byte code file into thinking that it is running on a Java Virtual Machine. In reality this could be an Intel Pentium running Windows 95, a Sun SPARC station running Solaris, or an Apple Macintosh running its own system, and all could receive code from any computer through the Internet and run the applets. With most programming languages, you either compile or interpret a program so that you can run it on your computer. The Java programming language is unusual in that a program is both compiled and interpreted. With the compiler, you first translate a program into an intermediate language called Java byte codes, the platform-independent codes interpreted by the interpreter on the Java platform. The interpreter parses and runs each Java byte code instruction on

the computer. Compilation happens just once; interpretation occurs each time the program is executed. The following figure illustrates how this works.

Fig 3.1: Working of Java

You can think of Java byte codes as the machine code instructions for the Java Virtual Machine (Java VM). Every Java interpreter, whether it's a development tool or a Web browser that can run applets, is an implementation of the Java VM. Java byte codes help make "write once, run anywhere" possible. You can compile your program into byte codes on any platform that has a Java compiler. The byte codes can then be run on any implementation of the Java VM. That means that as long as a computer has a Java VM, the same program written in the Java programming language can run on Windows 2000 or a Solaris workstation.
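A minimal program illustrating the compile-once, run-anywhere flow described above: `javac Hello.java` produces Hello.class (byte codes), which any Java VM can then execute with `java Hello`. The file and class name are illustrative.

```java
// Compile: javac Hello.java   -> produces platform-independent Hello.class
// Run:     java Hello         -> any Java VM interprets the byte codes
public class Hello {
    static String greeting() {
        return "Hello from the Java VM";
    }
    public static void main(String[] args) {
        System.out.println(greeting());
    }
}
```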

[Figure: Myprogram.java → Compiler → Myprogram.class (byte codes, e.g. 0010110100) → Interpreter → My Program]

Fig 3.2: Development process of a Java program

The Java programming language is a high-level language that can be characterized by all of the following buzzwords: simple, architecture-neutral, object-oriented, portable, distributed, high-performance, interpreted, multithreaded, robust, dynamic and secure.

Simple

Java was designed to be easy for the professional programmer to learn and to use effectively. If you are an experienced C++ programmer, learning Java will be even easier, because Java inherits the C/C++ syntax and many of the object-oriented features of C++. Most of the confusing concepts from C++ are either left out of Java or implemented in a cleaner, more approachable manner. In Java there are a small number of clearly defined ways to accomplish a given task.

Object-Oriented

Java was not designed to be source-code compatible with any other language. This allowed the Java team the freedom to design with a blank slate. One outcome of this was a clean, usable, pragmatic approach to objects. The object model in Java is simple and easy to extend, while simple types, such as integers, are kept as high-performance non-objects.

Robust

The multi-platform environment of the Web places extraordinary demands on a program, because the program must execute reliably in a variety of systems. The ability to create robust programs was given a high priority in the design of Java. Java is a strictly typed language; it checks your code at compile time and at run time.

Java virtually eliminates the problems of memory management and deallocation, which is completely automatic. In a well-written Java program, all run time errors can and should be managed by your program.

CHAPTER 4 FEASIBILITY STUDY


The feasibility of the project is analyzed in this phase, and a business proposal is put forth with a very general plan for the project and some cost estimates. During system analysis the feasibility study of the proposed system is carried out. This is to ensure that the proposed system is not a burden to the company. For feasibility analysis, some understanding of the major requirements for the system is essential. Three key considerations involved in the feasibility analysis are:

OPERATIONAL FEASIBILITY
TECHNICAL FEASIBILITY
ECONOMICAL FEASIBILITY

4.1 OPERATIONAL FEASIBILITY

For natural images, the compression achieved by LOCO-R is better than Rice Compression typically by around 15 percent. The table below compares the compression achieved by LOCO-R and Rice Compression on a set of representative images. Tests on other natural images yield similar results. As LOCO-R is designed for 8-bit (gray-scale) images, all images tested were of this type. Rate is defined as the average number of bits per pixel in the compressed image.

                                 Rate, bits/pixel
Image description        Rice Compression    LOCO-R
Lunar                    4.48                5.44
Mars (Pathfinder)        4.61                5.72
Mars (Viking Orbiter)    3.07                3.67
Mars (Viking Lander)     4.79                5.32
Venus                    4.16                4.37

Table 4.1: Performance comparison of LOCO-R.
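The "rate" reported in the table is simply the compressed size in bits divided by the number of pixels. A minimal helper, with an illustrative class name and made-up example numbers:

```java
public class RateSketch {

    // Average bits per pixel of the compressed image.
    static double rate(long compressedBits, int wd, int ht) {
        return (double) compressedBits / ((long) wd * ht);
    }

    public static void main(String[] args) {
        // e.g. a 1000x1000 image compressed to 4,480,000 bits
        System.out.println(rate(4_480_000L, 1000, 1000)); // prints 4.48
    }
}
```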

The algorithm and implementation can easily be adapted to handle modifications in various parameters. In particular, pixel bit depths greater than 8 could be accommodated. Some parameters of the design could be made controllable through the interface to allow greater flexibility. Overall, the LOCO-R algorithm represents significant progress toward producing lossless image compression with improved compression performance as compared with currently available solutions.

4.2 TECHNICAL FEASIBILITY

This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not place a high demand on the available technical resources, as this would lead to high demands being placed on the client. The developed system must have modest requirements; only minimal or no changes are required for implementing this system.

4.3 ECONOMICAL FEASIBILITY

The proposed system is economically feasible: it achieves better compression and is cheaper than the Rice and LOCO-I (low complexity lossless compression for images) algorithms. It also has minimal requirements, such as a Java platform and little hard disk space, with any supported operating system.


CHAPTER 5 SYSTEM ANALYSIS


5.1 EXISTING SYSTEM

Lossy compression is a data encoding method which discards (loses) some of the data in order to achieve its goal, with the result that decompressing the data yields content that is different from the original, though similar enough to be useful in some way. By contrast, it is possible to compress many types of digital data in a way which reduces the size of the computer file needed to store it, or the bandwidth needed to stream it, with no loss of the full information contained in the original file. Examples of such algorithms include Rice compression and LOCO-I.

5.1.1 Disadvantages in existing systems

These algorithms remove unnecessary information and encode contents to reduce the volume of the image, which makes it difficult to recover the original image. These algorithms increase the demand on the processors and on the hardware.

5.2 PROPOSED SYSTEM

The LOCO-R algorithm works like LOCO-I and takes a rectangular image with 8-bit pixel values. The compressed image produced is a sequence of bits from which the original image can be reconstructed. Let wd be the image width and ht be the image height. Pixels are identified by coordinates (x, y) with x in the range [0, wd-1] and y in the range [0, ht-1]. Lossless image compression is well established as a means of reducing the volume of image data without compromising the data quality. Missions often desire hardware to perform such compression; the aim is to reduce the demand on processors and to increase the speed at which images can be compressed.

Currently, the available space-qualified hardware designed for lossless compression is based on the Rice Compression algorithm.

The algorithm and implementation can easily be adapted to handle modifications in various parameters. In particular, pixel bit depths greater than 8 could be accommodated. Some parameters of the design could be made controllable through the interface to allow greater flexibility. Overall, the LOCO-R algorithm represents significant progress toward producing lossless image compression with improved compression performance as compared with currently available solutions. For images like Lunar, Viking Orbiter, Viking Lander, Europa and Venus it takes higher CPU times than other types, so performance is affected.

5.2.1 Advantages of the proposed system

For natural images, the compression achieved by LOCO-R is better than Rice Compression typically by around 15 percent. The algorithm and implementation can easily be adapted to handle modifications in various parameters. In particular, pixel bit depths greater than 8 could be accommodated. Some parameters of the design could be made controllable through the interface to allow greater flexibility.


5.3 ANALYSIS TOOLS

5.3.1 Class diagram

The class diagram is the main building block in object-oriented modeling. It is used both for general conceptual modeling of the application and for detailed modeling, translating the models into programming code.

The upper part holds the name of the class. The middle part contains the attributes of the class. The bottom part gives the methods or operations the class can take.

Fig 5.1 : System structure class diagram.
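The three compartments map directly onto Java source. A hypothetical class (the name, attributes and operation are illustrative, not taken from the project source):

```java
// Diagram name compartment:       ImageCompressor
// Attribute compartment:          width, height
// Operation compartment:          pixelCount()
public class ImageCompressor {
    private int width;   // attributes (middle compartment)
    private int height;

    ImageCompressor(int w, int h) {
        width = w;
        height = h;
    }

    public int pixelCount() {        // operation (bottom compartment)
        return width * height;
    }

    public static void main(String[] args) {
        System.out.println(new ImageCompressor(4, 2).pixelCount()); // prints 8
    }
}
```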


5.3.2 Use Case Diagram

A use case diagram describes the behavior of the system. It involves actors and use cases. An actor is a person, organization, or external system that plays a role in one or more interactions with the system, whereas a use case is a sequence of actions that provides something of measurable value to an actor and is drawn as a horizontal ellipse.

Fig 5.3 : use case diagram for system


5.3.3 Sequence Diagram A sequence diagram shows, as parallel vertical lines (lifelines), different processes or objects that live simultaneously, and, as horizontal arrows, the messages exchanged between them, in the order in which they occur. This allows the specification of simple runtime scenarios in a graphical manner. Some systems have simple dynamic behavior that can be expressed in terms of specific sequences of messages between a small, fixed number of objects or processes. In such cases sequence diagrams can completely specify the system's behavior.

Fig 5.2 : sequence diagram for system


5.3.4 Collaboration Diagram

A collaboration diagram models the interactions between objects or parts in terms of sequenced messages. Communication diagrams represent a combination of information taken from class, sequence and use case diagrams describing both the static structure and dynamic behavior of a system.
Collaboration diagrams use the free-form arrangement of objects and links as used in Object diagrams. In order to maintain the ordering of messages in such a free-form diagram, messages are labeled with a chronological number and placed near the link the message is sent over.


Fig 5.5 : Collaboration Diagram

5.3.5 Activity Diagram

Activity diagrams are graphical representations of workflows of stepwise activities and actions with support for choice, iteration and concurrency. In the UML, activity diagrams can be used to describe the business and operational step-by-step workflows of components in a system. An activity diagram shows the overall flow of control. The activity diagram has been extended to indicate flows among steps that convey physical matter, and additional changes allow the diagram to better support continuous behaviors and continuous data flows.

Fig 5.4 : Activity diagram for entire system


CHAPTER 6 SYSTEM DESIGN


Systems design is the process of defining the architecture, components, modules, interfaces, and data for a system to satisfy specified requirements. Systems design is therefore the process of defining and developing systems to satisfy specified requirements of the user. The logical design of a system pertains to an abstract representation of the data flows, inputs and outputs of the system. This is often conducted via modeling, using an over-abstract model of the actual system.

The physical design relates to the actual input and output processes of the system. This is laid down in terms of how data is input into a system, how it is verified / authenticated, how it is processed, and how it is displayed as output. Physical design, in this context, does not refer to the tangible physical design of an information system. To use an analogy, a personal computer's physical design involves input via a keyboard, processing within the CPU, and output via a monitor, printer, etc. It would not concern the actual layout of the tangible hardware, which for a PC would be a monitor, CPU, motherboard, hard drive, modems, video/graphics cards, USB slots. The software architecture of a system is the set of structures needed to reason about the system, which comprise software elements, relations among them, and properties. The fundamental organization of a system, embodied in its

components, their relationships to each other and the environment, and the principles governing its design and evolution.


The composite of the design architectures for products and their life-cycle processes. A representation of a system in which there is a mapping of functionality onto hardware and software components, a mapping of the software architecture onto the hardware architecture, and human interaction with these components.

6.1 ARCHITECTURE DESIGN :

Input Image → Pixel Manipulation → Data Estimation → Context Data → Output Image (Compressed Image)

Fig 6.1: System Architecture

A system architecture can best be thought of as a set of representations of an existing (or to-be-created) system. It is used to convey the informational content of the elements comprising a system, the relationships among those elements, and the rules governing those relationships. A system architecture is primarily concerned with the internal interfaces among the system's components or subsystems, and the interface between the system and its external environment, especially the user. A system architecture can be contrasted

with system architecture engineering, which is the method and discipline for effectively implementing the architecture of a system.

The following points describe architecture:

It is a method, because a sequence of steps is prescribed to produce or change the architecture of a system within a set of constraints.

It is a discipline, because a body of knowledge is used to inform practitioners as to the most effective way to architect the system within a set of constraints.


CHAPTER 7 SYSTEM IMPLEMENTATION


Implementation is the stage of the project when the theoretical design is turned into a working system. It can thus be considered the most critical stage in achieving a successful new system and in giving the user confidence that the new system will work and be effective. The implementation stage involves careful planning, investigation of the existing system and its constraints on implementation, designing of methods to achieve changeover, and evaluation of changeover methods.

MAIN MODULES
1. Pixel Contexts
2. Data Estimation
3. Updating Context Data

7.1 IMPLEMENTATION PROCESS

PIXEL CONTEXTS (PIXEL MANIPULATION)

Each pixel is classified into one of several contexts based on quantized values of differences between pairs of nearby pixels. The context is used to estimate the distribution of the prediction residuals and to determine a small estimated correction to the prediction. Both of these estimates are determined adaptively, based solely on statistics for previous pixels with the same context.

The process of dividing contexts is shown here by taking the image, taking each pixel value, and finding its difference from the values of nearby pixels.

Input Image → Manipulating Nearby Pixels → Context Image

Fig 7.1: Dividing contexts

The context is based on the binary concatenation (Q7(a-c), Q7(d-a), Q7(c-b), Q3(b-e)). When regarded as sign-magnitude form, the context can be interpreted as an ordered quadruple of integers. In order to reduce the number of contexts, a context with some quadruple (c1, c2, c3, c4) is merged with the context with quadruple (-c1, -c2, -c3, -c4). This is accomplished by inverting the signs of Q7(a-c), Q7(d-a), Q7(c-b) and Q3(b-e) if the first non-zero value of these is negative. For example, (101,010,000,11) is inverted to give (001,110,000,01), and (000,110,011,00) is inverted to give (000,010,111,00). When inversion occurs, an "invert" flag is set so that the prediction residual (the difference between the estimated and actual pixel values) and the bias can also be inverted. When contexts are combined, the leading bit in the binary concatenation will always be 0 and thus can be dropped, allowing contexts to be described with 10 bits.
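The sign-merging step can be sketched in integer form (rather than sign-magnitude bit strings) as follows. This is an illustrative sketch: the class and method names are hypothetical, and the quantizer outputs are assumed to be small signed integers (Q7 in -3..3, Q3 in -1..1).

```java
public class ContextMergeSketch {

    // Returns {q1, q2, q3, q4, invertFlag}: if the first non-zero quantized
    // difference is negative, all four are negated and the flag is set,
    // merging (c1,c2,c3,c4) with (-c1,-c2,-c3,-c4).
    static int[] merge(int q1, int q2, int q3, int q4) {
        int[] q = {q1, q2, q3, q4};
        int invert = 0;
        for (int v : q) {                       // find first non-zero value
            if (v != 0) {
                if (v < 0) invert = 1;
                break;
            }
        }
        if (invert == 1)
            for (int i = 0; i < 4; i++) q[i] = -q[i];
        return new int[] {q[0], q[1], q[2], q[3], invert};
    }

    public static void main(String[] args) {
        int[] m = merge(-2, 1, 0, 1); // first non-zero is negative -> invert
        System.out.println(java.util.Arrays.toString(m)); // [2, -1, 0, -1, 1]
    }
}
```

When the flag is set, the prediction residual and bias are inverted too, so statistics for the two mirrored contexts are pooled.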


DATA ESTIMATION The second step in the main encoding loop is to estimate the value of the pixel to be encoded. A more accurate estimate will produce a smaller and thus more compressible residual. A preliminary estimate is computed with a fixed estimator, and the adaptively computed bias is added to form the final estimate.

The final estimate is further used to find the encoded pixel value by adding to or subtracting from the actual pixel value.
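In LOCO-I the fixed estimator is the median edge detector, which predicts from the left neighbour a, the neighbour above b, and the above-left neighbour c. A sketch follows, under the assumption that LOCO-R keeps this estimator; the class name is illustrative.

```java
public class MedPredictorSketch {

    // a = left neighbour, b = neighbour above, c = above-left neighbour.
    static int predict(int a, int b, int c) {
        if (c >= Math.max(a, b)) return Math.min(a, b); // edge: pick min
        if (c <= Math.min(a, b)) return Math.max(a, b); // edge: pick max
        return a + b - c;                               // smooth region
    }

    public static void main(String[] args) {
        System.out.println(predict(100, 40, 100)); // horizontal edge: prints 40
        System.out.println(predict(50, 52, 51));   // smooth: 50+52-51, prints 51
    }
}
```

This is the "primitive edge detector" behaviour discussed below: near an edge the predictor snaps to one neighbour instead of averaging across the edge.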

Context Image → Estimating Pixel Value → Data Estimation

Fig 7.2: Data Estimation

Part of the motivation for this estimator is to serve as a primitive edge detector. Note that, for images without many sharp edges, a small improvement might be obtained by using an estimator (such as a + b - c) that is well suited to smooth images. Similarly, for noisy images, it would be desirable to use a predictor that is better at averaging out noise values.

UPDATING CONTEXT DATA


After the encoding operation takes place, the data associated with the context are updated. The goal of this process is to ensure that later values with the same context are encoded efficiently. As previously mentioned, there are 32 bits of data associated with each context.

These modules repeatedly perform encoding operations to plot the pixel color.

Estimated Data → Updating Context Data → Compressed Image

Fig 7.3: Updating pixel values

After the encoding operation takes place, the data associated with the context are updated. The goal of this process is to ensure that later values with the same context are encoded efficiently. As previously mentioned, there are 32 bits of data associated with each context:

Occurrence count (count, 6-bit unsigned integer)
Magnitude sum of residuals (13-bit unsigned integer)
Sum of residuals (sum, 8-bit signed integer)
Bias value (bias, 5-bit signed integer)

During the update process count, sum, and bias may take on values outside their usual ranges. The following steps occur during the update process:
(1) Increment count by 1.
(2) Add the prediction residual to sum. If the new sum is greater than 0, bias is increased by 1 (unless it already equals its maximum value, 15) and sum is decreased by count; if the new sum is less than -count, bias is decreased by 1 (unless it equals its minimum value, -16) and sum is increased by count.
(3) Clip sum to the range [-128, 127].
(4) Increase the magnitude sum by the magnitude of the residual.
(5) If count is 64, then count, magnitude sum, and sum are all divided by 2.
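The five steps can be sketched as follows. This is one reading of the steps above, not the project's source: the field names mirror the 32-bit context record, eps stands for the prediction residual, and the bit-width limits are enforced only where the text says so.

```java
public class ContextUpdateSketch {
    int count;   // occurrence count (6-bit unsigned)
    int magSum;  // magnitude sum of residuals (13-bit unsigned)
    int sum;     // sum of residuals (8-bit signed)
    int bias;    // bias value (5-bit signed)

    void update(int eps) {
        count++;                                         // (1)
        sum += eps;                                      // (2) add residual
        if (sum > 0 && bias < 15) {                      //     nudge bias up
            bias++; sum -= count;
        } else if (sum < -count && bias > -16) {         //     nudge bias down
            bias--; sum += count;
        }
        sum = Math.max(-128, Math.min(127, sum));        // (3) clip sum
        magSum += Math.abs(eps);                         // (4) magnitude sum
        if (count == 64) {                               // (5) halve statistics
            count /= 2; magSum /= 2; sum /= 2;
        }
    }

    public static void main(String[] args) {
        ContextUpdateSketch c = new ContextUpdateSketch();
        c.update(3); // count=1, sum=3>0 -> bias=1, sum=2, magSum=3
        System.out.println(c.count + " " + c.bias + " " + c.sum + " " + c.magSum);
    }
}
```

The halving in step (5) lets the statistics forget old pixels, so the code adapts as image content changes.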

EXAMPLE :


[Figure: pixel R at the centre with neighbouring pixels F1-F4, B and Y]

Fig 7.4: Fixed estimates and encoding estimate

Pixel colors and corresponding codes:
R = 11 00 00
G = 00 11 00
B = 00 00 11
W = 11 11 11
Y = 11 11 00

Calculation of fixed estimates:
f1 = R - Y = (11 00 00) - (11 11 00) = 00 11 00
f2 = R - G = (11 00 00) - (00 11 00) = 10 01 00
f3 = R - B = (11 00 00) - (00 00 11) = 10 11 01
f4 = R - W = (11 00 00) - (11 11 11) = 00 11 11


Here we represent values using 6 bits; of the remaining bits, one is used for the sign and the other for the carry generated, but the sum used further on is represented using six bits.

Calculation of bias:
Bias = sum of fixed estimates = f1 + f2 + f3 + f4 = 10 11 00

Encoding pixels:

To encode pixel color yellow:
Sum = color(R) = 11 00 00
If (Bias < next pixel (actual Y))
Encode(Y) = color(R) + Bias = 11 00 00 + 10 11 00 = 01 11 00 (light yellow)

To encode pixel color green:
Sum = color(R) = 11 00 00
If (Bias > next pixel (actual G))
Encode(G) = color(R) - Bias = 11 00 00 - 10 11 00 = 00 01 00 (light green)

Here the sum values are used, and carry values are stored for decryption but not for encryption. The image colors can become lighter; to avoid that, the boundary colors are plotted as usual so that the image looks as bright as the original. This procedure is repeated for every pixel value to obtain the color values to be plotted.

CHAPTER 8 TESTING
The purpose of testing is to discover errors. Testing is the process of trying to discover every conceivable fault or weakness in a work product. It provides a way to check the functionality of components, sub-assemblies, assemblies and/or a finished product. It is the process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and does not fail in an unacceptable manner. There are various types of test; each test type addresses a specific testing requirement.

TYPES OF TESTS

8.1 UNIT TESTING

Unit testing is usually conducted as part of a combined code and unit test phase of the software lifecycle, although it is not uncommon for coding and unit testing to be conducted as two distinct phases.

8.1.1 Test strategy and approach

Field testing will be performed manually, and functional tests will be written in detail.

8.1.2 Test objectives

All field entries must work properly.
Pages must be activated from the identified link.
The entry screen, messages and responses must not be delayed.
Verify that the entries are of the correct format.
No duplicate entries should be allowed.
All links should take the user to the correct page.

8.1.3 Features to be tested

8.2 INTEGRATION TESTING

Integration tests are designed to test integrated software components to determine if they actually run as one program. Testing is event driven and is more concerned with the basic outcome of screens or fields. Integration tests demonstrate that although the components were individually satisfactory, as shown by successful unit testing, the combination of components is correct and consistent. Integration testing is specifically aimed at exposing the problems that arise from the combination of components.

8.3 FUNCTIONAL TEST

Functional tests provide systematic demonstrations that functions tested are available as specified by the business and technical requirements, system documentation, and user manuals. Functional testing is centered on the following items:

Valid Input        : identified classes of valid input must be accepted.
Invalid Input      : identified classes of invalid input must be rejected.
Functions          : identified functions must be exercised.
Output             : identified classes of application outputs must be exercised.
Systems/Procedures : interfacing systems or procedures must be invoked.

Organization and preparation of functional tests is focused on

requirements, key functions, or special test cases. In addition, systematic coverage pertaining to identified business process flows, data fields, predefined processes, and successive processes must be considered for testing. Before functional testing is complete, additional tests are identified and the effective value of current tests is determined.

8.4 SYSTEM TESTING

System testing ensures that the entire integrated software system meets requirements. It tests a configuration to ensure known and predictable results. An example of system testing is the configuration-oriented system integration test. System testing is based on process descriptions and flows, emphasizing pre-driven process links and integration points.

8.4.1 White box testing

White box testing is testing in which the software tester has knowledge of the inner workings, structure and language of the software, or at least its purpose. It is used to test areas that cannot be reached from a black box level.

8.4.2 Black box testing

Black box testing is testing the software without any knowledge of the inner workings, structure or language of the module being tested. Black box tests, as

most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document. It is testing in which the software under test is treated as a black box: you cannot see into it. The test provides inputs and responds to outputs without considering how the software works.

8.5 ACCEPTANCE TESTING

User acceptance testing is a critical phase of any project and requires significant participation by the end user. It also ensures that the system meets the functional requirements.

TEST RESULTS: All the test cases mentioned above passed successfully. No defects were encountered.


CHAPTER 9 SCREEN LAYOUTS

Screen 9.1: Load image

Fig 9.2 :Loading image


Fig 9.3: Loaded image


Fig 9.3 : compress image


Fig 9.4 : compress image details


Fig 9.5: output


Fig 9.6: Showing compress image


CHAPTER 10 SAMPLE SOURCE CODE


CannyEdgeDetector.java

import java.awt.image.BufferedImage;
import java.io.BufferedReader;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.Arrays;
import java.util.Vector;
import javax.imageio.ImageIO;

public class CannyEdgeDetector {

    private final static float GAUSSIAN_CUT_OFF = 0.005f;
    private final static float MAGNITUDE_SCALE = 100F;
    private final static float MAGNITUDE_LIMIT = 1000F;
    private final static int MAGNITUDE_MAX = (int) (MAGNITUDE_SCALE * MAGNITUDE_LIMIT);

    private int height;
    private int width;
    private int picsize;
    private int[] data;
    private int[] magnitude;
    private BufferedImage sourceImage;
    private BufferedImage edgesImage;
    private float gaussianKernelRadius;
    private float lowThreshold;

    private float highThreshold;
    private int gaussianKernelWidth;
    private boolean contrastNormalized;
    private float[] xConv;
    private float[] yConv;
    private float[] xGradient;
    private float[] yGradient;

    // constructors
    public CannyEdgeDetector() {
        lowThreshold = 2.5f;
        highThreshold = 7.5f;
        gaussianKernelRadius = 2f;
        gaussianKernelWidth = 16;
        contrastNormalized = false;
    }

    // accessors
    public BufferedImage getSourceImage() {
        return sourceImage;
    }

    public void setSourceImage(BufferedImage image) {
        sourceImage = image;
    }

    public BufferedImage getEdgesImage() {
        return edgesImage;
    }


    public void setEdgesImage(BufferedImage edgesImage) {
        this.edgesImage = edgesImage;
    }

    public float getLowThreshold() {
        return lowThreshold;
    }

    public void setLowThreshold(float threshold) {
        if (threshold < 0) throw new IllegalArgumentException();
        lowThreshold = threshold;
    }

    public float getHighThreshold() {
        return highThreshold;
    }

    public void setHighThreshold(float threshold) {
        if (threshold < 0) throw new IllegalArgumentException();
        highThreshold = threshold;
    }

    public void setContrastNormalized(boolean contrastNormalized) {
        this.contrastNormalized = contrastNormalized;
    }

    // methods
    public void process() {
        width = sourceImage.getWidth();
        height = sourceImage.getHeight();
        picsize = width * height;

        initArrays();
        readLuminance();
        if (contrastNormalized) normalizeContrast();
        computeGradients(gaussianKernelRadius, gaussianKernelWidth);
        int low = Math.round(lowThreshold * MAGNITUDE_SCALE);
        int high = Math.round(highThreshold * MAGNITUDE_SCALE);
        performHysteresis(low, high);
        thresholdEdges();
        writeEdges(data);
    }

    // private utility methods
    private void initArrays() {
        if (data == null || picsize != data.length) {
            data = new int[picsize];
            magnitude = new int[picsize];
            xConv = new float[picsize];
            yConv = new float[picsize];
            xGradient = new float[picsize];
            yGradient = new float[picsize];
        }
    }

    private void computeGradients(float kernelRadius, int kernelWidth) {
        // ... Gaussian kernel setup (kernel, diffKernel, kwidth, initX, maxX,
        // initY, maxY) is not shown in this excerpt ...

        // perform convolution in the x and y directions
        for (int x = initX; x < maxX; x++) {
            for (int y = initY; y < maxY; y += width) {
                int index = x + y;
                float sumX = data[index] * kernel[0];
                float sumY = sumX;
                int xOffset = 1;
                int yOffset = width;
                while (xOffset < kwidth) {
                    sumY += kernel[xOffset] * (data[index - yOffset] + data[index + yOffset]);
                    sumX += kernel[xOffset] * (data[index - xOffset] + data[index + xOffset]);
                    yOffset += width;
                    xOffset++;
                }
                yConv[index] = sumY;
                xConv[index] = sumX;
            }
        }

        for (int x = initX; x < maxX; x++) {
            for (int y = initY; y < maxY; y += width) {
                float sum = 0f;
                int index = x + y;
                for (int i = 1; i < kwidth; i++)
                    sum += diffKernel[i] * (yConv[index - i] - yConv[index + i]);
                xGradient[index] = sum;
            }
        }

        for (int x = kwidth; x < width - kwidth; x++) {
            for (int y = initY; y < maxY; y += width) {
                float sum = 0.0f;
                int index = x + y;
                int yOffset = width;
                for (int i = 1; i < kwidth; i++) {
                    sum += diffKernel[i] * (xConv[index - yOffset] - xConv[index + yOffset]);
                    yOffset += width;
                }
                yGradient[index] = sum;
            }
        }

        // non-maximal suppression: keep a pixel only if its gradient magnitude
        // is a local maximum along the gradient direction
        for (int x = initX; x < maxX; x++) {
            for (int y = initY; y < maxY; y += width) {
                int index = x + y;
                int indexN = index - width;
                int indexS = index + width;
                int indexW = index - 1;
                int indexE = index + 1;
                int indexNW = indexN - 1;
                int indexNE = indexN + 1;
                int indexSW = indexS - 1;
                int indexSE = indexS + 1;

                float xGrad = xGradient[index];
                float yGrad = yGradient[index];
                float gradMag = hypot(xGrad, yGrad);
                float nMag = hypot(xGradient[indexN], yGradient[indexN]);
                float sMag = hypot(xGradient[indexS], yGradient[indexS]);
                float wMag = hypot(xGradient[indexW], yGradient[indexW]);
                float eMag = hypot(xGradient[indexE], yGradient[indexE]);
                float neMag = hypot(xGradient[indexNE], yGradient[indexNE]);
                float seMag = hypot(xGradient[indexSE], yGradient[indexSE]);
                float swMag = hypot(xGradient[indexSW], yGradient[indexSW]);
                float nwMag = hypot(xGradient[indexNW], yGradient[indexNW]);
                float tmp;
                if (xGrad * yGrad <= (float) 0 /*(1)*/
                        ? Math.abs(xGrad) >= Math.abs(yGrad) /*(2)*/
                            ? (tmp = Math.abs(xGrad * gradMag)) >= Math.abs(yGrad * neMag - (xGrad + yGrad) * eMag) /*(3)*/
                                && tmp > Math.abs(yGrad * swMag - (xGrad + yGrad) * wMag) /*(4)*/
                            : (tmp = Math.abs(yGrad * gradMag)) >= Math.abs(xGrad * neMag - (yGrad + xGrad) * nMag) /*(3)*/
                                && tmp > Math.abs(xGrad * swMag - (yGrad + xGrad) * sMag) /*(4)*/
                        : Math.abs(xGrad) >= Math.abs(yGrad) /*(2)*/
                            ? (tmp = Math.abs(xGrad * gradMag)) >= Math.abs(yGrad * seMag + (xGrad - yGrad) * eMag) /*(3)*/
                                && tmp > Math.abs(yGrad * nwMag + (xGrad - yGrad) * wMag) /*(4)*/
                            : (tmp = Math.abs(yGrad * gradMag)) >= Math.abs(xGrad * seMag + (yGrad - xGrad) * sMag) /*(3)*/
                                && tmp > Math.abs(xGrad * nwMag + (yGrad - xGrad) * nMag) /*(4)*/
                ) {
                    magnitude[index] = gradMag >= MAGNITUDE_LIMIT
                            ? MAGNITUDE_MAX
                            : (int) (MAGNITUDE_SCALE * gradMag);
                } else {
                    magnitude[index] = 0;
                }
            }
        }
    }

    private float hypot(float x, float y) {
        if (x == 0f) return y;
        if (y == 0f) return x;
        return (float) Math.sqrt(x * x + y * y);
    }

    private float gaussian(float x, float sigma) {
        return (float) Math.exp(-(x * x) / (2f * sigma * sigma));
    }

    private void performHysteresis(int low, int high) {
        Arrays.fill(data, 0);
        int offset = 0;
        // scan in raster order so that offset == x + y * width
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                if (data[offset] == 0 && magnitude[offset] >= high) {
                    follow(x, y, offset, low);
                }
                offset++;
            }
        }
    }

    private void readLuminance() {
        int type = sourceImage.getType();
        if (type == BufferedImage.TYPE_INT_RGB || type == BufferedImage.TYPE_INT_ARGB) {
            int[] pixels = (int[]) sourceImage.getData().getDataElements(0, 0, width, height, null);
            for (int i = 0; i < picsize; i++) {
                int p = pixels[i];
                int r = (p & 0xff0000) >> 16;
                int g = (p & 0xff00) >> 8;
                int b = p & 0xff;
                data[i] = luminance(r, g, b);
            }
        } else if (type == BufferedImage.TYPE_BYTE_GRAY) {

            byte[] pixels = (byte[]) sourceImage.getData().getDataElements(0, 0, width, height, null);
            for (int i = 0; i < picsize; i++) {
                data[i] = (pixels[i] & 0xff);
            }
        } else if (type == BufferedImage.TYPE_USHORT_GRAY) {
            short[] pixels = (short[]) sourceImage.getData().getDataElements(0, 0, width, height, null);
            for (int i = 0; i < picsize; i++) {
                data[i] = (pixels[i] & 0xffff) / 256;
            }
        } else if (type == BufferedImage.TYPE_3BYTE_BGR) {
            byte[] pixels = (byte[]) sourceImage.getData().getDataElements(0, 0, width, height, null);
            int offset = 0;
            for (int i = 0; i < picsize; i++) {
                int b = pixels[offset++] & 0xff;
                int g = pixels[offset++] & 0xff;
                int r = pixels[offset++] & 0xff;
                data[i] = luminance(r, g, b);
            }
        } else {
            throw new IllegalArgumentException("Unsupported image type: " + type);
        }
    }

    private void normalizeContrast() {
        int[] histogram = new int[256];
        for (int i = 0; i < data.length; i++) {
            histogram[data[i]]++;
        }
        int[] remap = new int[256];

        int sum = 0;
        int j = 0;
        for (int i = 0; i < histogram.length; i++) {
            sum += histogram[i];
            int target = sum * 255 / picsize;
            for (int k = j + 1; k <= target; k++) {
                remap[k] = i;
            }
            j = target;
        }
        for (int i = 0; i < data.length; i++) {
            data[i] = remap[data[i]];
        }
    }

    private void writeEdges(int pixels[]) {
        if (edgesImage == null) {
            edgesImage = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
        }
        edgesImage.getWritableTile(0, 0).setDataElements(0, 0, width, height, pixels);
    }

    public BufferedImage mainMethod(String img3) throws Exception {
        BufferedImage image = ImageIO.read(new File(img3));
        String str1 = "";
        while (str1.equals("yes")) {
            Vector inputFiles = new Vector();
            File f = new File("C:/h.txt");

            FileInputStream fis = new FileInputStream(f);
            BufferedReader br = new BufferedReader(new InputStreamReader(fis));
            String str = br.readLine();
            while (str != null) {
                inputFiles.add(str);
                str = br.readLine();
            }
            FileOutputStream fff = new FileOutputStream("C:/h1.txt", false);
            for (int i = 0; i < inputFiles.size(); i++) {
                String ss = inputFiles.get(i).toString();
                BufferedImage bff = ImageIO.read(new File(ss));
                CannyEdgeDetector detector = new CannyEdgeDetector();
                detector.setLowThreshold(5.f);
                detector.setHighThreshold(10f);
                detector.setSourceImage(bff);
                detector.process();
                BufferedImage edges = detector.getEdgesImage();
            }
        }
        return (image);
    }
}

Steganography.java

import java.io.File;
import java.awt.Point;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.awt.image.WritableRaster;
import java.awt.image.DataBufferByte;

import javax.imageio.ImageIO;
import javax.swing.JFileChooser;
import javax.swing.JFrame;
import javax.swing.JOptionPane;

/*
 * Class Steganography
 */
public class Steganography {

    /*
     * Steganography empty constructor
     */
    public Steganography() {
    }

    /*
     * Encrypt an image with text; the output file will be of type .png.
     * @param path   The path of the image to modify
     * @param conten The message bytes to hide in the image
     */
    public boolean encode(String path, byte[] conten) {
        BufferedImage image_orig = getImage(path);

        // user space is not necessary for encrypting
        BufferedImage image = user_space(image_orig);
        image = add_text(image, conten);
        String recdata = "";
        try {
            JFileChooser chooser = new JFileChooser();
            int selected = chooser.showSaveDialog(new JFrame());
            if (selected == JFileChooser.APPROVE_OPTION) {
                File f = chooser.getSelectedFile();
                System.out.println(f.getAbsolutePath());
                recdata = f.getAbsolutePath();
            }
        } catch (Exception ee) {
            ee.printStackTrace();
        }
        return (setImage(image, new File(recdata), "png"));
    }

    /*
     * Extracts the hidden bytes from an image's least significant bits:
     * the first 32 bits encode the payload length, the rest the payload.
     */
    private byte[] decode_text(byte[] image) {
        int length = 0;
        int offset = 32;
        // the first 32 carrier bytes hold the length of the hidden message
        for (int i = 0; i < 32; ++i) {
            length = (length << 1) | (image[i] & 1);
        }
        byte[] result = new byte[length];
        // loop through each byte of text
        for (int b = 0; b < result.length; ++b) {
            // loop through each bit within a byte of text
            for (int i = 0; i < 8; ++i, ++offset) {
                // assign bit: [(new byte value) << 1] OR [(carrier byte) AND 1]
                result[b] = (byte) ((result[b] << 1) | (image[offset] & 1));
            }
        }
        return result;
    }
}
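To make the bit manipulation in the decode routine concrete, the sketch below uses the same least-significant-bit layout the listing assumes: 32 length bits followed by 8 bits per payload byte, one bit per carrier byte. The encode helper and the class name are illustrative additions for this demonstration, not part of the project listing.

```java
public class LsbDemo {

    // Write the 32-bit payload length, then each payload byte, into the
    // least significant bits of the carrier bytes (most significant bit first).
    static byte[] encode(byte[] payload, int imageLen) {
        byte[] image = new byte[imageLen];
        int offset = 0;
        for (int i = 31; i >= 0; i--)
            image[offset++] = (byte) ((payload.length >> i) & 1);
        for (byte b : payload)
            for (int i = 7; i >= 0; i--)
                image[offset++] = (byte) ((b >> i) & 1);
        return image;
    }

    // Same logic as decode_text in the listing: rebuild the length from the
    // first 32 LSBs, then shift in 8 bits per recovered byte.
    static byte[] decode(byte[] image) {
        int length = 0;
        int offset = 32;
        for (int i = 0; i < 32; ++i)
            length = (length << 1) | (image[i] & 1);
        byte[] result = new byte[length];
        for (int b = 0; b < result.length; ++b)
            for (int i = 0; i < 8; ++i, ++offset)
                result[b] = (byte) ((result[b] << 1) | (image[offset] & 1));
        return result;
    }

    public static void main(String[] args) {
        byte[] secret = "hi".getBytes();
        byte[] image = encode(secret, 32 + 8 * secret.length);
        System.out.println(new String(decode(image)));  // prints "hi"
    }
}
```

Because only the lowest bit of each carrier byte is touched, embedding in a real image changes each pixel sample by at most one level, which is why the change is visually imperceptible.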


CHAPTER 11 CONCLUSION AND FUTURE ENHANCEMENT


11.1 CONCLUSION

The LOCO-R algorithm represents significant progress toward lossless image compression with improved compression performance compared with currently available solutions. The algorithm and implementation can easily be adapted to handle modifications in various parameters.

11.2 FUTURE ENHANCEMENTS

The algorithm and implementation can easily be adapted to handle modifications in various parameters. In particular, pixel bit depths greater than 8 could be accommodated, and some parameters of the design could be made controllable through the interface to allow greater flexibility.
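As one possible sketch of accommodating pixel bit depths greater than 8, the snippet below derives the legal sample range from a configurable bit depth rather than hard-coding the 8-bit range [0, 255]; the class and method names are illustrative and not taken from the project code.

```java
public class PixelDepth {

    // Maximum legal sample value for a given bit depth,
    // e.g. 255 for 8-bit pixels, 4095 for 12-bit pixels.
    static int maxValue(int bitDepth) {
        return (1 << bitDepth) - 1;
    }

    // Clamp a sample into the legal range for the configured depth,
    // generalizing the fixed [0, 255] check used for 8-bit images.
    static int clamp(int sample, int bitDepth) {
        return Math.max(0, Math.min(sample, maxValue(bitDepth)));
    }

    public static void main(String[] args) {
        System.out.println(maxValue(8));     // 255, the range used in this report
        System.out.println(maxValue(12));    // 4095, e.g. for 12-bit sensor data
        System.out.println(clamp(5000, 12)); // 4095
    }
}
```

Threading such a bit-depth parameter through the predictor and coder is the main change a deeper-pixel version of the implementation would need.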


CHAPTER 12 BIBLIOGRAPHY
REFERENCE BOOKS:

[1] R. F. Rice, Some Practical Universal Noiseless Coding Techniques, Part III, Module PSI14,K+, JPL Publication 91-3, Jet Propulsion Laboratory, Pasadena, California, November 1991.

[2] M. J. Weinberger, G. Seroussi, and G. Sapiro, "LOCO-I: A Low Complexity, Context-Based, Lossless Image Compression Algorithm".

[3] M. J. Weinberger, G. Seroussi, and G. Sapiro, "The LOCO-I Lossless Image Compression Algorithm: Principles and Standardization into JPEG-LS", IEEE Transactions on Image Processing, vol. 9, no. 8, pp. 1309-1324, August 2000.

[4] Information Technology - Lossless and Near-Lossless Compression of Continuous-Tone Still Images, ISO/IEC 14495-1, ITU-T Recommendation T.87, 1999.

[5] M. J. Weinberger, G. Seroussi, and G. Sapiro, "LOCO-I: A Low Complexity, Context-Based, Lossless Image Compression Algorithm", Proc. of the 1996 Data Compression Conference (DCC '96), Snowbird, Utah, pp. 141-149, March 1996.

[6] M. Rabbani and P. Jones, Digital Image Compression Techniques, Bellingham, Washington: SPIE Publications, 1991.

[7] S. W. Golomb, "Run-Length Encodings", IEEE Transactions on Information Theory, vol. IT-12, no. 3, pp. 399-401, July 1966.


WEBSITES:

[8] http://www.ccsds.org/blue

[9] http://java.sun.com

[10] http://www.java2s.com/

