
An Android Based Application of Skew Detection and Correction of

a Captured Paper Image by Using Accelerometer Sensor

This thesis is submitted in partial fulfillment of the requirement for the degree of Bachelor of
Science in Computer Science & Engineering.

S.M. RAKIBUL HAQUE

ID: 1204094

Supervised by
A.H.M Ashfak Habib
Asst. Professor
Department of Computer Science & Engineering (CSE)
Chittagong University of Engineering & Technology (CUET)

Department of Computer Science & Engineering (CSE)


Chittagong University of Engineering & Technology (CUET)
Chittagong-4349, Bangladesh
November, 2017

The thesis titled “Skew Detection and Correction of a Captured Paper Image by Using
Accelerometer Sensor” submitted by Roll No. 1204094, Session 2015-2016, has been accepted
as satisfactory in partial fulfillment of the requirement for the degree of Bachelor of Science in
Computer Science & Engineering (CSE) as B.Sc. Engineering, to be awarded by the Chittagong
University of Engineering & Technology (CUET).

Board of Examiners

1. Chairman

Dr. Mohammad Shamsul Arefin


Professor & Head
Department of Computer Science & Engineering (CSE)
Chittagong University of Engineering & Technology (CUET)

2. Member

A.H.M Ashfak Habib (Ex-officio)


Asst. Professor
Department of Computer Science & Engineering (CSE)
Chittagong University of Engineering & Technology (CUET)

3. Member

Dr. Asaduzzaman (External)


Professor,
Department of Computer Science & Engineering (CSE)
Chittagong University of Engineering & Technology (CUET)

Statement of Originality

It is hereby declared that the contents of this project are original and that no part of it has been
submitted elsewhere for the award of any degree or diploma.

------------------------------------ ------------------------------------
Signature of the Supervisor
Signature of the Candidate
Date: Date:

Acknowledgement

I am grateful to almighty God, who has given me the ability to complete this project and to
fulfill the requirements of the B.Sc. Engineering degree. I am indebted to my supervisor
A.H.M Ashfak Habib, Asst. Professor of the Department of Computer Science and
Engineering, Chittagong University of Engineering and Technology, for his encouragement,
proper guidance, constructive criticism and endless patience throughout the progress of the
project. He supported me by providing books, conference and journal papers, and valuable
advice. From the very beginning he always encouraged me with proper guidance, so the project
never seemed a burden to me. My sincerest acknowledgement extends to Dr. Asaduzzaman,
Professor of the Department of Computer Science and Engineering, Chittagong University of
Engineering and Technology, for his encouragement and cooperation. He motivated me to
complete the final thesis in time. Finally, I want to express my gratitude to all the other teachers
of our department for their sincere and active cooperation in completing the project work.

Abstract

Skew detection and correction of documents is a problematic step in document image analysis.
Many methods have been proposed for estimating the angle at which a document image is
rotated (the document skew) in binary document images. A simple solution for skew detection
is to determine the locations of at least two corners of the original document and compute the
skew angle from those points. However, this can be error-prone because of the non-linear
distortions that occur when the document is not on a flat surface during capture. This project
therefore aims to develop a skew detection and correction technique that works while the image
is being captured. The accelerometer sensor is used to measure the skew and guide the user to
correct the skew angle on every side of the image. To take a picture of a paper or any text
document, the phone is first laid flat on the surface of the paper and the accelerometer values on
the X, Y and Z axes are saved in the SharedPreferences database. The picture is then taken by
moving the camera according to the X and Y directions shown on the screen. When the new
accelerometer values come within a threshold of the saved ones in both the X and Y directions
simultaneously, an alarm is shown so that the picture can be taken without any skew. Some
additional options are attached to the system: the picture can be retaken if the captured image is
not as good as expected.

Table of Contents
Chapter 1: Introduction 9
1.1 Preliminaries…………………………………………………………………………………9
1.2 Problem definition………..…………………………….……………………..…………...9
1.3 Motivation...............................................................................................................................9
1.4 Objective and task outlines……….……………..…………………………….………….10
1.5 Organization of the report…………………….. ...............................................................10

Chapter 2: Literature Review 11


2.1 Introduction……………………………………………………………………………......11
2.2 Related work…….………………………………………………………………………...15
2.3 Used resource………………….………………………………………………………...…19
2.4 Limitation of the previous work.……………………………………….……………….....30

Chapter 3: Methodology 31
3.1 Abstract view……………………………………………..………………………...…......31
3.2 Analytical representation of the system…………………………………………...……....32
3.3 Application flow diagram …………………………………………………………………34

Chapter 4: Implementation 36
4.1 Home Screen and Menu……………………………………………………………………36
4.2 Custom Camera Display…………………………………………………………………....38
4.3 Display the green alarm…………………………………………………………………….39
4.4 Retake the picture …………………………………………………………………………40
4.5 Storage the picture………………………………………………………………………….41
4.6 Experimental Result and Performance Analysis……………………………………………42

Chapter 5: Conclusion 44
5.1 Conclusion……………………………………………………………………………….….44
5.2 Future Recommendation……………………………………………….…..……………....45

Bibliography 46
Appendices 48

List of Figures

Figure 2.1: Before and After Skew Correction…………………………………………………..11


Figure 2.2: The illustration of the scanning process of an opened book page………...………....12
Figure 2.3: Skewed number plates (left) and Skew corrected number plates by using
Hough Transform (right)………………………………………………….……………………...13
Figure 2.4: Skew correction using CAM SCANNER APP …………………………..…...…....15
Figure 2.5: Skew correction using openCv……………..………………………………………...16
Figure 2.6 : Direction of Axis of Accelerometer in android phone………………..…………......23
Figure 2.7: Accelerometer value in three axis …………………………………………………....24
Figure 2.8 : Take the accelerometer value from device……………………………………….......25
Figure 2.9: Android Logo………………………………………………………………………...26
Figure 3.1: Overview of interactive application……………………..…………………………....31
Figure 3.2: Before and After Skew Correction………………...……….…………………….…..32
Figure 3.3 : During Capturing image skew value show………….…….………………………....33
Figure 3.4: Application Flow Diagram………………………………………………………........34
Figure 4.1: Menu Options………………………………………………………………………....36

Figure 4.2: Custom Camera Display……………………………………………………………....37


Figure 4.3: Display alarm button………………………………………………………………......38
Figure 4.4: Captured image preview……………………………………………………………....39
Figure 4.5: Save the picture in storage……………………………………………………….........40

List of Tables
Table 4.1: List of image measurement and system output…………………………………………42

Chapter 1
Introduction
1.1 Preliminaries
Skew is one of the major problems when printing or scanning a paper image. A picture taken
casually almost always contains some skew angle, which can be resolved in many ways. Many
researchers have proposed solutions to the skew and slant problems in images of papers and
documents.
A simple solution for skew detection is to determine the locations of at least two corners of the
original document and compute the skew angle from those points. However, this can be error-prone
because of the non-linear distortions that occur when the document is not on a flat surface during
capture. Also, the entire scan surface may be obscured by the input document, or the input may
itself have been produced from a skewed original. In either case, deriving the skew angle from the
corners or edges of the page is problematic.
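As a concrete illustration (not code from this project), the two-corner idea reduces to basic trigonometry: the skew angle is simply the angle of the line joining two detected corners of the page. A minimal sketch in plain Java, with hypothetical corner coordinates as inputs:

```java
// Illustrative only: estimate skew from two detected corner points of a page.
// The corner coordinates are assumed inputs; real corner detection would come
// from image analysis and is outside the scope of this sketch.
public class CornerSkew {
    // Returns the skew angle, in degrees, of the line joining two corners
    // (e.g. the top-left and top-right corners of the document).
    public static double skewAngleDeg(double x1, double y1, double x2, double y2) {
        return Math.toDegrees(Math.atan2(y2 - y1, x2 - x1));
    }
}
```

For example, a top edge that rises 10 pixels over a 100-pixel run corresponds to a skew of about 5.7 degrees; exactly the non-linear page distortions discussed above are what make this simple estimate unreliable in practice.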

1.2 Problem Definition


Skew is any deviation of the image from the original document such that the text is no longer
parallel to the horizontal or vertical axis. Skew correction remains one of the vital steps in
document processing; without it, a document cannot be read properly.

1.3 Motivation
A literature survey of the existing solutions to the problem of skew detection leads to the following
conclusions:
• Solutions that provide accurate skew angles are slow.
• Solutions that reduce the required time achieve lower accuracy in skew angle determination.
A trade-off between accuracy and time complexity is therefore the motivation for this work.

1.4 Objectives and task outlines
In this project we propose an application that captures images using accelerometer data. The
objectives of this study are as follows:

• To design a smartphone-based skew detection technique using the accelerometer sensor.
• To correct the skew angle on each and every side of an image that is to be printed or used
for other purposes.
• To make the application user friendly, so that everyone can use it for important purposes.
• To let the app be used to capture images from any paper or book page for making PDF
documents.
• To make the application more time-efficient than others.
• To use the accelerometer sensor, which is very productive in giving accurate results.
• To print text document images without any skew problem.

1.5 Organization of the Report

The rest of this report is outlined as follows:

Chapter 2 gives an overview of past work related to this problem and a brief description of the
components necessary to build the application: what skew is, why it is important to remove it,
and some methods for producing images without any skew problem, e.g. PCA, Hough
Transform and scan-line techniques.

Chapter 3 discusses the full process required to build the application: the abstract view of the
project, the analytical representation of the system and, to clarify the whole process, a preview
of a flowchart.

Chapter 4 describes the experimental results and the evaluation of the system, giving an
overview of the implementation of this project.

Chapter 5 presents the conclusions and future work: what can be done in the future to make the
application more user friendly and more efficient at detecting and correcting skew.

Chapter 2

Literature Review

2.1 Introduction
This section reviews research papers on the detection and prevention of skew in captured
images and scanned documents.
2.1.1 What is skew of an image

In document image analysis, skew refers to the rotation of a document image such that the lines
of text are no longer parallel to the horizontal axis of the image. When a page is scanned or
photographed at an angle, every text line is tilted by that angle, called the skew angle, and the
goal of skew correction is to rotate the image back so that the text is level.

Figure 2.1: Before and After Skew Correction

2.1.2 Why is skew determination and correction important?
There are a variety of circumstances in which it is useful to determine the text skew and
orientation:
• Improves text recognition. Many systems will fail if presented with text oriented sideways
or upside-down. Performance of recognition systems also degrades if the skew is more than a
few degrees.
• Simplifies interpretation of page layout. It is easier to identify text lines and text columns if
the image skew is known or the image is deskewed.
• Improves baseline determination. The text line baselines can be found more robustly if the
skew angle is accurately known.
• Improves visual appearance. Images can be displayed or printed after rotation to remove
skew. Multiple-page document images can also be oriented consistently, regardless of scan
orientation or the design of duplex scanners.
2.1.3 Review of Illumination and Skew Correction Techniques for Scanned
Documents
Whenever we scan an open book, we often find that the quality of the scanned document is
degraded: various types of scanning artifacts ruin its legibility. Figure 2.2 illustrates the
scanning process of an opened book page: (a) shows the side view of an opened book, (b)
shows the profile of illumination on a single page, and (c) shows the resulting image with
scanning shading, dark borders and skew artifacts [6].

Figure 2.2: The illustration of the scanning process of an opened book page.
Principal Component analysis (PCA)
The PCA-based method [7] is divided into five sub-modules: pre-processing, PCA, skew
correction, 3D correction and character segmentation. Vehicle number plate localization uses
a preprocessing algorithm that exploits a property of number plates: all irrelevant areas of the
car image are removed by masking the central pixel in each group of identical pixels, row-wise
and column-wise. Principal component analysis (PCA) is efficient at identifying patterns in
high-dimensional data and highlighting the similarities and differences in the data; it has
proved successful in the fields of face recognition and image compression.

Hough Transform
The Hough Transform [8] has also proved to be an efficient approach for removing the skew of
number plates. The process includes capturing the images with a camera, detecting the skew
angle, applying a skew correction algorithm, applying an edge detection algorithm,
implementing the Hough rectangular transform on the resulting Canny images, and segmenting
the part of the image containing the number plate into a separate window. Figure 2.3 shows the
Hough Transform process applied to vehicle number plates.

Figure 2.3: Skewed number plates (left) and Skew corrected number plates by using Hough
Transform (right)

2.2 Related Work
2.2.1 CAM SCANNER:
CamScanner is one of the most popular apps for scanning paper. It can solve the skew problem
and also increase the visibility of each and every word. The problem is that it solves skew only
after the image has been captured, through cropping. In this project I therefore want to build an
app that can detect and solve the skew problem during the capture of the image of a page to be
printed or scanned. To build this app I use the accelerometer sensor. An accelerometer is a
sensor that measures the tilting motion and orientation of a mobile phone. The accelerometer is
used to ensure photographs are presented in the correct way, portrait or landscape, depending
on how the phone is held. Accelerometers are also increasingly used as a means of user input,
most noticeably in games where tilting and rotating the handset can control on-screen action.

Figure 2.4: Skew correction using CAM SCANNER APP


The figure above shows photo editing in CamScanner. After capturing the picture, the app
shows a crop option to select the page portion of the image; it then crops the image and
modifies it by correcting the skew and slant problems. Cropping and brightening make the text
in the image clearer to everyone: if any text is not in a clear format, the app can make the fonts
brighter and more visible.

2.2.2 Text skew correction with OpenCV and Python:

Figure 2.5: Skew correction using openCv

Many studies have been done in this area in the last few years. Most of them tried to explore
the causes and types of skew; very few solve the problem using modern technology.

In 2015, Basavanna M and S. S. Gornale [9] studied skew detection and correction. Skew is
inexorably introduced into a document during scanning, and it has a direct effect on the
reliability and efficiency of the segmentation and feature extraction stages of various
applications. Hence, skew detection and correction in document images are critical steps before
layout analysis. In that work a novel method for skew detection and correction in scanned
document images using Principal Component Analysis (PCA) is presented.

In 1987, H. S. Baird [10] tried to solve this problem. Skew makes the visualisation of images
more difficult for human users. Besides that, it increases the complexity of any kind of
automatic image recognition, degrades the performance of OCR tools, increases the space
needed for image storage, etc. Thus, skew correction is an important part of any document
processing system and has been a concern of researchers for almost two decades.

In 2014, Bishakha Jain and Mrinaljit Borah [11] presented the skew detection and correction of
scanned document images written in the Assamese language using horizontal and vertical
projection profile analysis, and brought out the differences between the two techniques after
implementation.

In 2010, Gaofeng Meng [12] estimated the skew angles of document images. Rather than
deriving a skew angle merely from text lines, the proposed method exploits various types of
visual cues of image skew available in local image regions. The visual cues are extracted by
the Radon transform, and outliers among them are iteratively rejected through a floating
cascade.

In 1997, B. Gatos, N. Papamarkos and C. Chamzas [13] proposed a computationally efficient
procedure for skew detection and text line position determination in digitized documents,
based on the cross-correlation between the pixels of vertical lines in a document. The
determination of the skew angle in documents is essential in optical character recognition
systems.

In 1994, Ray Smith [14] proposed a simple and efficient skew detection algorithm based on a
simple and robust method for finding rows of text independently of the skew angle of the
image. After finding the rows of text, it is possible to obtain an accurate estimate of the skew
angle of each text line and thereby estimate the skew angle of the whole page to a high degree
of accuracy.

A common method of skew detection is to simplify the Hough transform, introduced in 1962
[15], by applying it to only a subset of pixels in the image. The basic concept behind methods
based on the Hough transform is the same:

1. Select a subset of pixels of the image that are few in number and most likely to form
straight lines parallel to the baselines of the text rows.
2. For as many different directions as is necessary to achieve the desired accuracy, project the
points selected in step 1 parallel to each direction in turn.
3. The direction in which the highest spikes appear in the projection is the direction of the
page skew.
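The three steps above can be sketched in plain Java. This is an illustrative implementation, not code from this thesis: the point subset of step 1 is assumed to be given, and the score used here is the sum of squared histogram bin counts, which peaks when projected text rows collapse into sharp spikes.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of the projection idea behind Hough-style skew detection.
// pts is assumed to be the pre-selected subset of black-pixel coordinates.
public class ProjectionSkew {
    // Score one candidate angle (step 2): project every point along that
    // direction, accumulate a histogram of projected positions, and sum the
    // squared bin counts. Rows aligned with the angle give tall, narrow spikes.
    static double score(double[][] pts, double angleDeg) {
        double a = Math.toRadians(angleDeg);
        Map<Long, Integer> bins = new HashMap<>();
        for (double[] p : pts) {
            long bin = Math.round(p[1] * Math.cos(a) - p[0] * Math.sin(a));
            bins.merge(bin, 1, Integer::sum);
        }
        double s = 0;
        for (int c : bins.values()) s += (double) c * c;
        return s;
    }

    // Step 3: try each candidate direction and keep the best-scoring one.
    public static double detect(double[][] pts, double maxDeg, double stepDeg) {
        double best = 0, bestScore = -1;
        for (double a = -maxDeg; a <= maxDeg; a += stepDeg) {
            double s = score(pts, a);
            if (s > bestScore) { bestScore = s; best = a; }
        }
        return best;
    }
}
```

Given synthetic points lying on two parallel text baselines tilted by 5 degrees, `detect` recovers that angle from the candidate grid.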
In 1987, Baird [16] suggested using bounding boxes of connected components to estimate
image skew. The coordinates of a token point, at the bottom center of each bounding box, were
selected, and a function S_tokens of the skew angle was computed from these coordinates.
Specifically, S_tokens(θ) is the sum of squares of the number of such points computed along a
set of lines at angle θ to the raster direction. Baird simulated a vertical shear on the set of points
and performed the sums over points with the same y-coordinate. Aside from a constant
(independent of θ), the function S_tokens is the variance of the number of tokens on a line, as a
function of the angle. This variance is maximized in the direction in which the tokens of each
text line tend to fall near the same line.

In 1988, Postl [17] described a method for determining skew that contains the basic method we
use. Straight lines of the image are traversed at a set of angles relative to the raster direction,
and a function S(θ) is computed that has a maximum when the scan direction θ is along the text
lines.
Unlike Baird, who computes tokens from connected components, Postl uses every pixel in the
image. The function S is similar to Baird's. An angle θ is chosen, and pixel sums are found
along lines in the image at this angle. Instead of squaring the sum of tokens, Postl squares the
difference between the sums of ON pixels on adjacent lines, and S(θ) is found by summing
over all lines. It can be seen that Postl's function S(θ) is, aside from a constant, just the
variance of the difference between pixel sums on adjacent lines at angle θ.
More recently, Chen and Haralick [18] used a more involved method that started with threshold
reduction, applied recursive morphological closings and openings to close up text lines and
remove ascenders and descenders, determined connected components, fit the best line to the
points in each set of connected components, and estimated a global skew by discarding outlier
lines. When run on a large database, the reported skew error was greater than 0.3 degrees on
about 10 percent of the images in the UW English Document Image Database [19].
2.3 Used resource
2.3.1 Accelerometer sensor
One of the most common inertial sensors is the accelerometer, a dynamic sensor capable of a
vast range of sensing. Accelerometers are available that can measure acceleration in one, two, or
three orthogonal axes. They are typically used in one of three modes:
• As an inertial measurement of velocity and position;
• As a sensor of inclination, tilt, or orientation in 2 or 3 dimensions, as referenced from the
acceleration of gravity (1 g = 9.8 m/s²);
• As a vibration or impact (shock) sensor.
There are considerable advantages to using an analog accelerometer as opposed to
an inclinometer such as a liquid tilt sensor – inclinometers tend to output binary information
(indicating a state of on or off), thus it is only possible to detect when the tilt has exceeded some
thresholding angle.

Principles of Operation
Most accelerometers are Micro-Electro-Mechanical Sensors (MEMS). The basic principle of
operation behind the MEMS accelerometer is the displacement of a small proof mass etched into
the silicon surface of the integrated circuit and suspended by small beams. Consistent with
Newton's second law of motion (F = ma), as an acceleration is applied to the device, a force
develops which displaces the mass. The support beams act as a spring, and the fluid (usually air)
trapped inside the IC acts as a damper, resulting in a second order lumped physical system. This
is the source of the limited operational bandwidth and non-uniform frequency response of
accelerometers. For more information, see reference to Elwenspoek, 1993.
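As a back-of-envelope illustration of this principle (with assumed values, not figures from any datasheet): at steady state the spring force k·x balances the inertial force m·a, so the proof-mass displacement is x = m·a/k, directly proportional to the applied acceleration.

```java
// Illustrative steady-state model of a MEMS proof mass: the spring force k*x
// balances the inertial force m*a, so measuring the displacement x recovers
// the acceleration a. Damping only affects the transient, not the steady state.
public class ProofMass {
    public static double displacement(double massKg, double accelMs2, double springNPerM) {
        return massKg * accelMs2 / springNPerM;
    }
}
```

For a hypothetical 1-nanogram-scale proof mass, even 1 g of acceleration produces only a sub-micron displacement, which is why the capacitive and piezoelectric readout schemes described next are needed.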
Types of Accelerometer
There are several different principles upon which an analog accelerometer can be built. Two very
common types utilize capacitive sensing and the piezoelectric effect to sense the displacement of
the proof mass proportional to the applied acceleration.

Capacitive
Accelerometers that implement capacitive sensing output a voltage dependent on the distance
between two planar surfaces. One or both of these “plates” are charged with an electrical current.
Changing the gap between the plates changes the electrical capacity of the system, which can be
measured as a voltage output. This method of sensing is known for its high accuracy and
stability. Capacitive accelerometers are also less prone to noise and variation with temperature,
typically dissipate less power, and can have larger bandwidths due to internal feedback circuitry.
(Elwenspoek 1993)

Piezoelectric
Piezoelectric sensing of acceleration is natural, as acceleration is directly proportional to force.
When certain types of crystal are compressed, charges of opposite polarity accumulate on
opposite sides of the crystal. This is known as the piezoelectric effect. In a piezoelectric
accelerometer, charge accumulates on the crystal and is translated and amplified into either an
output current or voltage. Piezoelectric accelerometers only respond to AC phenomena such as
vibration or shock. They have a wide dynamic range, but can be expensive depending on their
quality (Doscher 2005). Piezo-film based accelerometers are best used to measure AC
phenomena such as vibration or shock, rather than DC phenomena such as the acceleration of
gravity. They are inexpensive, and also respond to other phenomena such as temperature,
sound, and pressure (Doscher 2005).

Overview of other types that are less used in audio applications


Piezoresistive
Piezoresistive accelerometers (also known as Strain gauge accelerometers) work by measuring
the electrical resistance of a material when mechanical stress is applied. They are preferred in
high shock applications and they can measure acceleration down to 0Hz. However, they have a
limited high frequency response.

Hall effect
Hall effect accelerometers work by measuring the voltage variations caused by the change in
magnetic field around them.

Heat transfer

Heat transfer accelerometers consist of a single heat source centered in a substrate and
suspended across a cavity, with equally spaced thermoresistors on the four sides of the heat
source. They measure internal changes in heat due to acceleration. When there is zero
acceleration, the heat gradient is symmetrical; under acceleration, the heat gradient becomes
asymmetrical due to convection heat transfer.

A typical accelerometer has the following basic specifications:

• Analog/digital
• Number of axes
• Output range (maximum swing)
• Sensitivity (voltage output per g)
• Dynamic range
• Bandwidth
• Amplitude stability
• Mass

Analog vs. digital


The most important specification of an accelerometer for a given application is its type of output.
Analog accelerometers output a constant variable voltage depending on the amount of
acceleration applied. Older digital accelerometers output a variable frequency square wave, a
method known as pulse-width modulation. A pulse width modulated accelerometer takes
readings at a fixed rate, typically 1000 Hz (though this may be user-configurable based on the IC
selected). The value of the acceleration is proportional to the pulse width (or duty cycle) of the
PWM signal. Newer digital accelerometers are more likely to output their value using multi-wire
digital protocols such as I2C or SPI.
For use with ADCs commonly used for music interaction systems, analog accelerometers are
usually preferred.
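Decoding such a PWM output can be sketched as a linear mapping from duty cycle to acceleration. The zero-g duty cycle and duty-cycle change per g used below are hypothetical placeholders; the real values are part-specific and come from the accelerometer's datasheet.

```java
// Hypothetical linear PWM decoding: assume a known duty cycle at 0 g and a
// fixed duty-cycle change per g. Both constants are placeholders standing in
// for datasheet values, not figures for any real part.
public class PwmDecode {
    // dutyCycle, zeroGDuty and dutyPerG are all expressed as fractions (0..1).
    public static double dutyToG(double dutyCycle, double zeroGDuty, double dutyPerG) {
        return (dutyCycle - zeroGDuty) / dutyPerG;
    }
}
```

With an assumed 50% duty cycle at 0 g and 12.5% per g, a measured 62.5% duty cycle decodes to +1 g.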
Number of axes: Accelerometers are available that measure in one, two, or three dimensions. The
most familiar type of accelerometer measures across two axes. However, three-axis
accelerometers are increasingly common and inexpensive.

Output range: To measure the acceleration of gravity for use as a tilt sensor, an output range of
±1.5 g is sufficient. For use as an impact sensor, one of the most common musical applications,
±5 g or more is desired.
Sensitivity: An indicator of the amount of change in output signal for a given change in
acceleration. A sensitive accelerometer will be more precise and probably more accurate.

Dynamic range
The range between the smallest acceleration detectable by the accelerometer to the largest before
distorting or clipping the output signal.

Bandwidth
The bandwidth of a sensor is usually measured in Hertz and indicates the limit of the near-unity
frequency response of the sensor, or how often a reliable reading can be taken. Humans cannot
create body motion much beyond the range of 10-12 Hz. For this reason, a bandwidth of 40-60
Hz is adequate for tilt or human motion sensing. For vibration measurement or accurate reading
of impact forces, bandwidth should be in the range of hundreds of Hertz. It should also be noted
that for some older microcontrollers, the bandwidth of an accelerometer may extend beyond the
Nyquist frequency of the A/D converters on the MCU, so for higher bandwidth sensing, the
digital signal may be aliased. This can be remedied with simple passive low-pass filtering prior
to sampling, or by simply choosing a better microcontroller. It is worth noting that the bandwidth
may change by the way the accelerometer is mounted. A stiffer mounting (ex: using studs) will
help to keep a higher usable frequency range and the opposite (ex: using a magnet) will reduce it.
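A common software complement to such filtering is a one-pole low-pass filter (exponential smoothing) applied to the raw samples. The sketch below is illustrative; it smooths noise in already-sampled data and is not a substitute for the analog anti-alias filtering discussed above.

```java
// One-pole low-pass filter (exponential smoothing) for accelerometer samples.
// alpha near 0 smooths heavily; alpha near 1 passes the signal almost as-is.
public class LowPass {
    private final double alpha;
    private double state;
    private boolean primed = false;

    public LowPass(double alpha) { this.alpha = alpha; }

    // Feed one raw sample, get the smoothed value back.
    public double filter(double sample) {
        if (!primed) { state = sample; primed = true; }
        else state += alpha * (sample - state);
        return state;
    }
}
```

With alpha = 0.5, a step from 0 to 10 is reported as 5, then 7.5, converging toward 10: sudden shake noise is attenuated while a sustained tilt still comes through.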
Amplitude stability: This is not a specification in itself, but a description of several. Amplitude
stability describes a sensor's change in sensitivity depending on its application, for instance over
varying temperature or time (see below).

Mass
The mass of the accelerometer should be significantly smaller than the mass of the system to be
monitored so that it does not change the characteristic of the object being tested.
Other specifications include:

 Zero g offset (voltage output at 0 g)


 Noise (sensor minimum resolution)
 Temperature range

 Bias drift with temperature (effect of temperature on voltage output at 0 g)


 Sensitivity drift with temperature (effect of temperature on voltage output per g)
 Power consumption

Output
An accelerometer output value is a scalar corresponding to the magnitude of the acceleration
vector. The most common acceleration, and one that we are constantly exposed to, is the
acceleration resulting from the earth's gravitational pull. This is a common reference value
from which all other accelerations are measured (known as g, approximately 9.8 m/s²).

The accelerometer gives us values on three axes, X, Y and Z; it mainly measures the motion of
the phone.
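The way these axis values are used in this project, as described in the abstract, can be sketched as follows. The threshold value and method names here are illustrative, not taken from the actual application code: a reference reading is saved while the phone lies flat on the paper, and a later reading is accepted when both its X and Y components fall within a threshold of the reference.

```java
// Sketch of the comparison described in this project: save a reference
// accelerometer reading while the phone lies on the paper, then accept a new
// reading only when X and Y are both within a chosen threshold of it.
// The threshold and method names are illustrative placeholders.
public class TiltMatch {
    // Magnitude of the acceleration vector; roughly 9.8 m/s^2 when at rest.
    public static double magnitude(double x, double y, double z) {
        return Math.sqrt(x * x + y * y + z * z);
    }

    // True when the new X and Y readings simultaneously match the saved
    // reference within the threshold, i.e. the moment to fire the alarm.
    public static boolean withinThreshold(double refX, double refY,
                                          double newX, double newY,
                                          double threshold) {
        return Math.abs(newX - refX) <= threshold
            && Math.abs(newY - refY) <= threshold;
    }
}
```

In the real app, the reference values would be persisted in SharedPreferences and compared against each SensorEvent as the user moves the camera.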

Figure 2.6 : Direction of Axis of Accelerometer in android phone

Figure 2.7: Accelerometer value in three axis

Uses
The acceleration measurement has a variety of uses. The sensor can be implemented in a system
that detects velocity, position, shock, vibration, or the acceleration of gravity to determine
orientation (Doscher 2005)

A system consisting of two orthogonal sensors is capable of sensing pitch and roll. This is useful
in capturing head movements. A third orthogonal sensor can be added to the network to obtain
orientation in three dimensional space. This is appropriate for the detection of pen angles, etc.
The sensing capabilities of this network can be furthered to six degrees of spatial measurement
freedom by the addition of three orthogonal gyroscopes.
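Pitch and roll can be recovered from a three-axis gravity reading with standard trigonometry. The sketch below uses one common axis convention (conventions vary between devices) and is illustrative, not code from this project.

```java
// Pitch and roll from a 3-axis gravity reading, using the common convention
// pitch = atan2(-x, sqrt(y^2 + z^2)) and roll = atan2(y, z). Axis conventions
// vary between devices; these follow Android-style axes (see Figure 2.6).
public class Orientation {
    public static double pitchDeg(double x, double y, double z) {
        return Math.toDegrees(Math.atan2(-x, Math.sqrt(y * y + z * z)));
    }

    public static double rollDeg(double x, double y, double z) {
        return Math.toDegrees(Math.atan2(y, z));
    }
}
```

A phone lying flat reads roughly (0, 0, 9.8) and yields zero pitch and roll; tilting it sideways shifts gravity into the Y axis and the roll angle grows accordingly.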

Verplaetse has outlined the bandwidths associated with various implementations of
accelerometers as an input device. These are:

Depending on the sensitivity and dynamic range required, the cost of an accelerometer can grow
to thousands of dollars. Nonetheless, highly accurate inexpensive sensors are available.

Figure 2.8 : Take the accelerometer value from device

2.3.2: Android
Android is a mobile operating system (OS) based on the Linux kernel and currently developed
by Google. With a user interface based on direct manipulation, Android is designed primarily for
touchscreen mobile devices such as smartphones and tablet computers, with specialized user
interfaces for televisions (Android TV), cars (Android Auto), and wrist watches (Android Wear).
The OS uses touch inputs that loosely correspond to real-world actions, like swiping, tapping,
pinching, and reverse pinching to manipulate on-screen objects, and a virtual keyboard. Despite
being primarily designed for touchscreen input, it also has been used in game consoles, digital
cameras, regular PCs and other electronics. Android is the most widely used mobile OS and, as
of 2013, the most widely used OS overall. Android devices sell more than Windows, iOS, and
Mac OS X devices combined, with sales in 2012, 2013 and 2014 close to the installed base of all
PCs. As of July 2013 the Google Play store has had over 1 million Android apps published, and
over 50 billion apps downloaded. A developer survey conducted in April/May 2013 found that
71% of mobile developers develop for Android. At Google I/O 2014, the company revealed that
there were over 1 billion active monthly Android users, up from 538 million in June 2013.
Android’s source code is released by Google under open source licenses, although most Android
devices ultimately ship with a combination of open source and proprietary software.

Figure 2.9: Android Logo

Initially developed by Android, Inc., which Google backed financially and later bought in 2005, Android was unveiled in 2007 along with the founding of the Open Handset Alliance, a consortium of hardware, software, and telecommunication companies devoted to advancing open standards for mobile devices. Android is popular with technology companies which require a ready-made, low-cost and customizable operating system for high-tech devices. Android’s open nature has encouraged a large community of developers and enthusiasts to use the open-source code as a foundation for community-driven projects, which add new features for advanced users or bring Android to devices which were officially released running other operating systems. The operating system’s success has made it a target for patent litigation as part of the so-called “smartphone wars” between technology companies.

2.3.2.1 Android OS: A Walk from Past to Present

Android, Inc. was founded in Palo Alto, California in October 2003 by Andy Rubin (co-founder of Danger), Rich Miner (co-founder of Wildfire Communications, Inc.), Nick Sears (once VP at T-Mobile), and Chris White (who headed design and interface development at WebTV) to develop, in Rubin’s words, “smarter mobile devices that are more aware of its owner’s location and preferences”. The company’s early intention was to develop an advanced operating system for digital cameras, but when it realized that the market for such devices was not large enough, it diverted its efforts to producing a smartphone operating system to rival those of Symbian and Windows Mobile. Despite the past accomplishments of the founders and early employees, Android Inc. operated secretly, revealing only that it was working on software for mobile phones. That same year, Rubin ran out of money; Steve Perlman, a close friend of Rubin, brought him $10,000 in cash in an envelope and refused a stake in the company. Google acquired Android Inc. on August 17, 2005, and key employees, including Rubin, Miner, and White, stayed at the company after the acquisition. Not much was known about Android Inc. at the time, but many assumed that Google was planning to enter the mobile phone market with this move.

At Google, the team led by Rubin developed a mobile device platform powered by the Linux kernel. Google marketed the platform to handset makers and carriers on the promise of providing a flexible, upgradable system, lined up a series of hardware component and software partners, and signaled to carriers that it was open to various degrees of cooperation on their part. Speculation about Google’s intention to enter the mobile communications market continued to build through December 2006. An earlier prototype codenamed “Sooner” had a closer resemblance to a BlackBerry phone, with no touchscreen and a physical QWERTY keyboard, but it was later reengineered to support a touchscreen, to compete with other announced devices such as the 2006 LG Prada and the 2007 Apple iPhone. In September 2007, InformationWeek covered an Evalueserve study reporting that Google had filed several patent applications in the area of mobile telephony. On November 5, 2007, the Open Handset Alliance, a consortium of technology companies including Google, device manufacturers such as HTC, Sony and Samsung, wireless carriers such as Sprint Nextel and T-Mobile, and chipset makers such as Qualcomm and Texas Instruments, unveiled itself, with the goal of developing open standards for mobile devices. That day, Android was unveiled as its first product: a mobile device platform built on the Linux kernel version 2.6.25.

The first commercially available smartphone running Android was the HTC Dream, released on October 22, 2008. In 2010, Google launched its Nexus series of devices, a line of smartphones and tablets running the Android operating system and built by manufacturing partners. HTC collaborated with Google to release the first Nexus smartphone, the Nexus One. Google has since updated the series with newer devices, such as the Nexus 5 phone (made by LG) and the Nexus 7 tablet (made by Asus). Google releases the Nexus phones and tablets to act as its flagship Android devices, demonstrating Android’s latest software and hardware features. On March 13, 2013, Larry Page announced in a blog post that Andy Rubin had moved from the Android division to take on new projects at Google. He was replaced by Sundar Pichai, who also continues in his role as the head of Google’s Chrome division, which develops Chrome OS.

Since 2008, Android has seen numerous updates which have incrementally improved the operating system, adding new features and fixing bugs in previous releases. Each major release is named in alphabetical order after a dessert or sugary treat; for example, version 1.5 Cupcake was followed by 1.6 Donut. The latest released version, 4.4.4 KitKat, appeared as a security-only update; it was released on June 19, 2014, shortly after the release of 4.4.3. As of October 2014, the newest version of the Android operating system, Android 5.0 Lollipop, is available only as a developer preview. From 2010 to 2013, Hugo Barra served as product spokesperson for the Android team, representing Android at both press conferences and Google I/O, Google’s annual developer-focused conference. Barra’s product involvement included the entire Android ecosystem of software and hardware, including the Honeycomb, Ice Cream Sandwich, Jelly Bean and KitKat operating system launches, the Nexus 4 and Nexus 5 smartphones, the Nexus 7 and Nexus 10 tablets, and other related products such as Google Now and Google Voice Search, Google’s speech recognition product comparable to Apple’s Siri. In 2013, Barra left the Android team for the Chinese smartphone maker Xiaomi.

2.3.2.2 Version History by API Level
Many Android versions, each identified by an API level, have been released from the beginning to today. The versions released so far are given below:

 Android 1.0 (API level 1)
 Android 1.1 (API level 2)
 Android 1.5 Cupcake (API level 3)
 Android 1.6 Donut (API level 4)
 Android 2.0 Eclair (API level 5)
 Android 2.0.1 Eclair (API level 6)
 Android 2.1 Eclair (API level 7)
 Android 2.2–2.2.3 Froyo (API level 8)
 Android 2.3–2.3.2 Gingerbread (API level 9)
 Android 2.3.3–2.3.7 Gingerbread (API level 10)
 Android 3.0 Honeycomb (API level 11)
 Android 3.1 Honeycomb (API level 12)
 Android 3.2 Honeycomb (API level 13)
 Android 4.0–4.0.2 Ice Cream Sandwich (API level 14)
 Android 4.0.3–4.0.4 Ice Cream Sandwich (API level 15)
 Android 4.1 Jelly Bean (API level 16)
 Android 4.2 Jelly Bean (API level 17)
 Android 4.3 Jelly Bean (API level 18)
 Android 4.4 KitKat (API level 19)
 Android 5.0 Lollipop (API level 21)

In the near future, Android Marshmallow will be released [12].

2.3.3 Built-in Database SQLite in Android
The lifeline for every Android application is its database support. A database system is needed to
store structured data, unless the application deals only with simple data. Android uses the SQLite
database system, which is an open-source, stand-alone SQL database, widely used by many
popular applications. SQLite is a lightweight transactional database engine that occupies a small
amount of disk storage and memory, thus, it is a perfect choice for creating databases on many
mobile operating systems such as Android and iOS. The database that is created for an
application is only accessible to itself; other applications will not be able to access it. Once
created, the SQLite database is stored in the /data/data/<package_name>/databases folder of an
Android device.
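The app in this thesis actually keeps its reference sensor values in SharedPreferences (see Appendix A), but if the reference orientation were kept in SQLite instead, a minimal table might look like the sketch below. The table and column names are illustrative assumptions, not part of the project:

```sql
-- Hypothetical schema for storing reference accelerometer readings.
CREATE TABLE IF NOT EXISTS reference_orientation (
    id         INTEGER PRIMARY KEY AUTOINCREMENT,
    x_value    REAL NOT NULL,            -- m/s^2 along the X axis
    y_value    REAL NOT NULL,            -- m/s^2 along the Y axis
    z_value    REAL NOT NULL,            -- m/s^2 along the Z axis (about 9.8 when flat)
    created_at TEXT DEFAULT (datetime('now'))
);
```

On Android such a schema would typically be created inside a SQLiteOpenHelper's onCreate callback, and the database file would then appear under the /data/data/&lt;package_name&gt;/databases folder mentioned above.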

2.4 Limitations of the previous works

The skew or slant of a scanned document can be corrected with the previous techniques, but none of them solves the skew problem at the time the image is captured. Without solving the problem during capture, the whole process becomes long. Detecting and preventing skew in an image without relying on layout analysis is a hard problem. Instead, the accelerometer sensor's x-, y-, and z-axis values can be used to solve the skew problem while the picture is being taken.

Chapter 3
Methodology
In this chapter, the overview of the interactive app, the architecture of the application, the
implementation or working procedure with software requirements have been presented precisely.

3.1 Abstract View

The interactive application works as follows: the accelerometer value is saved to the database and transferred to the custom camera interface, where each new reading is checked against the saved value; the user moves the device according to the on-screen direction cues, captures the image when the green alarm shows, and saves the image to storage.

Figure 3.1: Overview of interactive application

3.2 Analytical representation of the system
The application combines two programs in one app:
1. Custom Camera
2. Accelerometer Sensor

Manual

In the manual process, the accelerometer value is first stored by laying the phone on the paper whose image is to be taken. The phone is then lifted, and the user tries to match the live accelerometer reading with the previously stored value. When the reading reaches an approximate match with the stored value, the user simply taps to capture. The main advantage of the manual process is that the image can easily be cropped after capture to fix the outer border.

Figure 3.2: Before and After Skew Correction

Auto
In the auto process, the accelerometer value is taken and stored before capturing; while taking the picture, the app waits until the live reading matches the stored value and then captures the picture by itself. The main drawback is that the image cannot be cropped afterwards, so this mode is less appropriate when the image is intended for printing or scanning.

Figure 3.3: Skew values shown during image capture

3.4 Application flow diagram
The flowchart for the whole process is given below. The app starts by reading the accelerometer value. If auto capture is chosen, the image is captured automatically when the live value matches the previously saved value, and the image is then cropped properly. Otherwise, the user rotates the phone according to the on-screen direction until the live value matches the saved one, and captures the image manually. The captured image is then analysed, skew is detected and corrected, and the process ends.

Figure 3.4: Application Flow Diagram

Figure 3.4 shows the whole process of the application. When the app starts, the display shows the accelerometer values on the X, Y, and Z axes. Pressing the Save button transfers the value to the camera interface. To take a picture, the user taps the TakePicture button; the live accelerometer value is then shown on the left side of the camera view and the previously saved value on the right. The task is to match the live value with the saved one. A threshold is applied per axis: when the live accelerometer value comes within the threshold range of the saved value, an alarm is shown on the display, and the picture can be taken without any skew or slant. Skew is resolved in both directions, since the X and Y axes are considered; the Z axis is not considered, because its value stays close to the gravitational acceleration of 9.8 m/s². If the captured image is acceptable, it can be saved to the gallery with the Save Picture button.
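The matching step described above can be sketched in plain Java. The tolerance of 0.20 m/s² per axis is the threshold used in Chapter 4; the class and method names are illustrative, not the app's actual code:

```java
public final class OrientationMatcher {
    static final double THRESHOLD = 0.20; // m/s^2 tolerance per axis

    // Return true when the live X and Y readings are both within the
    // threshold of the saved reference values. The Z axis is ignored,
    // since it stays near 9.8 m/s^2 (gravity) regardless of small tilts.
    static boolean matches(double savedX, double savedY, double liveX, double liveY) {
        return Math.abs(liveX - savedX) <= THRESHOLD
            && Math.abs(liveY - savedY) <= THRESHOLD;
    }

    public static void main(String[] args) {
        System.out.println(matches(0.12, -0.05, 0.25, 0.10)); // within 0.20 on both axes: true
        System.out.println(matches(0.12, -0.05, 0.40, 0.10)); // X differs by 0.28: false
    }
}
```

In the real app this check runs inside onSensorChanged (see Appendix B), and a positive match makes the green alarm button visible.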

Chapter 4
Implementation

In this chapter, we provide some snapshots of the project with the necessary explanation to clearly describe its outcome.

4.1 Home Screen and Menu


When the app open it will show the following display ,where the value of accelerometer show on
the top left corner and a save button is created to save the value for further use.

Figure 4.1: Menu Options

To take a picture, click the “Take a new Picture” button to go to the next interface. The Save Picture button is hidden because there is not yet a picture to save.

4.2 Custom Camera Display

This interface shows the camera view on a SurfaceView. The saved accelerometer value and the live accelerometer value are also shown on the left side of the display, and two buttons, Snap It and Done, are provided for taking the picture easily.

Figure 4.2: Custom Camera Display

4.3 Display the green alarm
When the live accelerometer value matches the previously saved value, or comes within the threshold range, a green signal is displayed on the screen.

Figure 4.3: Display alarm button

The threshold value is set to 0.20 on every axis. When the accelerometer reading comes within this range on the X and Y axes, the alarm button is shown, and the photo can be snapped without any skew on either side.

4.4 Retake the picture

If the captured image is not good enough to use, the Retake button takes the picture once again. Otherwise, clicking the Done button returns to the main interface, where the picture can be saved to storage.

Figure 4.4: Captured image preview

4.5 Storing the picture
The captured image can now be saved to the gallery for further use. After saving the image, any number of photos can be taken by clicking the Take Picture button with the previously saved value. If the value needs to be saved once again, the new value can be stored in the SharedPreferences database.
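The app stores each axis value in SharedPreferences as a string formatted to two decimal places and parses it back with Double.parseDouble when matching (see the appendices). This round trip can be checked in plain Java; a plain Map stands in for SharedPreferences here, and Locale.US is an assumption added so that the decimal separator is always a dot:

```java
import java.util.HashMap;
import java.util.Locale;
import java.util.Map;

public final class ValueRoundTrip {
    // Stand-in for SharedPreferences: the app stores each axis reading
    // as a string formatted to two decimal places.
    static final Map<String, String> PREFS = new HashMap<>();

    static void save(String key, float value) {
        PREFS.put(key, String.format(Locale.US, "%.2f", value));
    }

    static double load(String key) {
        return Double.parseDouble(PREFS.get(key));
    }

    public static void main(String[] args) {
        save("X", 0.1234f);               // stored as "0.12"
        System.out.println(load("X"));    // only two decimals of precision survive
    }
}
```

The two-decimal format means the saved reference is coarser than the raw sensor reading, which is acceptable here because the matching threshold (0.20) is an order of magnitude larger than the rounding error.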

Figure 4.5: Save the picture in storage

4.6 Experimental Result and Performance Analysis
Precision is the simplest yet effective key measurement for evaluating the quality of an application system. Here it is the fraction of correctly skew-corrected images over the total number of input images in our system. So precision can be computed as:

Precision = Number of correct images / Number of input images

Image No.   Bottom width B (cm)   Top width T (cm)   Ratio B/T   Left bottom angle (degrees)
01          15.61                 15.68              0.99        90.05
02          15.22                 15.72              0.97        92.41
03          15.92                 15.94              0.99        90.03
04          15.65                 15.45              1.01        87.3
05          15.41                 15.55              0.99        91.3
06          15.60                 15.60              1.00        90.0
07          15.59                 15.51              1.01        88.4
08          15.81                 15.31              1.03        86.4
09          15.57                 15.45              1.01        88.4

Table 4.1: List of image measurement and system output

So the precision of the system = 7/9 = 77.78%.

First, nine photos of a rectangular text document were taken. For each picture, the bottom width was measured from one side to the other, then the top width, and the ratio of the two widths was calculated. The skew angle at the bottom-left corner was also measured to determine how much skew occurred on the left side. All pictures were taken and measured at different angles to calculate the precision, and thus the overall accuracy, of the application. Better results might be found by repeating the experiment with different users and taking the average of the width and angle measurements.
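The precision figure above can be reproduced from the ratios in Table 4.1 in plain Java. The acceptance criterion used here (width ratio within ±0.02 of the ideal 1.0) is an illustrative assumption that happens to reproduce the 7/9 result; the thesis does not state an explicit cutoff:

```java
public final class PrecisionCheck {
    // Ratios of bottom width to top width for the nine test images (Table 4.1).
    static final double[] RATIOS = {0.99, 0.97, 0.99, 1.01, 0.99, 1.00, 1.01, 1.03, 1.01};

    // Count an image as correctly skew-corrected when its width ratio is
    // within the assumed tolerance of the ideal ratio 1.0, and return the
    // fraction of correct images over all input images.
    static double precision(double[] ratios, double tolerance) {
        int correct = 0;
        for (double r : ratios) {
            if (Math.abs(r - 1.0) <= tolerance) {
                correct++;
            }
        }
        return (double) correct / ratios.length;
    }

    public static void main(String[] args) {
        // With a ±0.02 tolerance, images 02 (0.97) and 08 (1.03) fail,
        // so 7 of 9 pass: 7/9 = 77.78%.
        System.out.printf("%.2f%%%n", precision(RATIOS, 0.02) * 100);
    }
}
```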

Chapter 5
Conclusion and Future Research

5.1 Conclusion
Android is a mobile operating system (OS) based on the Linux kernel and currently developed by Google. About 51.5 percent of people around the world use smartphones with the Android operating system. Lately it is catching up in Bangladesh; many handset manufacturers are porting this OS, which again means an increased user base. Applications are developed in the powerful Java language, and the kernel of the OS is derived from Linux.

Nowadays there are many research papers on correcting the skew problem in scanned documents, and the Android application CamScanner can resolve the problem after the image has been captured. This application, however, does the job during the capture of the image by using the accelerometer sensor.

It can be ensured that the skew and slant problems of an image can be solved easily by this ‘AcceleroCam’ application, and this application is unique among skew-correction applications.

5.2 Future Recommendations
We have tried our best to make this system satisfactory for users. There is considerable scope for future work on the system. The developed and previously tested functionality can later be modified with more user-friendly features to make the system more useful. The following additions are recommended:

 Adding a manual focus option
 Improving the image quality
 Adding further functionality
 Creating different levels according to the difficulty of the objects
 Making the sensor handling more efficient

Bibliography
[1] A. F. Mollah, “A Fast Skew Correction Technique for Camera Captured Business Card Images”, School of Mobile Computing and Communication, Jadavpur University, Kolkata, in India Conference (INDICON), 2009 Annual IEEE, pp. 1-4, 2010.
[2] W. Pan, J. Jin, G. Shi and Q. R. Wang, “A System for Automatic Chinese Business Card
Recognition”, in Proc. ICDAR’01, 2001, p. 577-581.
[3] X. Hong-bo and T. Yan, “Skew Detection for Binary Document Images Using Mathematical Morphology”, Wuhan University Journal of Natural Sciences, vol. 7, no. 3, pp. 338-340, 2002.
[4] B.T. Avila and R. D. Lins, “A Fast Orientation and Skew Detection Algorithm for
Monochromatic Document Images”, in Proc. ACM Symposium on Document Engineering, 2005.
[5] K. R. Arvind, J. Kumar and A. G. Ramakrishnan, “Entropy Based Skew Correction of Document Images”, in Proc. PReMI’07, 2007, pp. 495-502.
[6] Meng, G., Xiang, S., Zheng, N., & Pan, C. (2013). Nonparametric illumination correction for
scanned document images via convex hulls. IEEE transactions on pattern analysis and machine
intelligence, 35(7), 1730-1743.
[7] Bodade, R., Pachori, R. B., Gupta, A., Kanani, P., & Yadav, D. (2013, October). A novel
approach for automated skew correction of vehicle number plate using principal component
analysis. In Emerging Trends in Communication, Control, Signal Processing & Computing
Applications (C2SPCA), 2013 International Conference on (pp. 1-6). IEEE.
[8] Arulmozhi, K., Perumal, S. A., Priyadarsini, C. T., & Nallaperumal, K. (2012, December).
Image refinement using skew angle detection and correction for Indian license plates.
In Computational Intelligence & Computing Research (ICCIC), 2012 IEEE International
Conference on (pp. 1-4). IEEE.
[9] Basavanna, M., & Gornale, S. S. Skew Detection and Skew Correction in scanned Document
Image using Principal Component Analysis.
[10] H. S. Baird, The skew angle of printed documents, Proc. Conf. Photographic Scientists and
Engineers, vol. 40, pp. 14-21, 1987.

[11] Jain, B., & Borah, M. (2014). A comparison paper on skew detection of scanned document
images based on horizontal and vertical projection profile analysis. International Journal of
Scientific and Research Publications, 4(6).

[12] G. Meng, “Skew estimation of document images using bagging”, IEEE Transactions on Image Processing, vol. 19, no. 7, pp. 1837-1846, July 2010.

[13] B. Gatos, N. Papamarkos, and C. Chamzas, “Skew detection and text line position determination in digitized documents”, Pattern Recognition, vol. 30, no. 9, pp. 1505-1519, 1997.

[14] Ray Smith, “A Simple and Efficient Skew Detection Algorithm via Text Row Accumulation”, Personal Systems Laboratory, HP Laboratories Bristol, HPL-94-113, December 1994.
[15] P. Hough, “Method and means for recognizing complex patterns”, U.S. Patent no. 3,069,654, 1962.
[16] H. S. Baird, “The skew angle of printed documents,” Proc. SPIE Symp. on Hybrid Imaging
Systems, Rochester, NY, 1987, pp. 21-24.
[17] W. Postl, “Method for automatic correction of character skew in the acquisition of a text
original in the form of digital scan results,” U.S. Pat. 4,723,297, Feb. 2, 1988.
[18] S. Chen, M. Y. Jaismha, J. Ha, I. T. Phillips and R. M. Haralick, “UW English Document
Image Database – (I) Manual,” Reference Manual, 1993
[19] D. S. Bloomberg and G. Kopec, “Method and apparatus for identification of document
skew,” U.S. Pat. 5,355,420, Oct. 11, 1994.

Appendices
Appendix A

MainActivity.java
package com.example.rakibulsadik.customcamera;

import java.io.File;
import java.io.FileOutputStream;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;
import android.app.Activity;
import android.content.Intent;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;
import android.os.Bundle;
import android.os.Environment;
import android.provider.MediaStore;
import android.view.View;
import android.view.View.OnClickListener;
import android.widget.Button;
import android.widget.ImageView;
import android.widget.TextView;
import android.widget.Toast;
import com.oreillyschool.android2.cameraadvanced.R;
import android.content.SharedPreferences;

public class MainActivity extends Activity implements SensorEventListener {

private static final int TAKE_PICTURE_REQUEST_B = 100;


private ImageView mCameraImageView;
private Bitmap mCameraBitmap;
private Button mSaveImageButton;
private TextView xText,yText,zText;
private Sensor mySensor;
private SensorManager SM;
SharedPreferences preferences;
SharedPreferences.Editor editor;
float x,y,z;

private OnClickListener mCaptureImageButtonClickListener = new OnClickListener() {


@Override
public void onClick(View v) {
startImageCapture();
}
};

private OnClickListener mSaveValueButtonClickListener = new OnClickListener() {


@Override
public void onClick(View v) {

String xv= String.format("%.2f",x);


String yv= String.format("%.2f",y);

String zv= String.format("%.2f",z);

editor = preferences.edit();
editor.putString("X",xv);
editor.putString("Y",yv);
editor.putString("Z",zv);
editor.commit();
Toast.makeText(getApplicationContext(),"Details Saved",Toast.LENGTH_SHORT).show();
}
};

private OnClickListener mSaveImageButtonClickListener = new OnClickListener() {


@Override
public void onClick(View v) {
File saveFile = openFileForImage();
if (saveFile != null) {
saveImageToFile(saveFile);
} else {
Toast.makeText(MainActivity.this, "Unable to open file for saving image.",
Toast.LENGTH_LONG).show();
}
}
};

@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);

mCameraImageView = (ImageView) findViewById(R.id.camera_image_view);

findViewById(R.id.capture_image_button).setOnClickListener(mCaptureImageButtonClickListener);

findViewById(R.id.save_value_button).setOnClickListener(mSaveValueButtonClickListener);

mSaveImageButton = (Button) findViewById(R.id.save_image_button);


mSaveImageButton.setOnClickListener(mSaveImageButtonClickListener);
mSaveImageButton.setEnabled(false);
SM = (SensorManager)getSystemService(SENSOR_SERVICE);
mySensor= SM.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
SM.registerListener(this,mySensor,SensorManager.SENSOR_DELAY_NORMAL);
xText = (TextView)findViewById(R.id.xText);
yText = (TextView)findViewById(R.id.yText);
zText = (TextView)findViewById(R.id.zText);

preferences = getSharedPreferences("MyPrefs", MODE_PRIVATE);

/*x = xText.getText().toString();
y = yText.getText().toString();
z = zText.getText().toString();*/
}

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
if (requestCode == TAKE_PICTURE_REQUEST_B) {
if (resultCode == RESULT_OK) {
// Recycle the previous bitmap.
if (mCameraBitmap != null) {
mCameraBitmap.recycle();

mCameraBitmap = null;
}
Bundle extras = data.getExtras();

byte[] cameraData =
extras.getByteArray(CameraActivity.EXTRA_CAMERA_DATA);
if (cameraData != null) {
mCameraBitmap = BitmapFactory.decodeByteArray(cameraData, 0,
cameraData.length);
mCameraImageView.setImageBitmap(mCameraBitmap);
mSaveImageButton.setEnabled(true);
}
} else {
mCameraBitmap = null;
mSaveImageButton.setEnabled(false);
}
}
}

private void startImageCapture() {
startActivityForResult(new Intent(MainActivity.this, CameraActivity.class),
TAKE_PICTURE_REQUEST_B);
}

private File openFileForImage() {


File imageDirectory = null;
String storageState = Environment.getExternalStorageState();
if (storageState.equals(Environment.MEDIA_MOUNTED)) {
imageDirectory = new File(

Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_PICTURES),
"com.oreillyschool.android2.camera");
if (!imageDirectory.exists() && !imageDirectory.mkdirs()) {
imageDirectory = null;
} else {
// Use MM (month) and HH (24-hour clock); mm/hh would give minutes and 12-hour values.
SimpleDateFormat dateFormat = new SimpleDateFormat("yyyy_MM_dd_HH_mm",
Locale.getDefault());

return new File(imageDirectory.getPath() +


File.separator + "image_" +
dateFormat.format(new Date()) + ".png");
}
}
return null;
}

private void saveImageToFile(File file) {


if (mCameraBitmap != null) {
FileOutputStream outStream = null;
try {
outStream = new FileOutputStream(file);

if (!mCameraBitmap.compress(Bitmap.CompressFormat.PNG,100,outStream))
{

Toast.makeText(MainActivity.this, "Unable to save image to file.",


Toast.LENGTH_LONG).show();
} else {
Toast.makeText(MainActivity.this, "Saved image to: " + file.getPath(),
Toast.LENGTH_LONG).show();
}

outStream.close();
} catch (Exception e) {
Toast.makeText(MainActivity.this, "Unable to save image to file.",
Toast.LENGTH_LONG).show();
}
}
}

@Override
public void onSensorChanged(SensorEvent event) {
xText.setText("X: " + String.format(Locale.getDefault(), "%.2f", event.values[0]));
yText.setText("Y: " + String.format(Locale.getDefault(), "%.2f", event.values[1]));
zText.setText("Z: " + String.format(Locale.getDefault(), "%.2f", event.values[2]));

x = event.values[0];
y = event.values[1];
z = event.values[2];
}
@Override
public void onAccuracyChanged(Sensor sensor, int accuracy) {

}
}

Appendix B
CameraActivity.java
import java.io.IOException;
import java.util.Locale;
import android.app.Activity;
import android.content.Intent;
import android.content.SharedPreferences;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.hardware.Camera;
import android.hardware.Camera.PictureCallback;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;
import android.os.Bundle;
import android.view.SurfaceHolder;
import android.view.SurfaceView;
import android.view.View;
import android.view.View.OnClickListener;
import android.widget.Button;
import android.widget.ImageView;
import android.widget.TextView;
import android.widget.Toast;
import com.oreillyschool.android2.cameraadvanced.R;

public class CameraActivity extends Activity implements PictureCallback,
SurfaceHolder.Callback, SensorEventListener {

public static final String EXTRA_CAMERA_DATA = "camera_data";

private static final String KEY_IS_CAPTURING = "is_capturing";

private Camera mCamera;


private ImageView mCameraImage;
private SurfaceView mCameraPreview;
private Button mCaptureImageButton;
private Button mAlartButton;
private byte[] mCameraData;
private boolean mIsCapturing;
private TextView xText,yText,zText,x1,y1,z1;
private Sensor mySensor;
private SensorManager SM;
SharedPreferences preferences;
SharedPreferences.Editor editor;
float x2,y2,z2;
String xValue,yValue,zValue;
public ImageView mDirect;
public ImageView mleftRight;
public ImageView zView;

private OnClickListener mCaptureImageButtonClickListener = new OnClickListener() {


@Override
public void onClick(View v) {

captureImage();

}
};
//if the user wants to capture the image again
private OnClickListener mRecaptureImageButtonClickListener = new OnClickListener()
{
@Override
public void onClick(View v) {
setupImageCapture();
}
};
//for save image
private OnClickListener mDoneButtonClickListener = new OnClickListener() {
@Override
public void onClick(View v) {
if (mCameraData != null) {
Intent intent = new Intent();
intent.putExtra(EXTRA_CAMERA_DATA, mCameraData);
setResult(RESULT_OK, intent);
} else {
setResult(RESULT_CANCELED);
}
finish();
}
};
private OnClickListener mAlartButtonClickListener = new OnClickListener() {
@Override
public void onClick(View v) {

}
};

@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);

setContentView(R.layout.activity_camera);

mCameraImage = (ImageView) findViewById(R.id.camera_image_view);


mCameraImage.setVisibility(View.INVISIBLE);

mDirect= (ImageView)findViewById(R.id.direction);
mleftRight= (ImageView)findViewById(R.id.left_right);
// zView=(ImageView)findViewById(R.id.imageView);
mCameraPreview = (SurfaceView) findViewById(R.id.preview_view);
final SurfaceHolder surfaceHolder = mCameraPreview.getHolder();
surfaceHolder.addCallback(this);
surfaceHolder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);

mCaptureImageButton = (Button) findViewById(R.id.capture_image_button);


mCaptureImageButton.setOnClickListener(mCaptureImageButtonClickListener);
mAlartButton = (Button) findViewById(R.id.signal);

final Button doneButton = (Button) findViewById(R.id.done_button);


doneButton.setOnClickListener(mDoneButtonClickListener);
mIsCapturing = true;

//sensor part added to this custom camera code


SM = (SensorManager)getSystemService(SENSOR_SERVICE);
mySensor= SM.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
SM.registerListener(this,mySensor,SensorManager.SENSOR_DELAY_NORMAL);
xText = (TextView)findViewById(R.id.xText);
yText = (TextView)findViewById(R.id.yText);
zText = (TextView)findViewById(R.id.zText);
//putting save value in x1,y1,z1
x1 = (TextView)findViewById(R.id.x1Text);
y1 = (TextView)findViewById(R.id.y1Text);
z1 = (TextView)findViewById(R.id.z1Text);
//to save the accelerometer value
preferences = getSharedPreferences("MyPrefs", MODE_PRIVATE);
xValue =preferences.getString("X", null);
yValue= preferences.getString("Y", null);
zValue= preferences.getString("Z", null);
String s = "X - "+xValue+"Y - "+yValue +"Z-"+zValue;

x1.setText("X:"+xValue);
y1.setText("Y:"+yValue);
z1.setText("Z:"+zValue);

String xv = String.format("%.2f", x2);
String yv = String.format("%.2f", y2);
String zv = String.format("%.2f", z2);
}

@Override
public void onSensorChanged(SensorEvent event) {
xText.setText("X: " + String.format(Locale.getDefault(), "%.2f", event.values[0]));
yText.setText("Y: " + String.format(Locale.getDefault(), "%.2f", event.values[1]));
zText.setText("Z: " + String.format(Locale.getDefault(), "%.2f", event.values[2]));

x2= event.values[0];

y2= event.values[1];
z2= event.values[2];
if(Double.parseDouble(xValue)>x2)
{
mDirect.setImageResource(R.drawable.down);
}
else{
mDirect.setImageResource(R.drawable.up);
}
if(Double.parseDouble(yValue)>y2)
{
mleftRight.setImageResource(R.drawable.right);
}
else{
mleftRight.setImageResource(R.drawable.left);
}

mAlartButton.setVisibility(View.INVISIBLE);

// Show the alarm button only when both X and Y are within 0.20 of the saved values.
if (x2 >= Double.parseDouble(xValue) - .20 && x2 <= Double.parseDouble(xValue) + .20
        && y2 >= Double.parseDouble(yValue) - .20 && y2 <= Double.parseDouble(yValue) + .20) {
mAlartButton.setVisibility(View.VISIBLE);
//captureImage();
}
}

@Override
public void onAccuracyChanged(Sensor sensor, int accuracy) {
}

@Override
protected void onSaveInstanceState(Bundle savedInstanceState) {
super.onSaveInstanceState(savedInstanceState);

savedInstanceState.putBoolean(KEY_IS_CAPTURING, mIsCapturing);
}

@Override
protected void onRestoreInstanceState(Bundle savedInstanceState) {
super.onRestoreInstanceState(savedInstanceState);

mIsCapturing = savedInstanceState.getBoolean(KEY_IS_CAPTURING, mCameraData == null);
if (mCameraData != null) {
setupImageDisplay();
} else {
setupImageCapture();
}
}

@Override
protected void onResume() {
super.onResume();

if (mCamera == null) {

try {
mCamera = Camera.open();
mCamera.setPreviewDisplay(mCameraPreview.getHolder());
if (mIsCapturing) {
mCamera.startPreview();
}
} catch (Exception e) {

Toast.makeText(CameraActivity.this, "Unable to open camera.", Toast.LENGTH_LONG).show();
}
}
}

@Override
protected void onPause() {
super.onPause();

if (mCamera != null) {
mCamera.release();
mCamera = null;
}
}

@Override
public void onPictureTaken(byte[] data, Camera camera) {
mCameraData = data;
setupImageDisplay();
}

@Override
public void surfaceChanged(SurfaceHolder holder, int format, int width, int
height) {
if (mCamera != null) {
try {
mCamera.setPreviewDisplay(holder);
if (mIsCapturing) {
mCamera.startPreview();
}
} catch (IOException e) {
Toast.makeText(CameraActivity.this, "Unable to start camera preview.",
Toast.LENGTH_LONG).show();
}
}
}

@Override
public void surfaceCreated(SurfaceHolder holder) {
try{
mCamera= android.hardware.Camera.open();

}
catch(RuntimeException ex)
{
ex.printStackTrace();
}
android.hardware.Camera.Parameters parameters;
parameters=mCamera.getParameters();
parameters.setPreviewFrameRate(20);

parameters.setFocusMode(android.hardware.Camera.Parameters.FOCUS_MODE_CONTINUOUS_PICTURE);
//parameters.setJpegQuality(100);

mCamera.setParameters(parameters);
}

@Override
public void surfaceDestroyed(SurfaceHolder holder) {
}
// This method may have to change to capture the image from the accelerometer value.
private void captureImage() {

mCamera.takePicture(null, null, this);


}

private void setupImageCapture() {


mCameraImage.setVisibility(View.INVISIBLE);
mCameraPreview.setVisibility(View.VISIBLE);
mCamera.startPreview();
mCaptureImageButton.setText(R.string.capture_image);
mCaptureImageButton.setOnClickListener(mCaptureImageButtonClickListener);
}

private void setupImageDisplay() {


Bitmap bitmap = BitmapFactory.decodeByteArray(mCameraData, 0,
mCameraData.length);
mCameraImage.setImageBitmap(bitmap);
mCamera.stopPreview();
mCameraPreview.setVisibility(View.INVISIBLE);
mCameraImage.setVisibility(View.VISIBLE);
mCaptureImageButton.setText(R.string.recapture_image);
mCaptureImageButton.setOnClickListener(mRecaptureImageButtonClickListener);
}
}
